Gigaom brings you our unique analysis and commentary on the present and future of AI.
To argue against artificial intelligence (and, to be clear, narrow AI, not artificial general intelligence, but the kind we know how to build now), to say that we really shouldn't build it or that it is somehow a bad technology, is, in my way of seeing things, to make a case for ignorance. Artificial intelligence, at its core, is about making more informed decisions, about everything. That is a hard thing to call bad. How can the simple idea of "let's study data about the past so that we can make better decisions about the future" be threatening to anyone? And if it does threaten something, isn't that something that needs to be threatened?