What it is: Artificial intelligence researchers commonly sort AI into three categories: weak, strong, and super. Weak AI is any technological system that uses techniques that appear intelligent, such as machine learning. Strong AI is essentially the human mind replicated in a computer. Super AI is a system that is intellectually superior to humans.
What it does: Weak AI is already becoming ubiquitous, applied in everything from video games to marketing to translation software to investment banking. Strong AI is further off, though experts disagree on whether its arrival will take years or decades. Super AI is more remote still, and some skeptics argue that Strong and Super AI will never be invented at all.
Why it matters: While there is a lot of popular discourse around AI, the term loses its meaning when it can refer to everything from Siri to a true superintelligence. Having clear categories for different types of AI makes it possible to have more intelligent, informed conversations about any given AI technology. Specifically, it helps sort through the endless hype that currently floats around anything digital labeled “intelligent.”
What to do about it: At present, unless you helm the research division of a large enterprise, Strong and Super AI are conversation pieces, not action items. However, Weak AI tools are everywhere, and, chances are, if you’re not automating or enhancing any of your processes with machine learning, you could be, and your competition probably is.
Many actors are working on AI of various kinds. Weak AI tools are now widely available and continue to improve; two prominent examples are IBM’s Watson™ suite and Amazon’s AWS AI services. As for the other varieties, given the volume of basic research required to bring them about, development is largely government-funded, with China and the USA as notable leaders in the field, although the Elon Musk-backed OpenAI is also a contender.
- Weak AI continues to allow the automation and enhancement of tech support, internet security, data cleaning, language processing, and other facets of business.
- Weak AI tools allow human decision-makers to spend more time on higher-level tasks rather than being bogged down in repetitive work, as GigaOm CEO Byron Reese explores in this Voices in AI podcast interview with Marcus Noga.
- The continuing development of Weak AI will spur innovation as machine learning tools uncover previously unexplored patterns in existing data.
- Strong AI’s effects on business could be massive and are difficult to predict, and the same is even more true of Super AI.
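To make the first bullets concrete: "uncovering patterns in existing data" often starts with something as modest as automatically routing support tickets. Below is a minimal sketch of a naive Bayes text classifier built only on Python's standard library; the categories and training phrases are invented for illustration and any real deployment would use far more data and an established library.

```python
import math
from collections import Counter, defaultdict

# Toy training data: past support tickets labeled by the team that handled them.
# (Labels and phrases are invented for illustration.)
TRAINING = [
    ("refund charged twice on my invoice", "billing"),
    ("update my credit card payment details", "billing"),
    ("invoice total looks wrong this month", "billing"),
    ("app crashes when I open settings", "technical"),
    ("error message after the latest update", "technical"),
    ("cannot log in and password reset fails", "technical"),
]

def train(examples):
    """Count word frequencies per label — the 'patterns' a naive Bayes model learns."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    vocab = {w for counts in word_counts.values() for w in counts}
    return word_counts, label_counts, vocab

def classify(text, word_counts, label_counts, vocab):
    """Pick the label with the highest log posterior, using add-one smoothing."""
    total = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label, count in label_counts.items():
        score = math.log(count / total)  # prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in text.lower().split():
            score += math.log((word_counts[label][word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

model = train(TRAINING)
print(classify("my invoice shows a wrong charge", *model))      # billing
print(classify("the app shows an error after update", *model))  # technical
```

The point is not the algorithm's sophistication but the workflow: a pattern learned from existing records quietly takes over a repetitive judgment call, freeing people for the higher-level work described above.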
The debate is ongoing. Technologists such as Jaron Lanier argue that Weak AI tools are not truly intelligent and that the gap between Weak and Strong AI is larger than most predict. GigaOm CEO and publisher Byron Reese points out that we are still ignorant of some fundamental features of the human mind, and thus far from replicating them. At the same time, futurists such as Elon Musk and philosophers such as Nick Bostrom predict that the eventual emergence of Strong and Super AI is inevitable, and that this necessitates research into how to harness their power effectively and ethically. Moreover, huge strides in AI could be made without strictly replicating human cognitive functions.
Although most AI researchers believe that at least Strong AI will eventually be developed, there is wide disagreement on the timeline. Many speculate about the Singularity, a future event popularized by inventor Ray Kurzweil: a hypothetical moment at which humanity produces a Strong AI capable of improving itself, setting off a feedback loop of runaway technological progress, potentially including human-machine fusion.
Arguably, industries are already stepping beyond Weak AI in some domain-specific applications. For instance, some chatbots can already convince a human user that they, too, are human. This is a classic test of AI known as the Turing Test: whether, in the judgment of a human interlocutor, a system can pass for human.
Also, the next steps toward stronger AI may not require simulating the human brain and, if so, could be close to fruition. For example, it is possible that, in the near future, an AI system could determine which of an industry’s processes could be automated, and then proceed to automate them in one fell swoop. Such a system would appear to simulate the versatile self-training of a human brain without actually replicating it outright. This might be called “Pseudo-Strong AI.”