There is a term that everyone needs to become familiar with: AGI.
AGI stands for “Artificial General Intelligence”, and is meant to make a clear demarcation from the classical “AI” of the past 50 years. That “traditional” or “classical” AI was more like an expert system than today’s AI. Because of its specificity, it has also become known by the monikers “narrow” or “weak” AI. AGI, in contrast, is generalised in its application: it is not purpose-built for any specific endeavor, but rather capable of (and competent at) learning and mastering any endeavor towards which it is aimed.
A little more detail here on classical AI. Traditional AI has a reputation for performing certain “narrow” tasks with superhuman (better than any human alive) competency within the realm of a single domain or task (descriptions of this type of AI include “narrow”, “brittle”, “vertical”, “weak”, “specialized”, “purpose-built”, etc.). Of course, things like complex numerical calculation are “narrow.” [Important note: modern AGI candidates, trained into more human-like paradigms and behaviors, regularly and disturbingly make errant guesses where a simple $2 pocket calculator would have solved the same problem trivially.] AI solutions for games like Chess & Go are also narrow, and somewhat easy for us to grasp, given the rigid rules & logic that govern these game systems.
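To make that note concrete, here is a minimal Python sketch of what the $2 pocket calculator is doing (the numbers are arbitrary examples of mine, not from any benchmark): exact, deterministic arithmetic, a “narrow” problem that was solved decades ago.

```python
# Exact arithmetic: a "narrow" task, solved decades ago.
# Python integers are arbitrary-precision, so the result is
# deterministic and exact -- no guessing involved.
a = 123_456_789
b = 987_654_321
print(a * b)  # 121932631112635269, the same answer on every run

# A chatbot, by contrast, predicts the digits of its answer token by
# token, which is how it can confidently emit a near-miss product --
# the errant-guess failure mode noted above.
```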
But recently, AIs have been aimed at, and conquered, far more “fuzzy”, complex, real-world problems:
- make a medical diagnosis based on X-ray / CT / MRI scan data (this is called AI radiology).
- diagnose skin cancer from a simple photograph.
- calculate optimal insurance rates for auto- & medical-insurance policies for people based on all the particulars of their lives. (this, in corporate-speak, is called “risk assessment” and “risk management”)
- craft the best defense for a criminal case, citing the most applicable precedents and case law.
- optimize sub-second trading (“high frequency trading”, i.e. HFT) strategies for stocks & bond trading.
- and, of course:
  - drive a car safely from point A to point B.
  - teach robots how to walk on 2 feet.
  - optimize the victory margin of a nuclear war, in 2 scenarios: a) first strike, and b) retaliation. (SkyNet) (see: CyberWarfare) (isn’t that last one ironic?) (there is no victory in nuclear war. ever.)
It is of note that each of these applications, though impressive, is still deemed a “narrow” or “vertical” application. That is to say, the autopilot AI in your Tesla can’t play Chess, and the Wall Street trader-bot AI can’t diagnose an X-ray. So each of these AIs is “custom-designed” to solve a very particular problem set. For this reason, they are most often called “narrow” or “weak” AI.
Well beyond this, and some days, months, or decades into the future, a theoretical “AGI” or “strong” AI implies an AI that is adaptable to ANY task. This AGI, instead of being programmed towards a specific problem set or goal, has only a singular, elegant, all-encompassing meta-goal: to learn. And in doing so, to adapt to and master ANY task that it is pointed at.
So that’s AGI: Artificial General Intelligence. Generalised in that a single codebase / piece of software / “agent” can be aimed at any problem without retraining or recompiling. That’s not to say it will master a task on its first try (see: 40,000 years of training in 8 days: how 10 virtual years of faceplant failures leads to a platoon of lethal AI robot samurais). It is to say that it has the innate ability to learn and master new skills at a terrifying, utterly inhuman speed.
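As a thought experiment, here is a minimal sketch (in Python; every name in it is hypothetical, not any real system’s API) of what “one codebase, any problem” means at the interface level: nothing task-specific is exposed, only the meta-goal of learning from whatever environment the agent is pointed at.

```python
from typing import Any, Protocol

class Environment(Protocol):
    """Any task at all: a chess board, a robot body, a trading venue."""
    def reset(self) -> Any: ...
    def step(self, action: Any) -> tuple[Any, float, bool]: ...  # obs, reward, done

class GeneralAgent:
    """Hypothetical AGI interface. Nothing here names a specific task:
    the single meta-goal, learn(), is the entire public surface."""

    def act(self, observation: Any) -> Any:
        ...  # the policy lives here; the same code serves every task

    def learn(self, env: Environment, episodes: int) -> None:
        for _ in range(episodes):
            obs, done = env.reset(), False
            while not done:
                action = self.act(obs)
                obs, reward, done = env.step(action)
                # update the internal model from (obs, action, reward):
                # mastery comes from repetition, not from building a
                # new purpose-built system per task
```

No retraining, no recompiling: hand the same agent a different environment and call learn() again.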
That’s the goal. And that’s the theory. Now, this AGI moon-shot doesn’t come without its challenges. I’ll articulate merely a couple of them here.
Key Hurdles that a true AGI must clear
There are many technical challenges to birthing an AGI; you can read about the basic recipe here. But there are two aspects of the AGI requirement that are more philosophical, and thus of particular interest. It is implied (in certain, not ALL circles) that an AGI should have two additional qualities, beyond an ability to learn and a generalised competence.
Those being: natural language comprehension, and an innate common sense. I’ll go into a little detail on each of those, here:
a) an ability to understand natural human language — any language, any dialect, any and all slang and accents — including subtleties such as veiled metaphors, jokes, and symbolic references. (is this asking too much? I know many humans who take most spoken words literally, who are wholly immune to subtexts and symbolism… why should we suddenly expect AIs to be “seeing the meaning behind the words?”… but I digress.)
b) an ability to apply “common sense” atop its purely logical analysis of any given problem. This includes the interpretation of subtle, complicated, and confusing language structures.
Here, for instance, is a Q&A that confounds most modern AI (and AGI candidates) on both fronts:
Q: “Did you leave fingerprints?”
A: “I was wearing gloves.”
The answer does not directly answer the question with a Yes or No, but any reasonably intelligent human would immediately understand it as an implied “No.” Yet the logic chain behind that “common sense” deduction is somewhat complex: the gloves covered the hands, therefore the fingers, therefore making it impossible for the fingers to leave prints on some unmentioned surface. Easy for a human. Very abstract for a machine.
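For the curious, that chain can be written down as a toy forward-chaining program (a minimal Python sketch of mine; the facts and rules are hand-authored for this one riddle, which is precisely the problem: a human carries millions of such rules around for free).

```python
# Toy forward chaining over hand-written common-sense rules.
facts = {"wearing(gloves)"}
rules = [
    ({"wearing(gloves)"}, "covered(hands)"),
    ({"covered(hands)"}, "covered(fingers)"),
    ({"covered(fingers)"}, "no_fingerprints_left"),
]

changed = True
while changed:  # keep applying rules until nothing new is derived
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print("No" if "no_fingerprints_left" in facts else "unknown")  # -> No
```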
Another sentence, same type of problem:
“I didn’t put the trophy in the suitcase because it was too big.”
“What was too big, the trophy or the suitcase?”
Thanks, btw, to AI gadfly Gary Marcus, from whom these examples come (at least, in my imperfect memory), and whose favorite hobby these days appears to be proving that the current crop of ChatBots do not, in fact, have any form of common sense whatsoever.
The pure logic of the sentence makes the answer ambiguous and confounds many (most?) modern AIs. Human common sense, however, draws an obvious conclusion: only one of these objects can serve as a “container”, and we’re all familiar with large trophies and overstuffed suitcases.
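To see how much hand-holding that “obvious” conclusion actually takes, here is a toy Python sketch; the single hard-coded rule about containment is exactly the kind of world knowledge humans hold in endless supply and machines must be given explicitly.

```python
def resolve_it(containee: str, container: str, reason: str) -> str:
    """Resolve the pronoun 'it' in: 'I didn't put the <containee>
    in the <container> because it was <reason>.'"""
    # Hand-coded common sense about containment: when fitting fails,
    # "too big" blames the thing being put in; "too small" blames
    # the container.
    if reason == "too big":
        return containee
    if reason == "too small":
        return container
    return "ambiguous"

print(resolve_it("trophy", "suitcase", "too big"))    # -> trophy
print(resolve_it("trophy", "suitcase", "too small"))  # -> suitcase
```

Swap a single adjective and the referent flips, even though the grammar of the sentence is unchanged; that is exactly why pure logic finds it ambiguous.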
So those are two hurdles that may be the last components of AGI to be fully solved. Now, let’s look at some curious failures that are happening along the way. Specifically, we’re going to examine a subset of one of the biggest Achilles’ heels of modern AI: Accuracy, Hallucination, and, in the cases we’re about to examine, mathematical error and an inability to acknowledge that error (is that AI ego? we shall see…)
AI’s Got 99 Problems,
and Math is definitely One
…and so is accurately drawing hands, and rendering text that isn’t in Klingon glyphs… but for now, let’s take a deep dive into the curious case of the superintelligent AI that can’t even perform basic math calculations accurately…