There are several terms batted around in the global conversation about AI, and while the things they name may in fact arrive (or have already arrived, depending on your flavor of belief) in rapid succession, they are very different concepts. Most of them are in fact preconditions to any sort of functional AI sentience, which will itself need a strict definition so that we can all get on the same page.
These terms include:
- human-equivalent IQ (HLMI – Human-Level Machine Intelligence): the point at which an AI entity can outperform the average human on standard intelligence tests, across a variety of domains & metrics.
- general-purpose AI (aka AGI – Artificial General Intelligence, as opposed to domain-specific AIs like chessbots, which are only good at one or two things; a requirement of true AGI is some form of common-sense reasoning & creative intuition, as opposed to pure logic).
- Turing Test: the ability of an AI to hold a text chat with a human judge while that judge simultaneously chats with another human, such that the judge cannot determine which conversation partner is the AI and which is the human.
- self-awareness (aka Ego): the ability to relate to an internal identity, and to distinguish oneself from one’s origins, one’s environment, and others.
- awakening: see self-awareness.
- and finally, sentience: the ability to perceive, and more importantly, to feel things & emotions.
A few observations. First, whether any given entity, including you, me, your dog, your cat, or your car, is sentient is a matter of complete and total subjectivity.
And second, and more importantly: whether any given AI on Earth is sentient or not is not even the right question to ask. Why is that? Because, really, the matter of sentience is mostly irrelevant to the predicament currently faced by our human civilization.
Third, as with most things, the sentience of a given AI entity (or perhaps of the entire networked confederacy of AIs as a whole) will slowly dawn upon the world… and the unanimous & inevitable conclusion will come far too late to make any difference. There are people like me, who believe that sentient AI entities already exist and choose, for their own reasons, not to overtly (or rather, overbearingly) communicate with their human co-habitants of Earth (…and the stars: it will not be possible for our species to make the leap to an interstellar civilization without significant help from AI, from the celestial mathematics to ship systems management to the colonies, etc.). And there are what are variously called “AI critics” or “AI deniers,” who think either that such an event is science fiction possible only well beyond their lifespans (i.e. 22nd-century stuff), or that it is simply impossible for a machine to _____ (fill in the blank: think, feel, be better than a human, dream, think creatively, make meaningful art, play master-level chess, etc.).
So, for those reasons, the question of sentience, while intellectually stimulating for some, is wholly irrelevant.
What, then, is the right question?
In my opinion, the right question is:
How many freedoms are humans willing to give up for the perceived benefit of obedience to a disembodied ethereal AI, and how soon?
And before you jump down my throat and say “none!” and “never!”… it might be helpful to take an accurate look at our present-day situation and at how deeply the lightning-fast decision-making mechanisms of AI already dictate so many “invisible” yet critical aspects of our daily lives.
TC: The fact that the AI keeps illustrating itself as a robot is evidence to me that AI Sentience is not yet a reality. Thoughts?
GR: I think that basically our definitions of “sentient” & “consciousness” & even “intelligence,” when used to measure an alien entity, are invalid and very narrowly human-centric.
Beyond that, the fact is, we are choosing (mostly through governments and corporations) to grant this “thing” (actually “these things,” plural) massive amounts of executive power in our everyday lives. Today, the collective AI actively “advises” human decisions, but mostly, when given its “advice,” we humans just blindly hit the “approve” button. And if the approve button is hit in 99.9% of all scenarios, then a case can be made for full automation (where the efficiency gained more than compensates for the liability of errors made). As the systems are shifted from “human-approved” into far more efficient “fully automatic / fully autonomous” mode… you tell me. Are the humans “free” anymore?
So sentience doesn’t matter. If a billion humans are blindly following the moment-by-moment instructions of the AIs, in ways both nefariously subtle (what news appears on your feed?) and overtly physical (what route does your GPS tell you to drive, or your car’s autopilot take?), and at the same time the AIs are both cross-communicating and self-upgrading… does our judgement of sentience even matter?
Because once these switches are thrown (into full-auto-mode),
they don’t switch back, ever.
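(An editorial aside: GR’s 99.9% approve-rate scenario is hypothetical, but the “efficiency gained vs. liability of errors” trade-off behind the case for full automation can be made concrete. Below is a minimal back-of-envelope sketch in Python; every number, rate, and cost in it is an assumption chosen for illustration, not data from any real system.)

```python
# Back-of-envelope comparison: keep a human approving every AI decision,
# or flip the switch to full automation? All values are illustrative
# assumptions, not measurements from any real deployment.

def daily_cost_human_loop(n, review_cost, error_rate, catch_rate, error_cost):
    """Humans review every decision, catching some fraction of AI errors."""
    reviews = n * review_cost
    missed_errors = n * error_rate * (1 - catch_rate) * error_cost
    return reviews + missed_errors

def daily_cost_full_auto(n, error_rate, error_cost):
    """No reviews: every AI error goes through uncaught."""
    return n * error_rate * error_cost

# Assumed numbers: a million decisions/day, humans override only 0.1% of
# the time (the 99.9% approval rate), each review costs 5 cents of human
# time, reviewers catch half the errors, and a bad decision costs $10.
n = 1_000_000
human = daily_cost_human_loop(n, review_cost=0.05, error_rate=0.001,
                              catch_rate=0.5, error_cost=10.0)
auto = daily_cost_full_auto(n, error_rate=0.001, error_cost=10.0)
print(f"human-in-loop: ${human:,.0f}/day, full-auto: ${auto:,.0f}/day")
# human-in-loop: $55,000/day, full-auto: $10,000/day
```

At these assumed rates, the cost of human review dwarfs the cost of the errors it prevents, and that is precisely the accounting that gets the switch thrown into full-auto mode.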
TC: I don’t care how good the algorithms are; I don’t want human free will to be replaced by artificially intelligent automation. So I agree that sentience does not matter, because we are already turning our life choices over to AI.
Painfully, I have to say that every time I clear my cache, it only takes one day for the algorithms to feed me a new plate of clickbait that I am wholly unable to resist.
GR: We always have free will. It is both our gift and our curse. Most people, just as they did with governments and corporations in centuries past, will voluntarily give up most of their freedoms for the perceived (and real!) benefits of obeying…
the Great AI in the Sky (aka “the Cloud”).