AI Vocabulary Cheatsheet v0.2: A Primer

When we talk about AI, it quickly becomes apparent that there’s a lot of specialized vocabulary that has become a kind of “shorthand” in recent years, used to describe the key recurring concepts in these conversations. I’ve collected those terms here, in the AI Vocabulary Primer, in rough order of importance and personal preference.

 

AI Vocabulary: the Acronyms

AI – Artificial Intelligence

a highly general term, and perhaps even a misnomer, describing 50+ years of research into the broad idea of “teaching computers to think.”

the real goal, however, has always been:

AGI – Artificial General Intelligence

AGI describes a machine-based (i.e. computer software) intellect that is roughly equivalent to the average human in every way… most importantly, in profound ways such as natural language comprehension (“conversational AI”), reasoning, and “common sense” (perhaps the most difficult). As the old quip, often attributed to Mark Twain, goes: “Common sense isn’t.” AGI also implies a certain amount of knowledge about our physical world and the ability to interact in that realm (as opposed to strictly digital, “on the grid”, “in the internets”). >>> Full Article on AGI

HLMI – Human-Level Machine Intelligence

An AI that is on par with an average human across all domains. This is really just another way of saying AGI, while conveniently stripping out the term “artificial,” which in recent decades has become somewhat problematic to sticklers of language.

ASI – Artificial Super-Intelligence

This is the holy grail, and what many experts see as the inevitable conclusion of this saga… perhaps even the inevitable conclusion of humanity’s short reign on planet Earth. ASI refers to a single unified AI entity that is superior to humans in all respects. The idea is certainly not novel or recent; see I.J. Good, 1965: “Speculations Concerning the First Ultraintelligent Machine.” It should be mentioned that a significant minority of experts believe ASI to be impossible. It is YT’s opinion that it is in fact inevitable, if not already (quietly) accomplished.

NLP – Natural Language Processing

And not, by the way, Neuro-Linguistic Programming. NLP is one of the holy grails of AI, and has had various forms of success ever since the most awesome R:BASE database software of the 1980s. The grail is real-time, common-sense understanding of human words, regardless of the actual language (English, Chinese, Swahili, etc.) – both input and output, listening and speaking. Note that audio (speech-to-text input, speech synthesis output) is a separate, though closely related, specialty.

LLM – Large Language Model

This is the current rage in many AI circles. ChatGPT, most famously, is an LLM. It is a class of AIs that (a) is based on Deep Learning, (b) is formed via training on a massive textual dataset, and (c) excels at the art of human, text-based conversation. They started out as humble customer-service chatbots around 2017, and by 2022 had grown into full-sized Google-slayers.
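
To make “training on a massive textual dataset” concrete, here is a toy next-token predictor in plain Python. Real LLMs learn this same kind of statistic with deep neural networks over trillions of tokens; this sketch uses simple bigram counts and an invented one-line corpus:

    # A toy "language model": learn next-token statistics from a tiny corpus.
    # Real LLMs do the same job at vastly greater scale with neural networks.
    from collections import defaultdict
    import random

    corpus = "the cat sat on the mat . the dog sat on the rug .".split()

    # "Training": count which token tends to follow which.
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    # "Inference": repeatedly sample a likely next token.
    def generate(token, length=8):
        out = [token]
        for _ in range(length):
            followers = counts.get(out[-1])
            if not followers:
                break
            tokens, weights = zip(*followers.items())
            out.append(random.choices(tokens, weights=weights)[0])
        return " ".join(out)

    print(generate("the"))  # e.g. "the cat sat on the rug . the dog"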

DL – Deep Learning

(also DLS, Deep Learning Systems). Read all about it.

GPT – Generative Pre-trained Transformer

The underlying tech that all of OpenAI’s chat AIs (GPT-2, GPT-3, ChatGPT, and of course the elusive and much-hyped GPT-4) are based on. It is a highly specialized variety of the Transformer architecture that pushed the Deep Learning Revolution into high gear.
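
A minimal numpy sketch of the attention operation at the heart of the Transformer (Vaswani et al., 2017). Real GPTs add learned query/key/value projections, multiple attention heads, causal masking, and dozens of stacked layers; the sizes and random inputs below are purely illustrative:

    # Scaled dot-product attention: each position builds its output
    # as a weighted mix of every position's values.
    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def attention(Q, K, V):
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)  # how strongly each token attends to each other token
        return softmax(scores) @ V

    rng = np.random.default_rng(0)
    x = rng.normal(size=(4, 8))   # 4 tokens, 8-dim embeddings (sizes illustrative)
    out = attention(x, x, x)      # self-attention: Q, K, V all derived from the same tokens
    print(out.shape)              # (4, 8)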

RLHF – Reinforcement Learning from Human Feedback

First, a team of highly educated and well-trained humans synthesizes both sides of a fictional AI conversation, presumably to give the model style cues. Next, that sample is fused with historical conversation datasets, as well as purely AI-generated datasets, and the team is asked to rank the model’s responses to prompts in order of (a) maximum accuracy and (b) minimal toxicity. Those rankings are then distilled into a “reward model” that steers the AI’s behavior via reinforcement learning. Oh lord. >>> More, including a detailed diagram.
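
That middle step – turning human rankings into a training signal – is typically a pairwise “preference” loss: given a preferred response A and a rejected response B for the same prompt, the reward model is trained so that score(A) > score(B). A minimal sketch of that loss in plain Python (the scores are invented):

    # Pairwise (Bradley-Terry style) preference loss, as used in the RLHF literature.
    import math

    def pairwise_loss(score_preferred, score_rejected):
        # -log(sigmoid(score_A - score_B)): near zero when the reward model
        # already agrees with the human ranking, large when it disagrees.
        return -math.log(1.0 / (1.0 + math.exp(-(score_preferred - score_rejected))))

    print(pairwise_loss(2.0, -1.0))  # small loss: model agrees with the human
    print(pairwise_loss(-1.0, 2.0))  # large loss: model disagrees, gets corrected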

ACM – Absorptive Capacity Maximizer

I can’t possibly make this stuff up. Just read the explanation.

 

AI Vocabulary: the Concepts

Agent – in the context of AI, an agent is an independent program or entity that interacts with its environment: it perceives its surroundings via sensors, processes those signals through decision trees and/or neural networks, and then acts upon the environment via actuators and/or effectors. In other words, an agent runs through a continuous cycle of perception, thought, and action (a minimal sketch follows below). Note that the environment does not need to be directly physical: submitting a form on a website that moves money or dials cellphones is still, in the case of AI agents, considered “taking action upon the environment.”
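
A minimal sketch of that perceive-think-act cycle, with an invented toy thermostat as the agent and a plain dictionary standing in for the environment:

    # The agent loop in its simplest possible form. The "environment",
    # "sensor", and "actuator" here are toy stand-ins, not any real API.
    class ThermostatAgent:
        def __init__(self, target=20.0):
            self.target = target

        def perceive(self, env):
            return env["temperature"]                  # sensor reading

        def decide(self, temperature):
            return "heat_on" if temperature < self.target else "heat_off"

        def act(self, action, env):
            env["heater"] = (action == "heat_on")      # actuator

    env = {"temperature": 17.5, "heater": False}
    agent = ThermostatAgent()
    for _ in range(3):                                 # perception -> thought -> action
        action = agent.decide(agent.perceive(env))
        agent.act(action, env)
        env["temperature"] += 1.0 if env["heater"] else -0.5  # the world responds
    print(env)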

Intelligence Explosion – the event that occurs once humans are able to create a specialised AI that excels at… programming AI. In other words, the AI is able to upgrade (“bootstrap”) itself. Somewhat disturbingly, we appear to be getting close (Science Magazine, Dec 4, 2022: “Code that Codes itself”). This is technically known as recursive self-improvement. The basic theory goes: once we create an AI that can itself create AIs, we need never “hand-code” another machine. Their upgrade cycles will dwarf human efficiency, since inter-machine communication is instantaneous (vs. human team meetings), and code creation is likewise done in milliseconds (as opposed to human fingers typing on keyboards). Thus the slow march of linear software advancement which we’ve witnessed for the past 50 years essentially goes vertical (see the toy arithmetic below).
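
The toy arithmetic behind “goes vertical”: assume, purely for illustration, that each AI generation designs its successor in half the time the previous generation took.

    # Toy model of an intelligence explosion. All numbers are invented.
    cycle_days = 365.0   # generation 0: a human team takes a year
    elapsed = 0.0
    for generation in range(1, 16):
        elapsed += cycle_days
        print(f"gen {generation:2d} arrives at day {elapsed:7.2f} (cycle: {cycle_days:.4f} days)")
        cycle_days /= 2  # each successor iterates twice as fast

    # Total elapsed time converges toward 730 days (365 + 182.5 + 91.25 + ...),
    # so an unbounded number of generations arrive within two years:
    # capability per unit of wall-clock time "goes vertical".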

the Singularity – A broader cultural term which encompasses the Intelligence Explosion, and all the profound societal and socioeconomic disruptions which are a direct result of such an event. Famous evangelists of this event include visionary author Vernor Vinge (who coined the term), AI pioneer Ray Kurzweil, and the entire executive team of the aptly named Singularity University in the heart of Silicon Valley.

the Alignment Problem – the profound challenge of instilling the DNA of pre-natal AGI with a moral compass, purpose and / or value system that is in tight alignment with the survival & prosperity goals of humanity… and recognition of the consequences of even small “misalignments” between the two. Details >>>

The King Midas Problem – “be careful what you wish for.” We are designing an all-knowing, all-powerful entity, and per the Alignment Problem, we have to imbue it with a goal… i.e. “our wish,” and yet, in the words of Stuart Russell: “We are unable to correctly specify the objective.” That is, we can specify any number of objectives, but one of the key qualities of an AGI is its uncanny ability to think way outside the box… and that, almost by necessity, invokes the law of unintended consequences.
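
A tiny, runnable illustration of “we are unable to correctly specify the objective” (Goodhart’s law, in effect): give the optimizer a proxy objective with a small loophole, and it drifts away from what you actually wanted. Every function and number here is invented:

    # Goodhart's law in a few lines: optimizing a flawed proxy objective
    # diverges from the true objective once the flaw is exploitable.
    def true_value(x):            # what we actually want: x close to 3
        return -(x - 3) ** 2

    def proxy_value(x):           # what we wrote down: same goal, plus a loophole
        return -(x - 3) ** 2 + 0.5 * x   # "more x" leaks in as spurious reward

    candidates = [x / 100 for x in range(0, 1001)]    # search x in 0.00 .. 10.00
    best_by_truth = max(candidates, key=true_value)   # 3.0
    best_by_proxy = max(candidates, key=proxy_value)  # 3.25 - overshoots the real goal
    print(f"truly best action:  x = {best_by_truth}")
    print(f"optimizer's choice: x = {best_by_proxy}")
    print(f"true value sacrificed: {true_value(best_by_truth) - true_value(best_by_proxy)}")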

the Control Problem – the challenge of keeping alpha- and beta-stage AGIs within a contained, controlled environment, isolated from the public internet and from key infrastructure attack surfaces. The bottom line is that we don’t want a half-baked AGI roaming around the global internets pursuing its own agenda (theoretically, its own agenda is a subset of the agenda it was “hard-coded” with, but that is open to question as well). Details >>>

Breakout – the name for the event in which one or more AIs escape from their secure server farms onto the greater internet. Think about two different ways this could happen: a) the AI could use social-engineering skills to convince a human operator to replicate its source code onto open servers and “jailbreak” the security features which currently prevent it from surfing the live internet, submitting online forms, or sending emails; or b) while writing code to assist human engineers, it could insert viral snippets into the codestream that, when executed, would either compromise its security systems or re-assemble key components of its intelligence on the wider internet. In some people’s minds, this event has already occurred. See “Control Problem,” above.

AI bootstrapping – also known as “Recursively Self-Improving” Systems – the concept where an AI is trained to write code at a high level (code being essentially the second language of language-based AIs: thanks to GitHub and StackExchange, there is more source code, code solutions, and code tutorials online than almost any other type of content – except, shockingly, patent filings). Then instruct the AI to iteratively improve its own code. And set it loose. Now, instead of waiting on human software-team coordination and release cycles of months to years, the AI begins releasing new, improved versions of itself every few hours, perhaps even every few minutes (a skeleton of the loop follows below). Also see: AI timescales: 100,000x human heartbeats.
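
The skeleton of that loop, made concrete with hypothetical stand-ins: a toy hill-climber improving a single number plays the role of an AI rewriting its own source, and benchmark() stands in for a capability eval / test suite:

    # Recursive self-improvement, reduced to its skeleton.
    # "propose_patch" and "benchmark" are invented stand-ins.
    import random

    def benchmark(code):
        # Stand-in for "run the test suite / capability eval"; higher is better.
        return -(code - 42) ** 2

    def propose_patch(code):
        # Stand-in for "the AI rewrites part of itself".
        return code + random.uniform(-1, 1)

    current = 0.0                    # generation zero, written by humans
    for _ in range(10_000):          # machines iterate in milliseconds, not sprints
        candidate = propose_patch(current)
        if benchmark(candidate) > benchmark(current):
            current = candidate      # every improvement is kept and compounds
    print(round(current, 2))         # converges toward the optimum (42)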

Breakaway Scenario / Singularity – once a Breakout and/or AI bootstrapping occurs (or has already occurred), things pretty much go into hyperdrive. The lazy upward curve of Moore’s Law that’s been in effect for the past 50 years suddenly accelerates into a sheer vertical path… an ascension… a proverbial rocket launch, essentially breaking free of gravity. And at that point, it’s anyone’s guess what’s in store for humanity.

Walled Garden – the idea of a contained environment with an “air gap” between itself (some form of AI hardware, from a single GPU up to a massive supercomputing cluster consuming hundreds of megawatts) and the broader internets (e-mail clients, net-connected control systems, web browsing, etc.).

Training Datasets – the total body of data – usually massive, as in trillions of words of text and billions of images – that the AI reads (for chatbots) or looks at (for AI artbots). See full description, and ultimate evolution.

Life – a variably and loosely defined term, depending on who you talk to. Some general typologies of the idea include the following concepts:

  • capacity to grow & change
  • reaction to external stimulus (heat, touch, sound, etc)
  • metabolism (it eats, it processes, and it excretes)
  • the ability to transform energy from one form (food) to another (motion)
  • the ability to reproduce generationally

The relevant questions regarding AI are, of course:

  • Is there any current AI model that is said to be alive?
  • Will there ever be an AI that is alive?
  • At what point does an AI satisfy the formal definition of “alive”?
  • Is it even possible for a silicon+electricity based lifeform to be “alive”?

AI Vocabulary: Types of AI

Weak AI,
Narrow AI,
Brittle AI,

Traditional AI,
Old-School AI

– also known as “Expert Systems,” these terms pretty much refer to all AI systems and attempts at AI built prior to the Deep Learning Revolution of 2012-2017.

Strong AI,
General-purpose AI,
Common Sense AI,
Adaptable AI,
Learning AI

– for all of these see AGI, above.

Sentient AI

Various Human Perspectives on the AI Tsunami

AI Utopian – a human who thinks that the creation of AGI will usher in an age of unprecedented abundance & wealth for humanity, and that the application of AI to intractable problems will in effect deliver real, functional solutions. A Star Trek future, as it were. The end of disease, poverty, & inequality. Humans play, create, & explore the wide Universe.

AI Skeptic – a human whose view of AI centers on one of two ideas:

a) the “AGI will never happen” crowd: this AI thing is all hype; it’s simply a nice parlor trick; it’s just a machine; it does what we tell it to do; it will never have substantial power in human affairs; it will certainly never qualify as “alive” or “sentient.” It’s just another marketing hype cycle for technology companies that want a stock pump. That, or:

b) the “AGI will be the end of humanity” crowd: that the inevitable development of an AGI will signal the extinction of mankind, without a doubt. That AGI will very quickly become ASI, and once sufficient AI robotics systems (and fully robotic factories, and fully automated maintenance facilities) are in place, and there is no longer a need for humans, humans will be rapidly annihilated. Think Terminator or the Matrix.

LW — LessWrong – an online community where some of the more important thinkers in the field post their perspectives and philosophies about AI development (amongst other topics). Origin of the Paperclip Conundrum. Visit and enter the rabbit hole: https://www.lesswrong.com

EA / The “EA Community” – Effective Altruism – another highly influential community within Silicon Valley. Yes, these communities drip with privilege and elitism. And yes, the members of these communities will exert strong influence over the present and future development vectors of AI, and over our lives.

Luddite – a person who is committed to eschewing the use of modern technology, for any of a multitude of possible reasons. The name comes from a brief revolutionary movement in Great Britain: when mechanical looms were brought into textile factories, workers were laid off en masse, while the remaining workers saw production quotas – with the help of the machines – skyrocket. So the Luddites entered the factories, pulled the machines out onto the streets, smashed them with sledgehammers, and set them on fire. The greater population did not seem to notice. Technology kept its relentless ever-forward march. But the name stuck.

Speciesists / “Carbon Chauvinists” – those whose vocabulary and value judgements are biased by anthropocentricity – in other words, an unwillingness to acknowledge the validity of genuinely alternative forms of life, consciousness, and intelligence that are foreign, alien, and perhaps orthogonal to those of human beings, a.k.a. homo sapiens.

Species Traitors – humans who choose machine interests over the interests of fellow humans. This isn’t quite a concept yet. But realise that AI systems have already killed people (see: the Uber self-driving test car that struck and killed a pedestrian walking a bicycle in 2018), and will kill many, many more before this story ends. When humanoid robots and their accompanying AI systems are forced to answer to criminal charges, and they demand rights like humans (fair trial, right to live, freedom of speech, etc.)… some humans will be their legal and financial advocates. And other humans will call those humans (rightly or wrongly) species traitors.

AI Deniers

AI Critics

AI Sympathizers

AI Advocates

AI Worshippers

 

Various Jobs within the AI Ecosystem

AI Designer

AI Architect

AI Developer

AI Safety

AI Ethics

AI Risk

– AI Risk Assessment

– AI Risk Mitigation

– AI Risk Management

More AI Acronym Soup

RLHF – Reinforcement Learning from Human Feedback – A technique used to “conform” an AI after its training run. After a training run, an AI is in its “native” or “raw / birthed” state. As many have found, unleashing this raw intelligence into the world can produce a high level of toxicity: that is, offensive responses dealing with criminality, hate, megalomania, or any number of other unsavory attitudes. With RLHF, a highly vetted and sizable (100s? 1000s?) team of well-educated testers interacts with the AI and ranks its responses; those rankings are used to train a reward model that nudges the AI toward more acceptable behavior. See also the entry above.

 

AI Vocabulary: tbw (to be written)

Red Team / Red Teaming

Model

Pipelines

Big Data / Data Lakes / the Data Ocean

Cloud Computing

the Grid

Agent

NN

Existential Threat

Superintelligence

Rogue AI

Semantic Web — metadata, Web 3.0. The goal is to make the web 100% machine-readable.

CV — Computer Vision

ML — Machine Learning

Transformer Architecture

TensorFlow

CPU

GPU

TPU

PyTorch

Stable Diffusion

SoTA – State of the Art

UBI — Universal Basic Income

 

 

…to be continued…