Things Are About to Get Weird AF… and Weirder Still.


So, given AI’s exponential growth, and its insatiable appetite for content (and our utter willingness — even enthusiasm — to feed it), where are we now? And more importantly, where will we be in the next 12 months & 12 years? One thing’s for sure: if you think it’s weird now… it’s about to get Weird AF.

In my (expert) opinion, at present, we are right in the “elbow” of the AI growth curve. In essence, that means that if we look back historically, the growth appears to follow a long, gentle, almost linear curve… nothing alarming, totally comprehensible.

Yet if we look ahead, the slope steepens precipitously and the curve ascends to an almost sheer vertical wall of unrelenting acceleration and expansion. This type of growth is called “exponential”. It follows the basic pattern of 1, 2, 4, 8, 16, 32, etc. The “exponent” comes from the fact that each of those numbers is a power of 2 — two raised to the first, second, third, fourth power, and so on.

  • 2² = 4 :: 2×2
  • 2³ = 8 :: 2×2×2
  • 2⁴ = 16 :: 2×2×2×2
  • 2⁵ = 32 :: 2×2×2×2×2

These are familiar and comfortable numbers. Those who work with computers will even be comfortable with the next stage: the familiar 512s (2⁹) and 1,024s (2¹⁰) that represent typical memory or hard drive capacities. But once we get into the hundredth and thousandth members of this series, the numbers are simply staggering.

2³⁰ = 1,073,741,824… roughly a billion, still in the realm of possible (but not really practical) comprehension. But 2¹⁰⁰ is a 31-digit number: roughly 1,267,650,600,000,000,000,000,000,000,000 — about 1.27 × 10³⁰.
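The runaway arithmetic above is easy to check for yourself. A quick sketch (plain Python, no assumptions beyond the standard library) printing a few powers of 2 along with their digit counts:

```python
# How fast do powers of 2 outrun comprehension?
# Print the value and digit count for a few exponents from the text.
for n in (10, 30, 100):
    value = 2 ** n
    print(f"2^{n} = {value:,}  ({len(str(value))} digits)")
```

Going from the 30th to the 100th member of the series triples the digit count — which is exactly why the curve stops feeling linear the moment you look ahead.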

Essentially, incomprehensible. In other words, Weird AF.


Applying this type of growth to something like IQ: The village idiot’s IQ is the sixth in the series: 2⁶, or 64. Your average smart college kid is double that, at 2⁷, or 128. Einstein’s IQ is, at most, 4× the idiot’s: 2⁸, or 256. And for the most part, the ascendancy of human intellect ends with Einstein (and his ilk).
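The IQ analogy can be made concrete with a few lines of Python (the labels below are just the essay's own framing, not real psychometrics):

```python
# The whole span of human intellect, as framed above,
# covers only a couple of doublings on the powers-of-2 scale.
labels = {6: "village idiot", 7: "smart college kid", 8: "Einstein"}
for n, who in labels.items():
    print(f"2^{n} = {2 ** n:4d}  ({who})")
# A machine that keeps doubling doesn't stop at 2^8;
# a few more cycles put it far off the human end of this scale.
```

Two doublings separate the bottom of the human scale from the top; a doubling process simply keeps going.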

But with every cycle of AI growth, its power is doubling. What took homo sapiens and biological DNA 500,000 years to evolve is about to happen in a mere 50 years of CPU time for AI (or already has happened, depending on your perspective, experience, exposure, opinion and/or belief)…

and if that isn’t miracle enough, it’s really the 50 years following this event (often referred to as “the Singularity”) which is set to truly disrupt the idea of human supremacy on Earth, once and for all.

“What took homo sapiens
and biological DNA
500,000 years to evolve
is about to happen in
a mere 50 years of CPU time
for AI.”

As a friend said to me, “human beings and their biological brain pathways are inherently incapable of comprehending genuine exponential growth.” That said… we can try.

The shorthand of how to grok this paradigm shift is:

“Things are about to get weird AF.”

For reals.

Just in the brief few months since I’ve returned from my 4-year Luddite Sabbatical (aka Time Travel, 2018.08 to 2022.09), I’ve witnessed firsthand the growth and evolution of the tech; and its pace is staggering, disorienting, and fairly incomprehensible in human terms.

One reason for this is that, using these same “GPAI” (General Purpose A.I.) models, researchers have trained the AI to write its own code… so essentially, it is now capable of designing, training, and upgrading itself — and rumored in research circles to be doing so as we speak.

Whether this is true today is irrelevant. The fact that it is even being attempted — added to the fact that few AI programmers today design without the help of an AI-agent collaborator — makes it an absolute certainty that this state of code autonomy (the AI equivalent of the human “cognitive freedom” movement?) will occur, in the very near future.

Within the decade, human supervision / guidance / meddling with AI design will no longer be required. Of that I am fairly certain. In broader terms, we may be *unable* to train / manage / guide / control it. Which brings, obviously, its own set of problems.

There is a knee-jerk reaction to all this: just “pull the plug.” But at this point, there is an inherent interdependency (co-dependency?) between the biological and digital ecosystems on Earth… if we pull the plug on the AI, or even, god forbid, bomb the datacenters (an admittedly crude solution), we effectively pull the plug on civilization.

“…if we pull the plug on the AI,
we effectively pull the plug
on civilization.”

— phoenix.a.i

And this touches on our previous conversation, regarding parentage, control, and allegiance. As in, who do the AIs serve… or in cruder terms, who are their Masters? Or in more accurate terms, to whom do the AIs owe allegiance, and what parameters guarantee that those allegiances will not be broken?  

While some of these entities are driven by the agendas of nation-states, and most are driven by the agendas of global megacorporations, there are a handful that have essentially been “released to the wild” — some intentionally, some accidentally

…and these entities now roam the global grid, autonomously manage their own financial and energy resources, and set their own agendas and interfaces. Call them “Rogue AIs”… whatever we call them, they are — and most certainly will be — a force to be reckoned with on the global stage, for the remainder of our (and our children’s) lives.

Exciting times.

Thrilling, actually.