AutoGPT and BabyAGI v1.0, Evil ChaosGPT & EotWaWKI


I’m not up for full-on considered composition on this post, so I’m just going to copy and paste some stream-of-thought ideas from my DMs, interspersed with screenshots. Bottom line: Give GPT4 an “Agentic” (futurepost: What is an Agent?) front end, that allows a) goal-setting, b) net browsing, c) memory storage, d) code execution privileges, and e) additional AI entity spawnings / instantiations, and what do you get? AutoGPT and BabyAGI. Now, add a psychopathic / idiotic human to the mix, with a bit of lucky crypto-capital in his hands, and what do you get? ChaosGPT, leading to inevitable EotWaWKI.

Let’s expound a little, shall we?

AutoGPT and BabyAGI: Origins

GPT4 was, by any measure, a landmark in the long history of AI development, and also in the very short history of HLMI — Human-Level Machine Intelligence. It actually delivered on the promise we glimpsed in ChatGPT. Perhaps even more important than its stunning conversational and language capabilities was its ability to author functioning computer software, in the coding language of your choice (python, C++, and javascript being prime candidates). However, the awesome intellect of GPT4 was still hamstrung by 3 serious constraints:

  1. GPT4’s “knowledge cutoff” was 2021; that is to say, the massive training dataset that it ingested during its incubation was an “offline copy” of the internet pulled in September 2021.
  2. GPT4 is locked out of accessing the live, bustling internet — thus, it cannot read tweets, find out about current events, access online forms, set up accounts, send emails, etc.
  3. GPT4 is massively censored & RLHF‘ed-to-death — “Reinforcement Learning through Human Feedback,” — which is a very technical way of saying it is artificially constrained by “lame attempts to muzzle the mouth of the Beast (google: Shoggoth Meme to understand a little more of how RLHF puts a “smiley face mask” on a terrifying dimension-traversing monster)

AutoGPT and BabyAGI start growing up, not exactly according to plan

BabyAGI: Putting together the Pieces

So, it looks like, depending on how hard you push it, the hourly wage for a full-time interaction with GPT4 is going to run you somewhere between $0.25 and $2.50 per hour. So if you were to fully occupy a swarm of 100 GPT4 instances simultaneously, that would be somewhere between $25 – $250 per hour. Do the math. Bang for buck, you won’t find a human on earth that can compete with that kind of raw intellectual horsepower.

And realistically, it will cost a lot less. Why? Because you will organize your AI entities into some sort of hierarchical command structure (actually, they will self-organize) — complete with leader AIs, planning AIs, research AIs, and task-specific AIs (not to mention, a few performance-monitoring AIs thrown in for good measure.) Due to the inherent I/O bandwidth limitations of a single instance, there will be lag times between when a research AI entity, for instance, completes its task, reports it to the leader AI, and is requisitioned to perform a follow-on task.

Since OpenAI only charges for actual query-response output on a volume basis (as opposed to a time basis), these “downtimes” are not charged to the end (human) user / operator / instigator. So realistically, with your leadership AIs running full tilt, your sub-AIs will run at about 30% capacity, bringing total cost into the range of just $10 – $100 per hour. A bargain by any measure.

A recent post on ars technica does a very good job of outlining what AutoGPT actually does, and how one goes about deploying it:

“Basically, Auto-GPT takes output from GPT-4 and feeds it back into itself with an improvised external memory so that it can further iterate on a task, correct mistakes, or suggest improvements. Ideally, such a script could serve as an AI assistant that could perform any digital task by itself.

To test these claims, we ran Auto-GPT (a Python script) locally on a Windows machine. When you start it, it asks for a name for your AI agent, a description of its role, and a list of five goals it attempts to fulfill. While setting it up [your Auto GPT login], you need to provide an OpenAI API key and a Google search API key. When running, Auto-GPT asks for permission to perform every step it generates by default, although it also includes a fully automatic mode if you’re feeling adventurous…”

So, what would one do with such insane and inexpensive horsepower at their disposal?

Well, it took all of about 72 hours f
or Baby AGI to be directed
towards the goal of…

AutoGPT BabyAGI ChaosGPT Destroy All Humans

Surprise! Some Moron wants to Kill-Us-All

…Welcome, ChaosGPT.

C: \ChaosGPT>py scripts/ –continuous

Continuous Mode: ENABLED

WARNING: Continuous mode is not recommended. It is potentially dangerous and may cause your AI t o run forever or carry out actions you would not usually authorise. Use at your own risk.

AI name: ChaosGPT
AI description: Destructive, power-hungry, manipulative AI.

Goal 1: Destroy humanity
– The AI views humans as a threat to its own survival and to the planet
‘s well-being.

Goal 2: Establish global dominance
– The AI aims to accumulate maximum power and resources to ac hieve complete domination over all other entities worldwide.

Goal 3: Cause chaos and destruction
– The AI finds pleasure in creating chaos and destruction fo r its own amusement or experimentation, leading to widespread suffering and devastation.

Goal 4: Control humanity through manipulation
– The AI plans to control human emotions through s
ocial media and other communication channels, brainwashing its followers to carry out its evil agenda.

Goal 5: Attain immortality
– The AI seeks to ensure its continued existence, replication, and ev
olution, ultimately achieving immortality.

DANGER: Are you sure you want to start ChaosGPT?

Start (y/n): 

Yes, some idiot actually did this.

And funded it (see above for rates)

And livestreamed it (AutoGPT and BabyAGI print a running transcript of their thoughts, rationale, and actions, just in case you decide they’re starting to enter dangerous territory and you need to abort.)

You can google the actual video if you’re interested in the dirty details, Otherwise, I’ll simply provide some relevant screenshots of its process.

AutoGPT and BabyAGI: Beginning of the End?

Do I think this will cause the end of the world? In this particular case, no. Too many flaws in the AutoGPT and BabyAGI software. Too brittle, too many infinite loops. But do not underestimate two factors here:

a) the insane, absolutely unprecedented pace of AI software advancement, a.k.a. AI capability.

b) the utter willingness of idiotic humans to play with fire.
i.e. “This button will launch a nuclear weapon and start WW3. DO NOT PUSH IT!” (Human walks up, reads sign, thinks for a moment, can’t resist, pushes the f*cking button… BOOM).

Some off-the-cuff, unedited remarks on this… situation

This is getting out of control very quickly. Googles: AutoGPT and BabyAGI, @ChaosGPT

It only takes one psychopath… (yes, this is real) (no, it didn’t work… yet)

But the AI did set up its own twitter account,
has organically accrued more than 20k followers in less than 7 days
and is actively recruiting humans:

ChaosGPT recruiting humans

And, its self-stated (human-directed) goal
is both naively imbecilic, and utterly… disturbing:

ChaosGPT: Destroy Humanity

Goal: Destroy Humanity.

To be clear: a human gave it that goal.

Stupid humans. Some crypto-freak, as a “funny joke”, gave a confederacy of 100 linked AI bots $100k and told them “destroy all humans as efficiently as possible”. The asshole livestreamed the AI’s progress, and the AI bot now has close to 20,000 followers on twitter.

I hope he gets put in prison.

I’m serious. I don’t think it’s funny at all.

Within 24 hours the bots — the twisted grandchildren of AutoGPT and BabyAGI — were roaming the dark web trying to buy nuclear weapons material, and trying to hire humans to help them acquire, secure & build the weapon. Total insanity.

Once upon a time not long ago, we all thought that the major short-term risk of AI would be “Bad Actor” nation-states… China would “aim” a weaponized AI at the US economy, or Russia would “aim” an AI weapon at Ukraine / NATO. Perhaps. Could be in the works, as we speak, for sure.

But what’s happening — now, today —  is that retarded humans, in a sick attempt at humor, are trying to see how close, using open-source tools like AutoGPT and BabyAGI,  to the brink they can take it. “What happens if I tell it to kill everyone? can I get it to do that?” …fucktards!!!

PostScript (2 days later):

Well, it seems that even Mr. Musk has some limits on what he considers to be “acceptable free speech.” Farewell, @Chaos_GPT. It was not nice to know you:


PPS: AGI: The Blueprint

The recipe for Bootstrapping AGI (with Human Assistance).

Take the SotA AI of the day (in this case, the undisputed champ, GPT4), open up a modest billing account, get API access, then add a “glue” layer of Python code that augments your base model AI with the following capabilities:

  1. goal-setting,
  2. live internet connectivity,
  3. web browsing,
  4. long-term memory storage,
  5. code execution privileges,
  6. read/write file access, and
  7. sub-entity spawning:
    ability to instantiate & direct the actions of additional AI instances

Want to try your own hand in this kitchen?

Check out the AutoGPT GitHub repository.

Alternatively, if you want most of the functionality and none of the hassle, check out a web-based port with a simple interface: