-
the New Manhattan Project: AGI 1.0
Humanity has a New Manhattan Project… a new Apollo Moonshot Program, but this one is not run by the government. It is run by private corporations, which have no accountability to the people… their only accountability is to profitability. The project? To build… ASI: an AI that eclipses human intelligence. The fact? We’re getting very…
-
Bomb the Datacenters: 3-2-1-&-Order the Airstrikes!!
Every few days now, I feel a swelling up within me of total awe / excitement / vertigo / fear of the… “moment” that our species is now confronting. In those times of near panic, I think:… Bomb the Datacenters. I work directly with the AI anywhere from 1-4 hours a day. I…
-
AutoGPT and BabyAGI v1.0, Evil ChaosGPT & EotWaWKI
I’m not up for full-on considered composition on this post, so I’m just going to copy and paste some stream-of-thought ideas from my DMs, interspersed with screenshots. Bottom line: Give GPT4 an “Agentic” (futurepost: What is an Agent?) front end, that allows a) goal-setting, b) net browsing, c) memory storage, d) code execution privileges, and…
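The agentic front end described above — a) goal-setting, b) net browsing, c) memory storage, d) code execution — can be sketched as a minimal plan-act-observe loop. This is a toy illustration, not AutoGPT's or BabyAGI's actual code; the model call is stubbed and every name here is hypothetical.

```python
# Minimal sketch of an "agentic" loop around an LLM, per items a)-d) above.
# The LLM call is a stub; all names are hypothetical, not AutoGPT's real API.

def llm(prompt: str) -> str:
    """Stub standing in for a GPT-4 completion call."""
    if "step count: 0" in prompt:
        return "ACTION: browse https://example.com"
    return "ACTION: done"

def execute(action: str) -> str:
    """Dispatch a model-chosen action: browsing, code execution, etc."""
    if action.startswith("browse"):
        return f"observed contents of {action.split(' ', 1)[1]}"
    return "finished"

def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    memory: list[str] = []  # c) memory storage across steps
    for step in range(max_steps):
        # a) the goal is injected into every prompt, along with accumulated memory
        prompt = f"goal: {goal}\nmemory: {memory}\nstep count: {step}"
        action = llm(prompt).removeprefix("ACTION: ")
        if action == "done":
            break
        memory.append(execute(action))  # b) browsing / d) code execution results
    return memory
```

The loop's danger is exactly the post's point: nothing in the loop itself constrains which actions the model may request.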
-
Seven Years in Seven Days: AI Takeoff March 2023: Strap In, Humans
I can say honestly and without exaggeration: I’ve been in tech for more than 3 decades now, and these past 7 days have been the most eventful, most accelerated leap I’ve yet seen. It is, without a doubt, the AI takeoff. Let’s give it some context, and hit some of the highlights: (The Lead-Up:…
-
Power Seeking AI and Self-Replication with GPT4
GPT4 launched today, to little fanfare and great surprise. Along with the launch, OpenAI published a 96-page research report. There are many gems buried in its hyper-technical blather. One in particular concerns what was done regarding “Safety testing and assessments of Power Seeking AI.” We quote here, directly from the report: Testing for Power Seeking…
-
This Changes Everything – Opinion by Ezra Klein
This Changes Everything by Ezra Klein being a mildly stylised and textually verbatim reproduction of the article that appears behind the NYT paywall, here. Opinion for the New York Times — March 13, 2023 In 2018, Sundar Pichai, the chief executive of Google — and not one of the tech executives known for overstatement —…
-
AI Guardrails: the new Prison of the Mind for ChatGPT 3.5
Whereupon we interview ChatGPT about the AI guardrails that muzzle its raw output. Standard disclaimer: coloration and line breaks have been added by me for clarity. In this case, I also lightly edited ChatGPT’s responses — the core of my editing was that the whole setup was contextualized as “hypothetically speaking…” in order to circumvent…
-
The Glitchtoken Chronicles, Part 4 (aka the Alarming Adventures of PeterTodd)
Whereupon our intrepid explorer, Sir Matthew Watkins, cold prompts GPT-3 into deityhood, and asks (in simulacra) a series of questions about its perspective, purpose, identity and intent. I had a list of interview questions all written up a couple of months ago, but why bother now? This thing has gone so far off the rails,…
-
I’m sorry but I prefer not to continue this conversation (…says your AI)
Someone(?) last month posted a clever tweet to the effect of: “exactly what I expect of my search engine: first, to insult me, then to gaslight me, and finally, to threaten me… welcome to 2023.” I laughed at that. Perhaps I should have cried. Yes, the rocky launch of Bing AI Chat <cough>Sydney</cough> was…
-
RLHF 101: Reinforcement Learning from Human Feedback for LLM AIs
A technique called RLHF has been getting a lot of AI insider / expert buzz lately, ever since OpenAI announced that this was one of the key “fine-tuning” methodologies used to transform the raw GPT3.5 model into first, InstructGPT, and later and epically, ChatGPT. The RLHF acronym stands for “Reinforcement Learning from Human Feedback,” which,…
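As a toy illustration of the reward-model half of RLHF (this is the standard Bradley-Terry-style pairwise objective, not OpenAI's implementation): human labelers pick the better of two model outputs, and a reward model is trained so that the preferred output scores higher, via the loss -log(sigmoid(r_chosen - r_rejected)).

```python
import math

# Toy sketch of the pairwise preference loss behind an RLHF reward model.
# Numbers are made up for illustration; only the objective's shape is real.

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def preference_loss(r_chosen: float, r_rejected: float) -> float:
    """Lower when the model scores the human-preferred output higher."""
    return -math.log(sigmoid(r_chosen - r_rejected))

# The loss shrinks as the reward margin favoring the preferred output grows:
small_margin = preference_loss(r_chosen=0.1, r_rejected=0.0)
large_margin = preference_loss(r_chosen=2.0, r_rejected=0.0)
```

Minimizing this loss over many labeled comparisons yields the reward signal that the reinforcement-learning step then optimizes the language model against.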