The AI Manifesto 2023: Sam Altman’s Bold Demand That You Wake Up

The AI Manifesto, by Sam Altman

Just last Friday, Sam Altman, the visionary CEO of OpenAI, published a long blog post on the OpenAI website. As with most (not all!) of Sam’s posts, it is well thought out, forward-looking, and clear… and a bit shocking. To my mind, it reads more like a genuine AI Manifesto than just another thought piece.

Top line:

  • Call for top-level international governance due to:
    • massive drains on the global energy supply for AI training runs
    • massive consumption of global grid compute resources
  • Warning of massive “job displacement”
    • (normal people call this “unemployment”)
  • Foreshadowing of “emergency situations”
    • i.e. misaligned AI gone sideways
    • where OpenAI intentionally defaults on its (now $13 billion and counting) investor debt

I’m reposting this in its entirety — with commentary — for two reasons:

a) I feel that (on a careful reading) its content is so profound, such a clarion call, that it merits the broadest possible distribution.

b) I have a small yet well-founded fear that it is just SO radical, SO on point, that OpenAI will have second thoughts and either i) pull it from the website entirely, or ii) subtly water it down under capitalist and/or political pressure.

With that said, here it is, in commented excerpts (Feb 24, 2023).


Sam Altman lays out the AI Manifesto:

Altman: “In particular, we think it’s important that society agree on extremely wide bounds of how AI can be used, but that within those bounds, individual users have a lot of discretion. Our eventual hope is that the institutions of the world agree on what these wide bounds should be; in the shorter term we plan to run experiments for external input. The institutions of the world will need to be strengthened with additional capabilities and experience to be prepared for complex decisions about AGI.

…it’s important that the ratio of safety progress to capability progress increases.

…Third, we hope for a global conversation about three key questions:

  1. how to govern these systems,
  2. how to fairly distribute the benefits they generate, and
  3. how to fairly share access.

…we have attempted to set up our structure in a way that aligns our incentives with a good outcome. We have a clause in our Charter about assisting other organizations to advance safety instead of racing with them in late-stage AGI development. We have a cap on the returns our shareholders can earn so that we aren’t incentivized to attempt to capture value without bound and risk deploying something potentially catastrophically dangerous (and of course as a way to share the benefits with society).

We have a nonprofit that governs us and lets us operate for the good of humanity (and can override any for-profit interests), including letting us do things like cancel our equity obligations to shareholders if needed for safety and sponsor the world’s most comprehensive UBI experiment.”

AI Manifesto: OpenAI Structure

GR commentary:

Link: detailed explanation of OpenAI’s novel corporate structure, and its purported plans for a global UBI based on predicted massive wealth generation (13-figure profits, i.e. trillions of USD in projected profit)

brief explanation:

GR: This is one of the more radical components of OpenAI’s setup, and it makes me believe in their integrity and vision, as hinted at in this AI Manifesto.

Not only is OpenAI built on a novel capped-profit structure, but the board can essentially call “emergency!” and cancel all equity obligations (i.e. debt to investors) in the case of an AI apocalypse: a misaligned AGI that threatens imminent harm to humanity. (Q: are we rapidly approaching that? Link: Microsoft / OpenAI’s “Sydney” makes open threats against the humans it deems to be its “enemies”.)

The rough payout order (a minimal code sketch follows this list):

  • limited cap on profits
  • pay back investors, up to that cap
  • compensate employees (AGI performance bonus)
  • create & distribute a global UBI fund
    (i.e. “what to do when you have $10 trillion in cash”)
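GR: To make the capped-profit waterfall concrete, here is a minimal sketch in Python. The 100x return cap is the figure OpenAI publicly described for its first-round investors; everything else here (the function name, the payout ordering, the employee bonus share, and the `emergency` flag) is my illustrative assumption, not OpenAI’s actual mechanism.

```python
# Illustrative sketch of a capped-profit payout waterfall.
# Assumptions (mine, not OpenAI's): a single 100x return cap
# (the publicly described first-round figure), a flat employee
# bonus share, and everything above the cap flowing to a
# nonprofit/UBI fund. The `emergency` flag models the board's
# stated power to cancel equity obligations for safety reasons.

def distribute_profits(profit: float,
                       invested: float,
                       return_cap: float = 100.0,
                       employee_bonus_share: float = 0.10,
                       emergency: bool = False) -> dict:
    """Split `profit` among investors, employees, and a UBI fund."""
    if emergency:
        # Board declares an emergency: equity obligations are
        # cancelled, so investors (and employee bonuses) get nothing.
        return {"investors": 0.0, "employees": 0.0, "ubi_fund": profit}

    # 1. Investors are repaid, up to the capped return (e.g. 100x).
    investor_payout = min(profit, invested * return_cap)
    remaining = profit - investor_payout

    # 2. A slice of what's left compensates employees (AGI bonus).
    employee_payout = remaining * employee_bonus_share
    remaining -= employee_payout

    # 3. Everything else funds the global UBI pool.
    return {"investors": investor_payout,
            "employees": employee_payout,
            "ubi_fund": remaining}


# Example: $10 trillion in profit on ~$13 billion invested.
# The 100x cap binds at $1.3 trillion; the bulk flows to UBI.
print(distribute_profits(profit=10e12, invested=13e9))
```

The point of the sketch is the shape, not the numbers: once returns are capped, runaway profits have nowhere to go except back to the nonprofit mission, and the emergency lever zeroes out investor claims entirely.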

back to Altman:

“At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models. We think public standards about when an AGI effort should stop a training run, decide a model is safe to release, or pull a model from production use are important. Finally, we think it’s important that major world governments have insight about training runs above a certain scale.”

[Photo: Sam Altman. Credit: David Paul Morris/Bloomberg via Getty Images]

Altman & the New York Times

GR: do you hear this, White House? Biden? United Nations?

Here’s a loose paraphrase of a podcast conversation between Altman and Ezra Klein of the New York Times.

In it, Klein asks something like:

“With all that’s at stake (that is to say, the future of humanity), wouldn’t this (AGI) be better undertaken as a federal or international government project?”

Altman replies (my heavily augmented interpretation of his response):

“We tried to make this a federal project for more than a year, meeting with the White House, meeting with the highest levels of government. Our message fell on deaf ears. They wanted to do a five-year feasibility study. We saw the need immediately. We needed $1 billion to start, and another $10 billion to continue (and we may well need another $100 billion in cash and compute before we approach breakeven). This is a true Apollo / Moonshot / Manhattan Project.

We came to the point where we couldn’t wait any longer. So we went with private investors (wP: add list). And we built what we had to build. We believe strongly in the core principles of democracy, and of capitalism. We’d love to have an American flag flying on this effort. But this is bigger than America. This is a project for all of humanity.”

back to the AI Manifesto (Altman):

“The first AGI will be just a point along the continuum of intelligence (link: Bostrom: Village Idiot vs. Einstein, & ASI). We think it’s likely that progress will continue from there, possibly sustaining the rate of progress we’ve seen over the past decade for a long period of time. If this is true, the world could become extremely different from how it is today, and the risks could be extraordinary. A misaligned superintelligent AGI could cause grievous harm to the world; an autocratic regime with a decisive superintelligence lead could do that too.”

AI MANIFESTO: THE KEY

Altman concludes:

“Successfully transitioning to a world with superintelligence is perhaps the most important — and hopeful, and scary — project in human history.

Success is far from guaranteed, and the stakes (boundless downside and boundless upside) will hopefully unite all of us.”

Amen.

 


Source:

Original post: “Planning for AGI and Beyond” by Sam Altman (OpenAI, Feb 24, 2023)