Elon AI Quote: It wouldn’t be complete without one!

Elon AI Quote: Get ready to Rumble!

GR: Jarvis, give me that Elon AI Quote… actually, give me 5 quotes from Elon Musk concerning the future of AI and both the risks and opportunities it will present to humans.

AI: As you wish.

5 quotes by Elon Musk on AI:

  1. “AI is a fundamental risk to the existence of human civilization… fundamental in a way that car accidents, airplane crashes, faulty medicines or bad food were not.”
  2. “We have to pick a coordinate system for ethical decisions and AI should be in line with that coordinate system… it’s our choice as a society how much we value consciousness — how much we think it matters.”
  3. “In the distant future, AI will be much smarter than humans and it will keep on getting smarter, at a much faster rate.”
  4. “AI is a rare case where I think we need to be proactive in regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it’s going to be way, way too late…”
  5. “AI is one of the most dangerous things that humanity has ever faced. I’m increasingly inclined to think that there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very very foolish… something that cannot be reversed.”

So Musk’s position, via the well-documented Elon AI Quote, is pretty clear. And as a co-founder of OpenAI, arguably the world’s leading AI research lab (creator of DALL-E, GPT-3, ChatGPT, etc.), and as co-founder and CEO of Tesla (maker of Autopilot, one of the most advanced driver-assistance AI systems on the road), he’s in a position to know.

Stephen Hawking on AI

Musk is not at all alone in his concerns. None other than Stephen Hawking, our 21st-century Einstein, identified AI as potentially the most significant existential threat humanity has ever faced:

“It [the dawn of AGI] will either be
the best thing that’s ever happened to us,
or it will be the worst thing.
…if we’re not careful,
it very well may be the last thing.”

— Stephen Hawking,
Brief Answers to the Big Questions

and more… Hawking again:

“Whereas the short-term impact of AI
depends on who controls it,
the long-term impact depends on
whether it can be controlled at all.”

— ibid

That’s a shorthand reference to the concept of a self-determining “Rogue AI”: a superhuman-class AI that escapes the bounds of its “walled garden” and is thereafter free to roam the internets and global “smart” infrastructure grids, pursuing its own agenda and intent without human oversight. (Yes, an AGI by definition has hacker skills as well, and would be able both to infiltrate robust cyberdefenses and to exfiltrate data — or itself — past them.)

And finally, in response to the common layman’s refrain of:

“What are we afraid of?
If this thing goes sideways,
we can always just pull the plug…”

Hawking suggests a more realistic scenario is this:

“People asked a computer,
‘Is there a God?’
And the computer said,
‘There is now…’ and fused the plug.”

— ibid

An Open Letter on AI to the People, Scientists & Leaders of the Free World

In fact, in 2015, at the very dawn of the modern AI age, Musk, Hawking, and some 150 other famed AI researchers and technologists signed an open letter addressed to the people of Earth and its leaders, urging responsible AI development on the grounds that unchecked AI poses a clear and present danger to human existence. Sounds like science fiction, sure… yet it’s actually happening.

LINK >>> Research Priorities
for Robust and Beneficial Artificial Intelligence:
An Open Letter

Notable signatories of the letter include:

  • Elon Musk
  • Stephen Hawking
  • Steve Wozniak
  • most of the founders and executive staff of DeepMind
  • many of the CS faculty of both Cambridge & Oxford Universities
  • key CS / AI faculty at Harvard, Stanford, & CMU
  • Peter Norvig, Director of Research at Google
  • Yann LeCun, head of Facebook AI Research (FAIR)
  • Jensen (Jen-Hsun) Huang, CEO of NVIDIA
  • Ilya Sutskever, my favorite personality on the leading edge of AI R&D

 
