The Origin of AI : A Fairy Tale, Part 1

Machines: The Loom. Binary threads. the Ancient Origin of AI.

Once upon a time…

waaay back in the 1800s, some men became fascinated with the machines they were building: sewing machines, weaving machines, even wheeled machines. It seemed that everything people and horses could do, the machines could do better, faster, and stronger. Plus, the machines never tired. Sure, they needed repairs every so often, but that was trivial compared to the time and expense of hiring and training a new human worker. And so began: the Origin of AI.

Around the same time, scientists were gaining a very basic understanding of how the human brain worked: electrical signals transmitted between nerve cells. They were realising that if you put a low-voltage signal into a certain part of the brain, a certain muscle would twitch; a certain limb would move. Man as machine: stimulus, response. The logical extension of Newton’s perfectly logical universe. Simple. Elegant. A charming notion.

1885: Conceptual Origin of AI

Two of those men (Bain & James, if you must know) looked far into the future, and had an idea:

Perhaps we could make a mechanical model of the human brain. Perhaps we could construct a machine where every nerve cell is modelled as a piston. We’d just connect all the pistons, and the machine would, magically, think. Sure, it would take a great number of pistons… perhaps 10,000 or more of them, all wired together in a tight web. And it would be painfully slow: neurons were measured to transmit and receive perhaps 200 signals a second; the pistons might manage 4. But nonetheless, they thought, if they could wire those pistons together in just the right way, they would have a machine that served as a crude model of a human brain.

They named their invention the “neural net.” Little did they know, more than a century later, their conceptual invention would come to be seen as the actual origin of AI. But, lacking the money or the wherewithal to purchase 10,000 mechanical pistons, they moved on to greener pastures.

Fast forward 4 generations, give or take.

1955: Computational Origin of AI

Around 1955, humans built the first transistorised computers. And at an amazing speed, across the next 10 years, those basic computers grew from 10s to 100s to 1000s of transistors… transistors, or what one might see as “digital pistons.” About that same time, someone dusted off an old book and thought: “What’s this? A ‘neural net’? Perhaps with this new computer, finally, we could build such a ‘thinking machine.’”

Optimism was high. Famously, in 1955, leading scientists suggested that over the course of a summer, with some meager funding, they could build such a thing:

“We propose that a 2-month, 10-man study of artificial intelligence be carried out during the summer of 1956… The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.”

Amongst the signatories were two authors who would come to be known as the Forefathers of AI:

Marvin Minsky & John McCarthy

Alas, it was not to be.

They did, however, coin the term “Artificial Intelligence.” And that coinage ended up being one of their most notable contributions to the field.

1990-2011: The First “AI Winter”

Every 10-20 years or so, across the ensuing decades, one team of scientists or another would once again attempt to build the “mechanical brain.” First with 10,000, then 100,000, then 10,000,000 transistors. But aside from some very, very primitive functions (1+1=2), their creations were utterly braindead, lifeless, and without meaning.

But building a neural net was not the only tactic humans were taking in their great quest to build a thinking machine. Others speculated that a machine could be trained like a dog or a child: given enough logical rules, it could approximate thought. General rules were made, and specific rules were made. The goal was to generalise all of life into a “book of rules” that could be coded and programmed into the electronic brain. The codebase got larger and larger.
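That “book of rules” approach can be sketched in a few lines of Python. This is a toy illustration of my own, not any real historical system: every rule is hand-written, and anything the rule-writers didn’t anticipate simply falls through.

```python
# A toy "book of rules": each hand-written rule maps a specific
# situation to a response. Purely illustrative -- not any real
# historical expert system.
RULES = {
    ("animal", "has_fur"): "mammal",
    ("animal", "has_feathers"): "bird",
    ("mammal", "barks"): "dog",
}

def classify(category, observation):
    # Look up a rule; anything the rule-writers never anticipated
    # falls through to "unknown" -- the brittleness that kept the
    # codebase growing larger and larger.
    return RULES.get((category, observation), "unknown")

print(classify("animal", "has_feathers"))  # a rule exists: bird
print(classify("animal", "has_scales"))    # no rule: unknown
```

Every gap in the rules demanded yet another rule, which is exactly why the codebase kept growing.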

There was some limited success.

For instance, in the area of teaching computers to play chess:

  • Deep Blue!
    • The 1997 rematch: Deep Blue 3½, Kasparov 2½
      The Human Champion Falls
    • (but… we still have sovereignty in the game domains of
      “Go”, Poker, Starcraft & Diplomacy… for now)
  • Expert Systems!
    • (like fusion before breakeven: more In than Out)
  • Big Data!
    • Mechanical Turk
    • Clickworker Armies
  • The Cloud
    • AWS, Azure, Google, etc.
  • HPC!
    • who needs supercomputers when you have the Grid?

2012-2023: AI Modern History

I intend to add a coherent narrative to tie all these puzzle pieces together in a sort of chronological story. For now, I give you the segments that together outline most of the story (click the links!)

2023+ AI Future History

So that’s the apocryphal Origin of AI. Now, let’s peer into the near and distant future, and see what this Tsunami is really all about:

