The purpose of this post is to enlighten you as to the fundamentals of the present Deep Learning Revolution, and to simultaneously debunk two very common myths which I hear over and over again from perfectly intelligent people.
Debunking Common AI Myths
- AI is just one more innovation in a long string of human inventions… like electricity, the car, the computer and the internet. Nothing special here. Just another new tech that is mildly disruptive. We will use it, we will adapt, and in 20 years, on to the next one. …and…
- AI isn’t “awake” or “alive.” AI is just a computer program. It does what it’s programmed to do. Humans make it, so humans know how it works. It’s predictable. It’s just a machine. It’s based on logic. If A, then B, etc.
The reality is quite different:
- AI is the most significant invention humans have ever made, and will ever make. It very well may be the last invention we ever make. …and…
- AI is far more than a “computer program,” and no, the creators, designers and coders of AI systems do not actually understand how it works… it is, in fact, opaque, and requires continual black-box probing to yield even an inkling of its inner workings.
To see how we arrived at this peculiar situation, in which rational engineers create irrational, even intuitive, machines, read on:
How to Code an App, 1955-2015
At first, long long ago, in a galaxy right here, long before the Deep Learning Revolution was even a twinkle in Turing’s eye, there was the computer. A fine piece of hardware. A fully logical, speedy, shiny, bulky, humming electronic machine. And very shortly thereafter, there were computer programmers. People who told the machine what to compute. From 1955 to 2015… for 6 full decades… every computer program written followed pretty much the same development model:
1. You set an objective (“win at chess”).
2. You made a plan (A. meet with grandmasters, B. enumerate strategies, C. codify them).
3. You translated all your thoughts into cold, hard logic: a long series of “if / then” structures that tried to account for every logically possible scenario, plus mathematical equations to represent (to “model”) large swaths of the decision tree. (All of this, more recently, became popularly known as “algorithms,” or, more hauntingly, “the algorithm.”)
4. You then either worked alone, or as a team, and “built” your software.
5. You “built” it by writing line after line of code, in a computer programming language (Java, C++, Visual Basic, Python, etc.).
6. You wrote that code by typing letters and symbols into your machine with your fingers tapping keys on a keyboard. You had to be very, very careful because, just like in a complex math equation, a simple misplaced comma or even a period or an extra “space” could train-wreck your entire project. You wrote hundreds, thousands, sometimes hundreds of thousands of lines of code. This code, all together, was your magnum opus… your program.
7. You tested. And tested again.
8. You found bugs. You fixed the bugs.
9. At some point, finally, between 3 months and 3 years after you’d started, you and your team decided: It was time for “release.” Time to publish your awesome app to the world.
10. You ran the final compile. You packaged it. And you shipped it.
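The hand-coding paradigm above can be sketched in miniature. Here is a hypothetical, heavily simplified fragment of rule-based game logic in the if/then style step 3 describes. Every name and rule here is illustrative, not taken from any real program:

```python
# A miniature example of the 1955-2015 style: every decision is an
# explicit, human-written rule. Nothing is learned from data.

def choose_move(board, me, opponent):
    """Pick a tic-tac-toe move by hand-crafted if/then rules.
    board: list of 9 cells, each 'X', 'O', or None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

    def winning_cell(player):
        # Helper rule: find a line holding two of player's marks and a gap.
        for a, b, c in lines:
            cells = [board[a], board[b], board[c]]
            if cells.count(player) == 2 and cells.count(None) == 1:
                return (a, b, c)[cells.index(None)]
        return None

    # Rule 1: if I can win right now, take the winning square.
    move = winning_cell(me)
    if move is not None:
        return move
    # Rule 2: if the opponent is about to win, block them.
    move = winning_cell(opponent)
    if move is not None:
        return move
    # Rule 3: take the center if it is free.
    if board[4] is None:
        return 4
    # Rule 4: otherwise prefer corners, then edges.
    for cell in (0, 2, 6, 8, 1, 3, 5, 7):
        if board[cell] is None:
            return cell
    return None  # board is full

board = ['X', 'X', None,
         'O', 'O', None,
         None, None, None]
print(choose_move(board, 'X', 'O'))  # Rule 1 fires: X wins at cell 2
```

Note that the programmer, not the program, decided that winning beats blocking and that the center beats a corner. Every priority in the decision tree was thought up by a human and typed in by hand.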
And that was that.
And up until around 2015, that’s how all computer software was made. In fact, even today, in 2022, that’s how a large amount of hand-crafted computer software is still made.
The Infinite Expansion of Codebases
As the years went by, these “codebases” grew, and grew, and grew…. and eventually became… gargantuan.
As an example, a basic tic-tac-toe playing program from 1960 might have fit on a deck of about 256 punch cards (one line of code per card), roughly 256 lines of code in all.
Today, 60 years later, a modern computer operating system consists of, on average, 10 million lines of code… and that’s without any bells, whistles, or apps. A web browser, just by itself, clocks in at around 5 million lines of code. Five million lines of meticulously well-planned, expertly hand-crafted logic… each line typed in by a human, each line painstakingly checked and re-checked for errors, all those lines together representing a brilliant symphony of logic which allows… well, which allows me to write this post, and you to read it, amongst other things. Which allows you to wiggle your mouse and tap your finger and say “Hey, Siri” and edit videos and animate memojis and all sorts of other magic. Which allows, basically, modern day “computing” in all its myriad forms.
But, around 2015, that whole concept of coding, of software development, was given a wake up call. A fundamental and radical shift was afoot. And it would forever change how code was created.
Code was, indeed, getting so utterly complex that it was impossible for any single human to understand the entirety of any given codebase, or even a significant percentage of the possible number of things that could go “wrong.” With 10 million lines of code running in parallel, and a whole bunch of apps running on top of that, each vying for compute resources and WiFi bandwidth and graphics acceleration and database queries, all those things occurring millions (trillions!) of times per second… yeah, a lot could go wrong. (No worries, though: unless you’re running a hospital surgical center or a nuclear power plant, the worst “wrong” you’d experience was a “blue screen of death,” or in a very bad case, significant loss or corruption of your data… you do make regular backups, right?)
2015 : Enter the Neural Net & Deep Learning
And then, in 2015, the world changed.
A little-known lab in England, which went by the totally apropos name of DeepMind, was brewing a revolution.
150 years ago, some smart men had come up with the idea of modelling the human brain in the form of a mechanical machine. They called their idea a neural net, and theorized how they might build such a machine with a steam engine and a whole lot of pistons. Alas, the interconnected web of 10,000 pistons they thought they’d need ended up being both too expensive and too mechanically complicated to construct. So the idea was shelved.
Fast forward 150 years.
The brilliant people at DeepMind thought that, with so much commodity grid computing being available in the world, it might be a good time to re-investigate this neural net model. Indeed, it would take a bit more than the initially theorized 10,000 pistons. Their ambitious plan would, in fact, require about 100 megawatts (roughly the electricity needed to power a city of 100,000 people), and more than 1,000 high-performance AI computers wired in parallel, containing, all told, trillions upon trillions of transistors. But it might… just… work.
DeepMind engineers and scientists had in fact decided that the time of human hand-coding had come to an end. They would no longer try to program a machine to think like a human. They would simply train a neural net to learn, throw it into a huge playground (aka, an offline “copy” of most of the internet), and say: “Go! Play! Learn!” And then they’d just sit back, eat the popcorn, and watch the fireworks.
It was, indeed, some hella fireworks.
It all began, as computer things tend to do, with Chess.
That was a lifetime ago, in 2015.
So where are we today?
Today: Designing Machine DNA
So today, when we talk about AI, we’ve already given up the reins of control (did we ever really have them?).
We no longer code infinite lines of hand-crafted colliding logic. We don’t try to distill the universe into elegant rules, then make 10,000 sub-rules to account for all the exception cases. We don’t actually do anything anymore except to do our best to cultivate some really clean, really dense training data, and procure ample electricity and compute resources so that the brain has enough room to grow as it consumes (and, believe: it does consume) all that data (soon: ALL the data).
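The new workflow described above (curate data, supply compute, let the machine find its own rules) can also be sketched in a few lines. This is a deliberately tiny illustration of learning from data, a one-parameter-pair model trained by gradient descent on made-up numbers, not any production system:

```python
# The new paradigm in miniature: no hand-written rules. We supply
# example data, and the program adjusts its own parameters until
# its behavior matches the data.

# Toy training data: inputs paired with desired outputs. (The hidden
# pattern is y = 2x + 1, but the code is never told that rule.)
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]

w, b = 0.0, 0.0   # parameters start out knowing nothing
lr = 0.05         # learning rate: how big each adjustment is

for step in range(2000):
    for x, y_true in data:
        y_pred = w * x + b        # the model's current guess
        error = y_pred - y_true   # how wrong the guess was
        # Gradient descent: nudge each parameter to shrink the error.
        w -= lr * error * x
        b -= lr * error

print(round(w, 2), round(b, 2))  # the learned rule: close to 2.0 and 1.0
```

Notice what is missing: there is no “if / then” anywhere in the decision logic. The programmer’s job shrank to choosing the data and the knobs (the learning rate, the number of steps); the rule itself emerged from the data. Scale this idea up by many orders of magnitude and you have, in spirit, the modern training run.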
In short: we design the DNA of the sperm and the egg. Then we slam them into each other. And, basically, nature takes its course.
Tomorrow: Raising Machine Children
So, now that we’ve graduated from explicitly designing hollow robots to contributing the designer DNA for a new league of independently thinking machines, what is our role?
Well, continuing the metaphor: since our contribution, like that of a father of yore, is limited to seeding some healthy DNA at conception, what we need now is:
to be a good parent to our machine children.
We need to instill morals, a sense of right and wrong, and some safety guidelines into our new machine cohorts.