When you start to learn about AI, and you begin to recognise what is in store for humanity, both the exciting and the terrifying aspects of it, you rapidly realise that if we are to unleash a sentient intelligence upon this planet and this civilisation, it would be prudent (necessary!) to instill in it a basic sense of right and wrong: a moral code, as it were, that transcends cold, hard logic. A formal Code of Ethics for AI.
(Cold logic applied to global challenges of energy and food distribution, for instance, might determine in an actuarial calculation that sacrificing 2 billion lives in order to vastly improve the standard of living of the remaining 6 billion is a rational choice… human compassion and a basic moral code would dictate otherwise…)
One of the logical endpoints of this line of thought is called “the Alignment Problem.” In recent months it has become alarmingly clear that this is a very important issue, if not the most important issue, to tackle in the development of modern AGI (Artificial General Intelligence: the kind that finally equals or exceeds human skills in every area, including common sense).
The essence of it is: AGI, once unleashed, must be in alignment with the broad goals, interests, and health of the human species (Sutskever goes one step further: he insists that every sufficiently powerful AI must have a genuine love for humanity baked into its very DNA). If an AGI is unleashed whose core intentions and goals are out of alignment with ours, that is a recipe for global conflict and disastrous consequences on a scale never before seen in the history of humankind.
To that end, pretty much every AI company has made firm commitments to “align their creations with the interests of Homo sapiens”. How this actually manifests, and how it is “baked into” the “seeds” which grow to form the massive neural nets and machine brains, is a far more complicated challenge than a few paragraphs can cover. But at least it’s a start.
Here are links to the most prominent AI labs of the day, and their missions / ethics codes / statements of principles… for what it’s worth.
>>> DeepMind • Operating Principles
>>> AI2 : Mission
>>> Future of Life Institute : Mission
“To steer transformative technology towards benefitting life and away from extreme large-scale risks.”