All the Invisible Ways that AI Control is exerted upon our Daily Lives, Choices, and Decisions
One of the challenges of understanding modern AI systems is their near-total invisibility. What we cannot see, we cannot understand. And the vast majority of AI power, until very recently, was operating far behind the scenes… a confederacy of AI puppetmasters never seen by the rapt audience (i.e., us). The purpose of this post is to shed some light on the various ways in which AI control systems already choose what we read, perceive, and do every day… and to offer some conjecture on how those same agents will exert ever more control over our lives in the very near future.
AI control of
the News we Read
Facebook really took off once it launched an innovative new feature called “News Feed.” What has come to light over the years is that this core Meta (née Facebook) algorithm is superb at two things: a) creating echo chambers of like-minded posters, and b) seeding conflict by presenting highly inflammatory counter-viewpoints. Why? Simple: money.
The first makes you feel comfortable, welcome, validated and insulated. The second makes you angry. This dance between comfort and fear is a massive dopamine reinforcement machine. Stated simply: the roller coaster ride of the News Feed is designed to be maximally addictive. Because the longer you stick around, the more ads Meta delivers. And the more ads it delivers, the more money it makes. Currently to the tune of tens of billions of dollars in profit per year, c. 2022. Thanks to some of the most advanced AI on the planet.
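To make the comfort-plus-conflict dynamic concrete, here is a minimal sketch of an engagement-maximizing feed ranker. Everything here is an assumption for illustration: the feature names (`stance`, `inflammatory`), the weights, and the scoring formula are invented, and bear no relation to Meta’s actual (proprietary) model. The point is only to show how a few lines of ranking logic can reward both validation and outrage at once:

```python
# Toy sketch of an engagement-maximizing feed ranker.
# All features and weights are invented for illustration; this is NOT
# Meta's algorithm, just the shape of the incentive it is accused of having.

def engagement_score(post, user):
    """Score a post for a user: reward agreement (comfort) and outrage alike."""
    agreement = 1.0 - abs(post["stance"] - user["stance"])  # echo-chamber term
    outrage = post["inflammatory"] * (1.0 - agreement)      # conflict term
    return 0.6 * agreement + 0.4 * outrage

def rank_feed(posts, user):
    """Order the feed by predicted engagement, highest first."""
    return sorted(posts, key=lambda p: engagement_score(p, user), reverse=True)

user = {"stance": 0.9}  # strongly one-sided user
posts = [
    {"id": "agreeable", "stance": 0.85, "inflammatory": 0.1},
    {"id": "neutral",   "stance": 0.5,  "inflammatory": 0.2},
    {"id": "outrage",   "stance": 0.05, "inflammatory": 0.95},
]
feed = rank_feed(posts, user)
# Validating content ranks first, inflammatory content second,
# and calm, neutral content dead last.
```

Notice what never appears in the scoring function: accuracy, usefulness, or your wellbeing.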
AI determines
How Much we Pay
Look up a product on Amazon. Or an airfare. Or apply for a new credit card. Then have a friend (preferably one who lives in a different neighborhood, or has a different socio-economic background, race or cultural affiliation than you, AND is on a different WiFi network or cell carrier) perform the exact same search on their own phone. Compare your screens. Right there, in the different prices (sometimes massively different), you will see the AI control exerting itself. You can’t see it, but you live IN its throes. Hmmmm….
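A hedged sketch of what that experiment might be revealing. The signals below (device tier, neighborhood income, repeat visits) and the multipliers attached to them are entirely hypothetical; real pricing engines are proprietary and vastly more complex. But the basic mechanic, adjusting a price by inferred willingness to pay, can be expressed in a few lines:

```python
# Toy sketch of personalized pricing. Signals and weights are invented
# for illustration; no real retailer's or airline's logic is shown here.

def personalized_price(base_price, profile):
    """Adjust a base price using inferred willingness-to-pay signals."""
    multiplier = 1.0
    if profile.get("device") == "high_end":   # pricier phone -> nudge price up
        multiplier += 0.08
    if profile.get("zip_income") == "high":   # affluent neighborhood
        multiplier += 0.05
    if profile.get("repeat_visits", 0) >= 3:  # repeat lookups read as urgency
        multiplier += 0.10
    return round(base_price * multiplier, 2)

# You, searching for the fourth time from a new phone in a wealthy zip code:
you = personalized_price(200.00, {"device": "high_end",
                                  "zip_income": "high",
                                  "repeat_visits": 4})
# Your friend, first search, budget phone, different neighborhood:
friend = personalized_price(200.00, {"device": "budget",
                                     "zip_income": "mid",
                                     "repeat_visits": 0})
```

Same product, same moment, two different prices. Neither of you ever sees the multiplier.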
AI curates
the Choices that we are Presented with
Even the product choices that we are given, which often seem overwhelming in their variety, are neither objective nor random. No, they are carefully curated from millions of bits of behavioral history that have been collected, sold and shared about you since you first logged onto the internet, what: 5, 10, 20 years ago? Your history only grows larger and more detailed… it never shrinks. It is never deleted (despite your requests, unsubscribes, account deletions and privacy settings. Believe that.)
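The curation itself is simple to sketch. In this toy version (tag names, affinity scores, and the scoring rule are all invented for illustration), the “overwhelming variety” you see is just the top slice of a catalog, pre-ranked against a profile built from that ever-growing history:

```python
# Toy sketch of choice curation: you never see the full catalog,
# only the slice predicted to convert for *your* behavioral profile.
# Tags, scores, and profile fields are invented for illustration.

def curate_choices(catalog, profile, k=3):
    """Return only the top-k products by predicted affinity for this profile."""
    def affinity(product):
        return sum(profile.get(tag, 0.0) for tag in product["tags"])
    return sorted(catalog, key=affinity, reverse=True)[:k]

catalog = [
    {"name": "budget headphones",  "tags": ["audio", "deal"]},
    {"name": "premium headphones", "tags": ["audio", "luxury"]},
    {"name": "running shoes",      "tags": ["fitness"]},
    {"name": "yoga mat",           "tags": ["fitness", "deal"]},
]
# A profile distilled from years of collected behavioral history:
profile = {"audio": 0.9, "luxury": 0.7, "deal": 0.2}
shown = curate_choices(catalog, profile, k=2)
# The fitness gear exists, but this shopper will simply never see it.
```

The choice was made before you ever “chose.”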
AI determines
the Quality of Care we Receive
Ever been to a hospital? Or a doc in a box? Wonder how triage is assigned? How it’s determined who goes first? Hint: It’s not just about the severity of injury, or the urgency of your need for care. It is also about your perceived ability to pay, as judged by AI intake systems, and your perceived life expectancy and chance of survival, again as determined by highly sophisticated real-time actuarial AIs. You can complain to the ER docs and intake nurses all you like… it won’t matter. The machine already determined your fate.
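A sketch of the kind of scoring the post alleges. To be clear, these weights and fields are entirely hypothetical, not any real hospital’s or insurer’s algorithm; the example exists only to show how easily non-clinical factors can be folded invisibly into a “priority score”:

```python
# Entirely hypothetical triage scoring, illustrating the post's allegation:
# non-clinical signals (payment, actuarial odds) blended into clinical priority.
# No real triage system's weights or fields are shown here.

def triage_priority(patient):
    """Blend clinical urgency with the non-clinical factors the post describes."""
    score = 0.7 * patient["severity"]        # clinical need
    score += 0.2 * patient["survival_odds"]  # real-time actuarial estimate
    score += 0.1 * patient["payment_score"]  # inferred ability to pay
    return score

queue = sorted(
    [
        {"id": "A", "severity": 0.9, "survival_odds": 0.4, "payment_score": 0.2},
        {"id": "B", "severity": 0.7, "survival_odds": 0.9, "payment_score": 0.9},
    ],
    key=triage_priority,
    reverse=True,
)
# Patient B goes first despite the lower severity -- and nothing on the
# waiting-room screen explains why.
```

Complain to the intake nurse all you like; the number was computed before you spoke.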
AI chooses
the Routes we Drive
Way back in the ancient days of the 20th century, there were things called maps, and if you needed directions to a certain location, nine times out of ten you could ask the attendant at the corner gas station, and he (it was always a he, back then) would give you turn-by-turn instructions… if you bought the local street map, he’d even mark the turns on it for you with a handy highlighter.
These days, nary a car is sold without a slick integrated in-dash GPS navigation system. For the rare cases without one, your smartphone with any map app (Apple Maps, Google Maps, Waze) will happily announce the turn-by-turns for you, even cleverly avoiding traffic jams and suggesting alternate routes to match your inferred “preferences” (I prefer maximum efficiency… I prefer the scenic route… I prefer the smoothest pavement, etc.)
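Those inferred “preferences” can be sketched as a simple weighted utility over candidate routes. The weights and route attributes below are invented for illustration; no navigation vendor’s actual scoring is shown. The thing to notice is that the app, not you, sets the weights:

```python
# Toy route chooser weighting "inferred preferences."
# All numbers are invented; real routing engines are far more sophisticated.

def choose_route(routes, prefs):
    """Pick the route maximizing a preference-weighted utility."""
    def utility(route):
        return (prefs["efficiency"] * -route["minutes"]   # faster is better
                + prefs["scenery"] * route["scenery"]     # prettier is better
                + prefs["smoothness"] * route["pavement"])  # smoother is better
    return max(routes, key=utility)

routes = [
    {"name": "highway", "minutes": 22, "scenery": 0.2, "pavement": 0.9},
    {"name": "coastal", "minutes": 35, "scenery": 0.9, "pavement": 0.6},
]
# Weights the system has *inferred* about you -- here, a "scenic" driver:
prefs = {"efficiency": 0.1, "scenery": 10.0, "smoothness": 5.0}
picked = choose_route(routes, prefs)
```

Change three numbers you never see, and you drive a different road.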
Now comes the challenge:
How many kids under the age of 25 actually use common sense to double-check the route that the GPS is suggesting? I performed an experiment on a local teenager a few years back. I pre-programmed a circular route into my phone’s GPS and repeated it 5 times, then hopped into the passenger seat, told him I’d handle the directions, and hit “Start Navigation”. I wanted to see how many loops the kid would make before he realized he was driving in circles. Care to venture a guess?
Honestly, we never found out. After the third loop, my exasperation showed through, and I just blurted out “haven’t we been here before?!?!” The kid looked up, looked at me, craned his head to look at my iPhone screen, and confidently announced: “We’re fine… the map says to continue on this road for another 2 miles…”
Now, that’s a quaint little story, circa 2015. About how oblivious we can be to the AI control systems that direct us and make choices for us.
Fast forward to 2025, and consider this quote from Elon Musk:
“The future of automobiles is
100% electrified, and
100% autonomous.
There is no turning back.”
Yes, by 2030 a majority of us, and by 2035 more than 90% of us, will not be driving our own vehicles at all. The “Humans formerly known as Drivers” will be watching their Reels & TikToks, counting their Likes & Follows, playing their CandyCrushes & FruitNinjas & FortNites, and cleverly framing their AI-enhanced selfies, without a care in the world as to the route, the rhyme or the reason.
So we might wish to ask, in such a scenario, how is our route actually chosen?
We might want to ask that, now, about ALL the invisible choices being made for us by AI control systems, every day, without our knowledge or (informed) consent.
We might want to question the motives of such systems, and whether those motives, purportedly (in marketingspeak) designed to “enhance & appeal to our personal preferences” are in fact aligned with our holistic health and wellbeing.
We might want to ask.