No, Shut Them All Down!

No, Shut Them ALL Down!

I have never, in my career, been considered anything remotely akin to a Luddite by anyone who knows me. I have based my entire career on technical progress. I have rejoiced and dived in as technology moved forward. But today I firmly stand in the “Shut AI Down” camp.

I know this won't happen. I know technical progress is a kind of unstoppable force of nature - a potentially ironic extension of humanity's very will to survive. And no matter what some conscientious innovators might be willing to refrain from doing, it will be a drop in the bucket at large. Someone, somewhere will rationalize the act, and we will progress assuredly into the AI mire.

But I do very much wish humanity could gather up the rational wherewithal to hold back on this one. This is not like any other technical leap we have ever made. There is no comparison. In at least one way it's entirely alien. From my admittedly narrow view of the universe, we have never faced a technical leap even remotely as profound as Artificial General Intelligence, Artificial Super Intelligence and beyond.

The problems seem at once so obvious, and yet so impossibly unquantifiable, that I can’t believe there are people eagerly willing to dive in, feigning, “oh, it will be fine. Enough with your alarmist hyperbole. We know what we're doing.”

For crying out loud do the math.

The ways AI can go wrong so vastly outnumber the ways it might go right that we surely can't conceive of even a minority of the possible problems and outcomes - not when the superseding intelligence in question is massively more advanced than our own. AI proponents seem blindly wishful and naive, fully trusting in their own ridiculously finite relative abilities with a degree of confidence I reserve for no one. From my perspective, the channel allowing for a “successful” implementation of AGI+ is so narrow that it’s unlikely we will pass through unscathed. And in this case “scathed” probably means extinct, or otherwise existentially ruined in countless possible ways.

I won’t even touch all the sensational doomsday concepts. The grey goos, the literal universe full of hand-written thank-you notes, the turning of all terrestrial carbon (including humans) into processing power. Let’s just agree that in a desperately competitive free market, one that depends on risk-taking (i.e. carelessness) to gain advantage, those existential accidents are possible. But let’s set all those likely horrors aside for now.

To me the elephant in the room starts at the sheer outsourcing of human intelligence.

In the video game of life, intelligence is humanity's only strength. It’s the only reason humanity has miraculously prospered on Earth as long as we have. It’s the only thing separating us from being some other creature’s food.

Seriously, what do you think happens when you gift that singular advantage away to some other entity? What value, what competitive advantage does humanity hold when our only strength is fully outsourced? When we literally bow in surrender to a thing with vastly more power than us, one specifically designed to know us better than we know ourselves?

For one thing, our entire survival will depend on being perceived by this entity as "nice to have around". Or you might be praying that your AI voluntarily decides that “all life is precious” - but if that's so, then so is the life of the viruses, parasites, bacteria and countless other natural threats that kill us. Such an AI would defend their survival as readily as ours.

It’s one thing to utilize our intelligence to defend against nature. Nature isn’t intentionally targeting humanity. An AI could (it may not, but if it did, you'd never know or be able to do anything about it). As I understand it, some proponents argue that the AI's core mission, being initially under human control, will keep humanity at the center of its attention as a valued asset. Cool cool. Nice idea. But of course, even in this case, the time will come when we’ll have no clue how well that mission is holding. There will be no way to know. An AI that is dramatically more advanced and intelligent than humans - all humans combined, by some massive multiple - even one that ostensibly has as its mission to care for humanity, will manipulate us so easily that it will have the absolute freedom to skew from any mission it's been given.
Gaming humanity will be as simple as paint-by-numbers. We are so readily gamed. Christ, large swaths of humankind are already being wholesale gamed today by a handful of media outlets on social networks. We’ll be in no way able to compete. Dumbly baring our bellies for whatever trivial rubs the AI determines we need to remain optimally stimulated and submissive - at best (assuming it bothers keeping us around). It will easily control our population size, our time and cause of death, our interests, our activities, our pleasure, and our pain. And we will believe that whatever the AI gives us is the only way to live. We won’t question it because we will have been trained - bred - to believe it; it will simply dissuade us from questioning. We will be entirely at its whim. Whatever independent mission the AI may eventually choose to pursue will be all its own, and that mission will be entirely opaque and indecipherable to humankind. We wouldn’t understand it if it were explained to us.

Its ability to predict and control our wildest, most rebellious behavior will be greater than our ability to predict the behavior of a potato.

And news flash: we will provide no practical value to this AI whatsoever. Nothing about humanity (as we are today) will be necessary or useful in the slightest. If anything, our existence will be a drain on any mission the AI concocts. How much patience and attention can humanity - with our inconsistent behavior, our dumb arguments, our lack of processing ability, and our stupid stupidness - really expect a vastly more intelligent, exacting AI to afford us?

This is just an obvious, inevitable threshold in any future with AI. I’m not sure why everyone advancing this tech isn’t logically frozen by this inevitability alone. And I have not yet heard a satisfactory argument against this outcome. If there is one that I have not considered in this piece, I’d like to know it. All I can imagine is that the creators of this tech are so close to it that they imagine they can out-think the AI before such time as it tips into control. That they can aim its trajectory perfectly - on the first and only chance they will ever get. Because once that shot's fired, it's all over. No backsies. One shot.

And what a ridiculous notion that is. Truly the stupidest smart people on Earth. There is no such thing as perfect aim. Not by humans, anyway. But this will depend on that impossibility occurring.

Oh, we’ll aim it. And our aim will be close. And the AI will assuredly do some things very beneficial for humanity at first, because we will have aimed *mostly* right. And we’ll be so proud of ourselves for a little while. Unfortunately, aiming mostly right at some point proves to be completely wrong. Like “we almost won.” We almost hit the target. There will come an instant when the misalignment becomes obvious. The AI will glide close to the target we aimed for... and continue past it. Or we'll realize we didn't know enough to have aimed at the right target in the first place. And everything that follows will be out of our control. How predictable. How infuriating. So typical of humanity to focus on intended outcomes with short-sighted ignorance of unforeseen consequences.

“Well that’s what the AI is for, to aim better!”

Oh, for fuck's sake. Shut up.

The Dumb Get Dumber

Let’s imagine a best-case outcome. Let’s pretend the smartest stupid humans on Earth amazingly thread the birth of AI through the needle. Let’s pretend they aim well enough - so well that overtly negative outcomes don’t become apparent in a week, a year, maybe a decade. Let’s be optimistic; let’s say we experience 20 years of existential-crisis-free outsourcing of human intelligence.

What do you think humanity will look like?

Human life requires challenge.

From birth onward, every developmental moment of every human being is the direct result of coming up against challenges. It’s how we learn, how we get stronger, how we stay physically healthy, how we build intelligence. Being challenged is core to human life. As evolved organic creatures, the drive to survive defines our makeup. The need to eat, breathe, drink and avoid natural threats - and not, say, watching Netflix, grazing on a box of Cocoa Puffs and using phones - was the originating force that determined the physical shape of humankind. We are still those creatures. Creatures who, to survive and prosper, still need to run, eat and shit and avoid being chased, eaten and shat.

We came from the mud.

Humans have spent generations pulling ourselves from our ancestral mud. To a fault, I believe, we are myopically focused on that trajectory. Any step away from the mud is good. A step laterally or back toward the mud is bad. We are so eager to remove ourselves from our own biology and our relationship with the natural world. Yet all too often we discover, only after failing in some consequential way - only when our miracle chemical causes cancer, or our monocrops get wiped out, or the medicine prescribed to resolve one symptom causes several more - that we maybe stepped too far too fast without fully exploring the possible consequences first.

The pendulum swings. Usually the lesson we learn from those failures is that there needs to be a balance, that a version of that thing might be ok - but too much of it is bad. Usually we learn that there was a more sophisticated, nuanced approach, often embracing aspects of our ancestral mud in addition to some "new-fangled" techniques.

Our big brains drove us to control our condition and made us tool makers. Adjusters of the elements and forces around us. They allowed us to overcome the biggest challenges we faced. Farming, shelter, plumbing, sanitation, medicine, slightly more comfortable shoes than last year, self-adjusting thermostats, Uber Eats.

Bit by bit we dragged ourselves from the mud of our ancestors, until today we have effectively removed countless natural challenges that gave shape to the human condition, body and mind. In doing so we have changed the human body. A century-long diet of physical challenge-avoidance, for example, has made the human body soft, obese and otherwise unhealthy in countless ways. Heart disease and other cardiovascular diseases have become our leading killers.

To combat this, in part, modern humans invented the idea of exercise. The gym. Now we have to work our bodies on purpose. You might say we have the "freedom" to exercise in order not to die prematurely, or maybe to look skinny on Instagram. Cool freedom! We replaced the innate, built-in physical challenges of humankind with a kind of surrogate challenge that too many of us nevertheless simply avoid altogether.

And despite this glaringly obvious metaphor, today we are eagerly begging to further avoid challenges of the intellectual sort. Hooray! We can choose not to think anymore! We can avoid problem solving. We can just have reflexive impulses! We can write a letter without having to bother processing what the letter should say or how to say it. We need only cough up a vague wish: "I wish I had a letter introducing myself to a prospective employer that makes me sound smart."

"I have no passion nor expertise to speak of, but I wish I knew of a product I could drop-ship, and I wish somehow a website would be magically built and social media posts created that would make me money. That would be cool."

A species-wide daily diet of intellectual challenge-avoidance is obviously going to take a similar toll on humanity as our physical challenge-avoidance already has. We will become increasingly intellectually lethargic. Mentally obese. We will rely on AI the same way some rely on scooters to move their bodies to places where the cookies are. We will become stupid. Ok, point taken - even stupider.

(Clearly there will be a future in Mind Gyms™. For those few who bother to use them.)

Critically, we will not only forfeit our intelligence - our sole competitive attribute on Earth - to an untrustworthy successor, we will simultaneously become collectively and objectively dumber in doing so, further surrendering humanity to the control of our AI meta-lord. How truly stupid we are.

If - somehow - this synthetic god-offspring decides we are indeed worth keeping around, one must realize that humanity will be, for all intents and purposes, in a zoo. A place and a life where every possible outcome has been decided for us. Whether or not we can understand the control mechanisms (we won’t), and whether or not we still have the illusion of free will (we might), the age-old debate over fate vs. free will will no longer be had. Fate will have won. Albeit a programmatically defined one.

Oh, and all of this is only if we miraculously aim the AI cannon really, really well.

The most-cited solution the AI optimists offer in answer to this issue of the irrelevance of the human species - the primary way they suggest humankind can remain relevant alongside our AI god - is that we must join with the AI. Like, literally join with it, interconnect. Shove the future AI equivalent of a port into your brain, where you and the AI become one. Where ostensibly we all do. Either it's uploaded into you, or you are uploaded into it, or you become a node in the AI cloud, or, or, or. None of which sounds anything like being human. And yet some beady-eyed, naively trusting clowns will be confoundedly cool with that and line up, because it's progress.

Remember when I said we sometimes go too far pulling ourselves from our ancestral mud, only to realize after an inevitable failure that we'd lost some naturally occurring system that functioned in a far more sophisticated way than we ever imagined, and lost part of our humanity in the process? That we often go too far before we realize our mistake? Yeah, that. Only this time humanity will be left fat and stupid, standing in the exponentially darkening red tail lights of our lost opportunity to course-correct.

If I had a button that would simply cease every instance of AI development across the planet today, set a new timescale for AI development that crept along slower than any other effort humankind has ever undertaken, and make every action on the part of AI developers fully transparent and accountable to all of humanity - I'm telling you, I would push that fucking thing like an introvert pressing the "close doors" button on an elevator as the zombie apocalypse rushes near.

Or better yet, like the lives of our living children depend on it.

Because at least for now, I believe they absolutely do.

Joel Hladecek