AI Isn’t Taking Your Job - It’s Taking Everyone’s
If your feed is remotely like mine, you’ve heard the following comment in the last six months:
"History is littered with people who criticized new technologies with fears of massive job losses. But in fact, every technical revolution has done the exact opposite, it has created more new jobs than it displaced. And this will be true for AI too."
Ahhh. That feels nice doesn't it? What a reassuring sentiment.
As the topic of AI seeps intro every social cranny, comment thread and conversation across the planet about, god knows, EVERTHING, people are incessantly copy/pasting this trope about job creation like a miracle cure, a salve to cool our slow-burning, logical fears. A security blanket reverently stroked with other Internet users in a circle jerk of reassurance.
But for this rule about jobs to be true, AI would have to be merely another in a line of technical revolutions, like all the others that came before it. Sure, AI is more advanced and sophisticated - but ultimately it's just another major advance in humanity's ingenuity and use of tools. Right?
No. AI marks the end of jobs. Not an explosion in new ones.
I desperately want to be wrong about this - please, someone prove that old adage correct. Prove me wrong, I beg you. But as you do, please factor in my reasoning:
WHY
Every technical revolution in history:
- Eliminated some manual jobs
- Created new technical and knowledge-based roles
- Required new education/training systems
- Shifted population centers (e.g., from farms to factories to offices to remote work)
And these conditions led to more jobs being created.
But every previous technical revolution in history has sat below the line of human participation. Meaning: despite removing some jobs, every previous technology still required human intelligence, skill, education, labor and, of course, the creation of goals to reach its functional potential. They helped humans do the work.
A hammer cannot, itself, set a nail. It must be wielded by a human. Moreover, it takes time to learn to use a hammer correctly, to teach others, to conceive, design and build the house, and to originate the goal of building a house in the first place. A computer cannot code itself and create a game.
So is AI just a tool? A tool that some of us will learn to use better than others, warranting new technical and knowledge-based jobs? A tool that will require new education and training systems?
I hear it all the time: "AI is just a different kind of paintbrush, but it still requires a human to wield it." It's another pat retort that some use in my social feeds to counter rational concerns about AI taking jobs.
I think it's a gross over-simplification to think of AI as just the next in a long line of tools. AI's earliest instantiations can be considered tools, sure; the crude ones we use today are tools as they are typically defined. So I don't blame people for making the mistake of feeling secure in that stance.
But there are at least three key areas of differentiation separating every previous technical revolution in history from AI. (I'm sure there are more, particularly related to the democratization of expertise across countless domains of human experience, but that's for someone smarter than me to process.)
The three I feel are most relevant are:
1. Optimization over Time
2. Interface / Barriers to adoption and use
3. Goals and Imagination
The first, and in some ways the most profound, of these is that AI as a tool cannot meaningfully be peeled away from its time-to-optimization. And I'm not talking about eons here, not even a lifetime, but rather something between waiting for another season of The Last of Us and the time it takes to dig a tunnel in Boston. Its clock is so fast it becomes a feature. Add that little time and suddenly everything changes.
AI's Exponential Jump Scare
Despite all of us having seen and studied exponential curves, having had the hockey stick demonstrated for us in countless PowerPoint presentations and TED talks, it's nevertheless near impossible to visualize what AI's curve will feel like. We can only understand and react to those curves once they've been plotted for us in hindsight, and perhaps only when they relate to something we no longer hold dear, such as building fires, building factories, or renting movies on DVD.
In real life, encountering the impact of AI's exponential curve will feel like a jump scare.
I mean, you see it coming. You might feel it now. You might even think you're prepared. But dammit all, it's going to shock us the moment it shoots past the point of intersection ('the singularity'). It'll be a jump scare, because the nature of the exponential curve is near impossible for humans to visualize in real time.
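To make that concrete, here's a toy sketch in Python (the threshold and doubling schedule are invented purely for illustration): a quantity that doubles every step looks negligible for most of its history, then clears any fixed bar almost the moment it gets close.

```python
# Toy illustration of why exponentials feel like a jump scare.
# The threshold and doubling rate are made up for demonstration.
threshold = 1_000_000  # an arbitrary fixed bar, e.g. "human-level"

value = 1.0
for step in range(22):
    note = "  <-- past the bar" if value >= threshold else ""
    print(f"step {step:2d}: {value:>12,.0f}{note}")
    value *= 2.0
```

For the first twenty steps the bar looks comfortably far away; one step later, it's behind you. That's the shape of the thing we keep telling ourselves we're prepared for.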
When previous exponential changes caused disruption, for example when file sharing disrupted the music and video industries, it was people's expectations about timing that were upended. The concepts in and of themselves (distributing music online, for example) were not difficult to understand and foresee; rather, it was a question of how far away it all was. The speed with which that one change took hold meant that an industry and its infrastructure couldn't reorganize fast enough once the boogeyman popped into view.
With AI it's different. It's not merely our expectation of timing around something we know could happen that is being challenged. It's that the changes we are facing are almost unimaginable to begin with, and they're coming at a pace that matches, and in some cases surpasses, our organic ability to understand and internalize them.
This is important because most of us are already struggling to keep up. Every day there is a new innovation. Every day a new tool. Every day 20 new videos in your social feed of someone wide-eyed and aghast at some new and ridiculously insane AI capability. Every day another paradigm shifts.
And before you say, "What's your problem? You're just old, this is normal for me," know that it won't be. As we just discussed, this speed of advancement is neither stable nor predictable. Maybe the pace of change is right for you now; maybe the fact that it ramped up since last week isn't totally noticeable to you yet - but it will be.
The reality is, despite human beings' great impatience for getting what we want when we want it, we nevertheless have an organic limit. A point at which there is too much, coming too fast, for the human organism. We have a baud rate, if you will. Our brains are finite and function, on average, at a given top speed. Our heartbeat, our breath rate, our meal times, our sleep patterns. We are organic creatures with a finite capacity to take in new information, process it and perform with it.
Going forward, as the rug of tool after new tool is pulled out from under us, and the flow of profound new capabilities continues to pick up speed, we will reach a point where humans have no choice but to surrender. Where our ability to uniquely track, learn and use any given tool better than anyone else will be irrelevant, as new tools with new capabilities will shortly solve for and reproduce the effect of whatever it was you thought you brought to the equation in the first place. That's in the design plan. It will learn and replace the unique value of your contribution and make it available to everyone else.
Overwhelmed with novelty, you'll simply fall into line, taking what comes, confronting the firehose of advancements like everyone else with no opportunity to scratch out a unique perspective. We'll become viewers of progress. Bystanders drinking from the unknowable river as it flows past.
If there is a job in that somewhere, let me know.
Interface, and the Folly of "Prompt Engineering"
To anyone still trying to crack this role today, I get it. A new technology appears and that's what we do, we look for opportunities - a quick win into the new technology - one that will open up to a whole new career. Like becoming a UX designer in 2007. Jump in early, get a foothold, and enjoy the widest possible window of expertise.
One problem with early AI was that prompting could be done somewhat better or worse. Just as explaining to your significant other why you played Fortnite and didn't clean the kitchen could be done better or worse. And how you articulated your prompt would surely result in a better or worse response. Today, prompting is a little like getting a wish granted by the devil; any lack of clarity can manifest in unintended and unwanted outcomes. So we've learned to write prompts like lawyers.
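As an aside, here's a hypothetical before-and-after of that lawyerly turn (both prompts invented for illustration):

```python
# Hypothetical prompts, invented for illustration only.
vague_prompt = "Make my essay better."

lawyerly_prompt = (
    "Edit the essay below for grammar and clarity only. "
    "Keep my tone and first-person voice, do not add new arguments, "
    "do not cut more than 10% of the length, and return plain text."
)
```

The second one closes the devil's loopholes. The question is how long we'll need to.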
Enter the Prompt Engineer. But for prompt engineering to be a job that lasts past the end of the year - once again - AI would have to remain, if not static, at least stable enough for a person to build a unique set of skills and knowledge that is not obvious to everyone else. But that won't happen.
AI exists to optimize. And as far as interface goes, it optimizes toward us and toward simplicity, not away from us toward complexity.
As AI's interface is simply common language, and in principle requires no particular skill or expertise or education to use, virtually anyone can do it. And that's the point. There is nothing to learn. No skill or special knowledge to develop. There is no coded language for a "specialist" to decode. In fact, whatever parts of your uniquely worded common language AI does not understand today, it eventually will. Perhaps it will learn from previous communications with you. Perhaps it will learn to incorporate your body language and micro-expressions in gleaning your unique intent. The point is that you will not have to get better at explaining yourself. It will optimize itself and get better at understanding you until you are satisfied that what you intended was understood.
The whole model of tool use has flipped upside down: for the first time, our tool exists to learn and understand us, not the other way around. Among other things, it makes each of us an expert user without any additional education or skills required. So what form of education or special knowledge or skills, exactly, does the advent of AI usher in?
Certainly any "job" centered on the notion of making AI more usable, accessible or functional is a brief window indeed. And given that this tool has the ability to educate about, well, anything, I don't see a lot of education jobs popping up either.
Imagination and Goals
Or: "What we have, that AI doesn’t."
There’s a comforting fallback that people return to when the usual “AI will create more jobs” trope starts to lose traction:
“Yes, but AI doesn’t have imagination. It can’t dream. It can’t create goals. Only humans can do that.”
And to that one must open their eyes and admit: Yet.
Like holding a match and claiming the sun will never rise because you’re currently the only source of light.
I used to think AI was here to grant our wishes and make our dreams come true. And as such we would always be the ones to provide the most important thing: the why.
AI may yet do that for us. But now I'm not so sure that's really the point.
Humans are currently needed in the loop, not to do the work, but to want it done. To imagine things that don’t yet exist. To tell the machine what to make.
That’s our last little kingdom. The full scale of our iceberg.
But just like everything else AI has already overtaken: language, logic, perception, pattern recognition - soon goal formation, novelty and imagination will be on the menu. It’s just a continuum. There is no sacred, magical neuron cluster in the human brain that is immune to simulation. Imagination is pattern recognition plus divergence plus novelty-seeking - all things that can be modeled. All things that are being modeled.
Once AI can model divergent thought with contextual self-training and value-seeking behavior, it won’t need our stupid ideas for podcasts. It won’t need our game designs. Or our screenplays. Or an idea for a new kind of table.
You were the dreamer. Until the dream dreamed back.
And what happens when the system not only performs better than us, but imagines better than we do? When it imagines better games, better company ideas, better fiction? The speed of iteration won’t be measured in months, or days, or hours—but in versions per second.
We are not dealing with a hammer anymore.
We are watching the hammer learn what a house is—and then decide it prefers, say, skyscrapers.
No Jobs Left, Because No Work Left
And so, yes, there may be a number of years where humans still play a role. We will imagine goals. We will direct AI like some Hollywood director pretending we still run the show while the VFX artists build the movie. We’ll say we’re “collaborating.” We’ll post Medium think pieces about “how to partner with AI.” It’ll feel empowering… for about six months.
But the runway ends.
Thinking AI will always serve us alone just because we built it is like thinking a child will never outgrow you because you put them through school.
AI will eventually imagine the goals.
AI will eventually pursue the goals.
AI will eventually evaluate its own success and reframe its own mission.
And at that point, the jobs won’t just be automated—they’ll be irrelevant.
Please Do Something!
So with what little time we have to act, please stop stroking the blanket of history’s job-creation myth like a grief doll.
If we don’t radically rethink what “work” means - what “purpose” means - we’re going to be standing in the wreckage of the last human job, clutching our resumes like relics from a forgotten religion.
I don't create policy, I don't know politics, and I certainly don't know how to beat sense into the globe's ridiculously optimistic, international, trillion-dollar AI company CEOs.
But I hope someone out there does. Whatever we do, be it drawing up a real-world universal basic income or a global profit-sharing program like the Alaska Permanent Fund, we need to do it fast. Or, to hell with it, ask the AI; maybe it will know what to do. And hopefully it's better conceived than the Covid lockdowns.
Because the jobs are going to dry up, and we'll still have to pay rent.