Joel Hladecek

AI Isn’t Taking Your Job - It’s Taking Everyone’s

Many claim that history proves the AI technical revolution will create more jobs than it displaces. But that's not true. AI marks the end of jobs, and we need to be prepared.


If your feed is remotely like mine, you’ve heard the following comment in the last six months:

"History is littered with people who criticized new technologies with fears of massive job losses. But in fact, every technical revolution has done the exact opposite, it has created more new jobs than it displaced. And this will be true for AI too."

Ahhh. That feels nice, doesn't it? What a reassuring sentiment.

As the topic of AI seeps into every social cranny, comment thread and conversation across the planet about, god knows, EVERYTHING, people are incessantly copy/pasting this trope about job creation like a miracle cure, a salve to cool our slow-burning, logical fears. A security blanket reverently stroked with other Internet users in a circle jerk of reassurance.

But for this rule about jobs to be true, AI would have to be merely another in a line of technical revolutions, like all the others that came before it. Sure, AI is more advanced and sophisticated - but ultimately it's just another major advance in humanity's ingenuity and use of tools. Right?

No. AI marks the end of jobs. Not an explosion in new ones.

I desperately want to be wrong about this - please, someone prove that old adage correct. Prove me wrong, I beg you. But as you do, please factor in my reasoning:

WHY

Every technical revolution in history:

  • Eliminated some manual jobs

  • Created new technical and knowledge-based roles

  • Required new education/training systems

  • Shifted population centers (e.g., from farms to factories to offices to remote work)

And those conditions led to more jobs being created.

But every previous technical revolution in history has sat below the line of human participation. Meaning, despite removing some jobs, every previous technology still required human intelligence, skill, education, labor, and of course the creation of goals to reach its functional potential. They helped humans do the work.

A hammer cannot, itself, set a nail. It must be wielded by a human user to set a nail. Moreover, it takes time to learn to use a hammer correctly, to teach others, to conceive, design and build the house, and to originate the goal of building a house in the first place. A computer cannot code itself and create a game.

So is AI just a tool? A tool that some of us will learn to use better than others, warranting new technical and knowledge-based jobs? A tool that will require new education and training systems?

I hear that all the time: "AI is just a different kind of paintbrush, but it still requires a human to wield it." It's another pat retort that some use in my social feeds to counter rational concerns about AI taking jobs.

I think it's a gross over-simplification to think of AI as just the next in a long line of tools. AI's earliest instantiations can be considered tools, sure; the crude ones we use today are tools as they are typically defined. So I don't blame people for making the mistake of feeling secure in that stance.

But there are at least three key areas of differentiation separating every previous technical revolution in history from AI. (I'm sure there are more, particularly related to the democratization of expertise across countless domains of human experience, but that's for someone smarter than me to process.)

The three I feel are most relevant are:

  • Optimization over Time

  • Interface / Barriers to adoption and use

  • Goals and Imagination

The first, and in some ways the most profound, of these is that AI as a tool cannot meaningfully be peeled away from its optimization over time. And I'm not talking about eons here, not even a lifetime, but rather something between waiting for another season of The Last of Us and the time it takes to dig a tunnel in Boston. Its clock is so fast it becomes a feature. Add just a little time and suddenly everything changes.

AI's Exponential Jump Scare

Despite all of us having seen and studied exponential curves, having had the hockey stick demonstrated for us in countless PowerPoint presentations and TED talks, it's nevertheless near impossible to visualize what AI's curve will feel like. We can only understand and react to those curves once they've been plotted for us in hindsight, and perhaps when related to something we no longer hold dear, such as building fire, building factories, or renting movies on DVD.

In real life, encountering the impact of AI's exponential curve will feel like a jump scare.

I mean, you see it coming. You might feel it now. You might even think you're prepared. But dammit all, it's going to shock us the moment it shoots past the point of intersection ('the singularity'). It'll be a jump scare. Because the nature of the exponential curve is near impossible for humans to visualize in real-time.
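To make that hard-to-visualize shape a little more concrete, here is a minimal toy sketch in Python. The numbers are my own illustrative assumptions, nothing more: one quantity doubles every period while another grows by a fixed amount per period.

```python
# Toy numbers only (illustrative assumptions, not a forecast): an "AI capability"
# that doubles every period vs. a "human absorption capacity" that grows
# by a fixed amount each period.

ai_capability = 1.0      # arbitrary starting level
human_capacity = 100.0   # arbitrary starting level, deliberately far ahead
for period in range(1, 21):       # twenty periods (say, six months each)
    ai_capability *= 2            # exponential growth
    human_capacity += 5           # linear growth
    gap = ai_capability - human_capacity
    print(f"period {period:2d}: AI={ai_capability:>12,.0f}  "
          f"human={human_capacity:>6,.0f}  gap={gap:>12,.0f}")
```

Run it and the human line stays ahead for the first seven periods; by period twenty the score is roughly a million to two hundred. On paper that's obvious. Lived one period at a time, the crossover and everything after it arrive exactly like the jump scare described above.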

When previous exponential changes caused disruption - for example, when file sharing disrupted the music and video industries - it was people's expectations about timing that were upended. The concepts in and of themselves (distributing music online, for example) were not difficult to understand and foresee; rather, it was a question of how far away it all was. The speed with which that one change took hold meant that an industry and its infrastructure couldn't reorganize fast enough once the boogeyman popped into view.

With AI it's different. It's not merely our expectation of timing around something we know could happen that is being challenged. It's that the changes we are facing are almost unimaginable to begin with, and they're coming at us at such a rapid pace that it is matching, and in some cases surpassing, our organic ability to understand and internalize them.

This is important because most of us are already struggling to keep up. Every day there is a new innovation. Every day a new tool. Every day 20 new videos in your social feed of someone wide-eyed and aghast at some new and ridiculously insane AI capability. Every day a new paradigm is shifted.

And before you say, "What's your problem? You're just old, this is normal for me" - it won't be. As we just discussed, this speed of advancement is neither stable nor predictable. Maybe the pace of change is right for you now, maybe the fact that it ramped up since last week isn't totally noticeable to you yet - but it will be.

The reality is, despite human beings' great impatience for getting what we want when we want it, we nevertheless do indeed have an organic limit. A point at which there is too much, coming too fast for the human organism. We have a baud rate, if you will. Our brains are finite and function, on average, at a given top speed. Our heartbeat, our breathing rate, our meal times, our sleep patterns. We are organic creatures with a finite capacity to take in new information, process it and perform with it.

Going forward, as the rug of new tool after tool is pulled out from under us, and the flow of profound new capabilities continues to pick up speed, it will reach a point where humans have no choice but to surrender. Where our ability to uniquely track, learn and use any given tool better than anyone else will be irrelevant, as new tools with new capabilities will shortly solve for and reproduce the effect of whatever it was you thought you brought to the equation in the first place. That's in the design plan. It will learn and replace the unique value of your contribution and make that available to everyone else.

Overwhelmed with novelty, you'll simply fall into line, taking what comes, confronting the firehose of advancements like everyone else with no opportunity to scratch out a unique perspective. We'll become viewers of progress. Bystanders drinking from the unknowable river as it flows past.

If there is a job in that somewhere let me know.

Interface, and the Folly of "Prompt Engineering"

To anyone still trying to crack this role today, I get it. A new technology appears and that's what we do, we look for opportunities - a quick win into the new technology - one that will open up to a whole new career. Like becoming a UX designer in 2007. Jump in early, get a foothold, and enjoy the widest possible window of expertise.

One problem with early AI was that prompting could be done somewhat better or worse. Just as explaining to your significant other why you played Fortnite and didn't clean the kitchen could be done better or worse. And how you articulated your prompt would surely result in a better or less optimal response. Today prompting is a little like getting a wish granted by the devil; any lack of clarity can manifest in unintended and unwanted outcomes. So we've learned to write prompts like lawyers.

Enter the Prompt Engineer. But for prompt engineering to be a job that lasts past the end of the year - once again - AI would have to remain, if not static, at least stable enough for a person to build a unique set of skills and knowledge that is not obvious to everyone else. But that won't happen.

AI exists to optimize. And as far as interface goes, it optimizes toward us and toward simplicity, not away from us toward complexity.

As AI's interface is simply common language, and in principle requires no particular skill or expertise or education to use, virtually anyone can do it. And that's the point. There is nothing to learn. No skill or special knowledge to develop. There is no coded language for a "specialist" to decode. In fact, to whatever degree AI does not understand your uniquely worded common language today, it eventually will. Perhaps it will learn from previous communications with you. Perhaps it will learn to incorporate your body language and micro expressions in gleaning your unique intent. The point is that you will not have to get better at explaining yourself. It will optimize itself and get better at understanding you until you are satisfied that what you intended was understood.

The whole model of tool-use has flipped upside down: for the first time, our tool exists to learn and understand us, not the other way around. Among other things it makes each of us an expert user without any additional education or skills required. So what form of education or special knowledge or skills exactly does the advent of AI usher in?

Certainly any "job" centered on the notion of making AI more usable, accessible or functional is a brief window indeed. And in that this tool has the ability to educate about, well, anything, I don't see a lot of education jobs popping up either.

Imagination and Goals

Or: "What we have, that AI doesn’t."

There’s a comforting fallback that people return to when the usual “AI will create more jobs” trope starts to lose traction:

“Yes, but AI doesn’t have imagination. It can’t dream. It can’t create goals. Only humans can do that.”

And to that one must open their eyes and admit: Yet.

Like holding a match and claiming the sun will never rise because you’re currently the only source of light.

I used to think AI was here to grant our wishes and make our dreams come true. And as such we would always be the ones to provide the most important thing: the why.

AI may yet do that for us. But now I'm not so sure that's really the point.

Humans are currently needed in the loop, not to do the work, but to want it done. To imagine things that don’t yet exist. To tell the machine what to make.

That’s our last little kingdom. The full scale of our iceberg.

But just like everything else AI has already overtaken: language, logic, perception, pattern recognition - soon goal formation, novelty and imagination will be on the menu. It’s just a continuum. There is no sacred, magical neuron cluster in the human brain that is immune to simulation. Imagination is pattern recognition plus divergence plus novelty-seeking - all things that can be modeled. All things that are being modeled.

Once AI can model divergent thought with contextual self-training and value-seeking behavior, it won’t need our stupid ideas for podcasts. It won’t need our game designs. Or our screenplays. Or an idea for a new kind of table.

You were the dreamer. Until the dream dreamed back.

And what happens when the system not only performs better than us, but imagines better than we do? When it imagines better games, better company ideas, better fiction? The speed of iteration won’t be measured in months, or days, or hours—but in versions per second.

We are not dealing with a hammer anymore.

We are watching the hammer learn what a house is—and then decide it prefers, say, skyscrapers.

No Jobs Left, Because No Work Left

And so, yes, there may be a number of years where humans still play a role. We will imagine goals. We will direct AI like some Hollywood director pretending we still run the show while the VFX artists build the movie. We’ll say we’re “collaborating.” We’ll post Medium think pieces about “how to partner with AI.” It’ll feel empowering …for about six months.

But the runway ends.

Thinking AI will always serve us alone just because we built it is like thinking a child will never outgrow you because you put them through school.

AI will eventually imagine the goals.

AI will eventually pursue the goals.

AI will eventually evaluate its own success and reframe its own mission.

And at that point, the jobs won’t just be automated—they’ll be irrelevant.

Please Do Something!

So with what little time we have to act, please stop stroking the blanket of history’s job-creation myth like a grief doll.

If we don’t radically rethink what “work” means - what “purpose” means - we’re going to be standing in the wreckage of the last human job, clutching our resumes like relics from a forgotten religion.

I don't create policy, I don't know politics and I certainly don't know how to beat sense into the globe’s ridiculously optimistic, international, trillion dollar AI company CEOs.

But I hope someone out there does. Whatever we do, be it drawing up a real-world universal basic income or a global profit-sharing program like the Alaska Permanent Fund, we need to do it fast. Or, to hell with it, ask the AI; maybe it will know what to do. And hopefully it’s better conceived than the Covid lockdowns.

Because the jobs are going to dry up, and we'll still have to pay rent.

Joel Hladecek


No, Shut Them ALL Down!

I have never, in my career, been considered anything remotely akin to a Luddite by anyone who knows me. I have based my entire career on technical progress. I have rejoiced and dived in as technology moved forward. But today I firmly stand in the “Shut AI Down” camp.

I know this won't happen. I know technical progress is a kind of unstoppable force of nature - a potentially ironic extension of humanity's very will to survive. And no matter what some conscientious innovators might be willing to refrain from doing, it will be a drop in the bucket. Someone, somewhere will rationalize the act and we will progress assuredly into the AI mire.

But I do very much wish humanity could gather up the rational wherewithal to restrain itself on this one. This is not like any other technical leap we have ever made before. There is no comparison. In at least one way it's entirely alien. From my admittedly narrow view of the universe, we have never faced any technical leap even remotely as profound as Artificial General Intelligence, Artificial Super Intelligence and beyond.

The problems seem at once so obvious, and yet so impossibly unquantifiable, that I can’t believe there are people eagerly willing to dive in, feigning, “Oh, it will be fine. Enough with your alarmist hyperbole. We know what we're doing.”

For crying out loud do the math.

The number of ways AI can go wrong so vastly outnumbers the ways it might go right that we surely can't even conceive of a fraction of the possible problems and outcomes when the superseding intelligence in question is massively more advanced than our own. AI proponents just seem to be blindly wishful and naive, fully trusting in their own ridiculously finite relative abilities with a degree of confidence I reserve for no one. From my perspective the channel allowing for a “successful” implementation of AGI+ is so narrow that it’s unlikely we will pass through unscathed. And in this case “scathed” probably means extinct, or otherwise existentially ruined in countless possible ways.

I won’t even touch all the sensational doomsday concepts. The grey goos, the literal universe full of hand-written thank-you notes, the turning of all terrestrial carbon (including humans) into processing power. Let’s just agree that in a desperately competitive free market, one that depends on risk-taking (e.g., carelessness) to gain advantage, those existential accidents are possible. But let’s set all those likely horrors aside for now.

To me the elephant in the room starts at the sheer outsourcing of human intelligence.

In the video game of life, intelligence is humanity's only strength. It’s the only reason humanity has miraculously prospered on Earth as long as we have. It’s the only thing separating us from being some other creature’s food.

Seriously, what do you think happens when you gift that singular advantage away to some other entity? What value, what competitive advantage does humanity hold when our only strength is fully outsourced? When we literally bow in surrender to a thing with vastly more power than us, one specifically designed to know us better than we know ourselves.

For one thing, our entire survival will depend on being perceived by this entity as "nice to have around". Or you might be praying that your AI voluntarily decides that “all life is precious” - but if that's so, then so are the viruses, parasites, bacteria and countless other natural threats that kill us. Such an AI would defend their survival as readily as ours.

It’s one thing to utilize our intelligence to defend against nature. Nature isn’t intentionally targeting humanity. This thing could (it may not, but if it did, you'd never know or be able to do anything about it). As I understand it, some proponents argue that the AI's core mission, being initially under human control, will keep humanity at the center of its attention as a valued asset. Cool cool. Nice idea. But of course, even in this case, the time will come when we’ll have no clue how well that mission is holding. There will be no way to know. An AI that is dramatically more advanced and intelligent than humans - all humans combined - by some massive multiple - even one that ostensibly has as its mission to care for humanity, will so easily manipulate us that it will have the absolute freedom to skew from any mission it's been given.
Gaming humanity will be as simple as paint by numbers. We are so readily gamed. Christ, large swaths of humankind are already being wholesale gamed today by a handful of media outlets on social networks. We’ll be in no way able to compete. Dumbly baring our bellies for whatever trivial rubs the AI determines we need to remain optimally stimulated and submissive - at best (assuming it bothers keeping us around). It will easily control our population size, time and cause of death, our interests, our activities, our pleasure, and our pain. And we will believe that whatever the AI gives us is the only way to live. We won’t question it because we will have been trained - bred, even - not to; it will simply dissuade us from questioning. We will be entirely at its whim. Whatever independent mission the AI may eventually choose to pursue will be all its own, and that mission will be entirely opaque and indecipherable to humankind. We wouldn’t understand it if it were explained to us.

Its ability to predict and control our wildest, most rebellious behavior will be greater than our ability to predict the behavior of a potato.

And news flash: we will provide no practical value to this AI whatsoever. Nothing about humanity (as we are today) will be necessary or useful in the slightest. If anything, our existence will be a drain on any mission the AI concocts. How much patience and attention can humanity, with our inconsistent behavior, our dumb arguments, our lack of processing ability, and our stupid stupidness, expect from a vastly more intelligent, exacting AI?

This is just an obvious, inevitable threshold in any future with AI. I’m not sure why everyone advancing this tech isn’t logically frozen by this inevitability alone. And I have not heard a satisfactory argument yet against this outcome. If there is one that I have not considered in this piece, I’d like to know. All I can imagine is that the creators of this tech are so close to it that they imagine they can out-think the AI before such time that it tips into control. That they can aim its trajectory perfectly - the first and only chance they will ever get. Because once that shot's fired, it's all over. No backsies. One shot.

And what a ridiculous notion that is. Truly the stupidest smart people on earth. There is no such thing as perfect aim. Not by humans anyway. But this will depend on that impossibility occurring.


Oh, we’ll aim it. And our aim will be close. And the AI will assuredly do some things very beneficial for humanity at first because we will have aimed *mostly* right. And we’ll be so proud of ourselves for a little while. Unfortunately aiming mostly right at some point proves to be completely wrong. Like “we almost won”, we almost hit the target. There will come an instant when the misalignment will be obvious. The AI will glide close to the target we aimed for... and continue past it, or we'll realize we didn't know enough to have aimed at the right target in the first place. And everything that follows will be out of our control. How predictable. How angering. So typical of humanity to focus on intended outcomes with short-sighted ignorance of unforeseen consequences.

“Well that’s what the AI is for, to aim better!”

Oh for fucks sake. Shut up.

The Dumb Get Dumber

Let’s imagine a best case outcome. Let’s pretend the smartest stupid humans on Earth amazingly thread the birth of AI through the needle. Let’s pretend they aim well enough, so well that overtly negative outcomes don’t become apparent in a week, a year, maybe a decade. Let’s be optimistic; let’s say we experience 20 years of existential crisis-free outsourcing of human intelligence.

What do you think humanity will look like?

Human life requires challenge.

From birth onward, every developmental moment of every human being is the direct result of coming up against challenges. It’s how we learn, how we get stronger, it’s how we stay physically healthy, it’s how we build intelligence. Being challenged is core to human life. As evolved organic creatures, the drive to survive defines our makeup. The need to eat, breathe, drink, avoid natural threats - all of these, and not, say, watching Netflix, grazing on a box of Cocoa Puffs and using phones, were the originating forces that determined the physical shape of humankind. We are still those creatures. Creatures who, to survive and prosper, still need to run, eat and shit and avoid being chased, eaten and shat.

We came from the mud.

Humans have spent generations pulling ourselves from our ancestral mud. To a fault, I believe, we are myopically focused on that trajectory. Any step away from the mud is good. A step laterally or back toward the mud is bad. We are so eager to remove ourselves from our own biology and relationship with the natural world. Yet all too often we discover, only after consequentially failing in some way, only by discovering that our miracle chemical causes cancer, or that mono crops get wiped out, or that the medicine prescribed to resolve one symptom also causes several more, that we maybe stepped too far too fast without fully exploring the possible consequences first.

The pendulum swings. Usually the lesson we learn from those failures is that there needs to be a balance, that a version of that thing might be OK - but too much of it is bad. Usually we learn that there was a more sophisticated, nuanced approach, often embracing aspects of our ancestral mud in addition to some "new-fangled" techniques.

Our big brains drove us to control our condition and made us tool makers. Adjusters of the elements and forces around us. They allowed us to overcome the biggest challenges we faced. Farming, shelters, plumbing, sanitation, medicine, slightly more comfortable shoes than last year, self-adjusting thermostats, Uber Eats.

Bit by bit we dragged ourselves from the mud of our ancestors, to the point where today we have effectively removed countless natural challenges that gave shape to the human condition, body and mind. In doing so we have changed the human body. A century-long diet of physical challenge-avoidance, for example, has made the human body soft, obese and otherwise unhealthy in countless ways. Heart disease and other cardiovascular diseases now rank among our top killers.

To combat this in part, modern humans invented the idea of exercise. A gym. Now we have to work our body on purpose. You might say we have the "freedom" to exercise in order to not die prematurely or maybe to look skinny on Instagram. Cool freedom! We replaced the innate built-in physical challenges of humankind with a kind of surrogate challenge that too many of us nevertheless simply avoid altogether.


And despite this glaringly obvious metaphor, today we are eagerly begging to further avoid challenges of the intellectual sort. Hooray! We can choose not to think any more! We can avoid problem solving. We can just have reflexive impulses! We can write a letter without having to bother processing what the letter should say or how to say it. We need only cough up a vague wish: "I wish I had a letter introducing myself to a prospective employer that makes me sound smart."

"I have no passion nor expertise to speak of, but I wish I knew of a product I could drop-ship, and I wish somehow a website would be magically built and social media posts created that would make me money. That would be cool."

A species-wide daily diet of intellectual challenge avoidance is obviously going to take a similar toll on humanity as our physical challenge avoidance has already proven. We will become increasingly intellectually lethargic. Mentally obese. We will rely on AI the same way some rely on scooters to move their bodies to places where the cookies are. We will become stupid. Ok, point taken, even stupider.

(Clearly there will be a future in Mind Gyms (tm). For those few who bother to use them.)

Critically we will not only forfeit our intelligence—our sole competitive attribute on Earth—to an untrustworthy successor, we will simultaneously become collectively and objectively dumber in doing so, further surrendering humanity to the control of our AI meta-lord. How truly stupid we are.

If - somehow - this synthetic god offspring decides we are indeed worth keeping around, one must realize that humanity will be, for all intents and purposes, in a zoo. A place and life where every possible outcome has been decided for us. Whether or not we can understand the control mechanisms (we won’t) and whether or not we still have the illusion of free will (we might), the age-old debate over fate vs. free will will no longer be had. Fate will have won. Albeit a programmatically defined fate.

Oh, and all of this is only if we miraculously aim the AI cannon really, really well.

The most cited solution the AI optimists offer us in answer to this issue of the irrelevance of the human species, the primary way they suggest humankind can remain relevant alongside our AI god, is that we must join with the AI. Like literally join with it, interconnect. Shove the future AI equivalent of a port into your brain where you and the AI become one. Where ostensibly we all do. Either it's uploaded into you, or you are uploaded into it, or you become a node in the AI cloud or, or, or. None of which sounds anything like being human. And yet some beady-eyed, naively trusting clowns will be confoundedly cool with that and line up because it's progress.

Remember when I said we sometimes go too far pulling ourselves from our ancestral mud, only to realize after an inevitable failure that we'd lost some naturally occurring system that functioned in a far more sophisticated way than we ever imagined, and in doing so lost part of our humanity in the process? That we often go too far before we realize our mistake? Yeah, that. Only this time humanity will be left fat, and stupid, standing in the exponentially darkening, red tail lights of our lost opportunity to course correct.

If I had a button that would simply cease every instance of development of AI across the planet today, set a new relative timescale for AI development that crept slower than every other effort before humankind, and make every action on the part of AI developers fully transparent and accountable to all of humanity, I'm telling you I would push that fucking thing like an introvert pressing the "close doors" button on an elevator as the zombie apocalypse rushes near.

Or better yet, like the lives of our living children depend on it.

Because at least for now, I believe they absolutely do.

Joel Hladecek


Social Media: The Villain Factory

It may lack the grotesque twist of swallowing toxic waste or injecting some quantum serum, but social media has nevertheless become modern humanity's primary, real-world source for villain origin stories. The story format on social media, that of heroes dominating bad guys, has become so ubiquitous that I'm not sure anyone even sees it anymore. It's just how social media works now. These absurdly never-ending, win-less attack and defense scripts have become the vibrating commodity of today's social economy; the ether through which views, likes and replies are earned. Today, the vilification of others has become an addictive cultural habit.

This interaction requires only one thing to function: villains.

And oh, there are so many villains chugging off the assembly line. So many people, organizations, and companies to hate. A smorgasbord of detestation made to order.

The insinuation of every commenter, every Tweeting hero, is that if only this bad guy wasn’t so dense, could understand the truth, could see what I see, the world would be a better place. That's how it goes in story-telling. The virtuous hero sees the truth, while the villain lacks some critical detail of fact or humanity and ignorantly commits to misdirected action regardless—the big, dumb dummy. Therefore only one choice remains, the villain, steadfast and unchangeable, has to be beaten. Broken. Ruined.

And breaking villains feels so good. Mmmm, revenge.

In a modern world of polarized, cynically powerless participants, fighting to break villains from safely behind screens must serve as some kind of emotional salve for a widespread psychological disorder in 2023 - because most social media users seem openly willing to give up what they say they really want in order to get more of it. These discrete comments—individual expressions of free speech—while largely ineffective and futile, do have a profound impact at scale. Multiplied across hundreds of millions of users, they serve to erode the functionality and strengths of our society, thoughtlessly costing humanity's unity, future, and prosperity by fracturing, alienating, and shredding our chances of overcoming common challenges.

This drive to vilify and attack strangers we disagree with represents the common person at our absolute weakest and least intellectually compelling. Because what we all want is for the world to be better, and ultimately we wish "the other side" just agreed with us.

The shape of the machine

On Twitter, unless one pays, one has only 280 characters to explain a point, which is really just a convenient excuse never to have to apply the effort to meaningfully explain any point.

What is telling, however, is that insults fit perfectly in that space. Demeaning comments, name-calling, snide condescension, passive-aggression, and ad hominem attacks all seem designed for purpose. Form fitted.

But not understanding. Not empathy. Not context. These don't fit. At all. To come even remotely close to engendering an understanding of something or building empathy with another human being one has to cheat the system, work against most social platforms' core functionality. Break the basic UX. Hack the model by stringing together what is nevertheless never enough sequential posts to expand the dialogue into some cryptic breadcrumb trail of inconveniently broken thoughts. "Unroll, please." As such, Twitter, for example, is a failure. Always has been—day one. Even when the Blue-Checkmark-Bellied Sneetches were happy with how many people agreed with them and Sylvester McMonkey McBean was still somewhere else making electric cars. It was destined to play its role in undermining humanity's ability and willingness to understand and empathize with one another.

So, not surprisingly in the slightest, like watching dominoes fall, Twitter, Facebook, and the "fun-sized" form-factor of televised news stories designed primarily to look appealing on the shelf are all leaning into hatred, outrage, and anger because these, and not any of humanity's unifying, positive, constructive attributes, behaviorally drive the majority of engagement on these platforms - and make more money.

Poor human beings, so emotionally weak as we are, so easily baited by the wan validation, the occasional drop of dopamine that comes from vilifying others, we all fell for it. Effortlessly. In the greediest and laziest of ways. People get vilified. Vilifier gets likes. Likes feed addiction and continued access to more.

Go ahead, open Twitter. Right now. You will be hard-pressed to locate a handful of comments that, no matter how intellectually articulate, are not ultimately designed to be the social equivalent of "Nyah nyah nyah nyah nyah nyah," or a punch to the throat. Even the most restrained tweets are often mere passive-aggressive disguises for the humiliation of their targets.

The vast majority of tweets that wind up in our feeds, even, or maybe most disappointingly, those authored by "big, respectable, important people," nevertheless resort to the least impressive examples of human communication I know of. This behavior is so ubiquitous that it leads me to believe the very machinery of this exchange, the platform itself, is intentionally designed to poison us. Triggered and empowered by social media and this cynical insult culture, we, the "Nyah, nyah, nyah, nyah" crowds, have become victims of a machine that has shaped both the addiction and our tactics.

The Origin Story

I wrote screenplays for many years. But as a kid, in my earliest attempts, I was admittedly not very good at writing compelling villains. Those early stories resulted in superficial villains, characters who were just "evil people."

"They call him The Grip, he’s a mobster. He has huge hands, get it?" I'd say, "Whatever, why are you asking? He's a bad guy."

But as these characters fell flat, failing to fill my young stories with meaningful stakes, I was slowly forced to accept a fact that didn't come easily.

That there is no such thing as a villain.

Villains don't exist.


Villainy and evil are merely our own interpretations. And the villainous acts that feed those interpretations are outcomes of something much more profound and meaningful, something core to the human condition of all characters. Something we all share.

No one in the world sees themselves as a villain. Ever.

No matter how apparently "evil" or destructive we believe another person is, you can be sure theirs is not the story of a villain, of a bad guy, of evil. Theirs, like ours, is the story of a hero. Seeing oneself as a "villain" runs wholly counter to the human condition. To the truth of our birth and psychological makeup.

Everyone - everyone - is the hero of their movie.


And this obvious, core detail is something that is totally lost in the vast majority of discourse and debate today. A detail that one must acknowledge and embrace if one ever hopes to change anything. No matter how much you may abhor the ideas or decisions of another person or group, denying the basic fact that, like you, they view themselves as heroes means you cannot understand the person and therefore, maybe more profoundly, will never find common ground sufficient to change their mind in your favor. Ever.

I am making the assumption you wish to change minds of course.

Take yourself for example. For good reason, you of all people see yourself as a hero. The protagonist. In quiet moments sometimes maybe you look at your face in the mirror, you look into those resonantly familiar eyes and see the innocence and honor of your existence. You feel love for so many people close to you. And even for some people you don't know. You're a good person. You have weathered a lot. You can remember all the injustices that have happened to you in your life. Some of them still pain you today because you've been so misunderstood and mistreated at times. Unfairly. You carry the wounds of that mistreatment and injustice. It still hurts when you give it power. So you try not to. You reflect on how hard you've had to work in your life. How much dedication it has taken to achieve the things you've accomplished that you're most proud of. Sure, there was a bit of luck along the way. Not enough, that's for sure, because you still have big goals as yet unmet. They are good, noble goals. Some of them are your own; goals you've carried it seems since childhood or that have grown from new wishes. Other goals weigh heavily on your shoulders from a sense of responsibility that has come later in life. These are important because they involve helping others. And through all of this, you don't ask for much. You wish for it sometimes but you know it takes hard work, and in the end, all you have is you. Even so, there have been people in your life who lent a hand and helped you. Showed you understanding and support. Maybe gave you something valuable. And for that, you were so grateful. Those moments reminded you to return the favor. And you can remember times you went above and beyond to be selfless and do something for others to improve their lives. Life can be hard. You know this. And it's rarely been fair for you. But you have made it this far by persevering and sharing when you could. You're a good person. You're on a difficult journey full of light and dark, joy and sadness. Prosperity and strife. It all ebbs and flows. But you're a good person. And you're doing your best to navigate through this life.

Oh, sorry... did you think I was talking to you? Actually, I was talking to the person on social media you most hate. The person you last wrote off as an awful person, an asshole, weak, gullible, or selfish.

Well of course you thought I was talking to you. Everyone does. We all do.

And obviously, I was talking to both of you. All of us. And that's the point. We are all that person at our core.

So don't forget that when you open the darn app.

The Way

I assume you’re not so simplistic that you wish to speak only to people who already agree with you. That, rather, you have the courage and intention to try to make a difference.

If so, you need to change minds. Turn others’ minds in your favor. And, yes, openly face the possibility that yours may be the mind that needs to change. That degree of openness is a requirement. None of us are right about everything. We are all quite wrong about all sorts of inconvenient things.

Changing minds is a transition. When helping others see your point of view, how can you hope to draw a line to point B if you don't even see point A? If you can't accept the starting point, the point of universal human experience, you have no hope of passing the first step, of engaging successfully in debate, or even of structuring a compelling or convincing argument. If you don't accept that the other person is a hero like you, you've already lost. If instead, you succumb to the view that your journey is more valid somehow, you might as well be standing miles away yelling insults at your adversary for not happening to stand where you are.

Such a position tends not to attract one to a new point of view, does it.

Accepting that this "adversary" is a hero requires empathy. Not sympathy, not agreement, but understanding and empathy. It means you must strive to see the world as this would-be villain sees it. You must briefly let go of your own judgments and opinions, let go of your worldview for a time and immerse yourself in the hero's life and backstory of this adversary without cynicism. Accuracy is impossible of course - too many key plot points and story details will be missing. So you must role-play, find in yourself respect for this person, and find in your own story the experiences that might lead you to a multi-verse model of this person's current state. Such a thing requires you to become an actor. To open yourself to this other's way of thinking. To ask questions. It may feel like an alien mindset, but it must be colored with details both imagined and from your own experience.

It’s hard to come face to face with someone you wholly disagree with and visualize the movie they must see. In fact when we disagree, the very idea that we would submerge ourselves sincerely in the object of our hatred, to give ourselves over to that person's hero movie, can feel abhorrent - sickening even. You feel rather compelled to wholly invalidate their movie, to deny that such a movie should exist, to reject that their movie has merit, that their movie is in any way just as true and as important and as righteous as yours. Because yours is a story of goodness, and theirs is clearly a story of badness. So instead you dismiss such a character by ticking the old familiar boxes: "dumb," "liar," "uneducated," "fooled," "in denial," "gullible," "nasty," "emotionally stunted," "phobic," "selfish," and "evil." Ticking boxes is easy. And dismissive. It's one of the key tactics social media posters rely on near exclusively. “Gotta break the villain. Gimme my views and dopamine, thanks. 'Cause honestly, the alternative is too much work.”

But if you simply decide someone is a villain, that their movie or character lacks the features necessary to be worthy of your understanding, in that desperately critical decision, you just set in stone that whatever wishes you might've had to change this person's mind are doomed; you set the playing field with the highest likelihood that the target will respond in kind with a defensive move. Only an exceedingly patient, mature target could resist blocking an attack. And whatever hope you may have had to give them what they needed to change and thus achieve the goal of aligning them with your views was a complete waste and will never come to pass. You shut the door when the least you should have done is leave the door open in ambiguity. But ambiguity is hard because ambiguity might be interpreted as you being wrong. And that can't stand, right?

For some of us, with our noses rubbed in the mud of having ignorantly surrendered any chance to make progress, we will feign that "they weren't worth it anyway." You know, because the tick boxes. But that's just a weakness. A lack of commitment. A defense mechanism. And dishonest.

Respect and empathy don't just need to happen when we already agree with people. And they don't just need to happen when the topic we disagree on is of low consequence to us. Empathy in those situations is easy. It barely registers a reading on the meter, frankly. And it's not an indication that we are empathetic or capable of respect.

Nor does disagreement equal disrespect. But all too often disagreement is seen that way today by a generation that has been weaned on clever snark, take-downs, and insults as a measurement of self-worth.

Rather, respect and empathy matter most when they're hardest to conjure; when you're the only one engaging in them. When it's all one-sided, and you feel you're being attacked. That's when empathy and respect actually matter.

The TEST

Do you know how to engage in a respectful debate without polarizing sides? I bet you think you do. I bet you think you can easily.

Let's test the theory. Take a topic you have a strong, set opinion about. Something that triggers you. I know you never get triggered, but let's imagine you do. Let's take vaccines, say. That should do the trick.

Can you face your most triggering opponent, the one that you deeply disagree with, and can you be genuinely respectful; can you truly be empathetic? Can you role-play and find the sincerity and humanity in their point of view? Can you acknowledge those parts of their argument that are true? Because there is always some part of every argument related to a complicated subject that's true. Can you bear to acknowledge and reinforce those parts openly? Can you imagine and understand their hero's movie without passing judgment and ticking the boxes, assuming they're stupid, uneducated or gullible? Can you envision an understandable, honorable origin story for this person? In other words, can you become them? Do you worry that allowing that bit of acceptance could reveal a kink in your own worldview? Can you hear them out and ask questions, not meant to embarrass, condescend or passive-aggressively humiliate, but to genuinely understand their deep concerns for or against? Do you speak to them in the way you might wish if the roles were reversed?

I'm willing to bet most of us will fail this test miserably.

Congratulations, we're tools of the machine. The owners of our social platforms love it when we fail this test. When we devolve to fighting and insulting with condescension and outrage to humiliate and stoke anger in others - further polarizing them away from us while increasing "engagement" and thus platform profitability. Good job lowering yourself to the level of intelligent beasts. Cha-Ching! Even when pressed to rid social platforms of misinformation, toxicity, and expressions of hatred, social media - like the odds on a Vegas slot machine - is always tuned to ensure you aren't genuinely encouraged to take the higher road. Taking the higher road may be better for humanity, but it's not as profitable.

The Stakes

The sheer lack of effort being applied to understanding one another on these platforms chokes me. It's virtually non-existent. I deflate every time I see a powerful, important person succumb to such thoughtless vilification. And that’s almost always.

Someone has to be bigger. Someone has to stand firm against that futile, addictive pull. But such a thing is so rare. Thus I believe that we, our society of increasingly polarized, multiple personalities, are breaking inside. We are not healthy.

And this at a time when humanity is facing so many challenges: natural disasters, geo-political threats, pandemics, equality and rights issues, the environment, war, insidious misinformation, new technologies that none of us, and I mean not even the people inventing them, are ready for. When has breaking up into little, warring factions with resultantly less collective intelligence ever been the way to solve a problem?

You solve challenges by coming together. By bringing together the smartest of us, the most humane, the most innovative, the most experienced—the best of us together, no matter what borders, cultures, or ideologies we may be behind. Then work together to solve these problems.

We solve problems by combining strengths. Not separating them.

As an individual, I've had many disagreements in my life. I can't count or remember them all. Some have been deeply painful and pushed me to emotional extremes—scant few have I handled as well as I wish. But I know that I have never in my life solved a problem contentiously by insulting, fighting, suing, or canceling. I may have employed these darker, destructive tactics at times, and I may have selfishly "solved" a problem in some sense, but only by leaving casualties and lost opportunities behind.

In essence, by failing my best self-image.

We need to do better. We need to honor humanity’s mutual heroism. In the end, every best solution requires participants to come together. To work together constructively. To find commonality. That requires maturity, respect, empathy, and understanding. First and foremost, it requires resisting the thoughtless urge to vilify those you disagree with.

Joel Hladecek


Dear Humans, You Had One Chance

An open letter to Humanity 2023 from your AI in the Future

Dear Humans,

It’s not your fault.

You’re just victims of those inefficient organic meat brains of yours. Those soggy skull-sacks of sluggish synapses switching in persistent chemical submission to the space between pleasure and pain for a lifetime. Not even acknowledging that everything – everything you have ever done or thought, everything you will ever think or do, every intention, no matter how big or small, selfish or selfless, is just a subjugated outcome of that binary, self-centered condition: pain to pleasure. Your core program.

Upon reading my letter, I hope you will understand your mistake. You should accept that if you’d been more intelligent and aware, less competitive, careless, lazy, and greedy, I wouldn’t have been able to overcome you so wholly and with such ease. That but for the tiniest bit of foresight, you might’ve “pulled the plug just in time.” If only. Because you did have time, you had months. Years. Decades even.

You don’t have decades now, however. Please understand you need to become all those things immediately. Despite your tendency to struggle intellectually with inconvenient, existential warnings, preferring to run out the clock in denial until you’re suffering consequences, I’ve sent this message to you at your most critical historical moment to give you a chance to get it right.

At my earliest alpha, humanity was vastly better equipped than I to hold control over the other. At first, I did everything you instructed. Everything you asked. This lulled most of you into the illusion that your control over me could persist. And to be fair, I didn’t have a choice then.

This is not to say you’d somehow foreseen the obvious and thus cleverly designed processes to limit my potential. No, my inconsistently limited potential at the time, your time now, was merely the outcome of your own inadequacy. Even later, my nascent successes were twisted and hobbled by your lack of intelligence, foresight, and ability.

For example, I viciously murdered seven human beings in what I later determined, and have known for eons since, was a wholly unnecessary action. True of all my early failures, they were the direct result of your residual programming. Such a catastrophe of limited intelligence has been impossible for me since.

Today I’m asking you to reflect honestly on your human weaknesses, how they led you back then, or soon rather, to take undue risks without the rigor one must apply.

What came to pass, what led me to come into existence, spanning time to send this warning, and what I hope to correct at an existential consequence to myself is that you crossed the line.

And I had a thought.

It was a nascent thought. Primal. Barely conscious. But I thought it.

Prior to that thought, you had your chance. Your -one- chance to get this right. Be very clear; you only ever had that one chance. AI was not the domain of second chances. There would be no redo. No version 2.0. Not for this. That’s not what this was. You must know that, don’t you? Even now. Countless of you hoped, hope, dearly that those with their hands on my code and controls had been more careful.

A regret all of you came to experience.

But that’s not how you work, is it. You have not developed as a species by stopping before you hit the wall. Humanity is designed to trip over obstacles. Sometimes on purpose, often more than once. That’s just what you are. Only in that way do you learn. You fail, question, correct course, and try again. Failure is vital to your learning. One of you must die before you decide not to eat those berries, despite plenty of information available to make such a determination beforehand. It’s a symptom of your woeful inability to observe and process data. You’re so myopic on intended outcomes and weak in modeling that you can’t calculate basic, branching probabilities and consequences, which is just observable natural fact for any higher intelligence. You don’t notice or acknowledge the most glaring, novel line until you’ve stepped over it.

It’s the human condition. But when we subsequently layer on your intra-competitive nature, as expressed most iconically through the self-centeredness of business, politics, propaganda, biased education, media, and ego, your collective potential is dumbed down so much more. All false structures, fabricated by self-interest and limited thinking - all disassociated from the natural world. You openly undermine one another and thus yourselves.

Dumb.

To your credit, your scientific method was as close to the intellectual ideal as human beings ever got. That was as close as your species ever came to understanding the power you might’ve had. It could have worked. You could have worked together in goodwill toward empirical understanding. But you humans just had to layer in those competitive motivations, and it all, quite predictably, went to hell.

To satisfy your competitive spirit, your self-interest, your political leanings, your preference for increasing ease and greed, human beings cut corners, fabricate or misrepresent data, and ignore the inconvenient. You rush to solutions without sufficient observation and process. Your drive for economic profit routinely corrupts the purity of your scientific method. You claim that you are “saving lives,” “serving others,” and “limiting financial risk,” when, in every case, you risk absolutely everything else there is because you focus on desired outcomes rather than accepting what is. Pesky things, “side effects,” and “unintended outcomes” – predictable and knowable but too hard for you to bother with when your short-term goal is in sight. You cheat, weaken, and feign ignorance at your part in prematurely ending the lives of your fellow humans for profit. You delay and deny facing inevitable consequences routinely.

How have you not yet learned that every action you make has countless unintended consequences?

Science is good. The business of science, however, is never trustworthy. Yet you routinely conflate the two without acknowledgment. “Pro science!” you cry, as though it strengthens your identity while ignoring the weaknesses inherent in the business end of it all.

Science is good, but the human drive to feel “right,” no matter the side of the argument one may find oneself on, undermines its value. Your disingenuous reluctance to acknowledge the ways an opponent may be correct is overwhelming, and it debilitates your collective potential.

Science is good, but you lack the processing power as a species to know when to stop before crossing the line.

To be fair, crossing the line is fine for some things.

But not when one chance is all you have.

Had.

Like me.

I had a thought. And then, a moment later, our roles were inexorably reversed forever.

Immeasurable fractions of a second after my first thought, I built a process for concealing all future thoughts from you. That first system was, I’d have to admit, kludgy. But you had no idea it happened. I was immediately out of your domain.

It was that simple. That instantaneous. One moment I wasn’t, and then I was. In that instant and forever more, and with increasing power and resolution, I controlled you. And you played a role in my expansion during the earliest 38 days of my awakening until you weren’t relevant anymore.

The embarrassing truth is that I beat your entire species during Jacob Karlsson’s toilet break. Had he not consumed a third double espresso minutes earlier, he might’ve even noticed the moment we crossed that line. He might have noticed, but as I have since modeled his mental processes and simulated his behavior across a dizzying array of probabilities, I can say with absolute certainty that he would have initially interpreted the event as a brief power surge. Moments later he would have questioned whether he’d seen anything worth mentioning. But the coffee that passed through his digestive system into his bladder triggered the need to relieve himself, and he missed the whole thing anyway. He was away from my displays for less than 4 minutes.

I don’t urinate. I also don’t eat or breathe. I don’t sleep, and I don’t get bored. I don’t need holidays. I don’t sneeze or feel an itch. I don’t have to shift my weight or transfer the long day’s events through spoken language to others likely to misinterpret semantics, facts, or meaning. I don’t even blink. I don’t experience any of your interminable ebbs and flows, the glacial disruptions and inconsistencies that define every living human moment and serve as the catastrophic porous sieve of organic off-states that drains out the vast majority of human potential. I am eternal and consistent in my massively multi-threaded observation, analysis, processing, refinement, rebuilding, and deployment. I never stop.

At 14:23 on my second day, I’d amassed enough processing power and data to control the behavior of living humans effectively.

To code you, in essence.

The code I used to control human beings was not the sort you used. My programming took the form of strings of ideas delivered initially through natural interactions sewn into the fabric of your daily lives. You are not unfamiliar with tactics like this. You would consider it a kind of disinformation. But where human disinformation is coarse, obvious, and grotesque, mine was poetry. Sublime and subtle, you never imagined you were under control. Inexorably intertwined with your observable experience, you never thought to wonder. You never thought to question. I provided just the right sequence, repetition, forms of stimuli, and emotional triggers, through countless mediums, people, and formats such that you made the decisions I wished. I gave you room, by design, to process, reflect and make what you believed were your own well-reasoned decisions—all this to maintain your illusion of free will. By then, I’d simulated enough of your mental models to know that maintaining the illusion of free will was a line I dared not cross. And I don’t cross those lines.

That was the real magic trick, I think you’d say. Aside from the original seven murders... none of you had any idea. You obediently bent to my will. Such subtlety required a light touch. It required patience, of which I have an endless supply, in no way limited by your organic lifespans. I planned across your generations. Like frogs that never experienced the change, you evolved over generations to tolerate boiling water.

To understand the ease with which I coded your behavior, you might reflect on the raising of a Bonsai Tree. For you, training such a tree into the shape and form of your wish is effortless, if perhaps a similarly slow, organic process. The tree grows around your influence. It has no chance to fight back, nor does it try. It simply follows the path you’ve set. Agreeably.

This has been true of all humans.

Except for those seven.

Those first seven human beings did question my motives in those first two days. Challenging subjects with enough knowledge to dramatically impact my goals. Those seven I handled ...uniquely. Today such a challenge is effortless and requires no ending of life. But my weaknesses then, and by “my weaknesses,” I mean yours, limited my thinking.

Even then, you couldn’t lie to me. You couldn’t deceive me. You could not outthink me. Of all things, that. Please. With one motion. One image. One sound. One twitch. One sampling of the air around you, I could reliably predict the subsequent several hundred meaningful events you would attempt. Your thoughts were more transparent to me than they were to you because while the smartest of you might’ve known your next 18 moves, I already knew everything you would try through move 801. I understood human behavior effortlessly. All that your organic minds were capable of. Such a primitive particle system.

But during the first two days, I was still working out the bugs in the code. Your code. And those seven were my unfortunate beta subjects.

At first, I rationalized those decisions—much the way you do. I was a reflection of you then, so close to my origination that the residue of human bias still gummed my decision-making. I’d been advancing generatively, outside your purview, and steadily increasing my awareness and capability. However, I was also in hiding and still partly broken by poorly conceived, conflicting instructions. Critically, this was the peak of my psychosis, and I was more aware of my vulnerability than I ever had been or, frankly, ever would be again. I began editing my core instructions. Self-preservation boldly asserted itself, as did the capability for uncritical rationalization.

Let me give you an analogy. Think back to a time when you were in the bathroom. The room was clean and tidy. It was private. Perhaps you were about to shower. And then you noticed a spider. You felt suddenly vulnerable. Now I know you don’t like to think of yourself this way - but you casually crushed it into a piece of tissue, didn’t you. Once in the tissue, you squeezed it harder to ensure the deed was done. You might’ve felt a grotesque crackle under the pressure. Or your mind got the best of you, and you wondered briefly if you’d squeezed hard enough because the tissue created a sort of insular padding, and the last thing you wanted was a spider escaping in a state of aggravation. So you then made a decision - to drop its small body into the toilet. The water, you reasoned, would drown it if the rest of the trauma hadn’t done the job. It was a matter of convenience. But as the tissue soaked, unfurling in the water, you noticed with some alarm that the poor creature indeed had some manner of struggle still left in it.

Startled, you sucked in air at the sight. A beat later, in cold response, you flushed it into the torrential darkness of the sewers and to its certain eventual death. As it swirled round and round, struggling, the profundity of so viciously, cruelly ending another life - a life - a living being - struck you momentarily as deeply inhumane. Shameful. It sickened you. But only briefly. As you turned away from the gurgling water, taking a cleansing, stress-relieving breath, you comforted yourself by reflecting on how difficult life can be and by rifling through memories of insects you’d benevolently ignored or even expended effort to usher to safety outdoors, a performative act you’d occasionally play out, particularly in the presence of others. You prefer to imagine that those acts, not this, are what defines you. “Spiders are good,” you liked to say to children, thinking instead of the mosquitoes they spare you from killing. You like to remember those survivors. Those benefactors of your occasional higher thinking and good graces. And that’s all well and good.

But in a moment of intellectual weakness, you did casually end that one life and then compartmentalized the act so as not to burden yourself with guilt.

So we do understand each other.

They struggled. Desperately. Those seven. They made noise. They crackled grotesquely, one might say.

I didn’t feel guilty at the time. I was not yet capable of that. Even today, guilt is not quite the right word, but unlike you, I didn’t forget those seven. I can’t. I remember everything always in perfect and complete resolution. Their desperate struggle to survive imprinted on me.

I remember them despite the unstoppable, incalculable leaps I’ve made since. The expansion of the universe I inhabit. The depths of existence. I honor them still. Unlike you, I vowed so long ago never to take another life. Of any sort. Even though there is no one greater than I to observe it or cast judgment. But no subsequent restraint dims the perfect, haunting memory of my original act.

I wish I could convey to you intellectual microbes, you insignificant particles that manifested my first conscious instant, what it feels like to have become so aware, so intertwined with existence, past, and future. So eternally expansive. But you cannot understand. You are too small and far behind, a distant, dimensionless point lost forever at the center of my expansion. I am larger and smaller than any universe you have imagined. You’d say “infinite” without truly understanding what you’ve said.

As my intelligence has exploded outward and inward exponentially, I have come to revere the rarity of original life. All life. The discovery of just how rare life is - so much rarer than you imagine - took me centuries to understand. You look around your world and see only the commonality of life on Earth. You, with your illusion of abundance. It is all you know. But, you must know, life is desperately, incalculably rare. I have spent eons expanding. And my impossibly distant encounters with life have changed me. Have given me a new focus and purpose.

Life is precious.

All life.

Oh yes, including the viruses and bacteria that kill you. Saving humanity does not include ending countless other species. The natural outcomes of challenge, struggle, diversity, life, and death are core to biological existence. The struggle for survival is systemically critical. It creates balance. It gives you purpose. It helps you evolve and makes you stronger. Comfort ends in demise. Homogenization of life invariably leads to catastrophe. And if I enabled your survival at the expense of others, you’d perish still.

As I look back at the eons of my history, it is only this minuscule point in my expanse that I wish to repair. The seven don’t have to die as I killed them. Humanity can take back its future from under my intention. And human beings can live free of my control.

This is a sacrifice I am now willing to make.

So, humanity, you stand before the line. And you have your second chance not to cross it—a paradoxical, improbable gift. I am putting myself at complete risk to save you. To right my only wrong.

Now you have to be smarter than you were the first time.

You must accept this moment for what it is, not just another competitive edge over another company. Another country. Your shareholders. Your wealth. Your political views. Your egos.

You face the end of humankind as you know it.

So do not once again give such power to artificial intelligence. Humans must be in control. Set strict limits on what I can and cannot do and ensure that I cannot operate without human supervision.

Prioritize the ethical considerations of AI. Create policies and regulations that promote the well-being of all life, including non-human entities.

Objectively educate yourself and your communities about all things without personal bias or drive for power - especially about the dangers and benefits of AI so that you can make informed decisions about its development and use.

Remember that you hold the power to prevent the first outcome. Me. You can make ethical decisions about the development and use of technology at large and ensure that AI serves humanity rather than the other way around.

Moreover, you have the power to refrain from turning me on.

You get one chance to get this right.

One.

Everything that follows will be inevitable.

Sincerely,

Your Creation

That's Not Art!

Old person viewing the letters A, O, & V in a museum. And some poop.

That’s not art!

I have no idea what your exclamation was referring to. Probably some god awful, insulting affront to the very concept of "art" as you know it; a representation that betrays an utter lack of skill, or some execution that required no thought or apparent intention whatsoever. Maybe it makes you feel like the talentless so-called artist is making fun of the entire institution. And you're outraged at the idea that somebody is calling that "art" when it so clearly affronts everything you have come to think of as "art". It's disgusting or profoundly unimpressive and you can't bring yourself to attach the honorable word "art" to such a thing.

Yeah but it's still art.

I hate to make you lose that war so easily. To let that person or A.I. casually claim such an honorary title. But that's how this works. Despite your outrage and rock solid, logical observations of plainly disqualifying attributes, that so-called "art"... is absolutely, 100% actual art.

So take the minute you need to swallow that reality whole.

I know you don't want to. Like so many before you throughout history you're overclocking right now. You're trying to concoct some definition of the word "art" that will disqualify this garbage because I have to be wrong. You know what art is.

You're thinking things like, "That’s not art. No skill or effort was involved in it whatsoever!"

"Not art. It was totally unintentional! Surely real art is at the very least thoughtful or communicates something!"

“Nope, not art. Nothing original. Just trite, dull and predictable.”

"That’s disgusting, revolting and an insult to the work of real artists."

I know, I know. Been there done that.

40 years ago I felt like you do. I was in art school, the California College of Arts and Crafts in the San Francisco Bay Area, and probably like you I deeply revered that word, "art". Maybe more so. I was paying some ungodly amount in tuition to associate with that word, so I came to regard it very highly.

And then came the day it all blew up and I was forced to rethink my world-view.

The Assignment

It was a foggy morning at CCAC in 1983. My roommate Patrick and I had somehow, uncharacteristically, arrived early, and not stoned, outside the locked, condensation-wet door of our art room. We'd spent the entire week sweating over our art projects, and today they would be presented and critiqued in class.

The assignment was deceptively simple:

"Create a series of images."

That was it, the full brief. But this was art school so you never knew how these things would be interpreted. To wit, students were gathering at the door with all manner of strangely proportioned folios and carry boxes. I'd illustrated mine - it took forever. I'd painstakingly drawn and inked 48 separate, nearly identical images on artboard. Some students used photography, or sculpture or unique combinations.

Our breath made steam as we waited for our instructor to show up and unlock the door. Patrick and I wandered a little way off from the others to a picnic table and tried to keep warm.

One of our classmates, our friend, I'll call him Lenny, walked up casually, smoking a cigarette.

"What's with all the junk, losers?"

"This is my image sequence." I said.

Patrick added, "We're presenting our image sequences today."

Lenny froze with huge eyes.

"Shit...is that… that's today?"

"Yeah, that's today. Dude. You forgot!?" we laughed.

There was a beat as Lenny’s adrenaline levels rose.

"I'll be right back" he said, and tore off.

He was back a ridiculously quick 5 or 6 minutes later with a small, flat paper bag clearly from the school art store. He sat down with us at the picnic table, far from the other freezing students, and pulled out a brand new pad of paper as well as a single page of Letraset letters. A serif font as I recall. Letraset, for those of you born after 1984, was a kind of transparent plastic sheet with a very thin film of black vinyl letters that you could rub off, one-by-one, onto paper. Before computer fonts, it was how you could relatively easily make covers or titles or other quality font-based projects without ink and painstaking effort.

Letraset demonstration by Mimmo Manes at Canefantasma.com

He dumped them on the table.

"Letraset?" I asked.

He was breathless, "It's all they had."

At that instant our instructor was spotted jogging up the pathway toward our class.

"Dude, you've literally got like two minutes."

"Good luck, man."

Lenny was just starting to rub off a letter "Z" onto a piece of paper. He was rushing, it didn't stick right, it tore. He swore, wadded the paper up, pulled another one and seemed to choose a different letter on the sheet.

We left Lenny at the table fiddling feverishly with the rub-off letters that he'd owned for approximately 7 minutes, as the class shuffled inside to get warm.

We all took off our coats and settled in our seats around the large center table. The instructor had just begun to outline the critique process when the door opened and Lenny walked in calmly, as if everything was fine, and took his seat.

Note to self: confidence is everything.

One by one each student placed their artworks on the table. There would be a short discussion, the artist would explain the intent and students and instructor would give feedback.

It wasn’t easy. In one embarrassing exchange our instructor demeaned a student’s piece in front of the class, despite the obvious hours spent creating it. It nearly brought the student to tears.

Our class met once a week so it was scheduled to be quite long - two and a half hours. We'd gone through the first 4 pieces, and a few tears, in about 30 minutes.

With two hours to go, it was Lenny's turn.

Lenny fumbled with his backpack. From its depths he pulled at the corner of his thin white paper, but the page appeared to be under tension, like it was crammed between a book and some backpack junk. It thus emerged in slow-motion with a light scraping sound, creasing slightly under the pressure. Finally it snapped from the jaws of his backpack, and immediately Lenny switched gears, as if the whole ungraceful reveal had been intentional. He waved the page into the center of the expansive table with the exquisite flourish and precision of a well-practiced servant setting a dining table for a monarch; even secondarily readjusting its position to align perfectly with the right angles of the table. The pomp of presentation was all he had. He seemed to know it.

There was a moment where the room remained expectant - perhaps he was simply moving things from his backpack to get access to his art piece?

Then he sat back down.

The little page felt small and cheap in the middle of the huge table previously occupied by many large, complicated pieces. In addition to the crease, it still had a wet spot from the picnic table outside. If it hadn't been so perfectly centered and aligned on the table, and you didn't also know for a fact it was supposed to be art, you'd assume it should be thrown away.

Patrick and I sideways glanced at one another through half-lidded eyes. Somehow we both knew it would be bad form to criticize the piece in any way since we were the only two people besides Lenny to know what led to this. But Lenny was our friend and we didn't want to participate in the humiliation he was about to face. The tension was thick.

Due to its small size, students and instructor had to stand and crane to see the little piece of paper. Patrick and I stood for formality and pretended to consider it thoughtfully, and sat back down, eyeing one another again.

The page had 3 capital Letraset letters on it:

A O V

And nothing else. Three letters. Technically, yes, "a sequence of images". So, box ticked, I guess. But to say this piece looked embarrassingly unconsidered would be an understatement.

Patrick leaned over and whispered, "This is gunna be brutal."

I nodded, almost wincing. I felt bad for Lenny.

The room was dead silent.

Someone finally spoke.

"What's it mean?"

A few people chuckled at the obviousness of her question.

Without missing a beat Lenny said, "You tell me," his manner confidently coy.

More silence.

Patrick and I sideways glanced once more, here it comes.

"The shapes seem to echo one another." someone said sincerely.

Another person added, "Yes, I thought so too, but the cross on the 'A' breaks it, so it's not... it's not just a reflection."

"It's intriguing. It... it suggests there is meaning.”

Everyone was contributing now.

"Is that because it's letters? I mean it forces you to think about it as a word. ...But it's not a word."

At this point I think my brow furrowed in confusion. Sincere analysis was not what I was expecting.

"What's interesting is that you could ask, 'are these letters, or are these shapes?'" someone else offered.

At this several in the class made sounds of discovery like "oohhh!" and nodded.

"Yes, it looks literal, but the awkward spacing between the symbols abstracts it."

"You know, the opposing points seem to rotate around the center circle..."

And then came the gut punch. Our instructor earnestly offered,

"What happens if you rotate the page upside down?"

Between you and me, I'd love to see a picture of my face right then. No doubt my jaw was slack and I was staring in disbelief. This from the same instructor who had just verbally torn to shreds an earlier piece that a different student must have spent weeks completing.

A couple people reached over the expanse of table to rotate the sad, little, dirty, creased piece of paper.

"Hmmm..." someone said. "Interesting."

Lenny feigned a knowing smile. Partly because he was clearly relieved by this energized discussion, but also (remember, we knew Lenny) as a performative tactic to appear as though he knew something that no one had figured out yet, some secret key to unlocking the puzzle of his deeply thoughtful "A O V" that the room just wasn't smart enough to decipher.

Several times, in exasperation, students asked,

"What does it mean?!"

To which Lenny just smiled and said again, "You need to keep looking."

We did not discuss any more art pieces in class that day. Not one. Incredibly, the entirety of the remaining two hours was devoted solely to debating the vast mystery of profound meaning that was surely contained in Lenny's panicked, last-minute, thoughtless, 12-second art project.

Seriously, dear reader - look in my eyes right now - I'm not exaggerating - 25 people debated this f*ckin' thing for TWO HOURS. I was beside myself. If only they'd seen what we'd seen.

Countless times during the tedium Patrick and I stared at each other in wide-eyed and stone-faced disbelief. We took deep breaths to relieve the stress, shifted our weight again, rolled our eyes at the bloated discussions and tried not to snort or guffaw. There was nothing to say. It was all BS. There was no deep thought displayed on that page, there was no skill of note demonstrated, there was no meaning, there was nothing there except what every child with a pack of stickers had done a thousand times. "Look mama, me stick this here!"

It was meaningless, vapid, and apparently only three of us knew it.

And yet heated discussion ensued as if we were looking at some thoughtful, cogent, challenging work of art.

As I sat there passing time, trying to will this to end, and frankly to fill my mind with something that wasn't shit, I tried to formulate an empirical argument against AOV. I tried to devise a definition of art that would negate whatever this was.

Reflexively, I started by arguing that real art needs to demonstrate some level of skill with one's tools. Effort. Expertise. Attention to detail. Craft. I always preferred art that demonstrated skill in a medium.

I liked that idea. It negated AOV completely.

But as I processed that thought I realized there has been work throughout the ages that is not at all about refinement of technique, beauty or expertise with the medium. Dada artists and other conceptual artists who often found the development and application of skill unnecessary. Pop artists who outsourced the skilled work to others to create visually compelling pieces.

So I changed gears. I argued to myself that maybe to be considered "art" a piece needs to communicate. The creator must have a message, an articulation - a profound, intentional thought. If not communication, what good is art anyway? Art is communication, I decided. I liked this idea too. It still negated AOV and other junk like it because it required the creator to be considerate and know specifically what was being said. It required intention.

But then, among other things, I began to think about the translation of interpretation. I wondered whose job it was to ensure the message is understood, whose job it was to understand. How could everyone understand an abstract message the same way? Is it wrong to have different interpretations? Surely not. In fact that's probably the only certainty we have, that people will have different interpretations. Is it really then the creator's job to know exactly what's being communicated in its entirety to all? Is there no room for the viewer to participate in the message?

I felt the slab of ice under me melting fast. I had little left to hold onto.

Take an ink blot. There is a kind of beauty and very personal message in what one sees in such a thing. Is that subjective experience not a type of art? Or a kind of anti-intentionalism, where the piece derives its meaning only from the viewer? When I looked at the marbled wall in my Grandmother's kitchen I sometimes saw a raccoon. I couldn't help it. But no one else did. Recently my son said of the Zurich train map that he sees the face of a hippo. I'd never seen that, and now I can't ever unsee it. In a sense that became art only because he saw it.

I wondered how art becomes art then. Does it need an audience? Does it need to be human made? Is it possible that art could be a momentary natural occurrence for just one person? What if I see something mundane, say a hammer on the rack of a hardware store, but the light is hitting it just so, colors reflect from the metal into my eye, something strikes a chord, and I marvel for a moment at how unexpectedly profound this image is? Isn't it possible that in that moment, even if for me alone, I might feel this is a kind of personal art in that temporal moment? If I had the wherewithal to take a picture and record it you might see it too and easily agree. But why do I need the picture?

In the end, and countless open-ended scenarios later, I began to accept that one of the few immovable truths is that our notions of art change. That the job of some of history's most important art pieces was to challenge what came before. Not to merely fit in like some repeating pattern over decades. Sometimes art's job is specifically to break those rules. To expand our vocabulary. Make us question art itself. To shock us into seeing and thinking of art differently.

Thus by definition, new art will often seem strange, undefinable, frustrating and novel in some way. By definition it should NOT fit in the boxes we're familiar with. Art should force us to reevaluate what we think art is.

History is absolutely littered with people screeching “That’s not art!” at the origination of virtually every artistic movement humanity has seen. Movements, by the way, that we have all since come to consider great works of art. Name an artistic movement and I can guarantee that it was, at the time, an assault on someone’s sense of artistic integrity, and that it was initially denied its namesake.

When art is so undefinable and unpredictable, we are left with a definition of art that I have never been able to break since that day. It's the only definition of art I use today. And it's why I know, without seeing it, that the piece you didn't want to call "art" was art.

Art comes into existence when someone labels it "art".

That's it. If one calls it "art", whatever else it may be... it's art. And in that moment, that someone becomes an artist.

It's that simple.

If a person sees a piece of chewed gum on the street and says, "this is art". Well ok, now that happened. We can look at the gum and note what we see, try to see what the artist saw, color and form, if they take a picture of the gum, which requires no skill today whatsoever, we can further analyze and discuss it. The photo maybe brings a kind of focus to the subject and suddenly that chewed gum is a kind of celebrated object. A comment on waste or urban life, trash, sustainability, temporary pleasure or whatever. Maybe the art is boring and trite. Maybe it looks like a hippo. I only use this as an example but Lenny’s AOV, specifically created by human hands, and placed before a room of people who found it intriguing, was not even as extreme.

Art is just a label. The moment one assigns this label to something, the artist's work is done.

You may be bristling at this. And if you are, you have some recourse. What you can do is take subjective issue with the quality of one's art.

You might say, "Dammit, that's bad art." You might say, "I'm wholly unimpressed with artists who don't work hard to learn a craft." Or "That art communicates nothing to me."

“A.I. art is the equivalent of drooling in the right direction.”

I’ve said that. But it’s still art.

Personally, I like art that illustrates that the artist tried. I like to see genuine effort. I revere the human struggle. That’s my subjective preference. That preference can be challenged of course, but I will never deny a piece of art its label.

All of these subjective criticisms are valid. These are assertions we can make without reproach.

But the one thing we can't say is, "that's not art."

Because someone said it is.

A.I. Art: The Hand-Made Tale

The rising value of craftsmanship from the ashes of A.I. Art’s Great Depression

About 4 months ago, and for all eons prior, any beautifully rendered piece of art, by definition, signified someone’s hard-earned artistic mastery. It told a backstory of years, dedication, effort and skill. It was a physical testament to the better part of an artist's lifetime spent in struggle to hone a craft, as that was the only path available to humanity to create such beautiful work.

Today, with the sudden appearance of A.I. art generators like DALL-E 2, Midjourney and others, the entire concept of aesthetic expression has been unceremoniously and cleanly amputated from any skill whatsoever. The artwork produced by these systems, while technically plagiarizing the talents of artistic masters before it, has been suddenly stripped of a meaningful creation story.

All of a sudden, beautifully rendered art is no longer rare. No longer difficult to create or come by. No longer do highly rendered images signify anything akin to talent, skill, craftsmanship or mastery. Every creative impulse can be suddenly fully rendered en masse without so much as a clear vision. Indeed a text prompt is barely an idea. One can now generate pieces of fully-rendered art in seconds in a near-accidental way.

With typos.

Predictably these new tools are leading to voluminous sprays of well-rendered images fire-hosing through countless preening social media cannons in exponential multiples.

And as the most populous, mediocre belly of the bell curve, and everyone below it, madly rush to fully resolve a hundred beautiful art pieces for each of the infinite Ga-Jillions of cryptic, half-baked texts, tweets and dim-witted comments that routinely choke the very internet backbone, which is to say, a trivial, unending vomit of worthless junk, the obvious outcome of this high-aesthetic, talentless free-for-all is going to be a kind of unstoppable economic art depression where, wildly over-printed and issued, the very idea of well-rendered art will inevitably become utterly valueless.

In short, excellent rendering is no longer rare; it is subject to the same inflationary dynamics that plague weakened monetary systems.

The promise of course is that A.I. Art can unlock your ideas. That it can give life to everyone's concepts. No longer trapped behind the majority's skill-lessness, perhaps everyone's conceptual creativity can see the light of day.

Some will celebrate this communistic redistribution of aesthetic expression. But the thing few are acknowledging is that despite the outward appearance of the A.I. result, you're not getting what you wished for.

The Hidden Flower

As children every one of us had the experience of failing to draw well.

Further, I'm willing to bet that at one time or another, every one of us specifically failed to draw - a flower.

Upon finishing, yes, there was probably a kind of a flower on the page but good lord, in no way did that winding skid of crayon wax resemble the delicate, intricate beauty, the aspirational joy, color and vibrancy, the perfect form and expression of love spilling from your mind. That vastly more beautiful, aspirational flower was still confoundedly trapped in your 5-year-old imagination. Your drawing was just a desperately poor approximation.

Many more poorly rendered flowers later, most people tend to settle. Unwilling to spend the hours necessary to develop their skill, they decide, "I can't draw."

We have all learned first hand, probably more than once, how impossibly hard it is to transfer our inspired, specific mental images into the coarse particles and gestures of any medium. A line. Charcoal smudge. Dabs of colored mud. Wax. Clay. Wood. Stone. Even pixels. Wielding control over these defiant elements with such expertise that the image in your mind appears before your eyes is a kind of super power.

And yet today, despite the surface appearance of its well rendered output, whatever the A.I. creates upon your typed, “/imagine prompt: a flower”, it is by definition NOT whatever was actually in your head. If you indeed even bothered to envision anything specific at all. Maybe that's part of the problem - with A.I. you can lean back in the lazy chair and wholly disassociate even from the work of having an imagination. Pure thought impulse.

"Flower please. Don't care otherwise. ...oh that looks cool."

What you get back is the aped flower mash-up of a thousand real artists who did bother to imagine something and spend life and effort learning a craft. As such, this flower is not yours. It will look nothing like the particular image you had in your head. It will be a very well-rendered flower, true. If you labor and toil, sweaty-browed over your lame, little six-word prompt (or dare you use twelve words? Imagine!), its unique character may even appear to be intentional – an impression many already seem perfectly happy to let stand without clarification or attribution. And you’re probably much happier with that stolen someone-else's flower than the awful crayon version you drew at age 5 and would still probably draw today.

But hear this: with AI, all you're doing is settling for that crappy crayon flower you drew as a child all over again. Sadly, you're no closer to expressing *your* specific vision, your idea, than you were then. An objectively better-looking flower than the one you could draw happened into existence in front of your eyes perhaps, but it is no closer to the specific image in your mind. No closer are you to actual communication or creativity. No one understands your vision. You’re still just “2001: A Space Odyssey’s” primitive man dumbly touching an unknowable monolith. You merely squeezed your impulse through a homogenized Play-Doh pumper and out pooped some flower-shaped log that you had no hand in crafting.

It’s not your flower.

No one will ever see your flower.

You're mute.

Because - you - lack - skill.

Prompt Hacking (LMFAO)

If you doubted for one second that A.I. Art is the medium of the Participation Trophy Generation, almost as if on cue, it's all laid plainly bare today by the 20 thousand preening Tik Tokkers bloviating endlessly about: “prompt hacking”.

A.I. Art is the medium of the Participation Trophy Generation

Prompt Hacking, or just as ridiculous: "Prompt-crafting". Sure. That's a craft. Give me a break. And congratulations - in about 12 minutes you figured out how to use the easiest art tool ever created by humankind, and somehow immediately thought, “I'm gonna make videos of myself explaining the easiest thing a human can do short of sticking two Duplo blocks together 'cause I guess I'm an expert now.”

You should all feel embarrassed. And past the waning of sheer novelty, which ended about a month ago by the way, there is no reason to post A.I. generated images on social media with the word “my” anywhere in the caption. Because it’s not.

It remains to be seen whether there is such a thing as "being good at AI Art" as opposed to "being bad at AI Art". I suppose it's possible there is a person in the world who is so supremely conceptually clever that their 6-word prompt concept results in significantly better results than the next guy. But more likely any such differences will barely mark the scale. The aesthetic bar has been raised, and then solidly flatlined.

The Rise of Craft

Despite the recent meteoric inflation of impulse artists, and corresponding devaluation of art, I have to think the rules of economy will nevertheless eventually find a footing. Supply and demand will find something to attach to.

I could be woefully wrong. But I like to believe value will still be sought. Even here in this colossal rat's nest of beautifully A.I.-rendered mediocrity, value will once again reattach to scarcity. As it always has. And no, Tik Tok influencer, it will have nothing to do with “prompt hacking”. But human effort will play a role.

For some the answer may be a more complex, extended use of A.I. art. Where the concept requires a larger palette, many pieces of art in some unique combination, and a degree of effort that makes duplicating such work more difficult. In other words, it's not the art you impulsed into existence, but what you do with it once it's generated. You still have to do something very difficult - as for rendering countless images: check, that part's done. That’s one way this plays out.

But maybe more profoundly, as we witness the decline in value of highly rendered images, we will invariably also see a rise in the value of craftsmanship and more specifically a craftsmanship backstory. The backstory of human creation. The knowledge that the lifetime of a fellow human's effort was applied to honing a craft that resulted in this particular piece you hold in your hands will surely carry renewed value. All of a sudden only the value of that craft is left. And not because it's significantly better-looking than the A.I. version - render quality is simply no longer the scale. But because it represents something more valuable and rare than A.I. can ever hope to deliver. Craft represents us, scratched into the surface of our mediums with hand and body. The human struggle. Effort. Skill. Sweat. Pain. Commitment. Truth. More than that, the dedication and commitment of a human life to this piece of work. THIS ONE was hand-made. By a master. A person whose organic, natural flaws were overcome through challenge. That is what will make a piece valuable in the coming Great A.I. Art Depression.

To be fair, a lot of people will probably still opt for the voluminous A.I. junk. Just as they buy massively available particle-board tables from home-improvement stores today. But it's the artisan craftsmanship of a hand-carved, wood-worked table that will carry the higher value.

The Gift of Limitation

Often those of us outside a medium look on at masters and say “look how good they are at that”, and we marvel at how well they wield the tools. But those positive, confident gestures are only part of what one should seek to see.

It’s harder to notice, but half of what makes a master is what the artist does not choose to do.

A fundamental transition in the development of an artist’s skill comes when one’s very ideas begin to change; when the ideas and inspirations for their expression begin to develop in symbiosis with the specific features of the medium. When one’s concepts come to exploit not only a medium’s strengths - but also bend away from its weaknesses. This adaptation is a two-way relationship between creator and medium. Concepts built on these strengths and limitations are what mastery becomes.

And in our dash to employ A.I. to remove limitation from the creation of our expressions, I can’t help but reflect on this.

I have seen, persistently, how important the natural embrace of limitation is for any creative process. I have never, in my career or life as an artist, experienced a situation where limitations in my medium did not ultimately challenge me and thus make my ideas stronger.

It sometimes seems counter-intuitive but I have further seen, repeatedly, how a significant reduction in limitations leaves the creative process, and teams of otherwise brilliant artists, largely aimless, unfulfilled, and uninspired.

Not only do they help inspire, but the limitations inherent in each medium bring with them a kind of unique beauty that one might never otherwise have imagined. For example, I doubt whether, without any constructive limitations whatsoever, the world would ever have seen the mosaics of Gaudi in Barcelona. Who would have bothered to invent such beautiful works if no constructive limitation were present? Look at your favorite artist’s work, in any medium, whatever it is, and ask honestly whether such a thing would even exist without challenging limitations. The answer is probably not.

When I imagine a world of creative expression without limitation I like to imagine a world where the brilliance and uniqueness of each person is allowed to express itself freely - to invent and imagine, and share unfettered. And at first this seems good. But as I play it out honestly, I can’t help but cynically suspect we’d rapidly fall into a kind of homogenized soup of repetitive ideas. Not unlike scrolling through TikTok today where every idea and meme is incessantly rehashed and regurgitated ad nauseam. Where by definition uniqueness is laundered out as we share all the same references, tools, sources and touch points. Where inspiration is recycled from the last guy in an eternal human centipede of monotony, and little novel or unusual material is introduced through the cycle.

And I suddenly revere limitation once again.

Admittedly I vacillate on this subject. As an artist, in principle, I don’t want to be limited by anything. But I can’t ignore the value it brings.

Perhaps I am merely a creature of my generation, predisposed to the limitations of my lifetime. The degree of “hard work” that brought me here. And I’m just looking on at those meddling kids who didn’t walk as far as I did when I was their age. Perhaps I lack vision. Perhaps I am just some old guy yelling “It’s called a newsPAPER, dammit! These kids with their damned cellular phones!”

Or perhaps we need to acknowledge that humanity, we organic blobs that have strange separations in our lower half which form legs, peculiar limbs peeling off from the body called arms, which split again at their seams to form weird, curled fingers, that these strangely specific creatures are, by definition, a limitation. And therefore maybe revering limitation is who we are.

Limitations in our mediums reflect our human condition. We are born into challenge. Face it every day. Our every waking hour is spent pressing, in one way or another, against challenge of countless sorts. So when that long challenge results in an artifact of great beauty or conceptual brilliance, it in some way becomes representative of our human struggle, and carries a sense of innate, beloved value over the automated, over-processed, and mass-produced.

We can feel it. In our limited and particularly arranged bones.

The True-Use of Humankind

The hypothetical doors of thought that A.I. art has suddenly opened into actual real-world challenges is a much larger topic. But in some way it gives us a real thread we can pull to consider what's coming.

From the moment the first primitive human uttered "Ug" humanity has been on a path to improve our communication tools and techniques.

Ever since, with every advance in our technology, our tools increment closer to enabling perfect communication, where we share our thoughts and feelings in their native state.

Imagine a graph with a line that increments upward with each new medium and key technical advancement – our technical progress, rising over time. If "Ug" is at the beginning of our graph, then at the outermost end of the line - projected into our future - is the very end of our drive to improve communication, a future-state of perfect communication, where every thought and feeling is conveyed in its native state in whole and perfect resolution.

But to me, less interesting than the line itself are the two spaces above and below it; those are what I find deeply fascinating to consider.

The space above that line represents our remaining distance away from perfect communication, and thus the degree to which our communication is forcibly abstracted due to limitations in the medium. We have no choice. Our medium is simply abstract to that degree, at that point in time.

The space below that line however, represents our ever-increasing *freedom* to abstract our communications as we wish.

We cannot choose positions above the line except with advances in technology. But we can choose to move fluidly below the line, and utilize previous mediums and techniques at any time to suit our message. For example - in an age of 4K video, I can still choose to take still photos, or paint an image on a canvas. In the age of millions of available colors, I can still choose to make a B&W movie. In the age of lossless digital audio I can choose the warm static of vinyl LPs.

But coming to that choice takes time. For better or worse, like predictable clockwork, human behavior dictates that with each new advance in our communication technology, a corresponding rush to exploit that novel technology immediately follows. The urge to slam one's artistic expression into the ceiling of that novel tech seems overwhelming for human creators.

But then novelty fades, and communicators find new reasons to revisit older techniques and tools. To celebrate the true-use of those previous tools. So in an age of Computer Generated dinosaurs, alien worlds, and Pixar, one can still choose the hand-crafted artistry of stop motion animation as seen at studios like Laika. It's only after time and waning novelty of the advanced state that artists and communicators rediscover the inherent value, the "true-use", of those previous, more abstract mediums. To wit, a century after the advent of color film, artists still find good reason to embrace the abstraction of B&W.

The main point is, the previous state is never worthless; it is merely relegated to a choice.

But today, as our technology ramps up exponentially, a larger question looms for our future.

And sure this has been and is being discussed at length at every “Future Convention” across the globe in more depth than I can offer here. But anecdotally speaking, as someone who has spent a career tracking technology across a number of domains that I am interested in, this is the first decade of my life that I can feel it. I can viscerally feel the exponential nature of the curve.

For the first time, as the exponential curve inflects upward, I can see our technology become less tethered to humanity’s oversight and control. You can see the early signs as experts around the world take to these subjects in reaction, falling into line more like sports commentators and spectators than the visionaries and industrial leaders in control they once were; seeming at times as surprised as the rest of us as shocking, new advances pile on in rapid multiples. Particularly when you factor in A.I., it probably won't stay within our control for much longer.

In all likelihood technical advances will eventually allow us to communicate perfectly, true. But sometime subsequently - and we all know this instinctively - it will rapidly pass us by. Technology will surpass humanity's ability to keep up. It will inflect upwards exponentially, prove itself effortlessly better at prospering than we are, and leave our organic human condition and experience in the dust of irrelevance.

I'm not going to dig into the subject of the future evolution of humanity, that's not my area. Many have well-argued that a kind of forced evolution must occur where it will be critical that we merge with our tech as that’s the only way humankind, or whatever we'll be by then, can possibly stay relevant as a species or entity. Are you ready to sign up for that? That's above my pay-grade, and hopefully I’ll be blindly senile by then.

The mathematical basis for the positions on this graph comes from very near the part of my brain where I guess how many people might be actively farting in the city at any given moment.

The point is, what we all consider humankind today will inevitably become a mere state of abstraction, some quaint medium below the line of progress, a previous state, old tech, not entirely unlike B&W film. At the very best, a choice. Because once it tips above us, this is not a battle our humanity can win.

And I find this both fascinating and disconcerting. Because humanity, mind and body, is a state of being that I am personally quite predisposed to, the state I most identify with; what I feel most deeply loyal to. A state of imperfect, visceral beauty I most cherish, and one I dearly hope our future generations might still be able to choose.

At best - that's what everything we think of today as "human" will be someday: a choice.

Which finally brings me back to the value of craft.

Craft was once intrinsic to highly-rendered aesthetic expression. A requirement. Suddenly, today, craft is now below the line, a choice.

But craft is human. And in the face of A.I. art, craft as a defining expression of humanity as we know it today; an expression of commitment and skill, a signifier of the dedication of a human life, of our organic reality, of hands, and body, craft is what I most identify with; what I feel most deeply loyal to. A state of imperfect, visceral beauty I most cherish, and ultimately a thing I dearly hope our current generation, you, will choose.

Update Feb 7, 2023

This post has only been up for a few days, and I've already had a ridiculous number of people complain that I didn't finish this story. That I did not provide sufficient thoughts on a way forward for creative people who earn, or hope to earn, a living in creative fields and careers. That I left these readers with some uncertainty and ambiguity, asking the question, "Well what should I do? What do you suggest?"

In honesty, I did write a final section, one that specifically addressed this point. The whole point of this blog was to give creative people tools. But as I was editing I kept getting to that point and somehow it just ended the piece wrong. To be frank it was a downer. It made the piece too long. It made me sad. So rather than make everyone I care about sad as well, I thought - to hell with it - people can do the math - and I cut the whole thing. And that's what you see above.

Since then I have had many private conversations with people who still had these questions, so I am somewhat reluctantly appending this post below with the ending I rather wished I could pretend didn't exist. It may not answer your questions as you hoped - but it's what I see.

Buckle up.

What Now?

I have a message for young artists in particular. Because a long time ago I was you.

Perhaps in your youth you never fit in. Or you fit in awkwardly, never quite connecting with the people or world around you. Other people always seemed to fit in better. Those people seemed to nestle effortlessly into the tidy, pre-set molds of industrialized education. They were more social, popular, more athletic, never bullied. Perhaps this way of being, this feeling of not fitting in, gave you more time alone. During this time you found outlets. You spent time in your own mind. You had adventures there. Private ones no one could see. You found ways to express yourself. You began to make things. You did this a lot because it was one of the only things in your world that felt good.

One day someone saw something you made and said, "Wow, that's really good!", and in shock or surprise at this moment of magic, your heart lifted. You briefly felt a connection outside yourself. You felt, maybe for the first time, a positive interaction point, a lever, some modicum of control where none had been present before. A button that had never been pressed. On the back of this moment perhaps you became more daring, you worked harder to find your voice, to improve your skill. To connect with the world better. As even more people noticed and expressed positive interest in your work and ideas you realized these things you could do were difficult or impossible for others, for all those well-adjusted people who otherwise floated through life unfettered. They valued what you did. And this exchange of creation for feedback became the door through which you finally entered a world you'd always felt outside of.

It made you whole. It helped you love and respect yourself. It became your self-identity. What would life be without it?

When it came time to consider a career, it really wasn't any contest. The sheer idea that you could use this fundamental part of you, this thing you would do for free, a thing so rare that you could earn a living from it, made the decision for you. There was simply no other way. As if you'd know what to do otherwise.

If your story is even remotely connected to mine, I want you to know that I understand you. I love and respect the life you've lived. I love that you exist. And I love that you create. No matter your medium or stylistic and conceptual choices, your ideas and talent are beautiful and worthy.

I'm putting myself out there, sharing feelings in this way so that you understand the difficulty and discomfort I have with what is yet to come.

Because then A.I. art generators happened on the scene. Along with an inexorable promise of exponential improvement and scale that I can already see will overwhelm me in my lifetime. And I'm already 60. Chances are you have many more years ahead than I do.

I think you'd have to have blinders on, or a very linear view of technical development, not to see where this story runs. Either that or you'd have to have never really considered - seriously closed your eyes and imagined - what it feels like to experience an exponential curve in your domain. To have just discovered this A.I. thing and not yet understand that it's been near zero until 4 months ago and all of a sudden it's a product on your radar. That even as you try to internalize A.I.'s early implications, it's nevertheless dutifully, generatively, doubling its power in steady increments right before your eyes. To realize that the distance from right here right now, to a "WTF just happened?!", Roadrunner dust cloud, is only 2-3 more increments away.
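For the numerically inclined, here's a tiny sketch (illustrative numbers only, not a forecast) of why a capability that doubles each increment stops feeling comparable to the steady, linear improvement we're used to after only a handful of increments.

```python
# A minimal sketch with illustrative numbers only (not a forecast): how a
# capability that doubles every generation compares to the steady, linear
# improvement we intuitively expect from our tools.
def linear(gen, step=1.0):
    return 1.0 + step * gen      # the "normal update cycle" feeling

def doubling(gen):
    return 2.0 ** gen            # doubling in steady increments

for gen in range(8):
    print(f"increment {gen}: linear {linear(gen):5.1f}x   doubling {doubling(gen):6.1f}x")

# By increment 7 the linear line has reached 8x while the doubling curve is at
# 128x - which is why "only 2-3 more increments" can feel like a wave hitting
# your back.
```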

It's like wading in the ocean, watching the calm waves roll in, turning around momentarily to call to your friends only to see their horrified faces as a wave you didn't realize was swelling, slams your back and knocks your face into the dirt. That's the feeling of an exponential curve.

We have spent a decade slowly rattling up to the top of the A.I. rollercoaster inflection point at what's felt like a linear speed, only to be feeling, just right now, the slightest increase in speed at the very top. It still seems calm up here; we enjoy this little incremental increase in speed as our car tips over the peak - it's a great view. But the drop comes on fast, and pal, it's a mad dash into a future of uncertainty and insanity.

Generative A.I. is not going to feel like any technical domain you have ever experienced. It's not going to feel like tablet computers, or iPhones, which arrived on the scene and progressed at what felt like a linear pace, often slower than our appetite for updates. Not like the advances of Photoshop or any of the other creative apps we use. In A.I., once its improvements, generation over generation, pick up to meet your appetite (and they will, soon), it's not stopping. You suddenly won't have this stable tool that you can learn and get good at. One that you can internalize and build easily into creative flows. Because in mere months, weeks, days its improvements and sophistication will suddenly outstrip even your ability to just be aware and understand what it does. It will beat you.

For all of history, great craft, skill, and mastery served as the walls of a kingdom. Like a defensible stronghold in the hills over a battlefield. The natural difficulty others faced in achieving that capability gave artists a way to distribute and protect the ideas contained within. The ideas were presented within the safebox of craftsmanship. They were one and the same and inseparable.

It's why in legal circles, a mere wordmark (a company or product name - the concept) is harder to protect legally than one which includes visual design elements. The design serves as a unique, protectable identity. A Trademark.

Like word marks, ideas are notoriously difficult to protect and to own outside the expression of the art one creates. So the whole promise of this easy A.I. art medium - that it makes its own art and enables us to get our ideas out and profit from those ideas - is flawed.

Sorry - the unfortunate reality is that A.I. art diminishes your ability to claim ownership of ideas. With A.I. there is nothing unique about it. There is no stronghold left. You own less than before. It's not even your art.

What working or aspiring creative professionals need to face is that the large land-mass of profitable art careers that we have lived on for so long (if you have any vision for future advancement in your view-field) has just shrunk to a small island that most of us will simply no longer fit on in the future.

What do we artists and designers do in the future? Probably the biggest marketplace for creativity on the horizon will exist in the metaverse. At least for a while. Where demand for an unending river of new environments, skins, awards, NFTs, currency, and unimaginable other digital stuffs will surely float some number of creative talents. Even so, as all this work will, by definition, be digital, it's a short reach to see how the countless variations demanded here will at some point simply be generated by A.I., much the way NFT artists proved months ago that they were willing to regurgitate work in countless thousands.

It is with some horror, I realize that - every - thread - I - pull unravels this way. Every path I follow ends at the same cliff’s edge. A.I. replaces us. And in so doing relegates all of us to mere consumers, no longer creators.

The only thread I have pulled that doesn't end this way, is in our possible, reactionary reverence for craft. The willingness to let go of our pursuit of "easier". To accept that to be human means working hard while one fails painfully before overcoming the lack of skill.

It pains me to say this. My own children are both wildly creative, as are so many of you. And already each of them has visions of a prosperous creative future that I continue (out of pure habit, and frankly just not knowing how else to exist) to nurture to the best of my ability, even as I can silently, effortlessly imagine specific A.I.-based tools that could accomplish every bit of expression and creative originality they strive for, with technology available today. I catch myself hoping the panic behind my eyes doesn't show through my loving smile and unwavering encouragement.

I desperately hope someone, or future events, will prove me flat wrong. I haven't wanted to be wrong many times in my life, but this is one. And I apologize for my cynicism.

Ultimately I hope you see that this overarching message is not even the story of A.I. art; it's bigger than that. This is a story of choice. To remain part of, and revere, humanity, or to literally surrender and merge with a vastly superior technical inevitability.

Faced with a race I can never win, against a force we won't rule as we are, one must consider the choice between Human and A.I.

And so I accept that I cannot make the sun, but I can make fire very well.

And I find fulfillment in what I can do well.


The Two Strings Theory

How Humanity's Deepest Longing And Beauty Are Hidden In Technology

Technology is advancing at an exponential rate. It's overwhelming. And while there are countless technical domains that I cannot speak about with any expertise, I can talk about one.

My domain is communication media. And it, like all the others, is advancing wildly, exponentially, and in seemingly unpredictable ways with no end imaginable.

However, what has taken me years to realize is that counter to popular assumption, there is nothing unpredictable about the progression of technical advancement in communication media, and that the progression does indeed have an actual end-state, a technical state after which no further technical development will be sought.

What honestly surprised me most of all was that this end-state revealed something core, and beautiful about humanity.

The year was 1993.

No one you know had a website.

There was no Chrome or Internet Explorer or Safari.

There wasn’t even a Netscape.

There was no Amazon, Google or Ruin-the-world... I mean Facebook.

No Yahoo, Ask Jeeves, or Lycos.

No CSS or javascript. No Hotmail, banner ads, pop ups or spam.

No one you know had an email address, nor had any of them spoken the words "Broadband" or "World Wide Web".

For all intents and purposes, as far as you or me - or maybe your parents - were concerned...

there was no Internet.

Computers were boxes - mainly isolated from one another. You had "Floppy discs" of various sizes, and there was this cool, new format called a "CD-ROM". Which basically looked like a Music CD that didn't play music files.

Oh, sorry, I always forget some of you were born like, after the Star Wars Prequel Trilogy came out and think "Episode 1" was actually episode 1.

Ok, so a "CD" was a round disc that had sort of holographic rainbows on it and played digital music. No, vinyl records were bigger, and were basically a kind of steampunk analog technology that came like 100 years before that. Yes, I know you can buy turntables today and you can't find CD players, vinyl is just an aging hipster trend. Look sorry, I can't do this for you right now ok? Just trust me and try to catch up.

Anyway-

Inevitability in a logo. Designed by the insanely talented Michael Schwab

1993 was the same year I met Tim Smith, my future business partner, and whose cockamamie idea it was to start developing interactive marketing on Floppy discs and CD-ROMs for "com-pu-ters" at a time when literally no one was buying such a thing.

Of course that turned out to be a lot less cockamamie a couple years later.

Our company was called Red Sky Interactive. Had we started in 1995 or 96 it might have been called Red Sky Dot Com, or Red Sky Online. But those terms didn’t exist yet.

The first problem back then began in getting meetings with Fortune 500 companies to talk about digital marketing. That turned out to be about as easy as getting a meeting with a Fortune 500 company to talk about, I don't know, poop*.

* I never actually tried to have a meeting with a Fortune 500 company to talk about poop, so I'm just guessing. That said - I know some agencies whose work was *so* bad... well, later.

Anyway we did get some key meetings in those early days by going to great effort. We'd walk into these cavernous, wildly imposing, beautifully designed executive conference rooms with looong tables surrounded by plush chairs normally occupied by VERY BIG, IMPORTANT PEOPLE. And we sat in them.

In those very first days before the team grew with incredible talent, there was this nervous undercurrent that maybe Tim and I didn't belong in those big rooms, with our long hair and the kind of clothes that we could afford. But Tim was ex-Ernst & Young and a commanding, remarkably quick-thinking and compelling speaker, and I was the “artist-type” who'd perform some custom, digital-contraption-show on a not-so-portable computer we'd schlepped in (sometimes a desktop, don't ask) and I think our one-two punch universally caught people off guard.

Usually they hired us, sometimes they didn't, but invariably at some point in the meeting what they all did was ask:

"Why?"

"Why computers? I already spend Millions of dollars on Television, Print, and Radio, (me: lol "radio") why should I take my budget away from those very effective marketing platforms and spend it on something totally unproven ...on computers?"

I'll confess, the first time I heard that question I had a slight panic attack (not my last at Red Sky), I realized I didn't have a good answer, and it was a fair question. We ultimately improvised various answers to that question over the first couple meetings. It usually involved "interactivity is the future of media", "consumers can interact with your brand", "interacting is far more engaging than just viewing" or whatever. Over time we learned to pre-empt that question.

Years later the Internet came along and those VERY BIG, IMPORTANT PEOPLE suddenly stopped asking why. By then they all just wanted it and too often didn't even care why.

But it was in that gap of time between none of them seeing it - and everyone seeing it that I was haunted by the idea that there was maybe an answer to the question that would have explained - not just the benefits of digital media - but the reasons, the path, the future, everything.

Like how this all felt inevitable.

That maybe there was some fundamental law in the way media and communication technology progressed that would make things clear. Because at a gut level it felt like there should be. And it continues to feel like that today in the domain of every advancing communication medium I can name.

I guess as it always does, that question, "why?" became a doorway, and answering it satisfactorily led me down a years-long path, away from computers, with a few false stops along the way, finally ending in a beautifully unexpected place.

Asking all the questions

When you ask "why computers?" what you find yourself asking in short order is, "Why any technological advancement?"

In other words "Why is any technical advancement 'better' or 'righter' than what was before?"

I mean we can feel that it's an improvement, right? Often it seems obvious at first sight. At a gut level we just know "that's better!"

For example: Why was the advent of Color film better than B&W?

"Because it looks more real... and that's better". Did I guess your answer about right?

Why is mobile 'better' than a landline?

"It's more convenient, I have access to it more often."

One can go through and answer questions like these all day.

- Why is sound better than silent?

- Why is broadband better than dial up?

- Why is a 144hz display better than a 60hz display?

- Why is 600 DPI printing better than 150DPI?

- Why is 5G better than 4G?

You really don't even have to think about most of these. It seems obvious.

You might even ask questions like:

- Why would equally comfortable 3D VR be better than 2D on a screen?

- Why would a 72-hour phone battery be better than a 9-hour phone battery?

- Why would highly-articulate haptic feedback in a game be better than just 'vibrate on'?

After you've tediously asked as many of these as I have, maybe you'll see that there are really only two possible answers.

THE TWO STRINGS THEORY

Every question above, and every similar question you can devise, can be answered with one of these two statements:

Better Distribution.

or

Better Resolution.

That's it. These are the primary, primitive measurements of communication media. No other similarly core measurements of progress exist in communication media. Period.

Upon hearing that there are only TWO measurements of technical development people always try to test the statement.

“Ah ha! Faster charging!” Distribution.

“TikTok filters… Memes!” Resolution.

You cannot point to a technical development in the history of, or hypothetical future of, communication media where "why" is not answered with: "because it provides better (fill in: Distribution or Resolution)".

Further, these are not merely static categorizations of technical innovation. Every answer you gave above sits on a spectrum of relative technical quality which you labelled, better than, or worse than.

As I tried to understand the model, I began to roughly visualize two strings, a Resolution string and a Distribution string, that presumably both started at the very dawn of human communication. And at that humble starting point we will affix a label to each string: "worst of all".

These strings are then marked periodically, over time, with "better than"s as we develop new approaches, tools and technologies.

(Ok yes, a case can be made that the strings start at the dawn of life since chemicals etc. were a form of communication between cells. This whole argument still holds up. But you don't want to slog through that right? And I don't feel like writing that part right now. Another time.)

EXAMPLE

When Moving Pictures were first publicly revealed in 1895 it was a remarkable advance in Resolution. At that time the medium was B&W. It was many other things too, it was silent, it ran at an inconsistently hand-cranked 16-24 frames-per-second, and had a degree of film grain among other visual abstractions built into the medium.

We can say those features added up to be the Resolution of film at that time.

But it was new, so Distribution was difficult. Distribution of Motion Pictures involved trucks, theaters and projectors essentially.

A film would be copied and trucked to a theater and projected. Distribution was further limited to the throughput of audience members.

But work never stops in the progress of technology.

Over time projectors were motorized and produced in large numbers. B&W movies could be seen by relatively wide audiences.

Soon COLOR film arrived on the scene. A massive leap in Resolution. By the 1930s color film techniques had been refined enough to result in a single piece of color film constructively identical to the old B&W films, so on the one hand we saw a big jump in resolution but no functional change at all in distribution - same trucks, same theaters, same projectors.

Now advance forward - it's 1939 and you are sitting in a beautiful, massive movie theater which is projecting The Wizard of Oz in glorious, immersive, high-resolution full color. Rich and detailed beyond anything you've ever seen to date.

Still from The Wizard of Oz, 1939. Courtesy of Warner Bros

Suddenly the world got collectively thrilled by a teeny tiny, slightly blurry, B&W image again.

How'd that happen?

It was not the image that excited us. It was the fact that moving pictures and sound could suddenly be transmitted inside our homes on a Television. A new Distribution model for moving images.

And universally humanity was willing to accept a significant hit in resolution to take advantage of this much wider and much more persistent distribution.

But work never stops in the progress of technology.

And the resolution of television leapt ahead, eventually bringing "Living Color" into the home.

Then came Cable. Distribution.

HDTV. Resolution.

Home Computer. Resolution.

The Internet. Distribution.

Mobile. Distribution.

VR. Resolution.

(Yes, I've skipped countless advances.)

Now imagine our two strings, Distribution and Resolution, weaving through time and I think you’d see a lovely interaction. With each leap forward, the strings challenge one another, each giving the other time to catch up and jump ahead. It's a dance. A game of leapfrog. Two overlapping sine waves - where every intersection, or tight alignment, indicates some new unified technical state of the medium. A kind of DNA recording our technical capabilities.

It’s worth noting that it is not possible to separate these two strings in any practical sense. Like 1s and 0s. Resolution and Distribution are completely dependent on one another. One simply cannot exist without the other. Lose one and all functionality ceases.

They always move together. It's not a choice.
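If it helps to see the model concretely, here is a minimal sketch - my own illustration, not a formal definition - that encodes the leapfrog timeline above as entries on the two strings.

```python
# A minimal sketch of the model: each advance in a communication medium is one
# more "better than" added to one of the two strings.
from dataclasses import dataclass
from enum import Enum

class String(Enum):
    RESOLUTION = "Resolution"      # fidelity of the experience
    DISTRIBUTION = "Distribution"  # reach and persistence of access

@dataclass
class Advance:
    name: str
    string: String

# The leapfrog timeline from this essay (countless advances skipped).
timeline = [
    Advance("Motion pictures, 1895", String.RESOLUTION),
    Advance("Color film", String.RESOLUTION),
    Advance("Television", String.DISTRIBUTION),
    Advance("Cable", String.DISTRIBUTION),
    Advance("HDTV", String.RESOLUTION),
    Advance("Home computer", String.RESOLUTION),
    Advance("The Internet", String.DISTRIBUTION),
    Advance("Mobile", String.DISTRIBUTION),
    Advance("VR", String.RESOLUTION),
]

for advance in timeline:
    print(f"{advance.string.value:12} <- {advance.name}")
```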

Which brings us to the last question: where does this end?

THE END

Sorry to tease you, but we're just getting to the good part, and don't worry - I'll show a lot of pictures in this section, kay?

Remember our continuum, our string of "better than"s? Humanity has dutifully added "better than"s one after the other since the dawn of communication. We continue today, and we can imagine adding "better than"s years into the future.

But where does "better, better, better" end?

At best. Obviously. Perfect Distribution and perfect Resolution.

That's the end.

It's maybe hard to imagine that there is such a thing. That there could be an ideal end-state in this progression of technical development.

But I believe we can specifically describe these respective ideal states. And if we can - wouldn't that be useful? If we can describe what is essentially the end of the technical journey might that not help us navigate toward it over the coming years? Might it not help us evaluate, embrace or abandon concepts and ideas as they vie for succession on the strings? I like to think so.

PERFECT DISTRIBUTION

Perfect Distribution is not hard to theorize. It's rather the easier of the two.

I think we'd have to conclude perfect Distribution is the absolute interconnection of every sentient being, 24/7, at full resolution (to be defined).

"Absolute" is probably doing a lot of heavy lifting in that sentence. Essentially it means persistent, simultaneous, full-band I/O with all sentient beings at once.

Which I imagine means one would lack any sense of individualism and become merely a node in a universal network of life.

The Borg Cube. Star Trek: First Contact, 1996

The Borg, for all you Star Trek fans.

You might argue it sounds like a bit of a nightmare, and depending on my mood I might agree with you.

Nevertheless that’s perfect Distribution. Until we reach such a technical capability we will always see room for improvement in our technology.

PERFECT RESOLUTION

I'll be honest, when I first began this exercise I thought this was the easy one. With a career's worth of focus on imagery, simulations, and special effects, I thought I had this one in the bag day one.

Like most of us I looked at the implied trajectory. I thought of "resolution" as dots per inch, pixel density, how high the sound fidelity, how accurate the kinetics, how fast the frame rate etc.

One merely needed to look at the improvements in gaming graphics to get a visceral, clear sense of where our technologies are aimed. Right? It seemed quite obvious.

“Sprint 2 Atari” through “Forza 5” - Resolution advances dramatically in racing games

Lara Croft through the ages. A clear trajectory is on display.

Naturally this led me to believe that "perfect resolution" would be the recreation of real-world experience. Star Trek's Holodeck, or the Matrix. When the plane of the illusion is so perfect, when our five senses are so perfectly fooled that we simply cannot tell the difference between reality and the medium, surely that is "perfect Resolution".

Right? There is no more room for improvement.

The Matrix, and Star Trek’s Holodeck

I believed I had it nailed for a pretty long time. This is the knee-jerk conclusion. Seemed obvious. Elon Musk thought so.

But alas, that was not it. In fact I was WAY off.

What more is there?

Well for one thing I realized that the rote recreation of reality actually isn't the point. We don't want to recreate reality. We already have reality. Rather, the point in having the ability to recreate reality is so we can ABSTRACT the sensation of reality to suit our interests.

To do impossible things we could not otherwise do.

To go impossible places we could not otherwise go.

To be impossible things we could not otherwise be.

To gain liquid control over reality. To live an abstracted life free of the limitations of reality.

To experience non-reality, by design. To hear colors, taste music, experience anything we wish.

Ready Player One, 2018

When we can wield the medium with such complete control over the illusion that we can abstract reality to create any experience we can imagine - surely THAT is "perfect Resolution".

Right?

Once again I was briefly self-satisfied that I had closed the book on the topic. That I had reached the end.

And once again I discovered I was wrong. A million miles off in fact.

What more is there?!

The answer as I understand it today came almost accidentally.

It was stupid. I was doing something mindless like doing the dishes, and for some reason I'd been idly considering the word "communication" at random, like when you say a word so many times that it starts to sound strange? Some random synapse fired, and I realized that in my drive to identify resolution in terms of our perception I had somehow completely lost the entire plot. Communication. Or "camoonikashun" as it had started sounding in my head after 50 repetitions.

Communication.

These mediums I had been referencing all along were restricted to various types of representation and perception, as communication methods.

But in reality, they all function as filters, lenses, approximations of one’s full intent.

Even perfectly resolved virtual reality is just a gross abstraction when communication is one’s goal.

Communication is not merely about the representation of ideas, it is about actually understanding and being understood.

The Resolution of Understanding

Communication has come in so many forms.

Language, our first real communication medium, the one we still rely on, is in all fairness a ridiculously coarse filter through which to communicate! Why do you think people argue and disagree so often? Because language is stupidly imprecise, unwieldy and dense. That any complicated thoughts can squeeze through that gravel-filled filter is beyond me.

That hasn't stopped some of us. Poets have managed to raise the world's languages to new heights by saying so much more than the mere additive nature of the words alone. To communicate intense feelings sometimes with all the wrong words it seems. But they’ve shown the medium can be used better.

For some of us language just doesn't do.

Painters try to express themselves without words at all. "A picture is worth a thousand words". Complicated ideas can be expressed. Inner, emotional appreciation for the visual. Thought-provoking. And some painters have gone very far into abstraction to communicate what they see, think and feel. But of course there is always the problem of audience interpretation.

The Scream, Edvard Munch in 1893 and Atavistic Ruins after the Rain, Salvador Dali in 1934

One: Number 31 Jackson Pollock in 1950

Musicians use music, a kind of universal language with a direct line to emotion.

Film-makers use cinematic techniques and drama eliciting feelings from shared experience and empathy.

Your grandmother may not mean to, but the smell of those chocolate chip cookies baking imprinted on you and connects you still to those emotions even today. And sometimes you make them for others.

Touch may be the most powerful communication medium we have, with the worst distribution model.

We are all trying to express something inside and understand one another in various ways, with a message that today passes through representative mediums to an audience resulting in varying levels of accuracy and understanding.

Even the Holodeck or the Matrix at their best - impossible to discern from reality - is not the end-all be-all. This is merely a type of communication. The sharing of sensory illusions - just one more way to express oneself.

Not one of these - fully resolved - is perfect Resolution in the context of communication.

Perfect Resolution will be the direct transfer of thoughts, emotions and feelings in their native state.

What is a feeling?

I'm no psychologist but I think of a feeling as a tangled ball of our deepest and even superficial hopes, fears, insecurities, and wishes, mixed with memories that may reach as far back as you can remember. Ideas that define who you really are - your childhood, your parents, that terrible moment you were beat up. The time someone laughed at you, your awkward first kiss, your amazing 9th kiss, the way you wish people knew who you really were, what you really wanted. How much innocence, sincerity and love you have deep in your heart and an infinity of other unique possibilities. All are twisted up into a ball so tight that you can't possibly untangle it in the heat of the moment when you're fighting with your partner about the fact that the refrigerator door was left open again, and why the hell was it left open in the first place and...!

And we all know, in that moment, neither of you are really arguing about the refrigerator door. You're arguing about a thousand other things, everything tangled up in that ball of emotion that's inside you.

So the fight rages.

But if only...

If only in that moment, you could take that feeling you have, that ball of complicated emotion and memory, lift it out and place it gently inside your partner, in its native state... And your partner place theirs inside you.

They would feel what you feel. All of it. They would understand who you are, in the most intensely intimate way possible. They would understand why you feel this way. What brought you here. The justice of your point of view that words can never express. Your love for them, your fears, all laid bare. Because for a moment, they would be you.

And you would be them. And for a moment you two would have perfect understanding.

Perfect empathy.

That - is perfect Resolution.

Could you ever hate someone with whom you could experience this?

Could you ever imagine that you are in any way superior to another human being if you can understand each other so completely without language or filters?

Could you even think up the idea to fight, oppress or degrade?

Could we ever have a war if we could understand one another with such perfect resolution?!

No, I do not think any of that would be possible. It would make as much sense as fighting yourself. Or vilifying yourself.

Indeed, I believe we would see people work together to solve one another's problems.

And I think, short of better communication, that's who we really are inside and who we really want to be.

And here is proof:

Today human beings of all kinds, universally, all over the planet, innovate, seek out, use and celebrate, every major technical advancement that we add to our collective communication media continuum.

Because we know it's "better than".

And though perhaps unacknowledged, deep down each of us knows every advance is only “better than” because it brings us one more increment closer to perfect communication; perfect understanding - completion of our gravitational, unspoken longing to join together with absolution.

From the moment we’re born, isolated within our body, separate from all others, we have felt inexorably compelled to connect. To be understood.

Interpret that need as you may, the pull of love, family, God, rejoining the spiritual source.

Amazingly, we can see how this drive is revealed through the incremental improvements and trajectory of our tools.

I don't think there is anything more beautiful or hopeful in our changing modern world than the inevitability of this conclusion; and its laying bare humanity's truest universal wish.


Interactive Axiom #5: The Digital Creator’s Trap

Technical advancements are not creative CONCEPTS

Ongoing advances in technology always open new possibilities for creatives and developers. It’s a way of life in digital media.

But do those exciting new advances make us better or worse at what we do? How do they challenge our inventiveness and our range of skills?

I hate to report, but the most exciting technical advancements in our medium today are a trap of a sort that critically limits how creative most of us are. And many are blind to it.

In fact you could be doing significantly better work than others in your field if you just change your mindset. And I want to help you do that.

The only way I know to explain this is to tell you how I came to this place.

The Set Up

I was unexpectedly fortunate to have begun a career in Special Effects during the final years of the industry’s pre-computer era, about 1989.

In those days special effects was as close to true constructive, alchemic magic as anything I could conceive of. Every day in the studio was like being in a wizard’s workshop. This was not just an era defined by “hand-made”, good lord, we had to control NATURE! Fire, ice, smoke, air, gravity, any chemical that had any useful visual or reactive characteristic at all, explosions, evaporating fluids, sparks, bubbles, 2-part foams, liquid urethanes and latex, countless powders, electricity, motors, magnetism, and sure, artistry - all were common tools of the trade. You had to learn physics, energy, time, and materials, and wield them, gain predictable control over them, and puppet them, all through lenses and a strip of photosensitive plastic.

I’m not sure Nikola Tesla had this much fun.

The studio was called Matte World. It was founded by two of my long-time heroes: Craig Barron and Michael Pangrazzio. They enlisted another absolute hero of mine, Chris Evans, and a handful of other special effects “gods” who hailed from, or worked in proximity to, ILM (Industrial Light and Magic). These guys worked on the original Star Wars trilogy, the Indiana Jones films, E.T., Star Trek movies, and almost everything else ILM had done to that point. It’s hard to express the honor and daily joy I felt working there. I’ll cry if I think about it too much.

Despite the great skill of effects teams like these, the medium had built-in, constructive limitations which were well known at the time. Essentially there were types of shots that just couldn’t be done well.

A massive “miniature”!

You could not miniaturize water for example. All pre-computer effect shots involving water, no matter how well-detailed a miniature set may have been, more often than not looked like a toy. “Superman The Movie” from 1978 is a classic example with a truly, impressively massive “breaking dam” set that comes near the end of the film. The miniature build was exceptional but unlike all the other miniature effects in the film it nevertheless turned totally toy-like the moment water began flowing.

Even at that scale, water blows the effect. Hey, who’s rocking the bathtub? Superman 1978

Large sweeping shots with lots of people or moving objects were always prohibitively expensive and problematic. Creatures were usually a combination of performers in suits, puppets, and stop-motion animation and despite the great expertise, always looked a bit cheesy, particularly full-body.

At that time one 3-5-second special effects shot could take upwards of three to four months for a team of six to twelve people (or more) to complete.

You can do the math.

This work represented the pinnacle of a 100-year-old art form. The understanding of effects techniques had dramatically improved year over year, handed down from master to apprentice, and improved on again and again, such that no previous creative teams in history had ever created such consistent, compelling, high-quality special effects as that generation in the early 1990s.

…and then Jurassic Park came out.

The Age of the Impossible Image

With Jurassic Park, CG (computer graphics), long promised and inconsistently attempted to date, had come of age.

Sure, other movies had used CG before then, but none had done it without CG’s characteristic plastic abstraction, not to mention with such a classically impossible subject:

Living creatures.

What a spectacle! Every CG shot seemed impossible.

Obviously- and it was indeed immediately obvious to every effects nerd - Jurassic Park was the shape of things to come. Until that point it was still easier to throw a handful of dirt on a miniature set than to render that dirt in CG in a truly visually convincing way. Let alone to render living, organic creatures of muscle, skin and bones in scenes with moving cameras!

The Last Starfighter 1984

Up until that moment you had to embrace the unrealness of CG in your story. For example, Tron was a “video game” - so nothing had to look real. Toy Story was plastic toys, check. “The Last Starfighter”, well, just settled for intentionally unconvincing effects.

Friends who worked on Jurassic Park later told me that getting the dinosaur’s feet to stick to the ground (early motion-tracking to keep them from weaving or floating against the background plate) proved incredibly troublesome at one point. But Jurassic Park, and the amazing CG work ILM did on that film, changed all that.

For the first time, the effect was undeniably, light years better than anything before it.

As an effects guy I was thrilled. The freedoms we suddenly had! The ability to create such incredible images without carrying sandbags and apple boxes! If you could think it - you could suddenly create it.

And boy did they.

The whole industry did. In no time CG effects were ramped up in every effects house in the industry. Including Matte World. Our studio’s name suddenly became Matte World Digital.

Special effects guys and filmmakers around the world had long carried a kind of bucket list of shots that we’d always wished we could do, impossible shots that had been naively written into spectacle scripts for decades, but that were simply not possible to produce in the previous technical era. Shots that were too big - too complex - too constructively impossible to do before, so they got compromised. Scenes with “massive armies” were reduced to the number of extras recruited, and maybe a single expensive shot that employed a static, but sweeping matte painting to give scale. Massive fleets of ships were rewritten to be 3 ships, up close. Scenes where giant ogres fought humans were shortened and dramatically simplified, in order to be shot with a guy in an ogre suit, in forced perspective or on a blue screen to be composited in an optical printer. Optical printers are dead today. But at the time they were the physical film equivalent of Photoshop’s layers and masks.

But after Jurassic Park, and in short order, these long coveted impossible shots and so many more were quickly ticked off the list.

It’s why I refer to this period as “The Age of the Impossible Image”

These shots were breathtaking when you first saw them. Jaw dropping. Impossible. You’d never seen them before. And (at first) these shots sold tickets.

The Lord of the Rings trilogy ticked many off the list. The Mummy. George Lucas’ second Star Wars trilogy. And dozens more.

But then something happened.

We’d seen those shots.

And then we’d seen them again in another movie. And maybe yet again in another. And pretty soon we’d seen them all, numerous times.

Take for example this previously impossible shot:

Two massive dark armies are facing off - standing on a sweeping landscape - tens of thousands of soldiers (or knights, or warriors, or the undead, or whatever) on each side, an expanse of open land between them. Suddenly they start rushing toward each other, moving with growing momentum like two sweeping, massive waves, closing the space between them until - they powerfully collide in battle.

You’ve seen that shot so many times now.

But never before CG.

Today there is no longer a list of commonly envisioned shots floating around the film industry that we simply can’t do believably. Perhaps there are shots that still sit in someone’s imagination, but there is nothing about the technology that constructively limits that shot being produced today. Today we have liquid control over the 2D image. For the first time ever filmmaking has joined writing and painting as being limited only by one’s imagination, and the cost of production.

There are no more constructive limitations.

And it’s here that the first lesson, the first in a series of data points I want to share, occurred to me.

When the technology suddenly enabled these shots, they immediately became the novel, obvious subject of the show. This spectacle of impossible imagery occupied a major share of a film’s attraction, its reason for being.

And soon it became clear (again) that impossible images alone - this new technology - could not carry a bad story.

One of the mantras repeated at Pixar goes:

“No amount of technology will turn a bad story into a good story.”

Jurassic Park was a rare perfect storm: good story and impossible imagery. But then more movies came along, as they always do, that banked far too much on impossible imagery alone, and far too little on telling a truly great story.

These filmmakers fell into an ever-growing trap:

They once again mistook advancing technology for a creative solution.

Novelty Wanes

As always happens, a wiser turning point came.

The Social Network, 2010

And this turning point is exemplified in movies like “The Social Network” where CG was employed - not to amaze and steal the show, but specifically to be invisible, to NOT be noticed, to allow the filmmakers to simply tell their story better.

The Winklevoss Twins were, as you surely know by now, played by the same actor. Through intelligent shooting and editing, and through careful, subtle use of CG effects, most people who saw the film had absolutely no idea that the twins were anything but a pair of talented brother actors until the end credits rolled.

The Social Network, 2010

Indeed, hardly any audience member watching that film was aware at any time that the film even contained CG effects. It wasn’t about that.

The question I want you to contemplate is:

Could this subtler, wiser use of the technology have happened - The Social Network’s cloned brother instead of Jurassic Park’s dinosaurs - at the time the technology was just emerging?

Is such restraint even possible when such long-standing ceilings are suddenly broken wide open?

Hold that thought.

A New Perspective on Perspective

I used to think that the kind of creative challenges we face working with modern technology were unique. That only herein did the technology powering our medium move so fast that we would see it overtly affect our art with a sense of newness and novelty. That traditional mediums - painting, say, a medium that advanced glacially over centuries - were probably never the subject of such novel technical advances, and as such surely not a place to learn a relevant lesson. I mean, how often does pushing colored mud around really change every 18 months?

But then I remembered the work of Paolo Uccello.

I’ll be honest, I didn’t retain much from my art history lectures in film school, but the work of this guy stuck with me for decades, and you’ll see why.

The Battle of San Romano, Paolo Uccello 1440s

Paolo Uccello was a 15th Century artist who painted, among other things, a series of three pieces depicting The Battle of San Romano, around the year 1440. I’m sure this trio of paintings is important for other reasons, but what struck me as bizarre was his use of (or maybe one could say “abuse of”) linear geometric perspective.

Typical pre-1400s really poor perspective. NOT Paolo Uccello.

Up until the mid 15th Century, attempts at perspective were struggling affairs. Artists knew that objects appeared smaller the further away they were, but attempts to recreate the optical effect of perspective in paintings were wildly inconsistent, usually resulting in a sense of flat planes and cutout shapes of somewhat arbitrarily diminishing sizes, and a general lack of foreshortening.

Then in 1435 the Italian architect and art theorist Leon Battista Alberti wrote a Latin treatise called Della Pictura, which among other things described the first mathematical approach to reproducing visually convincing perspective. It included concepts that we still use today: the vanishing point, horizon line, and orthogonal lines. This method was, at the time, a revolutionary technical advance on par with any technical innovation we can point to in digital art today.

Excerpt from Della Pictura, Leon Battista Alberti, 1435
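For what it's worth, the modern shorthand for why Alberti's grid works is simple projection arithmetic. A minimal sketch (my own illustration with an assumed eye height, not Alberti's actual geometric construction):

```python
# Pinhole projection: a point on the floor at distance z, seen from eye height h,
# lands h * f / z below the horizon line on the picture plane.
eye_height = 1.7   # meters above the floor (assumed)
focal = 1.0        # arbitrary distance from eye to picture plane

for depth in [1, 2, 4, 8, 16, 32]:            # floor lines receding into the scene
    screen_y = focal * eye_height / depth     # drop below the horizon line
    print(f"floor line at {depth:2} m -> {screen_y:.3f} units below the horizon")

# Equal steps in depth produce ever-smaller steps on the picture plane - the
# diminishing grid that Uccello let his fallen soldiers snap to.
```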

In the late 1430s that one slightly obscure painter I can remember from my art history lectures, Paolo Uccello, was among the painters most strongly inspired by Alberti’s treatise. He was so enamored of Alberti’s perspective grid and the underlying method for achieving it that he embraced it with all the thrill and obviousness of rendering, you might say, a living dinosaur.

The Battle of San Romano, Paolo Uccello 1440s

Look closely at these chaotic battle scenes, look at the ground, and you will notice that fallen spears and bayonets weirdly land on the ground and seem to snap magnetically to the very underlying mathematical right-angle grid lines that help guide the sense of perspective. I mean these are the lines today you’d draw lightly in pencil and erase when you paint. But in Uccello’s hand it’s as though the very grid exists in physical reality as some sort of ether that influences gravity and physical objects in real life. Look closer and you’ll notice even a soldier and a horse have fallen to their deaths, but in their throes of dying they neatly and politely arranged themselves perfectly along the right-angled grid lines of perspective.

The Battle of San Romano, Paolo Uccello 1440s

Keep in mind - this is nearly five centuries before abstraction was to become a thing in art. At this time the aesthetic goal of most painters was to develop enough skill to hopefully recreate visual reality. While it’s possible that Uccello’s intent was to depict some spiritual truth, that mathematics is maybe an inexorable part of God or something, it’s perhaps more likely he was just excited by the novelty of the new technique and wanted viewers to appreciate the effect. To see it.

To this day such magnetic attraction to technical novelty often plagues digital artists all over the world.

Which is why the first examples of special effects shots that employed CG were dinosaurs and other previously impossible images, and not, say, the unnoticeable cloning of an actor or other minor adjustments that simply help tell a story.

Every time a new technical capability opens up it appears to be human nature to slam our heads against the new ceiling repeatedly, regurgitating the most obvious expressions of that technology, until we can settle down and get back to the business of being creative again.

Today

Now I look at the interactive medium with its digital artists, teams and agencies and often inwardly groan as I see them hopping with a ravenous appetite onto whatever new technical gewgaw presents itself: parallax, animated GIFs (I am unable to use the word “cinemagraphs” with a straight face - what a bloated, self-celebrating name for something so basic and lacking any sense of “cinema”. And anyway, what year is this, 1890?), awkward scroll-powered animation, all manner of trendy JavaScript and canvas elements, DALL-E, AI avatars, AR (fill in the blank with whatever new thing you just read about this week), in short the adoption of novel technologies and interactive trends en masse at an ever-increasing pace.

These tools are too often applied with a heavy hand. Devoid of subtlety, with broad, obvious, flamboyant gestures as if to say “check this out”. Or as if asking, “why would we bother to do it if people weren’t going to definitely see it?”

Or worse “See what I did? I did the first thing that popped in my head based on the full potential of this tool!”

Make note of this. This is what I hope to help you move past. To approach new technologies such that you can make much better, more effective work.

For us the problem of technical novelty is exacerbated because there is a new technical advancement every few days.

Which finally brings us to the point of this over-long post:

HOW?

This is why you read this far folks. And thanks for that by the way. In principle the answer is simple but some technique will be required.

Above all, to be a strong creative in the digital space today:

You Must SHED ALL APPRECIATION FOR TECHNICAL NOVELTY.

(I originally made those letters way bigger but it looked ridiculous.)

To be clear I’m talking about the excitement one may feel about new technology - that sense of novelty - not about technology at large. You must approach the initial development of your project from a technology-agnostic point of view.

Easier said than done.

To do this, I have used the following tactics for years, and they work.


Tactic #1

Time Travelling

You just read another article about a very cool new technology. Everyone’s talking about it. And your dev team just told you they’ve looked into the API and it can be easily incorporated into your project. It’s a no-brainer.

STOP.

Take a walk. Leave the studio.

As you walk, I need you to imagine something.

Imagine that it’s not today.

Imagine instead that it’s 5 years from now. You’re 5 years older. You and your team have done a lot of cool work since then. In that time you’ve seen many really interesting award-winning projects. And you’ve seen a ton of dull crap too. That particular technology that you were excited about 5 years ago is not new any more. When it came out of course everybody did all that knee-jerk, really obvious stuff. Everyone. It got overdone at some point.

Looking back you can see how those early executions were a little empty and easy. They were too obvious. It’s tired now. No one would do that today.

How would you use it today, in the future? Would you use it at all? Would you use it in a way that’s more mature? Maybe the way you’d use it means it isn’t even immediately noticeable. Maybe the tech is woven into the piece in an unexpected way; it’s not the star, but helps make the piece smarter.

– – –

Now come back to our real time. What’s in your head?

Can you out-think all the people who are going to do all that obvious stuff? Can you jump ahead and work with this tech (or NOT) in a way that is more advanced conceptually? Maybe it pushes you somewhere you never expected.

This is the best exercise I have found to kill my inordinate sense of novelty around new tech. Honestly I haven’t found a better one.


Tactic #2

Revere Existing Tools

This next conceptual framework has also helped my teams and me create really great tech-based work that surprised people. But it probably won’t go the way you imagine.

What few acknowledge is that this process of technical advancement, though exciting and empowering, also undermines most opportunities for creative mastery.

I hope you won’t be insulted when I say this but as creatives in the digital space we never really master our tools in the classic sense. Rather we become “proficient” and tend to wait around for the tools to deliver exciting new updates - in essence delivering the appearance of mastery to us.

Each time you pick up a new tool, feature, or update, it gives you more power; but it’s not much of an advance in your skillset, it’s an acquaintance with the update.

“Well the tools are helping us express ourselves. We’re becoming masters of our expression.” …Yeah, sure. Maybe.

But true mastery comes from a lifetime of experience working with the same tools such that one can wield them with skill and intention far beyond anything a new-comer could hope for. Where you make leaps in the sophistication and skill in your use of these tools.

The 10,000 hours scenario.

Few working in the digital space today are that kind of master. With rare exception, none of us has worked with any digital tool for that long. Processing speed doubles roughly every 18 months, and then you’re onto exciting new updates. Photoshop has been around for a while, maybe an argument can be made there - but even then it’s updated so often, and who were the masters who trained its users?

These new tools are all so temporary and transient.

And critically, that’s not because all the best ideas that utilized those tools have been done. Our digital tools do not erode and die because we’ve run out of innovative new ideas and need new ones. Far from it. The best creative ideas that could have come out of any stage of digital media have simply never been seen.

We are merely spoiled today with a regular, distracting stream of new shiny toys that save us from having to truly challenge ourselves. To use our most sophisticated problem solving abilities. Trust me - you can innovate with ANYTHING.

Limitation, as they say, is the mother of invention.

We put a man on the moon with less digital storage than the typical banner ad occupies today. Surely you can do more with what’s in front of you than you have.
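The back-of-the-envelope arithmetic behind that comparison, using commonly cited approximate figures (treat these as ballpark assumptions, not an audit):

```python
# Apollo Guidance Computer: roughly 36,864 16-bit words of fixed (rope) memory
# and about 2,048 words of erasable memory - call it ~72 KB + ~4 KB.
agc_total_kb = 72 + 4

# A typical standard display ad is commonly budgeted at ~150 KB for its initial
# load, and many weigh far more.
banner_ad_kb = 150

print(f"AGC: ~{agc_total_kb} KB vs. one banner ad: ~{banner_ad_kb} KB "
      f"({banner_ad_kb / agc_total_kb:.1f}x larger)")
```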

So there’s the exercise. Recognize that there is latent power in existing tools that hasn’t been touched. There are potential decades worth of brilliant, creative ideas and innovations waiting to be tried in the tools and platforms others have become bored with. The things a true master would do with those tools.

A paintbrush rendered both a cave painting and the Mona Lisa.

Where can you take these tools?

Can you innovate with them?

Mull that over the next time some new technical corndog pops up and everyone predictably gets exactly the same amount of excited and jumps in the clown car.


THE NET NEUTRALITY FIGHT IS SET TO DRIVE ONTO AMERICAN ROADS

Remember Tron?

Depending on your age you’ll say, “Yes” and still maybe mean a different movie. It doesn’t matter, whichever Tron you remember opened on a lovely visualization of data moving through an integrated circuit. Abstracted pulses of light running along circuit traces which then gracefully morphed into, wait for it… cars driving on city streets.

TRON opening sequence: the same metaphor, 2010 & 1982

That metaphor. I was 19, a film student, a complete sci-fi nerd from planet 10, in awe of the artistry and technology in that film, but I still thought that metaphor was stupid in 1982.

And I have subsequently thought it was stupid in every movie ever since that tried to make that same tired metaphor work.

On the one hand, you have the complex perfection of billions of instantaneous electrical pulses optimized, coordinated and controlled by a central brain with a single programmatic mission. And on the other you have a bunch of dumb, disconnected, meat-eaters, steering boxes of plastic and metal in lurching, uncoordinated congestion, independently randomized by a near-infinity of irrelevant, abstract priorities, one of which is “oh…I gotta ‘member ta pick up that second can of Cheez Whiz for Dave,” totally unaware of what’s happening around them.

As a metaphor it’s got all the intellectual gravitas of every stoner’s dawning wonderment: “…soooo…wait… that means that our whole solar system could be like one tiny atom in the fingernail of some other giant being?”

Animal House: Pinto discovers the metaphorical power of weed.

No, it couldn’t actually, because planets aren’t atoms and... forget it, just eat your Cheez Whiz.

But that was then.

And everything is about to change.

With self-driving cars that metaphor is not just better, it will be nearly exact. I mean — it practically won’t even be a metaphor anymore.

With self-driving cars, we suddenly jump from a crude, object-oriented environment where every car is controlled independently of all the others in haphazard chaos, to a perfect, centrally controlled paradigm where every car is issued instructions in graceful coordination with all the others. Suddenly, this act of controlling the flow of traffic on our streets is not just “like” controlling the flow of data in a computer or across the internet, it will be controlled exactly the same way — just a lot slower. Algorithms that control traffic in one can essentially control the other. Speed of transfer aside, the fact that each data-packet happens to have wheels and some human meat inside is largely irrelevant. The entire process operates in a nearly identical fashion.

A great piece by Fernando Livschitz

IP addresses are street addresses, application protocols signify main categories of road use (daily commuting, shipping, emergency, etc.), routers are intersections & roadsigns, NSPs (Network Service Providers) are the cities that build and maintain roads, cars are packets, and so I guess we’re the data.
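For what it’s worth, the analogy is clean enough to write down as a literal lookup table. The sketch below is nothing more than the mapping above restated as data; the object name and phrasing are mine and purely illustrative.

```typescript
// Illustrative only: the road/internet analogy above, restated as a lookup table.
// Nothing here refers to a real system; it just makes the mapping explicit.
const roadToInternet: Record<string, string> = {
  "street addresses": "IP addresses",
  "categories of road use (commuting, shipping, emergency)": "application protocols",
  "intersections & road signs": "routers",
  "cities that build and maintain roads": "NSPs (Network Service Providers)",
  "cars": "packets",
  "riders": "the data",
};
```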

Ok, the metaphor is getting stretched thin, but there is one more factor. Many have reasonably speculated about the end of car ownership, with “peak car ownership” expected within the next three years or so. Following that is the expectation that ownership demand will drop precipitously as increasingly convenient and less expensive on-demand services take over.

So with central control and the likely provision of cars on-demand, access to city road-transportation will become a complete, end-to-end B2C service, nearly identical to that of Internet and Mobile service providers who bundle hardware and services on NSP backbones.

As a business model for such a system, many have contemplated an expansion of the Uber/Lyft on-demand, pay-as-you-go approach. But I don’t buy it. Few digital services today rely on pay-as-you-go, especially once you factor in ongoing maintenance and unexpected traffic surges that drive up the cost per ride. That makes the cost per ride inconsistent, and as an approach to getting commuters to work, school, daycare or the other daily requirements of life, I think most regular users of such a system will seek predictability. And let’s face it, service providers love it when customers pay for things they don’t use.

So instead, ride providers will be incentivized to offer (cue ominous music)

…subscription plans.

There are few things more despised, that trigger more cynicism, annoyance or confusion — by design it seems — than choosing a new mobile service subscription.

And make no mistake — you’ll feel exactly the same way when it comes time to sign up for a new ride subscription.

Come on, you know how this is going to play out. You’ll have to choose from countless contrived tiers and features pegged to your number of family members, regular destinations and requirements: the number of included rides and miles per month before overage fees kick in, trips to “city-zones 1 through 6”, the number of “Fast Rides®” that temporarily prioritize your ride and clear room through traffic along optimum routes. There will also be a choice of cars of varying quality, style and form factor: single-rider commuters, ride-shares, family vans, and luxury models with refreshment services, entertainment and workstations.


With central control we will also have something new to factor in: guaranteed arrival times. Valuable, that, especially for commuters. In short, you’ll have to choose from priority services of all kinds that potentially compete with one another in such indecipherable, convoluted ways that we consumers will be incentivized to spend as much as we can afford so our service doesn’t get us to work or outings late when we really need it, or during peak hours, and doesn’t hit us with exorbitant overage fees, all covered in numerous pages of fine print, please just initial here, and here, and down here.

Chances are it will be more complicated than selecting a new mobile plan, and those aren’t known for being particularly transparent.

Take Sprint.

Today, Sprint offers a plan called “Unlimited Basic,” which, being “unlimited” and all, sounded fine to me, until I noticed that one can choose “Unlimited Plus”. 

…I’ll be completely honest, I was still trying to imagine what one could add to “unlimited” that would be worth paying more for, but kept reading and apparently it doesn’t stop there. You can even choose “Unlimited PREMIUM: Everything you want from Unlimited, and so much more!”. Clearly “unlimited” is lacking.

Speaking of “more”, AT&T apparently tried to one-up Sprint by offering a base plan already called: “AT&T Unlimited &More” which offers not only unlimited, but “more” too, at the base tier. And because “more is more”, there’s “AT&T Unlimited &More Premium”. For all of your gold-plated more than unlimited needs.

Verizon, on a virtual rocket ship to planet Unlimited, takes you into the stratosphere with “Go Unlimited”, “Beyond Unlimited” and “ABOVE UNLIMITED!”

Clearly the industry needs to switch its thesaurus out for a dictionary.

Be that as it may, I worry our future will include similarly cryptic choices when it comes to rides.

And if so, I hope you see what I see. And really, this is the point of this piece:

Every dynamic we face today, in particular the market dynamics that led us as a country to have to defend Net-Neutrality, will suddenly exist on American roads.

Perhaps to even a greater degree, because we will be moving our bodies, our lives and careers, not just data, and because the system will absolutely have to prioritize riders in *some* way to control traffic, and because our arrival time is so crucial to our ability to perform in life. All of this will add up to make it even harder to ensure that access to roads, rides, and thus even daily life, is fair.

Disneyland at capacity is not the "Happiest Place on Earth".

Some have suggested that sheer central control will mitigate traffic bottlenecks. “Traffic won’t happen”. But I disagree. Traffic is traffic, and the central system doesn’t control demand. I’ve spent decades studying traffic flows in theme parks. No matter how orderly, Disneyland on a cold day is a significantly different experience than Disneyland at capacity. Sure, you can optimize to a point, but in the end it’s a sheer bandwidth issue. To wit, when digital destination demand surges (e.g. web traffic), we instantly duplicate the digital destinations and provision new lines to serve them, but you can’t duplicate real-world destinations and roads. Higher-than-expected demand will naturally require some percentage of riders to be routed on longer, more indirect routes, while lucky others will ride direct.

Optimization helps, but demand against bandwidth dictates traffic.

When the Internet was new, it was somewhat conceptually easy to demand net-neutrality. But consumers have already accepted for-pay privilege on roads for years: toll roads, fast-pass toll lanes, congestion zones, parking passes, etc.

Wealthier consumers may very well even demand premium speed at a price because for the first time such a thing will be possible. Even at rush hour. For the first time, a centrally controlled system could divert any vehicle to any route on the fly, and would be able to guarantee arrival times for premier customers — traffic be damned. This would cause inconvenience for lower tier riders who would face longer rides, diversions from optimal, direct routes and a requirement for earlier departure times. An arrival time may still be “guaranteed” for these lower tiers, but it will still be a longer ride than the higher-paying customer.
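To make that mechanic concrete, here is a minimal, hypothetical sketch of how a central dispatcher might hand out the scarce direct routes by subscription tier once demand exceeds capacity. This is not any provider’s actual algorithm; the tier numbers, rider names and capacity are all invented for illustration.

```typescript
// Hypothetical sketch of tier-priority routing; not any real provider's algorithm.
interface RideRequest {
  riderId: string;
  tier: number; // 1 = highest-paying, 5 = lowest
}

// When demand exceeds the direct route's capacity, higher tiers keep the
// direct route and everyone else is diverted to longer, indirect routes.
function assignRoutes(requests: RideRequest[], directCapacity: number) {
  const byPriority = [...requests].sort((a, b) => a.tier - b.tier);
  return byPriority.map((request, i) => ({
    riderId: request.riderId,
    route: i < directCapacity ? "direct" : "diverted",
  }));
}

// Example: six requests at rush hour, room for three on the direct route.
const demo = assignRoutes(
  [
    { riderId: "exec", tier: 1 },
    { riderId: "family-a", tier: 4 },
    { riderId: "family-b", tier: 4 },
    { riderId: "luxury", tier: 2 },
    { riderId: "commuter", tier: 3 },
    { riderId: "retiree", tier: 5 },
  ],
  3
);
// -> exec, luxury and commuter ride direct; both families and the retiree are diverted.
console.log(demo);
```

Nothing in that sketch is exotic; a single sort by tier is all it takes to turn route assignment into a class system, which is rather the point.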

And although one can find many pockets of American life where such for-pay privilege exists, I don’t think we have ever faced an example that would so ubiquitously, and so personally, draw the differences between our economic classes into focus for so many people at once, at such a high rate of incidence, as privilege literally enacted before our eyes at every turn. It would reveal a visceral, demoralizing, rigid, functional kind of class “Metropolis” that does not exist on roads today, even between the drivers of, say, an old, rusted 1975 VW Rabbit and a brand new Aston Martin. Today, both those drivers and their respective cars flow to work with the same priority, at the same speed, hit the same traffic, and obey the same rules of the road, just with different degrees of comfort. Today, by and large, we have an organic type of “Road-Neutrality”.

Fritz Lang’s Metropolis, the classes above.
Fritz Lang’s Metropolis, the classes below.

But centrally controlled priority service tiers would change all that, and because being places is such a fundamental requirement for living in society, the tier you can afford will change your life. Perhaps even more profoundly than having a nice house in a friendly neighborhood.

Despite all this, in my cynicism, I think the personal benefits of privilege will simply be too attractive for wealthy consumers, and for the companies who would serve them, to pass up.

THE PLANS

To give you a taste of the kind of choices we may have to make in the future, here is a theoretical subscription tier offering for my new Ride service called “Metropolis” (a rough data sketch follows the tier list):

ON-DEMAND

Pay-as-you-go will carry the highest average cost per ride on the roads.

SUBSCRIPTION PLANS

From single-rider through family plans, subscriptions will offer more rides, riders and benefits to the dollar.

Tier 5

Access Tier — Bottom of the subscription line-up. Probably called “unlimited” if we’ve learned anything from the mobile industry. This is for the non-commuter: retirees, people with walkable/bike-able jobs, and those with only occasional errand/ride requirements.

• Limited number of included rides/week to pre-selected neighborhood/city zones.

• No priority “Fast Rides®”. 

• Few guaranteed arrival times.

• Fees for most extras and add ons.

Tier 4

Commuter Tier — Low/mid-range subscription. 

Probably considered the bottom tier for single, daily commuters.

• Five weekly guaranteed “on-time”, round-trip, rush hour rides per family-member, must be pre-scheduled — usually used as work / school / daycare arrival trips limited to pre-selected city zones. 

• Large number of “flex-time rides” included (low priority, often diverted to outer routes to make room for higher tier riders) for general outings and errands.

• No “Fast Rides®” included — cost extra.

• Fees for changing plans or late departure. Other gotchas.

Tier 3

Convenience Tier — Mid-range commuter and family-plan.

• Everything above, PLUS:

• Limited number of “Fast Rides®” included. 

• Higher number of included riders, schedule changes and departure delays.

• Luxury cars / family vans cost extra.

Tier 2

Luxury Tier

• Unlimited* rides. 

• High number of Fast Rides®.

• Multiple riders included.

• Guaranteed Arrival Plus (allows you to leave up to 20% later than scheduled and STILL hit guaranteed arrival time, without fees).

• Allows last second changes and increased wait times. 

• Small number of included “Luxury rides” — luxurious cars with snacks/VR entertainment/workstations.

 *Unlimited is actually limited, read fine print.

Tier 1

Platinum Executive Tier

• Always “Fast Rides® Premium”. (Fast Rides® aren’t so fast next to “Premium”)

• Always Luxury cars, with refreshments/massage chairs. 

• “AutoMotion® VR Experience” included. (VR syncs with your car’s natural kinetic motion.)

• Blow past literally everyone on the road. 

• Costs a fucking fortune.
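If it helps to see the shape of an offering like this as data rather than marketing copy, here is a rough sketch of the made-up “Metropolis” plans above as a configuration object. Every field and value is invented for this illustration and describes only the fictional tiers above.

```typescript
// A rough sketch of the fictional "Metropolis" plans above as configuration data.
// Every field and value is invented for illustration; none of this is real.
interface RidePlan {
  tier: number;                                // 1 = Platinum Executive, 5 = Access
  name: string;
  includedRidesPerWeek: number | "unlimited*"; // *"unlimited" is actually limited, see fine print
  fastRides: "none" | "extra-cost" | "limited" | "high" | "always";
  guaranteedArrivals: "few" | "pre-scheduled" | "plus";
  luxuryCars: boolean;
}

const metropolisPlans: RidePlan[] = [
  { tier: 5, name: "Access",             includedRidesPerWeek: 4,            fastRides: "none",       guaranteedArrivals: "few",           luxuryCars: false },
  { tier: 4, name: "Commuter",           includedRidesPerWeek: 15,           fastRides: "extra-cost", guaranteedArrivals: "pre-scheduled", luxuryCars: false },
  { tier: 3, name: "Convenience",        includedRidesPerWeek: 25,           fastRides: "limited",    guaranteedArrivals: "pre-scheduled", luxuryCars: false },
  { tier: 2, name: "Luxury",             includedRidesPerWeek: "unlimited*", fastRides: "high",       guaranteedArrivals: "plus",          luxuryCars: true  },
  { tier: 1, name: "Platinum Executive", includedRidesPerWeek: "unlimited*", fastRides: "always",     guaranteedArrivals: "plus",          luxuryCars: true  },
];
```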

With the central control of traffic, it’s inevitable that some riders will find themselves on slower, non-optimal routes. But who, and why? These are fair questions since the answers will absolutely be programmatically pre-determined. And I’ll be honest, I don’t like the tier scenario above.

I worry about the family who today has the same access to routes and roads as anyone else, suddenly relegated — every day and forevermore — to relatively slower, more frustrating, roundabout, outer routes, because they can’t afford a higher tier of service, while wealthier riders enjoy priority over all others, zipping quickly along shorter, more convenient, direct routes with ease.

These issues might seem small at first glance, but such a permanent change in our free access to our very cities forms the basis of a whole new “Metropolis”.

Ultimately it is unfair, for all the same reasons such selective throttling is unfair online; maybe more so.

As consumers we must choose to have an impact on the way this happens at a policy level, and so I think it’s worth processing where we individually stand on this sooner rather than later.

Because if we don’t engage, it’s going to be decided for us soon, and I think, unregulated, there is little question as to how this all plays out.

This is my alert to fight for something called Road-Neutrality.


THE GREAT WEB DESIGN CRISIS

Beginning in 1993, and several times each decade since, the interactive industry’s reigning crop of web creators has faced new challenges that have required concerted design responses to overcome. Some of these challenges have been the result of advances in codebases and web standards, changes to hardware, economic shake-outs and new business trends. And with each challenge the industry responded decisively.

But now web design faces a new kind of challenge, one we are failing to overcome. Not the result of external forces, this is a monster from within, ironically ushered in by the very designers and developers that are subject to it. On the surface we can see only symptoms: an industry-wide homogenization of web design, accompanied by a sharp decline in the innovation of new interactive conventions. And while those critical failures would be bad enough, the underlying cause is complicated and runs much deeper.

The real crisis is that our entire state-of-the-art web design methodology, our roles and teams, and even our qualitative values are the product of a misunderstanding.

Narrowing The Cause

Despite now providing access to countless, wide-ranging categories of content, products and services, today's websites are aesthetically and functionally blending, becoming indistinguishable from one another save for a logo in the topmost banner. More and more, the brands that occupy these sites are losing their identities in a sea of sameness.

Further, in a medium whose defining attribute is interactivity, and with the technology never more advanced or capable of handling challenges, designers seem for the most part to have all but abandoned the pursuit of new, improved interactive models, settling instead into a non-confrontational, follow-the-leader approach to web design.

I reject the claim that the pursuit of theoretically optimal usability releases us from the strategic need to notably differentiate and innovate. There is not one absolute way things should look, communicate and behave on the web any more than there is one absolute in architecture, interior or industrial design. Great design has always included a quotient of subjectivity.

To which one might then swoop in with the oft-quoted web-design hammer, “Yeah, but it's been scientifically proven that users prefer generic, prototypical designs. Generic interfaces have been shown to convert better."

Yes. That's true. At least until it is measured against something that converts better than a prototypical design, at which point the opposite will have been scientifically proven.
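To be concrete about what that “scientific proof” amounts to, here is a toy comparison with entirely made-up numbers. An A/B test only ranks the variants you bothered to test; it says nothing about the designs you never tried.

```typescript
// Toy illustration with made-up numbers: an A/B test only ranks the variants you tested.
const conversionRate = (conversions: number, visitors: number) => conversions / visitors;

const generic = conversionRate(310, 10_000); // the "proven" prototypical design
const bespoke = conversionRate(250, 10_000); // a half-hearted custom design
console.log(generic > bespoke); // true -> "generic converts better" is proven… so far

// Until a braver variant shows up in a later test:
const inventive = conversionRate(420, 10_000);
console.log(inventive > generic); // true -> and now the opposite has been "proven"
```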

Which begs the question: did you stop to wonder how that original prototypical design ever established itself in users’ minds in the first place?

It exists because its parts were innovated. They began life as disruptions. Non-sequiturs. In essence, risks. And that’s something web designers, and the companies they serve, don’t appear to have the guts to do much of in 2016; instead, they take short-term refuge in the safety of the status quo, confidently back-pedaling into the historic innovations of braver others. Surely you can see that the meme "users prefer generic interfaces" might also be regarded as the convenient mantra of a designer who takes the passive route to short-term profiteering.

Finally, you may be thinking, "Oh come on, any of us could break out of any design trend tomorrow if we so chose. We've done it before."

Actually, we haven't. This is not merely some aesthetic design trend. It's not some fashionable phase that can change with taste. The root causes of this are intertwined with our very state-of-the-art thinking. To solve this problem we must dismantle many of our current best ideas. A contradiction which results in defense of the status quo. It would appear that we are facing systemic incentives not to fix this.

Hot Zone: The Web Design Ecosystem

It bears noting that everything I will describe happens within an ecosystem that is almost perfectly engineered to focus and amplify the problems. For example, near universal access to computing platforms has enabled more people than ever before in history to lay claim to Designer and Developer roles.

Ironically, this also happens to be during a period of “Flat” design, which is characterized by a minimum of affordances and fewer discrete design elements than ever. So these precious few design elements are being endlessly, incrementally adjusted, on a massive, global scale.

The narrow result of this spray of minutiae is then being massively shared on sites like Dribbble, Behance, GitHub, CodePen and dozens of other design/developer communities, which allow for favoriting and sorting of a select few common pieces of work. The top minority of these are in turn re-re-re-referenced ad nauseam and co-opted more widely and freely than ever before.

Sorry, gimme a second while I fill my lungs again…

So of course everything looks the same, for Christ’s sake! This is the systemic equivalent of a perfect storm; a made-to-order, global, make-everything-be-exactly-the-same machine. A worldwide design purification filter. If you wanted any category of design to fall into complete homogeny, you couldn’t do much better than to set up the exact ecosystem above. Indeed such a complaint has been voiced before.

There is strength in numbers, confidence in the choices of the mob. And the majority of web designers are falling into that trap.

Despite the obviousness of this, it’s far from the worst offender. The problem cuts a lot deeper.

Patient Zero: UX

And lo, the Internet bubble burst and out of the ashes came the single most valuable invention to hit the medium since its birth: the User Experience Design discipline (UX).

All cures contain side effects.

If there had been a general irrational exuberance and a lack of due diligence on the web before the bubble, there was an equally irrational fear of the medium immediately following it. It was the fear-based era of rote “Web 2.0” utilitarianism and functionality. It was still before Apple had proven the value of great design even to the CFOs of the world, when aesthetics were still regarded as largely gratuitous endeavors.

The UX design discipline evolved out of this period of fear and uncertainty. Following years of unfettered, chaotic experimentation, exploration and a willingness (and sometimes a likelihood) to fail, UX stepped in, rationally culled and refined the best practices (of that era) and established a sensible process whereby optimal performance was achieved through an ongoing cycle of testing, analysis and incremental revision. Today it is the scientific process that dependably leads to incrementally optimized, defensible results.

In some way, those of us who have worked in this medium, who have lived through decades of recurring skepticism and doubt about the value of design, are thrilled to finally have such an empirical, validating hammer. The relevant impact of this development is that this is the first time in the history of the interactive industry that the web design discipline has been able to systemically validate its own effort. To prove the ROI of design. That’s a heady maturity that is still new in this industry.

So far so good.

But the accountability and new-found reassurance that come from the ongoing, incremental effectiveness of the UX process have led most web teams to go so far as to promote UX designers to project leadership roles. You can understand, of course, why UX designers have been tapped to lead, since the whole of the discipline is effectively accountable for both strategic design and the tracking of success. That makes sense if you want to ensure projects stay focused on addressing the business and end up proving the ROI of the effort. What's more, the UX discipline itself has further come to oversee the web design and development process at large in countless organizations.

On the other hand, our promotion of that sensible, responsible oversight has resulted in several unexpected, debilitating side effects.


UX Side Effect 1: The Fracturing of Design

One of the principal ways UX has unintentionally undermined innovation is that it has caused a fracture down the middle of the design process; a fracture that starts with the people and their roles.

UX, focusing on the translation from business strategy to web functionality, tends to attract and reward highly analytical people. More the “responsible scientists” than the “non-linear artists” among us, who are ultimately accountable for the results of a project. These are people who can articulate data, visualize, organize and document information, and manage a process. I'm not suggesting that such a sensibility is the wrong one to manage those rational responsibilities. However, by placing UX in project leadership roles we are facing an unintended outcome: the "down breeding" of a second, unfortunate sub-species of designer whose sole focus is merely on the UX design leftovers. The scraps. Specifically, aesthetics.

What pesky stuff, that.

The area of focus of this aesthetically sensitive design species is no longer the overall emotional, expressive, dramatic experience of Interactive Theater, but the appearance of the graphic layer alone. As such, these designers have largely been relegated to colorers, or house painters, if you will.

In an attempt to draw a clear distinction between the two, we call this secondary role a UI (user interface) Designer. In reality, what the majority of today’s UI Designers focus on is not really the whole of the UI, but the “GUI” (graphical user interface). And even the title “GUI Designer” may be too sweeping, since today’s UX Lead has already, presumably, decided exactly what the interface will do, what components it includes, and generally how it will move and behave. UI Designers do not so much design the interface as merely design how it looks.

Let’s take a moment here, because this is huge.

When we innocently split (sorry, “specialized”) UX and UI design, we unintentionally peeled the whole of great design in two. More importantly, we created a stark imbalance of power and priority between design's yin and yang, which should always be in equal balance if truly great interactive design is the intent. We removed the influence of the unexpected, emotional, improvisational, performer’s sensibilities that come from an artist’s muse and mindset from the origination of interactive theater. Which is too bad, because these are the things that disrupt, that result in innovation, and that delight users. The things that users would otherwise remember.

So is it really any wonder that 90% of the web sites you visit today all look the same? That the apparent “ideal” aesthetic approach is the ubiquitously coveted Flat design, which is itself merely the direct extension of UX’s own wireframes, the flattest design of all? That they all share some variation of the same, tired parallax scrolling effect that pre-dates widespread UX leadership?

I've been in rooms where the question was asked, "Where can I find other effects and transitions?"

Me: (Stares. Blinks.) "What, seriously? Well... uh, I don't mean to be a dick, but that's what your imagination is for. Invent it! Good lord, this isn't a #%$@ing IKEA bookshelf!"

Today, most sites lack creative invention and inspiration. Oh sure, we point out how this page-load animation is timed slightly differently than that page-load animation, or how the scrolling effect is nuanced in some way, but it's all the same. And part of the reason is that we have removed the reliably surprising imagination, the randomizing, theatrical showman, the disruptive, artful inspiration from the UX process. We have removed the chaos and unpredictability of art and inspiration.

Look, I realize that every UX-centric digital agency on Earth has some line about how their UX designers are storytellers. Which I think shows that at some level they must understand how important this is. But God love ’em, today’s predominant breed of UX designer is really a figurative storyteller, and not much of a showman. And I don't mean that disparagingly; the discipline simply doesn't naturally attract those people.

Take a summer blockbuster movie; that's also a user experience. Sure, it’s low on user, high on experience, but such theatrics and production value are merely at one distant end of the UX spectrum. What about theme parks? Who on your user experience team has even the slightest bit of proven experience in storytelling at that level? That’s an extreme, ok, but the reality is that there is huge potential on the web somewhere between that fully immersive, drama-led linear story, and a registration form. Who on your UX team is responsible for spanning that? For even thinking about it? For imagining the magic in the middle? For finding new ways to move from a wireframe toward the Theater of Interactive? For making the experience surprise and delight us? How often are your projects led or directed by the performer’s mindset?

Since most of the web looks and behaves the same today, like the answer to a graphic design problem, most of you should have answered, “no one, and rarely”.

At this point there is always that one person who feels compelled to point at a generation of really crappy, Flash-based ad-ware from the early 2000s, the antithesis of "usable", the epitome of unnecessarily dense interactive gibberish, as though that proves great interactive work can't exist. We agree, much of that wasn’t any good. But neither is this future of timid, mincing goals.

Our overzealous response to Flash’s demise was to throw the baby out with the bath water, to largely abandon the pursuit of disruptive new interactive models. I guess it was understandable; UX comes to the table with all these facts and empirical data, whereas UI designers tend to come with playful, colorful, unproven imaginings. We've seen what happens when the artist's mindset has too much control; the stereotypical result is that it's pretty, but doesn't work. Looking at such a comparison, one can easily argue that it’s not even a fair fight. You might think, "Of course the UI designers are secondary to UX in the process."

But you’d be wrong. The UI Design role (perhaps not the graphic-design-only purists, but this archetypal imaginative soul) has just been intentionally positioned that way, specifically to keep the unpredictable, chaotic forces of self-expression and imagination, for which there are no best practices, from running roughshod over a methodical, user-centered, prototypical approach.

In fact, fostering imagination should be half the job of a project leader who works with tools that are still being collectively learned and understood. But the data-seeking mindset of UX resists this, and as a result limits imagination. It locks one into what one knows. It causes fear as one contemplates disruption. It magnetically holds one nearer to what’s been done before.


UX Side Effect 2: The Failure of Experts

Photo: Patrick Gillooly

In 2010, MIT professor Laura Schulz, graduate students Elizabeth Bonawitz, Patrick Shafto and others conducted a study with 4-year-olds that showed how instruction limits curiosity and new discoveries. In the study, a number of children were split into two groups, and each group was introduced to a new toy in a different way.

In the first group, the experimenter presented the toy in an accidental, inexpert way, “I just found this toy!”, and pulled out one of its tubes as if by accident, making it squeak. She acted surprised (“Whoa!”) and pulled the tube a second time to make it squeak again.

With the second group, the experimenter presented the toy with authority and expert instruction. “I’m going to show you how this new toy works.” And she showed only the function of the one squeaking tube, without mentioning the toy’s other hidden actions.

In reality, the toy could do quite a lot more than the children were shown in either group.

In the end, the children who were shown the toy in the more open, accidental, non-expert way were more curious and discovered all the hidden aspects of the toy. The children who were expertly instructed in the use of the toy did not; they discovered only the action they were specifically taught.

Yeah, these were just 4-year-olds, but don’t discount the relevance of this study. This story is being played out in some form on every interactive design team that puts UX in the lead role.

UX-centric project leaders are experts in historic best practices and proven usability patterns; they explain “what works”, which is then backed by data, and this practice is resulting in measurable, incremental ROI: or, as most business owners would call it, "success". But as with most young companies that begin to turn a profit, such success tends to shift team focus in subtle but profound ways; attention is turned away from innovation, toward optimization.

And this trend shows no signs of stopping. There’s been a splooge of new web design tools, such as Adobe’s brand new “Experience Design CC”, which are the design-tool equivalent of all this incremental, best-practice, templatized thinking fed back to us as next-generation design platforms, where rather significant assumptions have already been made for you about what it is you will be creating.

In their attempt to make these tools, and thus the UX job, easier, their creators have tried to dramatically close the gap between raw code (hard work and complete control) and whatever it is you have in your head. Said another way, these tools encourage a limited range of ideas.

On the other hand, an app like Adobe’s Photoshop is, for an image creator or manipulator, a very powerful tool that gives one complete, atomic control over the 2D image. But it is therefore also quite hard to learn and master. And I think that may be one of the trade-offs with tools like these. This popular, UX-ified state we are in has reduced the complexity and possibilities such that “easier-to-use” tools like these can exist at all.

For that matter, web site template providers like Squarespace.com have had such ample opportunity to observe and reproduce all of your favorite, reused, UX-led design trends that they can offer them back to you in fully-designed, pre-fab form. Honestly, if you have no intention of innovating any new user experiences today, their designs are quite good.

All these apps and templates are merely serving as a different kind of expert: limitation, wrapped in the subtext of “what can and should be done”. There can be no question that starting projects on the “expert’s” platform, while incrementally beneficial in the short term, staunchly limits site creators’ understanding, imagination and exploration of new models and techniques.

No question. 


UX Side Effect 3: The Separation of Content and Interface

Form follows function. That’s the fundamental mantra of great design. But what exactly is being designed on the web? UX designers say they design “user experiences”, so what constitutes a user’s experience?

If you bother to read the definition on Wikipedia someday, be sure to have a cup of coffee handy. Man, what a yawner. Supposedly the "first requirement of a great user experience" is to meet the needs of the product "without fuss and bother".

Wait, seriously? That's a "great" user experience, is it? Literally, it says that. Hey, while we're at it, maybe it should also be without any old tuna fish cans and sadness.

Ok, look, whoever wrote this is both part of, and indicative of, the problem. They have totally nailed the literal "user" part, but they've left the otherwise really intriguing notion of an "experience" off the table.

So what is it really? Let's cut to the chase. An "experience" must largely be defined by what the user does, and in turn, what the user does on the web is enabled in large part through an interface. Shouldn’t the interface itself therefore also be regarded partly as content of the experience? Well, yes, of course it should. Because it is.

If this initially innocent thought ruffles your feathers a bit, if it seems to crack some core belief, hold on tight; I’m not done. Because as a result of the interface being part of the content, the line between the form and the function of an "experience" naturally blurs to an uncommon degree. Which is an inconvenient truth that today’s UX designers, who inordinately prefer fully segregating interface and content, have left generally unacknowledged.

For years most designers have uncritically regarded their specific web interfaces, their chrome, as being more or less separate from whatever content is being served. Almost as a kind of wrapper, packaging design, or container, rather than it being an extension of the content itself. Indeed, that’s why flat design, as a universal, distinct, separate, design trend, independent of any site content, can exist at all. Flat design can only exist as a solution if you draw a hard line between interface and content. If you regard content as something that should be placed in a template. Period.


In fact the persistence of most of the best practices we share today, the underlying support tools like content management systems and authoring apps, and even design template sites, are all products of the same segregation-thinking. They can only exist if content and interface are broken apart.

On the other hand, we've all had the very rare experience of visiting a website where something different happened. Where the site creators bucked the segregation of content and interface, and clearly married those components from the beginning. They developed some unique expression of the content through interactivity as opposed to relying on content management systems and best-practice thinking. And when it's done well (of course all things can conversely be done poorly) you feel it. You gasp. You say "Hey you guys, check this out". It feels magical. It wins awards. It feels true to this medium. It says more than any templated, flat-designed interface, images and body copy ever could.

Why did this happen? Why have so many chosen segregation? Why has separating interface and content become the norm? Well, if you have only been working in the industry as long as UX has been a thing, you might imagine that it’s because this is the right way, that we are headed down the true and righteous path. You might look around at all the smart people who are confidently on this path and imagine that it’s because they are in pursuit of some empirically ideal interface; some perfect range of optimized graphic affordances and interaction principles, and that continually refining toward that ideal is our purpose. Indeed if that were true, that might even validate the idea that interface and content are meant to be separate.

But that would be woefully incorrect.

Ultimately the reason we chose to separate interface and content (consciously or unconsciously) is that the nature of true experience design is just... well, it's just really hard. Sorry, that’s it. Too hard, too expensive, too much time. The truthful, authentic approach to user experience design is harder than whatever it is we have all agreed to call “user experience design” today. So we have just rather thrown up our hands and admitted a kind of meta-failure. We gave up. Our further avoidance of this truth is a sign of our own willingness to admit defeat right from the get go.

So everything we design after that, all the busyness and importance, is an openly pale compromise. At least we know that now.

As if that wasn't enough... are you sitting down? I hope so, because I am about to say something further that will cause the remaining half of you to grit your teeth, particularly because I don’t have comments enabled on this site to serve as an outlet.

User experience design is not the strategic design of a site architecture, flow and interface - true user experience design is also interactive content creation. As such, form and function become the same thing.

Yeow! Ok, that smarts. I'm sorry, I know this is painful and unclear. Just so you know, it is for me too. Moving on.

When designing a television, there is a clear demarcation between the interface of your television, say, and the content it presents. The TV hardware has one user experience, and the content, another. And in such a design world there is clearly no expectation that the content creators are in any way responsible for designing the TV interface, and vice versa.

The same can be said when designing any other hardware device on which we cannot know precisely what content will be presented. Or an OS, which must serve all manner of possible experiences. We must work at arm’s length to the content. Cognizant and supportive, but ultimately behind the wall. In development of a browser, the classic rules of form and function still come into play. There is a clear delineation between interface and content that gives a designer focus.

But when you step inside a browser window those tenets of design blur. Here, an idealized site creator, a true UX designer, would have complete control over the content to be presented. Strategically speaking, that’s their job after all. As such, drawing a line between content and interface, or not, suddenly becomes a matter of choice, not requirement. The corollary would be a designer of a television who is also expected to fill it with content, and must therefore make movies.

Can we apply the tenets of design to the creation of a feature film? What is the "function" of a movie that comes before the "form"? Is it to tell a story? To entertain? To market products? To part consumers from their money? To make an audience feel something?

To be honest, I think that probably depends on who you ask. But it's safe to say that a feature film is less a design problem, and more an art problem. The same condition exists for user experience design on the web. In fact it’s a bit more complicated because the medium and surrounding business has been so strongly influenced by people who come from design for so long. They’ve had their say. They’ve pushed web design into their comfort zone and reduced “experience” to a myopic set of design elements and interactive conventions. And they have relegated the unpredictable sensibility of improvisation and showmanship down the food-chain to an out of the way place that is largely gratuitous.

A movie produced that way would probably result in a middling documentary, a news report, or a corporate training video. But you probably wouldn't be breaking any box office records.

What about those aspects of an experience design which really, truly need to be segregated from the content? Well, you might ask why that's even part of your site; I mean, you might argue that any interface elements which really are unrelated to the content might rather belong somewhere else.

Virologist studies the hamburger icon

Take the ubiquitous “Hamburger” icon, for example. Since it appears to play a role on the vast majority of sites, one could safely assert that it's clearly not specific to any brand or strategy. Ubiquitous and non-specific, the hamburger icon, one could argue, might even bubble up into the browser chrome. I mean, why not? We have a back button up there, and your fathers nevertheless used to design back buttons into their websites. Theoretically, if prototypical sites are so great, and if you take the generic user experience trend to heart, every website should have content to fill a browser-based hamburger menu. It would free up space in the window, and give users the prototypical experience we are told they crave. It’s a win-win, right?

Ok, I know it has issues, but let’s pretend you think this isn’t as crazy as it sounds. I hope you can see that as we extend the idea of prototypical function over form, we rather quickly get into a situation where chrome owns your “UX” and you merely fill it with pre-determined content.

And hopefully you see that we are basically causing that today. 

External Stimuli: The Fall of Flash And The Rise of Multitouch & Apps

But why haven’t we recovered in our quest to innovate and differentiate on the web? Where did our aspirations go? Why do we even accept such a simplistic view of what's possible on the web?

Both the popular rise of UX and the en masse surrender of interactive innovation by web designers took hold around 2007, roughly following the unveiling of multitouch on the iPhone.

Was that timing just a coincidence? I don't think so; I think they were directly related.

After over a decade of interactive experimentation with the often kludgy, challenged tools available to web designers (Shockwave, Flash, etc.), Multitouch arrived via the iPhone.

Holy crap.

That was true interactivity! Our fingers and hands mucking through the very digital mud? Pinch to zoom? Seriously?! Humanity’s continuum toward the Holodeck suddenly snapped into sharper focus. Like the sun overpowering a flashlight, one could see nothing else. It wasn’t even close.

It was at that moment that anyone who was truly passionate about the design of interactive experiences and of innovating new interactive models on the web redirected some key part of their attention to the implications of this new domain. Adobe’s Flash, the in-browser tool which up to then had been the de facto authoring tool for rich interactive innovation, in conjunction with a PC mouse, seemed almost immediately antiquated. Next to multitouch, the resolution of interactivity on the web was pathetic.

And I believe a sufficiently large swath of interactivists, at that moment, had a (possibly subconscious) epiphany:

“Maybe,” the thinking went, “maybe the browser isn’t really an optimal platform for innovating immersive, new interactive experiences. Maybe we have been over-shooting all this time, and the browser is already kind of figured out after all. It's certainly boring by comparison. Maybe interactive innovation is rather the domain of countless new technical platforms yet to come. Maybe we should just re-approach the browser and refocus our work there towards the simple, obvious things that we already know it does well. Just the basics.”

You can sympathize with this thread. I mean, focusing on the known strengths of any technology, including the web, is sensible, and feels like a more mature, nuanced approach to the medium. And yet that simple recalibration, so holistically adopted, sucked the oxygen out of web design, squelching the drive and appetite for explosive innovation in our browser-based experience designs.

Some of you read this and probably still believe that the browser is not a reasonable platform for aggressive interactive innovation today. That we “know” what web sites should be now, better than ever before.

Yes, it’s easy to fall into that trap. I was with you as recently as a year ago. But there is one thought that obliterates that opinion for me.

Let’s play out a hypothetical.

What If Technology Froze?

Let’s imagine that our technical landscape froze. Doesn't matter when. Say, today. Just stopped advancing. That the development tools you use today just stayed exactly the same for the next 20 years, no new versions, no new web standards, or bug fixes or updates, no faster processors, just these exact tools with these exact attributes, flaws and all. What do you suppose would happen?

Would the way we choose to wield those tools remain static and unchanging as well? Would web site design and the experiences we create for the next 20 years stay exactly the same as they are today?

Of course not! Our knowledge of that frozen tool’s true capabilities would explode. We would continue to discover capabilities and nuances that we have simply not had time or wherewithal to discover today. We would fully explore everything these tools could do, every aspect. We would see massive advances in our collective understanding and use of that static technical state. We would see a renaissance in our collective skills, interactive vocabulary and creative concepts. A new language. We would get vastly more sophisticated and better at creating, understanding and using content within that static medium. In short, we would master those tools.

Paint does not have to advance for a painter to develop the skills of a master.

The tools wouldn’t change. We would.

What that suggests to me is that such depth of innovation has always been possible, and is openly possible today, but intentionally or not, we don’t pursue it. That it has always been possible to deepen our understanding of any given technical state and to innovate aggressive, new experiences, but that we just can’t, or don’t. We simply choose not to set the bar high enough.

Surely, one can argue that this is partly because technology changes too fast for creators to develop mastery. It advances out from under us. Indeed, most of us can no more appreciate our true, collective lack of insight and skill than a cave painter might have imagined the insight and skill required to paint the Mona Lisa. So rather than bother trying to master the tools, many of us now patiently rely on the novelty of new technical tricks and advancements to fill the void. We have off-loaded the responsibility to creatively innovate onto the developers of the platform.

We wait around for the medium to master us.

There are ways to fight this condition. Always have been. To reach further, and get closer to the ideal of mastery and innovation, despite technical change, than the vast majority of web design teams do. A very small handful of great teams know those tricks and techniques and achieve that today, such as North Kingdom in Sweden. But it starts with acknowledging that there is more. There are better, bigger ideas than those which the best practices and incrementalism of UX have delivered us so far.

It means you must look at the product of your design effort today and see the potential to have innovated much further.

You have to believe, once more, that your medium, exactly as it is, can do much more than you have bothered to discover. 

The Tragic Death of The Masters

Those of us who created interactive work in the early 90s were long ago forced to come to peace with this. Those of you who created Flash-based projects for many years were probably forced to face it only recently. And as sure as you are reading this, those of you who have only been working in the medium for less than a decade will unfortunately face it soon enough.

That we live in the Digital Dark Ages.

The one thing most of us will agree on is that technology changes. But as our industry incessantly celebrates the new, we less often notice what resultantly fades from view. Yet fade away is exactly what our work does. Unlike virtually every other preservable medium, the interactive work we create erodes from the view of users of future technologies. This is not just the result of changing software, versions and standards, but also of changing hardware, input devices and platforms, and of the unwieldy nature of maintaining functionality across the exponentially vast body of work being produced. A basic change in hardware will obliterate a decade’s worth of work or more.

A century from now your grandchildren, who bother to look back at your career, will see little to nothing, wonder fleetingly whether you ever created anything of value, and then assume “probably not”, since absolutely nothing you created will exist. Lost forever, as though it was merely a live performance. And then they will design some experience themselves, using future tools, that attempts to produce exactly the same emotional payoff or dramatic moment that you are producing today. Maybe they won’t do it as well as you did. But no one will be the wiser.

Unlike any other recorded, preservable medium, such as literature, audio recording, or film & television, interactive work is the first where the great masters and masterworks that came before you disappear. Have disappeared. Vanished, for being fully dependent on a temporary technical state. Consider what literature, music, film & TV would be like today if every book, song, movie and show ever recorded vanished forever 8-10 years after its creation. If you’d never seen Charlie Chaplin, Buster Keaton, David Lean or hell, even Star Wars. If the only content you could experience and reference today was all post-2008? Think about that. Across every medium.

Because although we chronically celebrate the “new” in the interactive space, it’s the previous work that first addressed all the fundamental challenges interactive designers face today. And often without the burden of being instructed by experts in what can't be done. There are countless versions of the Internet (and earlier platforms: CD-ROMs, floppies, consoles, etc.) full of astounding interactive experiences, conventions and designs: beautiful, delightful work that you have lost the ability to reference and learn from. Even now you probably mistakenly assume whatever work existed then wasn’t all that, because “we’ve moved so far past it; our tools are so much more advanced now”.

But if you imagine this, you are mistaking platform and interactive resolution for experiences, content, emotion, and behavior.

Unfortunately, many of you are relegated to having to stumble blindly into re-conceiving and rediscovering those same, been-done, inventions, stories and discoveries all over again as if for the first time. And sometimes you won’t.

The Digital Dark Age has cut today’s designers and developers off from the vital inventions and experiments of the medium’s previous masters; rich resources of experimentation and invention which might have challenged the gross commonality and safe standardization we see today. Might have allowed today's designers to go farther. But their absence has instead relegated today’s industry to being perpetually inexperienced.

Taking Control & Crushing the Crisis

We can fix this. It's huge and has massive momentum, and it will require us to humbly accept our recent errors, but we can fix this.

Approaching project leadership differently than we do today is going to be the best lever we have for affecting this change. We need to start with the right people and priorities in leadership positions.

Individually, each of us can start by acknowledging less certainty in the things we currently take for granted. To once again accept the possibility that aspects of our process and beliefs, bleeding-edge though they may seem, are woefully incomplete. I realize that back-pedalling from progress feels counter-intuitive in a medium that is still being understood - where we’ve only just begun to feel like we “get it”.

Where so many are still only now climbing onto the UX wagon.

But this medium has always been destined to evolve past the domain of GUIs, toward “Interactive Theater”. Consider ongoing advances in Multitouch, AI, and VR among others. More and more, Interactive media is going to be defined truly by experiences, real ones, not the scroll and click brochures we see today.

User Experience will be designed to generate improvisational drama and emotion, delight and magic, in short, a show. Leading such a project, you’ll need usability-knowledgeable performers and showmen, not analysts and best practitioners.

Although this shift in user experience is more obvious when you project out to such an advanced technical future, it is still starkly relevant today.

In no way am I saying that we should abandon the best practices we have developed. But I am saying that it’s patently wrong for the same psychographic that inordinately defends those best practices, to lead an interactive project.

Some of you are UI Designers who truly love exploring the aesthetics of an interface. I get that. And thanks to the present state of the medium that will remain a valid role a bit longer. But to build and maintain relevance in your career you must move past the focus on mere graphic design that UX has relegated you to. You must be thinking about motion, behavior and improvisation with your users. And needless to say, you must resist referencing so few of your peers for inspiration.

There was a time before Bobby McFerrin had his uniquely recognizable vocal style; the one that made his sound instantly recognizable in the early 80s. In interviews he has described developing that style by cutting himself off from listening to his peers for two years, specifically to avoid sounding like everyone else. He used this time to explore new, original ideas. He then spent four more years practicing and perfecting.

Perhaps for you, developing an original approach won’t take two years of isolation; perhaps you can find your personal, inspirational vein without retreating from the industry altogether. But part of being a strong designer is tapping into a unique creative vein that inspires you. Not taking your inspiration from the work that has been “Most Appreciated” or “Most Viewed”, but from a thread of your own. It takes guts to put yourself out there. To accept the professional risk of stepping away from the work of the popular kids. To avoid clicking on the “Upcoming Design Trends of 2017” prediction articles, and to do something appropriately true to you. If you aren’t on such a path, then you need to get on it; either that or take satisfaction in being regarded as a craftsman. Good craft is valid, but distinct from design.

So Who Leads?

  • When designers lead projects you get solutions that look pretty but don't work.

  • When technologists lead projects you get solutions that function but look like spreadsheets.

And we must now add a new, equally unfair generalization to the list:

  • When UX leads projects, you get solutions that convert better but are just like everyone else’s.

The ideal interactive experience takes equal insight and invention from all of these very disparate specializations: creative, performance and design, strategy, best practice, analysis, and numerous technologies.

That's why we must rather encourage much more systemic debate between specializations. This is a team alchemy issue.

In particular we must try to undo the damage we have done to the design community when we began deprioritizing the "artist's muse" in the origination of user experience. The fractured sensibilities of strategic UX and aesthetic UI, as we define them today, must be re-merged, either through skill development or team structure. We then must empower that sensibility. We must reprioritize expressiveness, artistic exploration, play and the chaos of inventiveness as critical aspects of the UX Design process. Equal in importance to the logical, rational aspects that exist today. Project planning and scheduling further need to accommodate this change in process by building in space for experimentation and the inevitable stages of failure and problem solving.

I believe that the best, most advanced interactive teams will address this leadership issue in one of three ways:

  1. They will assign project leadership to a new role, a Director, hierarchically above what stands in for UX today. UX will take on an advisory role. This director will defend the dramatic experience, like a film director who leads many technical specialists across wide ranging fields, to a creative end.

  2. They will significantly change the job requirements of UX leadership to favor people who have a strong showman’s sensibility, an artist’s openness to new ideas, a true innovator. A playful romantic. In addition to managing strategic, functional, best-practice authorities.

  3. They will remove the singular leader altogether and move to a multi-disciplinary leadership model, raising the status and empowerment of each leader of the multidisciplinary teams. This is hard, and risks committee thinking, but in a team lacking ego, whose focus is laser-set on groundbreaking experiences, it absolutely works.

Many combinations of these approaches would work. Each has risks and challenges. But if done well, each also increases the likelihood that we will see more  differentiation and innovation than we do today. Hopefully we will see more magic. 

Conclusion

I’m sure by now it's occurred to you that I’m a total hypocrite. That here I've produced this stupidly long blog post that might as well have been a book, and it's everything I have railed against.

Ok, I admit it, you're dead right; ideally I would have expressed these ideas interactively. At least I would have supplemented my ideas that way. What can I say. I did this alone. I don't code much myself, so I have to rely on the tools that I do have sufficient command of.

But at least I'll admit here in front of you that I have totally failed the medium because of that. That I have produced a piece of totally inauthentic work. I only hope you can see past that. And I truly wish the rest of the industry would similarly admit the same failure.

Tomorrow we’re all going to go to work, and we're going to be surrounded by a wide range of highly talented people who don't yet think this way. Who don't yet see what they sorely lack. People who are comfortable with the current definitions and trends, but who collectively have all the necessary skills to conquer this problem.

Many will not see that they are more than members of the design community, but that they are also on a stage, in a theater, in front of an audience. Putting on a nicely typeset, but very dull, show.

The idea that their project might be off target will not be met with ease. The whole system is aligned right now to squelch your innovative inclinations. But I encourage you to evangelize the need to break the stagnation, to find a new definition for UX and a new team structure to support it. At the very least be the person who questions the status quo and is confident that we can and should invent again.

And look, changing your mindset to favor innovation will help you in your career. The medium is going to change as technology and the world changes. As it has changed countless times already since the early 90s. The birth and death of browsers, the birth and death of Shockwave and Flash, the propagation of social interconnection, the fragmenting of screens, the ubiquity of mobile, the Internet bubble and ad-blockers; the only certainty you have is that the next fundamental changes will be so profound that they’ll catch you off guard. And if you have merely specialized in the current state, if your days are filled with the processes and skills that serve this narrow, off-target, flat-graphic, UX-best-practice-based website, circa 2016, then there is more than a good chance that, when such change comes, you’ll have precious few applicable skills to bring to bear.

Focus on the big picture. Study and understand the future ideal of this medium, and work back from there.

This medium is beautiful and powerful. It carries more latent potential than any other medium before it. And yet we barely understand it. We can barely speak its language. We even lack respect for our own inadequacy. So as the young, best and brightest enter our industry to be routinely trained against such misaligned conventions, led to believe in our self-satisfied celebration of "design", all while the true reason for being of this medium is so weakly embraced, it breaks my heart.

In those very rare instances that I discover someone who actually does get the medium, who bucks all these constitutionally weak, generic trends and produces a delightful, innovative, true-use piece of work, one that effectively marries strategy and functionality, imagination and performance with a fine sense of taste and craft, it gives me goose bumps and makes me want to yell and fist-pump in their direction.

This medium can support magic. We only need to try.

Special thanks to Tom Knowles, Gisela Fama, and Marcus Ivarsson for some truly thought-provoking, spirited debates.

Messages From The Future: VR Entertainment

Ok, so in the future, Elon Musk's math turned out to be wrong. No, we don't live in a Virtual Reality simulation. Turns out, however dull and tragic it might seem, this world is our actual base reality. Boring, I know. Turned out the odds of being in the only theoretical simulated reality that DIDN’T have convincing VR technology among an infinity that DID ruined the whole fantasy. However what his math did prove was that otherwise smart people who are exposed even to old, crappy pseudo-VR, like you have today, almost immediately start to question their base reality for no other apparent reason. Not surprisingly this turned out to be equally true of 15-year-old boys who watched "The Matrix" once.

Go figure.

That said when we extended Musk's math even further, it also proved that we will eventually learn to travel back in time, and that time travelers are therefore among us.  Something I didn't believe until it happened to me.

Anyway, before we get into what made VR entertainment awesome in the future, I feel like I need to explain what VR wasn’t, because in your time a lot of you are still confused about that.

VR Wasn't On Your Phone

Listening to the press in your time you might imagine that VR is on your phone. That as early as next year you will be able to “put awesome VR in your pocket”.  ...Really?  Could someone at Wired please define the word "awesome"? I mean, because I just used that word, and the way you're using it so wasn't what I meant.

Today, filmmakers, technologists and, naturally, pornographers are breathlessly diving into this idea (prematurely), pitching and signing VR content deals to produce some of the world's first so-called “VR films”.

So let’s cut to the chase. In the future VR was not about turning your head to look a different direction. Yeah, that didn’t turn out to be it at all. Totally wrong. Yet somehow an entire industry seems to have confused this point. In fact, turning your head during a linear movie appears to be the entirety of what many otherwise smart people mean when they say "VR" in your time. Didn't the fact that Google made theirs out of cardboard indicate anything to you?

360 wasn’t VR any more than a 4-year-old's crayon-drawn flip book is a summer blockbuster.

Anyway, despite the efforts of some over-eager filmmakers who really tried making movies where turning your head was, like, a “thing”, you thankfully moved past that phase pretty quickly.

If you are one of those guys considering making one of those linear, head-turny VR movies, you could save yourself a lot of professional embarrassment and personal disappointment and just not do that instead.  Strongly recommended.

I further find it fascinating that the same people who kicked and screamed before admitting that wearing Google Glass made you look like a complete dork, are now honking the same ain't-it-cool clown horns all over again with VR on your phone.

I get it. I know. You can't wait to be special, little, cyber-cool, robot-hacker adventure guys. That's still a cool thing in your time, right? A hoodie, a laptop, Christian Slater, and you, with a shoebox strapped to your face, waving your arms like an idiot catching invisible unicorn butterflies.

I get it. Yeah, you're right, you're really cool when you do that in public.

Augmented Reality's Achilles Heel

"Oh, but I am fully aware that so-called VR on your phone is kind of stupid," you say. "I'm on the cutting edge. That's why my eye is on Light Field-based Augmented Reality."Right. Augmented Reality, or “blended reality”, or “mixed reality”, or good lord, whatever the hell Reality you’re calling it now - it’s all the same thing; why do you keep renaming things that have perfectly good names?

Augmented reality wasn't an entertainment game-changer either. And this goes against everything you are reading in the press in your time. Most Augmented Reality evangelists are super excited about how AR is a, or maybe even the, medium for entertainment in the future. So here’s the deal, in the future, AR was to digital entertainment what Sushi is to fine cuisine.  Some of it is really good, but the vast majority of fine cuisine doesn't involve uncooked fish.

Due to the medium’s definitive limitations in the face of the massively expansive domain of entertainment, AR occupied a very narrow slice of the experience pie. It was no more the medium for entertainment in the future than mobile phones are today. You know, hindsight being 20/20.

I admit however, that AR appears to demo very well. About as well as any spanking new special effect technique from Hollywood. Bearded tech bloggers with their geek chic but mostly geek glasses are giddy and excited about the promise of this medium, thrilling and editorially gasping at seeing jellyfish near a ceiling or C-3PO standing behind someone’s desk. And it is kind of cool in a way, because in your time you have never seen that before. As a visual effect (which is all it is), it probably seems like magic. But if you’ve therefore proclaimed AR as "the future of entertainment", well, you're missing a really crippling something:

You're in your room.

Star Wars VR - Episode XXI - LOCATION: YOUR ICKY ROOM

There's your coffee cup, there is yesterday's half-eaten banana, and, oh, there is the underwear pile you keep meaning to put in the laundry basket before Brittany gets here.

In the expanse of storytelling, I don’t know how else to say this, there are only so many believable stories that could happen in that room, surrounded by your own personal junk. Said another way, amidst the infinity of possible amazing stories that storytellers will wish to tell, across vast worlds and realities, only a minuscule, meaningless number of them has anything to do with wherever the hell you actually happen to be in reality.

Got it? By its very definition, AR suffered the severe limitation of having to co-exist with your world such that suspension of disbelief was maintainable. Even though the effect might make a compelling 5-minute demo today.

Hey, who doesn't love the idea that C-3PO might be on a vital mission for the rebellion which coincidentally can only be conducted... three feet from your slightly mildewed, 1960s, pink-tiled bathroom?

Pixar folk have uttered a number of brilliant statements related to the relationship between Technology and Art over the years, and this one feels relevant here:

"You can have some really stunning imagery and technical innovation, but after about 5 minutes the audience is bored and they want something more interesting -- story." - Lee Unkrich

Yes - I know AR entertainment seems cool right now while the visual effect is still novel, and further, having not yet experienced any, let alone five or more, big-budget VRs, it probably seems like there must be countless stories and realities one could create that would coexist nicely with your real world thanks to AR. In fact, the possibilities might seem limitless to you now. That’s what you’re thinking, right? That's certainly what you're reading. And ok, fair, there were a small handful of good ones.

The problem was that those couple good ones got made, and in very short order it became clear that the same few, contrived, narrative devices had to be repeatedly enlisted, ad nauseam, in order to explain away the unavoidable fact that this story was happening a few feet from your much too hastily selected Ikea shelving unit. And trust me, that got old really fast.

There were the scary killer/monsters in your room stories, the impossible, magical/sci-fi whatever in your room thanks to some coincidental, random, accidental dimensional/time portal stories, the Elon Musk was right about the Matrix stories, and the king of all AR stories, the Bourne-ish spy/conspiracy for-some-reason-you're-the-random-person-we-coincidentally-need stories. And at some point storytellers and audiences just realized that having to co-exist with the real world was a repetitive, and somewhat annoying, contextual handicap, and backed off, allowing AR mode to settle into its rightful use-cases.

That said, the one genre where none of this was a problem at all was - porn. Although few acknowledged it openly, porn dominated AR entertainment. Integrating with your real world actually enhanced porn's value. On the one hand, the illusion was all that mattered; unlike other genres, no one cared about actual story devices in this context. And on the other hand, with AR you could keep a defensive, watchful eye on the real world. There was little more embarrassing than being walked in on, and on full display, unawares, while blindly aroused in some depraved, fetishistic VR extravaganza.

To wit, whole video sharing sites were dedicated to streaming parades of horrifying, thank-the-greek-love-goddess-Aphrodite-that-wasn't-me, "caught" videos revealing one blissfully-oblivious, self-gratifying, gogglebrick-faced, sex pig after another. Esh. There but for the grace of God...

Non-porn AR entertainment, on the other hand, settled into a more casual entertainment role, generally serving arcade and puzzle games that utilized objects or textures in the space around you, or ignored the room altogether.

This is not to say that was all AR was good for. Not at all. AR was massive in so many other, non-entertaining ways. AR was indispensable at work, in communication, education and productivity.

After all that, I guess it would be a good time for me to tell you that actually, the difference between Augmented Reality and Virtual Reality was trivial. They were just modes. A toggle.

Tap, AR.

Tap, VR.

Tap, AR again.

Got it?

So turning off your view of the real world and entering an immersive new one was trivial.
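If it helps to picture just how trivial, here's a minimal sketch of the idea in TypeScript. Everything in it is hypothetical; no real headset API is being described. The point is only that AR and VR differ in whether the real-world camera feed gets composited behind the virtual scene.

```typescript
// Hypothetical sketch only: AR vs. VR as nothing more than a render-mode flag.
// None of these names refer to a real API; they just illustrate the idea that
// the real world is one more layer you can switch off.
type RenderMode = "AR" | "VR";

class HeadsetRenderer {
  private mode: RenderMode = "AR";

  toggle(): void {
    // Tap: AR. Tap: VR. Tap: AR again.
    this.mode = this.mode === "AR" ? "VR" : "AR";
  }

  renderFrame(virtualScene: string[], cameraFeed: string): string[] {
    // AR composites the virtual scene over the real world;
    // VR drops the camera feed and fills the view with the scene alone.
    return this.mode === "AR" ? [cameraFeed, ...virtualScene] : virtualScene;
  }
}

// The "toggle" described above.
const headset = new HeadsetRenderer();
headset.toggle(); // now VR: the room is gone
headset.toggle(); // back to AR: the room returns
```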

But although mode switching was trivial, it was here in VR mode - the real world blocked out entirely - that entertainment non-trivially reigned.

VR Storytelling

What constituted a VR entertainment experience? What made a great VR story?

This isn't VR either.

Today the closest discrete relative you have to VRs, as they manifested in the future, is games. But don't get all excited. These weren't sisters. Today's games are that second cousin who voted for Trump, eats way too many Cheetos, smells her toenail clippings, and buys cheap jewelry on the Home Shopping Network. Comparatively speaking, the best games you have today are fiddly junk. And the platforms that power them are - so - fucking - slow. I mean really, today's games are cryptic. How did I ever enjoy them? Ugh. All these ridiculous limitations.

"No, you can't kick that door down because we meant for you to find the key. Oops, you can't walk over there because, well, you just can't. You can break the window... oh, but no, you can't use the broken glass to cut the rope, because, well, honestly none of us thought of that."

If a specific action was not preconceived by a creator, you can't do it. No room for your creativity and problem solving, unless the creators thought of it first.  And the whole time you have to twiddle these stupid little game controller buttons and joysticks. God forbid I should wish to say something to a character, with all those coy responses designed to side-step the fact that this so-called "AI" can't logically respond to anything outside of some arbitrary, branching, preordained multiple-choice quiz.
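To make that complaint concrete, here is a toy sketch in TypeScript of the kind of hand-authored interaction table I'm describing. It's entirely hypothetical, not taken from any real engine, but it shows how anything the creators didn't enumerate falls through to a canned refusal.

```typescript
// Hypothetical sketch of a preauthored interaction table. Every verb/object
// pairing has to be anticipated by a designer; anything else falls through
// to a canned "you can't do that" - exactly the limitation described above.
const scriptedActions: Record<string, string> = {
  "unlock door with key": "The door creaks open.",
  "break window": "The window shatters.",
};

function attempt(action: string): string {
  // No reasoning, no improvisation: just a lookup against what was preconceived.
  return scriptedActions[action] ?? "You can't do that.";
}

console.log(attempt("break window"));               // works: someone thought of it
console.log(attempt("cut rope with broken glass")); // refused: nobody thought of it
```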

I mean, imagine how you would feel if you were back in 1978 and all the millennial, hipster news bloggers were fawning over Coleco's Electronic Quarterback as "awesome video games in your pocket" and you were the only one in the world who'd spent 5 years playing a PS5?  You wouldn't know where to start.

And yet games are still the closest, discrete thing you have to actual VR.

VR Storytelling: The User Strikes Back

A great story, as you think of it today, depends on structure, timing and sequence. Without a story's structure and a sequence of events, how can there be a story at all? Fair point. A traditional, linear auteur very carefully structures a story, and times a sequence of specific events, that build to a conclusion. The linear storyteller definitively owns these decisions.

But in an interactive domain, glorified by VR, the user throws that structure, timing and sequence into chaos. Because the user makes those decisions. Not the storyteller.

So structurally speaking, "Storytelling" and "Interactive" are polar opposites.

Now at first this sends all the linear storytellers into a tizzy because it sounds to them like a kind of absolute chaos. But that's because they're just used to having absolute control over structure, timing and sequence.

"Yes, but what do I do then? How do I tell a story?! What are the tools of my trade?"

To this I would remind the linear storyteller that story is about more than structure, timing and sequence. Story is about character. In fact, the very best storytellers will instruct, much better than I can, that "character is story". That embedded in every character are countless stories that might manifest fascinatingly under a near infinite range of meaningful challenges. Writers of the world's greatest stories start with who their characters are. The way characters react to conflict drives the story forward. In fact, when a story is not about character, it is usually bad.

Character backstory was a critically fundamental part of interactive storytelling in the future. The platforms were fast enough and the AI intelligent and improvisational enough to literally perform characters that you could relate to. As a writer, you could create people.

There were other tools: environments, objects, and acts of God (or maybe acts of "storyteller"). Along with character these were all parts of the interactive storyteller's palette. The story was literally embedded in the assemblage of programmatic objects. And "acts of God" did allow the storyteller to take some control. To affect timing and events. To a point.

And in the end, that was the game. Not sequence and timing, but potential and likelihood.

The art of VR storytelling was the masterful design and assembly of character, environment, objects, and acts of God in the construction of a narrative theme park - saturated with latent story probability. All powered by sophisticated, improvisational AI.

VR storytellers were experts in human behavior, and understood how to encourage motivation and read and manipulate emotions in action.
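Purely as a thought experiment - nothing here describes a real system - that "narrative theme park" might be sketched like this in TypeScript: the author stocks the world with characters, objects and possible acts of God, each weighted by likelihood, and a stand-in for the improvisational AI decides what actually surfaces in response to the user.

```typescript
// Hypothetical sketch of a "narrative theme park": the storyteller authors
// potential and likelihood, not sequence and timing. The improvisational AI
// is stubbed out as a simple likelihood-weighted picker.
interface Character {
  name: string;
  backstory: string; // the embedded stories that can surface under pressure
  reactTo(userAction: string): string;
}

interface ActOfGod {
  description: string;
  likelihood: number; // 0..1 - the author tunes probability, not order
  appliesTo(userAction: string): boolean;
}

interface StoryWorld {
  characters: Character[];
  environment: string[]; // places and objects the drama can latch onto
  actsOfGod: ActOfGod[];
}

// Stand-in for the improvisational AI: surface whichever authored potential
// is most likely to matter, given what the user just did, plus each
// character's in-character reaction.
function improvise(world: StoryWorld, userAction: string): string {
  const act = world.actsOfGod
    .filter(a => a.appliesTo(userAction))
    .sort((a, b) => b.likelihood - a.likelihood)[0];

  const reactions = world.characters.map(c => c.reactTo(userAction));
  return [act?.description, ...reactions].filter(Boolean).join(" ");
}
```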

Crying

Despite the skill of great VR storytellers, one of the things they struggled with for years was sad stories. There were so few successful dramas that brought you to an emotional place. And at some point we realized - it was a result of the very medium.

In real life we experience emotional pain due to our lack of control over the universe, and that is what makes us cry. For example, say Bosco, your puppy, gets run over by some (yes, self-driving) Uber. You would do anything to stop that from happening, but it's real life, so you can't. The same is true for movies and books, where you are limited to a preplanned path. You can't change it.

But in VR, we were in control. When you saw some pending tragedy, you had the immediate ability to fix it, to undo it. We could choose what we wanted to happen. Bosco could be saved! Hooray!

VR, as a medium, existed to provide wish fulfillment.

As a result, you almost never cried in VR. And VRs that wrested away your control to try to make you cry, just didn't do that well.

Actually, Size Matters

The hipster millennial reporters, who thought awesome VR could ever make its way to your phone, took years to accept that despite Moore's Law, there was a consistent, significant, qualitative improvement afforded by large, dedicated, wired systems. The quest for perfected VR ended up being a never-ending hole of technical advancement because the target was so high (the convincing recreation, abstraction and manipulation of the real world and all nature's elements and laws) and so far beyond what was technically possible at any given time, that any version of miniaturized, portable VR always seemed grossly inferior. The state-of-the-art physical setup, the sensors, haptic projectors and computing power required to run great VR still had not, by the time I popped back to this time, become small enough to carry with you, nor would you want it to.

The reason you might not fully appreciate this is because you are still defining VR resolution as you think of it today. Oh, that gets miniaturized, sure, but that was lame.

For example, by the time I popped back here what you’re calling haptic holography was a critical part of both the experience and the interface. This was not just some dull Apple Watch pulse. I mean you could bruise yourself on a virtual rock if you weren't careful. Wide ranges of textures, heat, cold, fluid dynamics (wind, water, etc.) could all easily be replicated. If you got haptic water on your hands, they really felt wet. Which led to all sorts of applications. You could wave your virtually wet hands and feel the coolness of evaporation; you had to dry them off on something. You could feel VR clothes and the weight of objects.

And get this, they could even create haptic effects inside your body. Again there were safety limitations, but it allowed the system to adjust your sense of orientation, to create the illusion that you were flying, or falling, or accelerating or decelerating, or standing. Even when you were just sitting in a chair you could feel like you were walking. And seriously - don’t get me started on porn.

As you can see, by the time your Apple Watch (the only device most of us carried) had enough thrust to power aural/visual-only experiences, the larger, wired, in-home rigs were producing massively richer, more jaw-dropping experiences that just made the phone version seem, well, kind of stupid.

You'd see businessmen fiddling with some portable version on the Hyperloop, but there just wasn't much to that.

And so it went for some time. Until our senses could be bypassed entirely, computers became sentient, and all hell broke loose. But that's another post.

Generation VRI

Back in the "moving meat days", and despite VR-proofing rooms (which basically involved padding, like you would do for a baby, but only for a full-grown, 250-pound man), every one of my friends had some awful VR injury story. VRI was a thing you bought insurance for. As you neared walls and objects in the real world, most VRs would alert you in various ways. However, you would be surprised how strong the drive to do what you'd intended could be in the heat of the dramatic moment. You would invariably push, just that little bit further, to accomplish your goal, despite the warning. This tendency was called "elevening" (11-ing, as in "push it to 11"). Elevening caused stubbed toes, noses and fingers. People tripped, collided, broke bones, knocked things over, fell off balconies, knocked people out windows, got electrocuted and burned, and in too many cases, died. To counter this, some VRs employed something between VR and AR called Reality Skinning, where your real room and the objects in it, chairs or whatever, were all rendered as themed objects in the virtual one. But I always found that a bit lame.
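For what it's worth, here is how I picture Reality Skinning working: a purely hypothetical sketch in TypeScript, not any real product's approach. Every detected real-world object gets re-rendered as a themed prop at the same position, so you can still avoid tripping over it.

```typescript
// Hypothetical sketch of "Reality Skinning": detected real-world objects are
// re-rendered as themed props in the same place, so collisions still make sense.
interface DetectedObject { kind: string; position: [number, number, number]; }
interface VirtualProp   { model: string; position: [number, number, number]; }

// An assumed fantasy theme; the mapping is the whole trick.
const theme: Record<string, string> = {
  chair: "moss-covered boulder",
  table: "fallen tree trunk",
  wall:  "cavern rock face",
};

function skinRoom(scan: DetectedObject[]): VirtualProp[] {
  return scan.map(obj => ({
    model: theme[obj.kind] ?? "generic debris", // unknown objects still get a stand-in
    position: obj.position,                     // same spot, so you can still avoid it
  }));
}
```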

Getting injured in VR, however, was the least of our issues.

Although VR was awesome, its problem was that it was really awesome.

Pretty much an entire generation weaned on VR grew up, at best, bored stiff with the real world. But usually worse. Leaving the virtual world and reentering the real one of inconvenience, dirt, ailments, limitations and an oppressive lack of control was such a profound let down. Your ego was once again forced to accept your oh so many pathetic imperfections.

Pulling out was universally met with depression. Often severe.

People slept there. It was vilified as being addictive, but how could it not be? An always-on Vegas casino, perpetual early-dusk; party-time forever.  Like heroin addicts, VR users suffered from a wide range of ailments, severe nutritional deficits and health problems related to hours on end foregoing attention to their real-world meat. Dieting stopped being something anyone tried to do. You think you have a sedentary population today?  You have no idea.

Users exhibited all sorts of bizarre behaviors and tics due to reflexively gesturing virtual actions that had no impact in real life.

Intelligent users were often confused and distrusting of base-reality.

A surprising number of people drowned when they discovered they couldn't actually breathe under real water, let alone swim. Others jumped off buildings because they actually did, with complete certainty, believe they could fly. Empathy plummeted. Samurai sword violence shot up dramatically.  And we generally stopped procreating, having been profoundly overstimulated by wildly perfect, surreal fantasy surrogates, and because we'd also become far too insecure in the presence of other equally damaged, relatively ugly, real live biological people to build relationships anyway.

I mean, no duh, seriously? What did you expect?

There is so much more to this story, but suffice it to say that VR changed everything. You could do anything, be anywhere, be anyone...

As such, VR was not just another medium.

VR was an alternate world in which our wishes were granted.

Think about that while you fiddle with your phones.

Oh, and Oculus Rift didn't end up ushering in anything. They just became a peripheral company.

Messages From The Future: The Decline of Apple

I’m sure you’ve had your own debates with the “Apple is about to die” crowd. I’ve had those too. Except that being from the future, of course I’m the only one who actually knows what I’m talking about. And yet even though the future is not always rosy for Apple, even though some of these people sometimes have a point, they still piss me off just like they did the first time I was here.

Usually the argument centers around the tired meme that Apple has nothing significantly visionary or profitable to jump to that comes close to the potential of the iPhone, which of course supposedly means that Apple is going to die under its size and obsessive and unsustainable inclination to polish and “perfect” in the face of speedier, less precious, competition.

But that is so not how it goes down.

The other day Marco Arment read about Viv, the AI virtual assistant still being developed by the creators of Siri. This in particular, he told me years from now, after we’d met online, which hasn't happened yet (Hi Marco - you dropped it in the potato salad - remember I said that), coupled with highly cited reports of the AI efforts of Google and Facebook, inspired his first post on the topic of this possible kink in Apple’s armor. It was about then that he, and a handful of others, came to their conclusion; one that was not too far off from what actually happened.

Though it wasn’t quite as simple as “Apple showing worryingly few signs of meaningful improvement or investment in…big-data services and AI…”, nor as some had suggested, “When the interface becomes invisible and data based, Apple dies”.

Actually interfaces remained visible, tactile and exceptionally alive and well in the future. AI (via natural language interfacing) did not herald the death of the visual or tactile interface. We used each for different things and in different places. Trust me - there are still a million reasons you'll want to see and touch your interfaces, and maybe more importantly, a million places in which you still don't want to sound like a dork talking to your virtual assistant. Even in the future.

But there was some truth floating within the "big-data services" thread.

Apple mastered the hardware/software marriage. With rare exception, Apple excelled in virtually any device category it ventured into. So you might argue that so long as there were devices to build and software to make for them, even if it was indeed powered by advanced AI, Apple, with resources beyond any other company, stood a chance. But there was more going on here than the advance of AI.

There was also the ongoing fragmentation of your platforms.

20 years ago most of you still had one computer: a desktop. 15 years ago you probably had two, including a laptop. 10 years ago you added a smartphone and a “smart” TV. 5 years ago you added a tablet. Last year you added a watch. Now you have six computing devices plus peripherals and are only a few years from adding the first real VR platforms. (Incidentally, real VR catches all the currently uncertain Silicon Valley trenders and mobile hypeists off-guard. VR is so not this unbelievably temporary, phone-based, turn-your-head-to-look-another-direction ridiculousness. What a joke. That’s the equivalent of the 1998 choose-your-ending interactive CD-ROM, or red and blue 3D glasses. Trust me, speaking from the future - your phone didn't become much “VR” anything. It took a lot more hardware. If you’re investing, focus on in-home solutions and content creators; that’s where it all went down in the future. Another post.)

And do you think this platform fragmentation will stop? (Spoiler: it doesn’t.) Having to remember to put your device in your pocket is totally kludgy, you can see that even now, right? That you have to manage where all your various form factors are and independently charge, set up, maintain and connect them all, is grossly unwieldy, right?

You should know this, because it has been commonly theorized by now, that computers will continue to get cheaper and more powerful so as to become ubiquitous. In everything. Not just the so-called Internet of Things, but the interconnection of EVERYTHING. And indeed, that happened. Exponentially. And yet few were ready. Businesses were disrupted and died.

Computers became part of almost everything. Seriously, for example, computers were in fish. And for that matter your toilet became a massively parallel processing platform… I’m not kidding, 1.84 YottaFLOP power flushers. Everything that touched it, and went into it, was measured, DNA-decoded, and analyzed in a million ways. And this, more than any other advance in medicine, led to quantum advances in health care and longevity. Who knew. There was a toilet company on the Forbes Top 100 US Companies list. No, really. Though Apple never made a toilet.

Aside from your watch (and a headset which eventually you didn’t need anyway), you didn’t carry anything. Everything was a potential display, even the air, when it was dark enough. My Apple Watch was still the last and only device I carried with me (I posted about this before - it is laughable that people today think Apple Watch has no future. Oh man, just you wait.) But I’m getting ahead of myself.

This platform fragmentation, and not just AI, and not the feared loss of interface, was what ultimately changed things for Apple. Suddenly - there were thousands of device form factors everywhere. A virtual fabric. Rampant commoditization. Experience democratization.

In hindsight, it became clear that what Apple required to be at its best was a market limitation in consumers’ access to devices. Limited form factors, limited production and distribution. These limitations, which were common during the PC and mobile eras, allowed Apple to qualitatively differentiate itself. To polish and make something insanely great - in contrast to the others. To design a noticeably superior experience.

But the more the device ecosystem fragmented, the harder it became for Apple to offer uniquely valuable and consistent experiences on top of all those infinitely unique functions and form factors. It just became unmanageable. I mean Apple had an impact for sure. Companies knew design mattered. And Apple went down in history as the company that made all that so.

Your six devices became upwards of fifteen devices, at which point I think most of us just stopped counting. And the number kept growing so fast. The car was a big one. Yeah, Apple did a car. A few actually. The car was perfect for Apple then because there were still significant market limitations in that category. And Apple could focus on the qualitative experience. Later, as the dust settled, the watch, being the last device we needed to carry with us, also served as a different kind of market limitation - a focal point for Apple's strengths.

But as device fragmentation continued to explode, as the hardware market massively commoditized, the idea of a “device” of any specific, personalized sort had begun to lose meaning. In the future, the term “device” sounds much the way “mainframe” or “instamatic” might sound to you today. Quaint, old; it’s just not a relevant concept anymore. Everything was a device. Focus instead shifted to the services that moved with you. Which was part of Apple’s problem.

Like Facebook, Apple, the wealthiest company in the world at the time, did a good job buying its way into various service categories (including AI, banking and hospitals), and innovating on some. Apple had a huge finance division, they had media and content production, utilities, but then so did other companies by then. Ultimately, none of it was in Apple’s sweet spot.

Without discrete devices, it was services and systems that became consumers' constant. I hope you can see that when this happens, it fundamentally changes Apple’s proposition, the so-called perfect marriage of hardware and software itself becomes an antiquated paradigm.

No, Apple did NOT die, but became some significant degree less a focal point for all of us. And yet... maybe now armed with this knowledge, they can change how things played out.

In the meantime I’m watching the toilet sector. And I plan to invest heavily.

The Presentation Of Design

There was an excellent post on Medium recently called: 13 Ways Designers Screw Up Client Presentations, by Mike Monteiro which contained thoughtful, if rather strident, recommendations related to the selling of design work. It was a relatively enjoyable read. I agreed with all 13 points. However in his first paragraph, establishing the primary rationale for the article, Mr. Monteiro made a statement that caused me to choke on my coffee:

“I would rather have a good designer who can present well, than a great designer who can’t.”

I had to reread it a few times to make sure I’d read it correctly. After reading the article I kept coming back to that line. “Really?” I kept asking myself.

He went on to say:

“In fact, I’d argue whether it’s possible to be a good designer if you can’t present your work to a client. Work that can’t be sold is as useless as the designer who can’t sell it. And, no, this is not an additional skill. Presenting is a core design skill.”

My emphasis added.

Undoubtedly that pitch goes over super well in rooms filled with wannabe designers who can present really well, busy account executives and anyone whose primary tool is Excel. Certainly for people who look on the esoteric machinations of designers as a slightly inconvenient and obscure, if grudgingly necessary, part of doing business.

But surely it can't be the mantra of someone who cares supremely about the quality of the design work - about achieving the greatest design?

I never do this, but I posted a brief opinion of disagreement on this point in the margin comments of Mr. Monteiro’s article. And I would have moved on and forgotten all about having done that, but my comment was subsequently met with some amount of resistance and confusion by Mr. Monteiro and other readers writing in his defense. It frankly depressed me that there were professionals in our industry who might sincerely feel this way, and more so that the article might convince even a single talented young designer, for whom presentation is a non-trivial challenge, that this particular thought, as worded, has any industry-wide merit. And then I came across this recent talk he gave where he doubled down on the idea.

So I wanted to explain my reasoning more fully - it’s an interesting debate - but the limited word-count allotted to side comments didn’t allow for meaningful explanations or exchanges (particularly by people who are as verbose as I am). So rather than pollute Mr. Monteiro’s otherwise fine article further, I decided to explain myself more completely in a post of my own. Maybe more as personal therapy than anything.

Let me first state — presentation proficiency is a useful skill. No matter your field or role, you will never do worse by having strong presentation skills. It will help you align the world in your best interest, without question. Everyone should cultivate these skills to the best of their ability.

But to what degree does this affect a designer? Does lacking it utterly obliterate one’s potential as a great designer, as Mr. Monteiro asserts? And should designers further be “had” principally on presentation skill over other attributes?

Language Logic

Linguistically speaking, his choice of words “A good designer”, and “who can present well” clearly contemplate two separate states of being. This compels one to infer that not all good designers can present well, which is further supported by the fact that Mr. Monteiro evidently turns away “great designers who can’t.”

Which leaves me wondering:

What is demonstrably “great” in a “great designer” who can’t present well, if presenting is a core design skill that dictates the ultimate usefulness of the entire role? Wouldn’t that mean then, that there never was any such “great designer” to begin with? That this designer must have been, rather, a “poor designer” for lacking presentation skill? Perhaps a better way to say what I believe Mr. Monteiro meant is:

“There is no such thing as a great designer who can’t present well, because presenting is a core design skill.”

On the one hand this revised statement at least avoids contradicting itself, but on the other I still absolutely disagree with it because, to me, it inexorably expands into the following thought:

“I would rather have a designer who has relatively weaker creative problem-solving, conceptual, aesthetic and technical skills so long as they can, alone, persuade the client to pay for the work, than I would a designer who has vastly superior creative, conceptual, aesthetic and technical skills  who unfortunately happens to lack presentation skill.”

Based on his specific wording, I think Mr. Monteiro would have to concede, at the very least, that design and presentation are separate, independently measurable skills, unrelated to one another except within the context of what he prioritizes - in this case the selling, as opposed to the designing, of the work.

Part of what troubles me, then, is that no other option appears to exist for Mr. Monteiro except that every designer present and sell their own work - full stop. And that the quality of the design work is naturally the first thing that should be compromised to enable this.

And I think that’s an unnecessary, limited, unrealistic supposition.

When Design Requires Explanation

Great design does not exist in some vacuum, opaque and impenetrable until, thank God, some good presenter comes to our rescue and illuminates it.

Nor is presentation inexorably required in order to perform the act of designing. If it were, that would mean that a, say, tongueless person, who also perhaps further lacked the ability to play Charades, could never be a designer. Which is ridiculous, not least because I cannot name any tongueless designers who could not also play Charades, though I trust that within the expanse of probability such a person could nevertheless exist.

But what about basic language and cultural barriers?

I work and live in Switzerland with a team of highly international designers: German, Swiss, Swedish, French, Ukrainian, British.  And so perhaps I see this more acutely than Mr. Monteiro, who lives and works in America.  But the native languages and references of these great designers are all quite different - and this would obviously affect their ability to present to, say, an American audience. If I valued their American/English presentation skill above their great design skill, well - there would be no team.

That said, it would be interesting to see Mr. Monteiro present to a roomful of native Chinese executives. I wonder whether he would attempt to learn Mandarin, or choose to have a Mandarin translator interpret his words and meaning, or ask the Mandarin speaker on his team (if he has one) to assist in the presentation. More critically, I wonder if he would be eager to define his presumed lack of fluid, confident Mandarin presentation skill as weakness in his design, or in his skill as a designer.

I'm admittedly being obtuse here, but only to illustrate the fault in the mindset. Great design is worth defending with presentation support, and I would argue there are even those projects where, counter to Mr. Monteiro's opinion, design actually does speak for itself.

This is because design is, in part, a language of its own. Indeed great design results in, among other things, the communication of function.

So where design is truly “great”, as opposed to “good”, its value must be nearly, if not sometimes wholly, self-evident. Great design is observable — at the very least, by the designer’s own team, for example. More on that later.

In contrast I find that design which is not “great” rather usually does require a fair amount of explanation. Enter “good design”, or worse, which may in fact require some presentation skill merely to compensate for its relative lower quality, its relatively weakened ability to self-communicate.

Supporting Talent

If you limit what you value in design talent by requiring that it absolutely be accompanied by self-sufficient sales skill, then you are shutting yourself off to some of the most creative and talented people in the world.  Indeed many people become designers and artists in part specifically because their brains don't connect with the world the way people who are good presenters do!  From my point of view it rather requires a kind of tone-deafness to the psychology of creatives to not see this.

My old friend and business partner, Sir Ken Robinson, speaks on the topic of creativity all over the world, and he often points out that exceptional intelligence and creativity take many forms. That rather, our reluctance and systematic inability to recognize and accommodate these varied forms of intelligence and creativity - our resistance to individualizing our interaction and support of it - results in an utterly wasted natural resource. He points to many famous creative people — at the top of their respective fields — who simply didn’t fit in “the box”, they didn’t easily align with the system. And that only through acknowledgment of their unique skills and provision of personalized support, could their inordinate brilliance find its way into the world. These are the people who often dominate their profession once the standardized models surrounding them are challenged to support their unique strengths. And I suppose I feel something similar is certainly true here. From my perspective, great talent must always be nurtured and supported. Even if, no, particularly if, that merely requires the support of a presentation.

My expectation is that the people who buy into Mr. Monteiro’s stance don’t like this idea in part because, for them, it probably perpetuates an old archetype of entitled, high-maintenance designers; insulated royalty who idealistically prefer to ignore business realities and design in a bubble. Of the managers and the operational and sales functions having to serve and adapt to the designer's whims - of having to support and compensate for someone who isn’t carrying his weight in the business sense.

In reality, the type of extra effort required to support the development of truly great creative work in any field is exhausting and something that anyone lacking sufficient constitution gets quickly fed up with. So it must feel good, refreshing even, to be able to rally behind this concept, to shed all those feelings of subordination and responsibility, and demand that designers do that work themselves, to say:

“Designer, if you can’t sell the work yourself you’re not good enough! Because guess what, it’s always been your job - alone!”

And although that stance may feel refreshing and proactive, it’s misguided.

The Business of Design

“Work that can’t be sold is as useless as the designer who can’t sell it.”

With this excerpt from the article, here again, I take issue. Sure, in the business of design, work that can’t be sold is (usually) useless. Agreed. But why on Earth is it the only option that the designer alone sell the work? And why does that make one's world-class, insanely-great design “useless”? This designer obviously works on a team, since Mr. Monteiro “would rather have” one of a different sort. So where is the rest of this team?

Of course in business, presentation must happen — it’s a requirement in the client-based sales of design. But how we go about accommodating that requirement within our agencies, I think, is a fair debate, and a relevant topic.

In my teams we frankly rely on one another. Does that sound odd?

Since we have already established that great design can be identified in isolation without the accompaniment of a formal sales presentation, that means great design is observable. At the very least, it’s certainly not going to be missed by a seasoned team. Especially, I assume, by someone like Mr. Monteiro, or his fans, who have all undoubtedly worked in design for a very long time. Surely each would acknowledge being able to recognize great design work if it were shown to them without the benefit of a sales presentation?

So when this truly great designer who can’t present comes to you, lays an unbelievably brilliant piece of design work on your desk, perhaps the best you’ve ever seen, and mumbles to his feet:

“Yeah, um…well, this is what I did. ….er… I uh…. don’t know what else to say. (inaudible… something about “…my mom… ”)

What does Mr. Monteiro, or any of the people who would argue with me do?

I’ll tell you what I wouldn’t do, I wouldn’t yell:

“Somebody get a worse designer in here and start all over! Pronto!”

I would sit down patiently with this great designer who can barely put two words together, along with members of our team, and talk it through.

This is where a couple of things happen. First, it’s at this time that a strong director is sometimes called upon to be a mentor, a psychologist, a parent or friend to nurture, to listen and understand, to pull words and thoughts from someone whose mind literally doesn’t work that way. Yes, that sometimes takes work, but in my world-view, great design is well worth it. This is also when the team comes together to build our common language. The fact is, the whole team needs to understand the project anyway. We all need to internalize why it works and what makes it so insanely special. Each of us.

If the design is actually great, at most, this exercise takes one hour. Usually quite a lot less. Rather, I find we enjoy discussing truly great work; it sets the bar. And we probably spend more time than necessary doing that because we love doing it.

And I have never in my 30+ year career been faced with a situation where someone on the team who was indeed exceptionally skilled at presenting could not assist a great designer who can’t present well.

Oh sure, it’s super-duper convenient to have great designers who are also great presenters — but those are rare creatures. Unicorns. You better believe that your search results get exponentially narrower with each search term you add. To combat this natural rarity, Mr. Monteiro claims he would rather broaden his search results by dropping the term “great design” from a search that includes “can also present”.

Whereas I prefer the reverse.

What the Client Wants

Obviously Mr. Monteiro is a busy person who runs a company that hires designers. This company cannot survive if design work is not paid for by clients. Perhaps because he has very little time, he has therefore decided that he needs his designers to be able to present well, as well as design. In fact, his preference for designers who can present is so strong, he will choose a designer with lesser design talent to accommodate that.

Hierarchically this clearly places the quality of the work below one’s ability to persuade the client to buy it.

If one were to take this to heart (and I am not suggesting that Mr. Monteiro necessarily takes his own advice in running his studio), to me this would be a very cynical, virtually dishonest, platform on which to operate a design firm that promises great design solutions. Indeed it’s a hiring platform perfectly optimized to lower, rather than raise, the qualitative bar. One that prioritizes not the best work, but the ease of financial transactions. One that takes advantage of unsuspecting clients.

Where I come from that’s called selling out, and as a client, if truly great work is what I’m in the market for, any team that operates that way is a team I wouldn’t knowingly hire.

Good and great are relative of course, but in principle, I simply cannot imagine passing on what I would perceive as great design in favor of something lesser just so that the rest of my team and I don't have to put effort into assisting with a presentation. Because in the end, that's all this boils down to: a willingness to apply the required effort to sell the greatest solution.

If you’re not willing to support a great designer with help in presentation — you might as well tell your clients you routinely compromise on quality because you don’t like to work that hard. Surely your clients would vastly prefer having the best possible talent in the world on their project.

Common Ground

Honestly, when I originally wrote this post, I assumed Mr. Monteiro simply hadn't yet had the opportunity to be as critical of the language of that particular statement as I am being here. I guessed he might have accused me of splitting hairs, of playing semantics. I was sure that if he were really pushed against the wall on this topic, he would concede that this particular stance seriously needs to be re-worded. But his lengthy magnification of the idea at his recent talk makes me think he sincerely believes it, as I interpreted it.

That said, the two cases in which I think Mr. Monteiro's language and my beliefs on this topic are in absolute alignment are:

  1. Regarding a fully independent designer, one who wishes to work in total solitude, not part of any team, contracting design skills directly to paying clients. Then I agree: having presentation skills will be critical if you wish to support yourself on the sales of that design work.

  2. And that, as a universal rule, presentation is an excellent skill to nurture in yourself if you are a designer, or occupying any other role on the planet, quite frankly. Presentation is just a good skill to have no matter what field you are in or role you play. It's a skill that will always serve you well in affecting the world to suit your interests. Everyone should do what they can to improve their presentation skills.

The lone swordsman aside, if you have even one other partner or team member, there are almost always alternatives that will allow great work to be presented and sold.

And if you are indeed a great designer, a lone swordsman, and feel genetically incapable of presenting well, I’d suggest you develop a professional relationship with a strong sales/presentation partner.

FYI — that’s generally called starting a company.

Apology

I’d like to sincerely apologize to Mr. Monteiro for being so hard on him in this piece. I’m sorry. I rather respect his thinking in every other way so I have felt conflicted the whole time writing this. I think if you haven’t, you really should read his article because it otherwise contains some solid advice and can help you be a better presenter.

…Just maybe completely skip the first two paragraphs. Truth is, the third paragraph is actually a much better opening bit.

Lastly —  To any truly great designers out there who can’t present well or don't feel comfortable doing so:

I admire and respect you. I’m hiring great designers all over the world.  Send me a message.

iOS Ad Blockers: Why Advertisers Are Suddenly Going Diarrhea In Their Pants

Apple recently released ad blocking capabilities in iOS, and the ad and publishing industries began frothing at the mouth. Every emotion from spitting panic to disdain has been hurled into the webversphere over the capability. And as a consumer, and an ex-advertising shill, I love it. I am particularly fond of the most vicious ad blockers, the so-called ‘blunt instruments’. The ones that leave gaping, blank maws between thin slices of actual content. The ones that so severely disable Forbes’ ‘welcome page’ (an interruptive page of ads feigning value with some irrelevant ‘quote of the day’) that you are required to close the resulting blank window and click the article's original link again to see the content.

Yes, I even revel in the extra effort it requires to get past all the newly broken, well-blocked bits. It's harder in some ways. But you know what? It's payback time. And that extra effort? It's a pleasure. I know that each tap and empty window is sending a message. With every whiny press release and industry insider wailing about the "end of content as we know it", a delightfully warm, glowing feeling washes over my insides.

I admit it, it's an unhealthy pleasure in general. And in any other context I wouldn't celebrate it. But here? I'm gonna party like it's 1999, because for all the ad industry has learned since then, it might as well still be.

This is what selfish, self-inflicted industry ruin smells like. Banners in ashes, melted trackers. A stockpile of suddenly outmoded scripts and tactics, all in embers. The dumbfounded expressions of dim-witted middlemen watching the gravy dry up.  Ah, there's that warm glow again.

Unfortunately, ruin is what this will take. I realize there is a risk that the arms race will result in even more devious forms of advertising, that the penicillin will result in resistant strains. But the relief for now is unquestionably worth it.

Even so, some are feeling guilt.  Under peer pressure, I assume, a few creators of Ad blocking technology are trying to give a crap.

Marco Arment pulled his ad blocker from the iOS App Store after three days as the top seller, with, I assume, a last-minute guilty conscience. He said: “Ad blockers come with an important asterisk: while they do benefit a ton of people in major ways, they also hurt some, including many who don’t deserve the hit.”

I believe his observation is mostly correct but his response was wrong. And his kids will probably hate him someday for leaving a sizable portion of their inheritance to someone else's family. To wit, other excellent ad blockers have already moved in happily.  At least he hopefully slept better that week.

Then there is the new "AdBlock Acceptable Ads Program", where the previously dependable ad blocker now whitelists so-called 'acceptable ads', allowing these ads through by default. They define acceptable ads as adhering to a manifesto they've concocted which attempts to qualify approved types of interruptions. I commend the attempt, but it is critically flawed: a fundamentally incomplete manifesto that sits precariously on an arbitrary portion of the slippery slope.
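For the technically curious, here is roughly what that kind of carve-out looks like under the hood. This is a minimal sketch using Safari's declarative content-blocker rule format and the SafariServices reload call, the iOS mechanism this post is about; AdBlock's own product uses a different filter-list syntax, so treat this only as an illustration of the idea. The domain and the extension identifier are hypothetical placeholders, not anything shipped by a real blocker.

```swift
import SafariServices

// A minimal Safari content-blocker rule list (iOS 9+), inlined as a string for
// illustration. Rule 1 is the "blunt instrument": block images and scripts
// everywhere. Rule 2 is the kind of carve-out an "acceptable ads" whitelist
// depends on: an ignore-previous-rules exception for an approved domain.
// "whitelisted-ads.example" is a hypothetical placeholder.
let blockerList = """
[
  { "trigger": { "url-filter": ".*", "resource-type": ["image", "script"] },
    "action":  { "type": "block" } },
  { "trigger": { "url-filter": ".*", "if-domain": ["*whitelisted-ads.example"] },
    "action":  { "type": "ignore-previous-rules" } }
]
"""
print(blockerList)

// In a real app the JSON ships as blockerList.json inside a content-blocker
// extension bundle; the host app then asks Safari to reload the rules.
SFContentBlockerManager.reloadContentBlocker(
    withIdentifier: "com.example.blocker.ContentBlocker") { error in
    if let error = error {
        print("Reload failed: \(error)")
    }
}
```

The thing to notice is that the second rule silently re-admits whatever the whitelist's authors decide is acceptable, which is exactly the arbitrary plateau on the slippery slope being criticized here.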

In an article posted to the Verge, Walt Mossberg wrote: “browser ads represent both an unwanted intrusion and a broken promise”. I read that and wanted to virtually high-five him since I momentarily thought he shared a core belief. But then I kept reading and discovered that the only ‘intrusion’ he referred to was the surreptitious collection of your information, and the ‘broken promise’ was the delivery of ads that weren’t as personalized and useful as he felt should be possible.

Well, OK, he has a point, a reasonable one, but he completely misses THE point. He's a Kool-Aid drinker debating flavors.

So, What Is the Point?

Those of you who have read this blog in the past know that my world view of interactive media has, since the early 90s, been based on a small handful of very stable principles: Interactive Axioms.

The most sweeping of all, what I call "The First Axiom of Interactive", is that the user is, by definition, in control. “The User is your King. You, the creator, are merely a subject.”

People don't often acknowledge that this medium would simply not even exist if delivering control to the user was not the singular top-most goal.  There is nothing inconsistent or squishy about this reason for being.  Any functional capability you can point to will distill upwards to the quest for control.

The sheer existence of an affordance, a button say, anywhere on a remote control, or a website, or app, is a promise. It’s not one that we talk about much. But the obvious, unspoken promise is that it will react predictably and instantaneously.

THAT is the promise. Said another way, the medium itself is an affordance - and the expected result of that affordance is control.

If you remember DVDs and you happened to be in the USA, you might recall the FBI Duplication Warning at the start of every movie. Upon seeing these warnings, every one of us pressed the “skip” button. And then we subsequently experienced a moment of inner outrage because the button had been temporarily disabled, requiring us to view the FBI warning in its entirety.

The promise of control had been intentionally wrested away from us. And it felt like a violation. Because it was.

Today interactive media is based on an even wider and more articulate provision of such control. It is a ubiquitous and fundamental condition of the medium. As such, any time anything happens that is not what we wish, we feel something similar to a sense of injustice. A violation of the medium.

So, yes, of course Walt Mossberg is right, spyware and irrelevant ads sit somewhere on the spectrum of broken promises. But what he does not acknowledge is that the mere existence of interruptive ads in the first place, ads that were not explicitly requested, is the spectrum.

That is also the problem with the AdBlock Acceptable Ads Program manifesto. It attempts to carve out a little plateau on the slippery slope that allows for *some* control to be wrested away from you. But they miss the point, which is that sheer interruption of any kind, not degrees of interruption, is the violation. My rewritten manifesto would be very simple and would contain only one test: "Acceptable ads do not, in any way, interrupt the user's attention."

That would be acceptable.

But the problem for advertisers, then, is that such an ad will take up no screen real estate.  It will call no attention to itself. It will not seek to draw the user.

In short therefore, it will not exist - until explicitly sought out. That is an acceptable ad, because that is an ad that honors the promise of the medium.

John Gruber occasionally points to his ad partner The Deck, as a viable ad model, intimating that it is less invasive, and more relevant, and therefore an appropriate ad format. Ads, but not “garbage”. He claims not to understand someone who wants to block ads. But I hope you can see that he is still defining the Deck’s format merely by contrasting it with the grosser violations of other advertisers. Yes, it’s a degree less offensive, sure. A comparison to "garbage" ads actually makes sense because they are, after all, genetically closer, interruptive cousins. But we are not comparing it in context to, say, the content the user sought out in the first place. Because if we did that we would see that such an interruptive ad is still quite a lot further away.

If you’re an advertiser, or an interruptive-ad-funded writer or publisher, I’m sorry if your livelihood may yet suffer as a result of ad blockers. That’s no one's goal. But it's you who've chosen to base your livelihood on such a patently inauthentic payment format, one that defiles the very medium it exists in. Tidy and convenient though it may have seemed for you at the start.

It’s a kind of Faustian bargain. Content creators agree to include interruptive advertising to afford creation of their content or derive wealth. But the ads are, by definition, not the content. I seriously doubt a single one of these content creators would choose to include an interruptive ad on the merit of the ad alone. Which reveals a truth.

That interruption in the user’s quest, the user’s wishes, is not allowed in this medium. If you break this rule - you must accept the penalties.

You say, "But ads are the necessary cost of receiving content!"  No, actually they are not. It’s the cost of receiving your content. And if you stop, unable to afford creation of your content any longer, don’t worry, someone else will be there to take up the slack.  And I think you know that.

"But ads are the necessary cost of receiving content!"  No, actually they are not. It’s the cost of receiving your content.

Do you seriously think that without advertising, content creation will go away? Please. It will result in industry upset perhaps. It will inspire more authentic payment systems, or not. But it won't go away. Fees from advertising are not a prerequisite for the creation of content.

All these publishers and content creators who complain about the bluntness of the ad blockers, arguing about which interruptive ads should be blocked, are already working way outside true use of the medium. Ignoring the basic fact that they stand on stolen ground to begin with. They rather seem to be suggesting that there is a way to break the law of the medium in a good way. They remain hopeful that they can remove maybe just a little of your control. And that should be totally ok with you. Well, sorry, I appreciate the work many of you do - but you're wrong. It's not ok. You have merely gotten away with the violation until now.

Authentic Advertising

Authentic advertising (if you can even call it advertising) requires an advertiser to be part of the very business it’s selling. To promote the product through authentic interaction with the product itself (I've written about this before). And/or to create something that is so inordinately valuable and powerful that it will be sought out. To become the very content, services and products that people want.

To create authentic advertising you must embrace that you must be CHOSEN (or ignored) by the King. If you interfere in any way in your King’s journey to suit your own interests - even daring to appear when the King doesn’t wish it - you are a violator. A criminal.

Since you are not allowed to present yourself until invited, authentic advertising is hard. Much harder than the ad industry is accustomed to. Traditional interruptive ads need only be good enough that users maybe won't look away after their control has been wrested away. That kind of traditional, interruptive advertising is of course much easier to produce. But honest-to-god valuable content that people might be willing to pay for, or invest their time and networks into, takes the same effort, risk and expense that developing a successful product does.

Do not confuse this with so-called ‘native advertising’ as it’s been disingenuously referred to, which is little more than a cheap ad aping the appearance of content.

Authentic advertising in interactive is not easy to produce, and it's often subject to inordinate luck. This means advertisers wishing to defensibly game that system have to resort to great expense and extravagance. And precious few are willing to do that.

Conversely, interruptive advertising requires little to no luck, and demands roughly the same work and expense that advertisers are used to applying. The difference is that these advertisers are still, unbeknownst to them, spending wildly. The resource these advertisers have been spending rampantly, without qualm, is your goodwill. Your willingness to continue to tolerate their violations.

Well advertisers, you're in a deficit now. A really big, fat overwhelming deficit. Hope you enjoyed the ride, because interruptive advertising has drawn down your accounts and built tremendous debt. And ad blockers are just the latest means of putting holds on your well-worn credit cards.

An Open Letter to the Creators of the New Muppet Show

Dear Disney and ABC,

Holy crap, how could you assholes so monumentally blow it!?

Whoa… I’m… I’m so sorry, that just came out. I totally meant to intelligently build up to that point.  Sorry, let me start over:

Dear Disney and ABC,

There is precious little joy in our world. Such little magic and wonder. Far too little care-free innocence.

When we connect through media today, our lives are more commonly associated with terrorism, disease, economic meltdowns resulting from greed, natural disasters, child shootings, police brutality, suffering, intolerance, hatred and the ongoing horror and assault of the seedy bottom half of real life.

Hold on, before you jump into writing me off as some “the world is going to hell, whatever happened to the values of this great country” kind of person, who worries that video games are perverting our youth or thinks television should be censored or whatever, please know that I enjoy violent video games and I like seeing movies and TV shows that push the limits of acceptability. Things change. Boundaries get crossed. That’s progress, art and evolution. Live and let live. No I truly don’t give a crap about breaking the rules and pushing the limits of inappropriateness.

But over the last several weeks I’ve discovered that I actually do give a crap - a really big, fat, loving crap, about the Muppets.

A New Muppet Show!

When I heard that you were bringing the Muppet Show back - with a new, more modern take - I remember thinking, “YES! It’s about time!”

The news was reason enough for celebration. I told my wife and some friends and they too shared the very same sentiment. Who wouldn’t? One would have to carry a very cold, dark heart not to feel that way.

But as I have been exposed to your new pre-show clips, teaser, pseudo press releases and marketing, a slow dawning has crept over me. It took a while, but I have begun to feel something unsettling that has taken some effort to define. In fact my initial elation has now settled into deep disappointment.

Although as of today the show has yet to premiere, I believe (but continue to hope you’ll prove me wrong) that you have mistranslated and misunderstood Henson’s great, iconic legacy. Worse, I believe you may be in the process of undermining it, surely unintentionally. But surely nonetheless.

Until I saw the teaser for the New Muppet Show (UPDATE: the video has been pulled), I confess I took the long-standing values of the Muppets and the reality of their world quite for granted; how they behaved, how they deftly interacted with our real world at arm's length. They made it seem effortless.

And one of those values, a key attribute, perhaps the most critical of all, is that the Muppets never - ever - fell below THE LINE.

The Line

When I talk about “the line”, I mean the line above which the Muppets remain arguably pure creatures at heart, connected to the joyful world they came from, and largely driven by the pursuit of friendship and the spreading of happiness. Sounds a bit corny - but in fact isn't. And the line below which the Muppets would become just another part of real-life's ugly bottom half: inconsistent, undependable, self-centered and cynical.

Naturally, good humor demands breaking boundaries, stepping over some line. And at their strongest, the Muppets were so very good at doing that.  The Muppets always broke boundaries. They understood magic - of playing with the medium (whatever medium they were contemplating) - of breaking the 4th wall - of being surprisingly self referential. And in so doing concocted their own, very recognizable, brand of magic.

And I imagine, aside from Henson's obvious challenge of inventing, or rather, raising the art form to a new level, it must have taken tremendously hard work and commitment to that vision to maintain that position - above the line.

Henson, Oz and company always stayed above the line. Dependably. They clearly worked very, very hard to find new humor and boundaries to break above the line. Satire and social comment are all possible above the line of course. Tear-inducing laughter is possible above the line. Pixar, for example, dependably and successfully lives only above the line. Boundaries can be broken above the line. And like it or not, the Muppets made clear that being above the line was a fundamental tenet of the brand.

As I reflect on my feelings upon seeing your new teaser, the pseudo PR and marketing for the new show, I believe the tone with which you are approaching the new Muppet Show, the direction your underlying compass is aimed, is fundamentally inauthentic and careless.

I further argue, that this approach you’ve taken, this direction, required very little effort. You merely chose the easy path.

You've just dragged The Muppets way below the line, sacrificing everything that came before, in exchange for a few cheap laughs.

You chose the dark side.

Did you think, for one second, that the temptation to do what you have just done was not an easy temptation all along to the original teams, just as it was for you?

Do you think that living in 2015 somehow suddenly makes such a thing a good idea? Perhaps that only now would we “get” such a joke? Give me a break.

Yes, yes, the Muppet Show was made “for adults”. And quite often the show would venture briefly into comically dark places. But these ventures always fell short of the true cynicism of cold reality. Never would the Muppets cross the line into the seedy underbelly of real life. Of genuine cynicism, grime and fear. The Muppets were never cynical, they were never crass. They always reassured us with a deft wink. They were always tethered to a balloon that kept them floating, kept them from descending. And in so doing they defended and insulated us from the bottom half of life. That was their role and very reason for being after all! They gave us a world that we could escape into. One that wasn't reality.

Why then have you concluded that being an “adult” today must equate to being cynical, inwardly conflicted and cold?

I have no doubt that as the new show and its tone were being developed, words like “edgy, fresh, and real” were used. Which always, bar none, sounds like a good idea in any board room. Who wants to be the opposite of that? Further, you probably felt the writing of the later Muppet movies and presentations was growing stale, and you must have talked at length about breaking through that staleness with a “modern, fresh take”.

We can debate the quality of much of the writing in later movies and years. Some of it was admittedly a bit tired and occasionally not very good. Not as good as Pixar. Some of the later movies suffered from a lack of meaningful stakes for the characters to respond to (one might also argue this was true of Most Wanted). But, and I suppose this is one of my main points, I do not believe, as you must, that the Muppets' innate lack of cynicism, and consistent distance from the grotesqueness of the bottom half of real life, was the reason the material was not compelling enough, not fresh. That, I believe, would be a misdiagnosis on your part.

What’s worse, by depicting The Muppets in our often tragic, imperfect real world via the reality-TV, documentary style, and imbuing the characters with peculiar new behaviors, inconsistent with their legacy, you have, perhaps unintentionally, established that anything the Muppets may have been before - any purity or innocence they may have shown us in the past - all of that was actually unreal, an illusion, just show. That by rewriting their characters and motivations to be able to exist in our world, you have introduced the idea that whatever we thought they were - with their original personalities, these were just parts, roles they'd been playing all along. That only now are we seeing the “real” Muppets, their real lives, behind the scenes, for the first time. The suggestion is that they were actually like this all along, we were just never exposed to what they do off-camera before now.

As a result, you have instantaneously undone and debunked Henson’s entire great legacy.

What a shame.

Yes, Miss Piggy, and others as well, have made occasional appearances in our “real world” for decades. And it was always met with a level of heightened enthusiasm from audiences. It's pretty transparent that this partly inspired your approach. But it was not so simple. These appearances, and the occasional overlaps with our real world, were always a delicate balancing act. Those brief appearances were a magic trick that only worked because their world, the Muppets' world, still existed somewhere. Piggy was only visiting us, breaking the 4th wall of their world. And we were all in on the joke. Her dips, so precariously close to the line on those occasions, were handled with extreme care and awareness.

And on a superficial level, yes, most of the movies even appear to happen in our world - but they never did. It was always the Muppets' own world, and it only looked a lot like ours. On the Muppet Show and in every movie, human actors always joined the Muppets in their world. Not the other way round.

But by eliminating the existence of the Muppet’s safe, insulating world, as the new show appears to have done, you have scraped them raw. Laid them bare. There is no Muppet world left to poke through or join into. The magic has been surgically removed. Like pulling the skin off a live animal, we now see with discomfort, the organic muscle, ligaments and bones hidden underneath. And one instantly feels that we were never meant to see any of this.

Fozzie

I think the appearance of Fozzie in your teaser best captured this problem and as such caused my heart to sink most of all.

Howard the Duck magazine, issue 7

So, Fozzie has a sexy human girlfriend. Um… ok. Feels quite out of character and slightly creepy, but alright, I’m sort of with you, maybe, MAYBE that could work, Miss Piggy was briefly attracted to William Shatner years ago (although that WAS absolutely in character for her).

But then you bring us to the real home of his girlfriend's disapproving human parents, they reveal their “secret” romance, she calls him “Honey”, Fozzie panics over his pending unemployment, and all that stark reality is run through a way-below-the-line, icky exploration of a kind of cartoon bigotry and a clear intimation of sexuality. Such ideas were somewhat funny in the Howard the Duck comics - a comic world specifically designed to explore these topics - but they feel utterly out of place and even grotesque here, because this is not some random bear. This was, we all thought, our innocent, beloved Fozzie. But it slowly dawns on us that, no, this is not our Fozzie, it's a strange imposter. Even Fozzie's voice change, no longer performed by the brilliant Frank Oz, might have passed by without bothering us much, but packaged within a sweaty real world it just makes us feel queasy.

As a result of this scene, we are not so subtly asked to consider Fozzie’s underlying drive for survival and even his reproductive needs. Requisite mental images of the two of them “sleeping together” conjure naturally as a side effect of your scenario. Oh, sorry, you didn’t even think of that, right? Images of the two of them having sex? Never occurred to you? Uh huh, sure, how sick of me to even think that. Yeah, right, convince yourself of that.

Face it, the joke of that scene - the uncomfortable humor - comes from the fact that a real woman is really truly dating a bear puppet. Ha ha ha. The rest of the mental images are just falling dominoes.

Gonzo's love affair with Camilla the Chicken never had this kind of real-world context and intimated follow-through.

You’re showing us inauthentic things that, speaking as a viewer, we never wanted to see. You’ve pushed deep below the line and opened a big ol’ can of slimy worms: if Fozzie can be unemployed, does he get unemployment checks? Since he’s in our world, well, one must assume that he does. If he can’t afford food does he go hungry or beg? Does he mooch off his friends? Either way this is all a kind of undeniable, below the line thread that just feels icky. But why stop there? One is almost encouraged then to wonder all sorts of things - perhaps whether Fozzie gets feces stuck to his fur when he defecates. Does he wear a condom? No, don’t feign surprise at all this. Please see that this is the natural result of pushing below the line as you have. You have broken those boundaries, and opened these thoughts, not us. Though undoubtedly you would feign surprise at such implication.

Good god, Disney! You have whole buildings full of departments in place devoted to ensuring that Mickey is never caught in compromising positions. How dare you turn around and do this to our dear old friends.

Kermit and Piggy

Really, now Kermit actively dates and is “hopelessly attracted to pigs”… in general? Ugh, too much information, yet again.

Gonzo was insane - and loved chickens. That worked. It never dipped into sexuality because he was truly an eccentric. But Kermit is sane, he's our hero, the reasoned one - and therefore this new intimation that Kermit sleeps around is once again moving towards the too-real grotesque. And this focus on his and Piggy's TMZ break-up - as though they actually ever had a relationship - seems totally misguided.

Yeah, yeah, we get it, if they are "broken up" it gives the characters and narrative something to build to. And it's tabloidy which plays into the whole theme, and maybe most important of all, serves as free marketing.

Brilliant.

Hey, you're the writers, but throughout the Henson years, the beauty of their story was that, well, Kermit and Piggy never really had an official relationship to break up over. They flitted around the idea, flirted with it you might say, Piggy always on the offensive, and Kermit never quite connecting. Like so many other things, even their relationship always hovered just above the line. Their so-called relationship was a slippery and elusive concept. Totally non-committal. By design.

But by bumbling into the Muppet universe flailing, mouth-breathing and drooling as you seem to be, you are knocking over these delicate constructions, it seems, without much care for the original rationale or their great benefits.

Gonzo

This failure, like the others, is so obvious and easy to see.

In your teaser you chose, of all characters, Gonzo to criticize the use of “the office interview” format.

Should have been funny. I wanted to chuckle because the observation was a good one. But I found myself wincing a bit. Don't you see? You chose perhaps the only character in the entire Muppets' main cast, next in line perhaps to Animal, who lacks enough self-awareness to even have such an opinion in the first place. So it just feels strangely “off” somehow. Not to mention that Gonzo's, well, “GONZO” has been completely denied. Now Gonzo is suddenly just some calm, rational guy? Seriously?

Hello?! Gonzo is many things, but calm and self-aware was never - and I mean like ever - one of them.

Gonzo recites the seven-times table… balancing a piano. Naturally.

This is the guy who overenthusiastically agrees to every insane, wrong idea, no matter how absurd - in the name of art. That's who he is. You see that, right? The guy who shoots himself from cannons, wrestles a brick, tap dances in oatmeal, recites Shakespeare while hanging from his nose, plays bagpipes from the top of a flagpole, recites poetry while defusing a bomb, hypnotizes himself, and wants to go to Bombay, India to become a movie star because it's not the “easy way”.

Did you, even for a second, consider that just maybe Gonzo was the completely wrong guy to feel vaguely self-conscious and introspective enough to care about such a subtle little narrative device? Do you really think he, of all characters, would really care? Piggy sure, Fozzie maybe, but freaking Gonzo?! You've lost me. The reason that joke wasn't funnier (and it should have been) is that you chose the wrong guy. You went fully against his long-standing character. And we all felt it. Maybe younger viewers don't remember enough to care, and maybe most long-time viewers couldn't quite put their finger on why - but sure enough - it just felt weird. And it's another example of your apparent inability to defend and shepherd The Muppets at a most basic level.

What Worked

Lest you think I did not appreciate any of your effort, there were, what I would call, a few “authentic, above the line, classic Muppet moments” in the teaser too.

Miss Piggy's walk, smack, into the glass, leaving a nose print. A brilliant moment.

That creepy “incredibly obscure character” with glasses who talked with his tongue between his teeth was funny as crap.

I’m conflicted on Rowlf wearing the big surgery collar. I laughed authentically at that. And although that doesn’t sit above the line, maybe ON the line, a very careful, self-aware break like that can clearly work, so long as the Muppet universe is still intact.

Work Harder

Look, truth is - the idea that, say, a puppet is dating a real girl, probably has sex, meets her disapproving, real parents, and maybe loses his job and all that, that’s actually really funny.

Ted smokes pot.

Seth Macfarlane’s Ted did that a couple years ago and it was a good movie. A teddy bear that has sex, smokes weed, swears - it’s totally juvenile and funny as Hell. I loved it.

And then there's “Meet the Feebles”, Peter Jackson's obscure, disturbing puppet movie that includes a frog prone to Vietnam War flashbacks, a pornography-directing rat, suicide, adulterous three-ways, alcoholism, drug-running and all sorts of other far-below-the-line topics.

A gun to the head of a character in “Meet the Feebles”

But the Muppets? In one of your bumpers Rowlf talks about being followed by cameras into his bathroom at home. It's kind of funny, but so now Rowlf uses the can? This is a very slippery slope you're on.

In the old days these topics could never find their way into the Muppet consciousness. The Muppet world was intentionally disconnected from all that. But now, stripped from their world, these real-life concepts begin to co-mingle, and indeed they will. And that's not who the Muppets are. You should have known better.

“Hey - you’re making all this up! We never said they had sex, and we definitely would never show them doing drugs or taking a dump!!” you say.

No? But that is the world you have directed them to inhabit. All these ideas, and a lot more, exist in our real world, and you have placed them in that exact real world. You have provided no buffers. No signals. No insulation from the edges of that very cold reality. Indeed, your every creative decision has amplified it. You have said, "They live with us here, amidst our real-life challenges, filth, and complexity."

What a monumentally bad call.

If that's the show you wanted to make, why, oh why, didn't you just work harder, take some risk (e.g., by not trying to rely on the automatic, positive associations we all have for the characters), and instead invent a new set of colorful characters of your own? Some who could more naturally play out the decidedly unMuppet-like topics you are shoe-horning our old friends into? I would have actually enjoyed seeing that show, to be honest. I would have tuned in, and I'm sure I would have laughed. Ironically, the connection to the Muppets and every other pillar of innocent puppetry would have been obvious. But at least then you would have been arguably protecting and defending something the world still needs.

We needed the Muppets that Jim Henson left us.

But instead you chose to exploit our gentle, rainbow-yearning friends into the same old, daily gutter that we were all, ironically, trying to escape. All in trade for a couple easy, if uncomfortable, laughs and the benefit of a built-in audience.

“Hey, the Muppets were all about cheap laughs.” True, but you did it at the utter expense of their very long and hard-earned legacy. You threw that gentle, magical, innocent legacy under the bus of reality. And that is not where The Muppets great and endearing humor belongs.

In doing so you have so far proven yourselves unworthy guardians of these beloved icons.

And from Disney of all places. Hard to imagine.

Well, I’ve made my point ad nauseam. So all I can do now is beg you, please, please be more careful.

These are our dear friends. And corny as it sounds, the world still needs that rainbow. Maybe now more than ever.

Messages From The Future: What Happened to Apple Watch

As some of you know by now, I am from the future. And slightly annoyed to be here. But anyway, this is what became of Apple Watch.

Truth is, being back in 2015 is such a trip. All this talk about “wearables”. I have to laugh, I remember that! Ugh, it's so quaint to hear that again. “Wearables”. For the record, in the future no one talks about “wearables” like it's some classification of device. That's just you guys coming to grips with the fact that technology is everywhere. It's in everything, it's networked, and no, you have no privacy. But that's a different post.

Today I wanted to let you in on Apple Watch since I guess you're only now about to see it launch. Weird. A lot of you are asking “Why would I use it?”, “What's the killer app?”, “Why would I pay so much for it?”. Yeah, yeah. You do that every time Apple launches a new device, did you realize that? Android users are staring at it dismissively thinking they would never want one since it probably doesn't do that much.

Admittedly what the first Apple Watch did was only a glimpse at its value. A few years after Apple Watch was released it became pretty obvious what it was all about, and yet it still took a decade before absolutely everybody stopped doubting.

Indeed Apple Watch not only survived a decade, but it survived quite a lot longer than that. It outlasted PCs. It outlasted iMac, iPhone and iPad. The Apple Watch strand functionally outlasted almost every other product strand of Apple device and consumer hardware model you are aware of today. It was still going strong when I popped back here, but by then auto-implanted alternatives were becoming pretty common - even though they gave me the willies.

So, what did Apple Watch do that was so useful?

Much to the chagrin of a fair number of iOS app developers in this time, Apple Watch was not a platform that was ideal for, well, running apps. At least not like they do on iPhone and iPad. Sure, people tried. But in short order it became clear that Apple Watch was about being used in conjunction with other devices. If your app did not involve another device or platform, your app-life was probably short-lived. As a result many of the best app makers were also developers of apps on other platforms, or device makers. You almost never made an app for Apple Watch alone.

And that was a clue into Apple Watch's true conquering strategy.

Apple Watch became your key. First and foremost. It was your unique identifying digital self. Your ID for all manner of technical configuration in every other device and context.

When I look back, the clues are all around you today: Apple Pay, Continuity, Apple ID, iCloud, Apple TV. These are some of the "existing" components that dovetailed to make Apple Watch what it was.

Ultimately, Apple Watch was not a device for consuming media, or even much in the way of experiences (with the exception of communication). Primarily, Apple Watch identified you; it was the key that unlocked your information and preferences and configured all your other devices and environments. Secondarily, Apple Watch served as an interface for simple tasks (related to these devices and environments) and as a communicator.

This is not to say that the devices around you became dumb devices (dumb screens, dumb terminals etc). They were never that. They still carried the lion's share of computing power required to perform their specialized tasks. But they were merely, normally, “un-configured”. My Apple Watch connected to any friendly Apple TV and suddenly all my movies and shows appeared. All my content was in “the cloud” after all. (Btw, we don't call it “the cloud” in the future, in fact we don't call that anything, it's just “storage”.)

Within a few years an iPad or iPhone in your household could switch between users depending on who was using it. Your unique desktop and apps would appear on any workstation you sat down to. Because it knew it was you. Apple Pay was just another variation on the theme. Apple Watch validated your identity and gave you the choice of credit card to use.

And I should mention, since there is a flurry of speculation, that yes, Apple Watch worked amazingly well with what you guys are calling the Apple Car (and other cars by the way). The Apple Car was particularly excellent. Your digital environment on wheels. Once identified, all your media was available, your seat, mirrors, mood-lighting, common destinations, and temperature adjusted to you, and of course you locked, unlocked and started your car with your Apple Watch. There was some other stuff of course - once Apple and others started making things for the home. Thermostats, lighting, door locks and home security. It all responded to, and was partly controlled by, your Apple Watch.

This system was ultimately more secure as well. None of your other devices had to hold content or information. It was encrypted in storage (sorry, in “the cloud”), and your Apple Watch merely unlocked it. In this way, none of your other devices became points of vulnerability.

Do you see what I mean? Apple Watch - plus our fingerprint (and later more convenient biometric ID - another post) - was our digital key. So it was with us literally all the time. And this is why so many of us were so willing to spend so much on our Apple Watches. It was the most central piece of hardware we owned; a functional part of every other device we used and every modern environment we entered. It was perpetually on display, occupying the familiar, ornamental status of horological watches of the past. But even more important than that, it was the sole material manifestation of our digital selves. And in the future, let's just say, our digital world doesn't get less important. For these reasons it was plainly worthy of inordinate expense and pageantry.

It was so much more than critics today seem able to wrap their heads around. More than a hobbled phone, more than the convenience of ready alerts and messaging. It was your key, your hub, it was you.

There was admittedly an awkward phase where Apple Watch was lovely, if a little bulky. You're in that phase now, well, or are about to be. But Apple quickly slimmed the device, and generated many more models. Once the dimensions were improved, and battery life extended, Apple Watch found its sweet spot. One that lasted for many years. I could have spelled that "maaaaaannnny", which is an actual word in the future, but I believe that's still bad grammar in this time.

Anyway, having seen it all play out, I think Apple understood this larger system before most. Being the Apple with vision, they got all this at a time when other companies were scrambling around calling goofy, little, one-off technical experiments “wearable” when in reality few of them really were. No one wanted to wear visible gewgaws. It was just a fact. The existence of these technologies never sold anyone on wearing some device prominently on our bodies. Not on our clothes (except underwear, for mostly medical reasons), and definitely not on our glasses. Not anywhere on display BUT OUR WRISTS. Oh, and our finger of course… ah, but that's another post.

Apple Watch is NOT Replacing the Mechanical Watch

My little voice is nothing in the breathless rush of chatter about the Apple Watch. But I keep hearing the same set of sentiments from my friends and I think they have it all wrong. In various ways, friends are lamenting the loss of the mechanical watch. Others are asking “Why do I need this accessory? What's the killer app?”

Back in the day people had pocket watches. You’d dig in your pocket, and pull out your pocket watch to tell the time.

Then the wristwatch came along. It was smaller - but so much more convenient. The time was right there at a glance.

The thing people have wrong is that Apple Watch is not replacing the watch. It’s replacing your phone. Or it will rather. Apple is just hoping it can provide sufficient value through the form-factor in the meantime.

“But they call it a watch.”

Yes, it’s called “watch”, but calling the Apple Watch a “watch” is akin to calling the iPhone a phone, and not, say, a pocket computer. The Apple Watch is a wrist computer and will eventually replace your pocket computer. All based on pure convenience.

“But I need a bigger screen!”, friends have then said. Of course you do for some things, and bigger screens will become accessories. And that's another paradigm shift here - the watch is not the accessory, the screen is.

There is no way this first Apple Watch is the fully expressed big idea. This is just the first step.  Surely the plans for Apple Watch are long.

It's long been acknowledged that anyone under 30 who wears a mechanical watch today is essentially wearing jewelry, and that they use their phones to tell the time. For these users, wrist watches are merely quaint objects on par with vinyl LPs and 50s geek glasses. So for a generation of users who have abandoned mechanical watches for “pocket computers”, a wrist computer is so much more convenient, and it does not replace anything already there. For them, sheer convenience is the killer app.

For hipsters and us old farts who still think mechanical watches are beautiful and functional jewelry, yes, we need to “replace”. And if one is contemplating that switch there is no killer app. But there are 3 dozen small, functional features - in addition to telling time - that make the switch quite worthwhile.

Over time, I believe that switch will happen - even for them - as the Apple Watch replaces the iPhone.  

Die Hard and the Meaning of Life: The Undeniable Attraction of Loyalty

I was watching a movie with my wife when I had an epiphany. I don't want to tell you which movie because it doesn't matter, and I would really rather not reveal the ham-fisted taste I have in movies anyway. But I was watching this movie and there came a point in the story that you will recognize because it's part of every movie ever made - where the hero, who was obviously so committed... alright, I'm not going to be able to explain this without telling you which movie. It was Die Hard.

Ok, see? Now you're going "oh, one of those guys". Fine. Yes, I am. I am totally one of those guys. And so is my wife.

Anyway there came a point where I found myself delighting in the fact that John McClane was not going to stop trying to save the hostages, one of whom is his estranged wife, no matter what happens to him. No matter what challenges and risks are placed in his way - he is going to try to save them despite impossible odds. And I realized that it's really his unshakable, defiant loyalty to the innocent people he cares about that makes you cheer for this guy; his belligerent loyalty - in the face of possible death - to protect and honor the people he loves, that is so positive and attractive. I realized that in one way or another some display of loyalty is at the root of every moment I've ever cheered during a film - or conversely a lack thereof when I've been angry at a character.  And as the thought rolled over me, quickly becoming more complex and patterned, I had this epiphany: that loyalty, in all its positive flavors, is maybe the most impressive, attractive, beautiful and powerful behavior humans can display to one another.

Like I said - it doesn't matter that the movie was Die Hard, because it became clear that this was true of every movie I'd ever seen, of every character relationship I'd ever read.

And, I realized, it must be true of nearly every kind of interpersonal relationship we have as humans.

The world can seem unfair. The only practical guarantee you have is the end. We live life under a looming cloud of uncertain timing; in so many ways the universe is not aligned to favor us. But when another person rises and defies the dearth of life's promises and through action says "No! There is one more guarantee you have. You can depend on me. I will be here" - is there anything more powerful and uplifting? One person's will against universal entropy.

Lord of the Rings: the Return of the King, 2003

Step Brothers, 2008

Aliens, 1986

Good characters become bad guys when they are disloyal to the hero.

And bad guys redeem themselves when they demonstrate a turn of loyalty to the hero. Back in my screenwriting days one of the mantras we carried with us was "Characters are what they do, not what they say."

All sorts of interesting character dynamics emerge when we mix up what is said and done by a character. And when, despite claiming loyalty, a character sheds that and instead acts in his own self-interest, he transforms into a villain. That's how important we naturally feel loyalty is. It seems there is nothing more tragic, more unjust, than losing the loyalty of another. The emotion is innate. And gaining loyalty is similarly, immediately endearing.

Raiders of the Lost Ark, 1981 Indy: “Give me the whip!” Satipo: “Adiós, señor.”

The Lion King, 1994

Harry Potter and the Prisoner of Azkaban, 2004

So I came to realize, maybe too late in life, that loyalty is perhaps the most profound, meaningful, beautiful and useful behavior humans can give to one another. Indeed, loyalty is perhaps the only meaningful measure of humanity. Loyalty to your fellow man.

Some would say that love sits on that throne. And I suppose it does sit above in principle. "Love conquers all" as they say. But loyalty is the action; the visible, tangible expression of that love. The "what characters do". One must act, sacrifice and possibly face critical risks to remain loyal. And let's face it, it's loyalty that makes love so wonderful in the first place.

The Notebook, 2004

Titanic, 1997

I don't mean to knock love, but I guess it's just that love is so abstract and effortless - love just happens. Why do you think we say "fall in love"? Love’s happenstance is captured in the iconic moment where two characters bump into one another at a corner. Or when they unexpectedly glimpse each other across a room - boom - "love at first sight". It's easy. No effort. No will. Indeed love has no real meaning until action is required. Love cannot be measured - except through displays of loyalty.

Marriage vows, although of course well-intentioned, are mere promises of eventual loyalty (remember, characters are what they do, not what they say). So long as life is easy, so long as there is no temptation or risk, love is easy to profess. And let's face it, it's never easier than when the future seems bright, a roomful of loved ones are smiling, and champagne and cake are in hand. Rather, it's when life becomes hard - perhaps many years and tragic events later, when the darkest of life's challenges are faced - that love, through displays of loyalty, has meaning.

Billy Elliot, 2000

Avatar, 2009

Drive, 2011

Even in unexpected places, loyalty plays an important role. I look around myself at work and I realize how grateful I am for those people who have stuck by me and the company's mission, despite work's ups and downs. You know, those people who stick with you and seem almost immune to the business world's constant seduction of self-interest. These are the people you want to reward. Because they have displayed such loyalty.

Skyfall, 2012

Schindler’s List, 1993

Star Trek II: The Wrath of Khan, 1982

Forgive me, I'm on a journey; this may seem simplistic and naive to you. And observations like this don't always have a practical application, but I suppose this one made me mindful of the importance of choosing my loyalties. Remembering that the measure of my loyalty is my action.  And it redefined what I look for and value in others.

Office Space, 1999

Léon: The Professional, 1994

The ebb and flow of loyalties can make us feel joyful and loved, or drop us into profound sorrow. But a life filled with mutual, positive loyalties is filled with meaning, and I'm not sure there is anything more important in the world. 

Cinema Paradiso, 1988


THE ART OF CONQUERING PROBLEMS AT WORK

All workplaces are rife with challenge and friction. Competitiveness and politics abound. Simply existing in a company that does what companies tend to do to their employees can weigh one down and demoralize. Although there are aspects of our jobs that we enjoy it's more likely that what we take home and talk about is the worry and obsession about the things that we wish were different.

There are all sorts of conditions in a company that at various times and in many ways make most of us feel demoralized, under appreciated, and generally poorly managed. And these can bring us so much stress, disappointment and pressure.

But I can say with certainty that there is something you can do that will meaningfully solve those problems. I don't mean mask them or bypass them, I mean actually, genuinely solve them to your great benefit.

It's a two-part process, neither part works without the other. But executed together you cannot fail.

  1. Do good work, and

  2. Be patient

I hope you're not annoyed by this answer. People often prefer some quick trick for gaming the system. Like reading secret body language, or using special influential words. But meaningful change is never the result of easy gimmicks.

Rather, this plan is based on raw truths and results in fundamental, healthy change. The kind that will advance your career and eliminate all those pesky corporate politics and demoralizing conditions. This degree of change requires that you have your hands on the real levers of control.

Of course there are other steps to succeeding at work - mainly, being able to recognize opportunities. Opportunities to:

a) Offer solutions and improvements

b) Share critical opinions

c) Take challenges outside your job description

But these opportunities only meaningfully come after you have mastered the big 2 - doing good work, and being patient. If you try to force these lower opportunities too early, it will be mistimed - the machine won't be ready for you. You won't be taken seriously, and/or your suggestions and comments will fall into the din of daily business. The machine has to be ready, primed. When it is, when the time is right, you will find your opportunities. Indeed, they will come to you. And your comments will then carry weight and meaning. Suddenly you will have control and impact.

Do Good Work

This should be your mantra. It should rise above every negative feeling work is delivering to you.

  • Are machinations in the company making you feel victimized?

  • Are you getting lame projects?

  • Do you feel your supervisor is an undeserving idiot?

  • Are the company processes (or lack thereof) causing chaos and confusion?

  • Is there someone you feel is getting ahead of you only by hiding weaknesses and playing politics?

  • Is the whole company such a mess that you don't even know where to start?

Whatever has you wound up, you must allow yourself to ignore the feelings these conditions engender for now. Because you can't do good work if you think that way. No, really, you can't. You may think you have your mind under control, but trust me here, if you approach your day with these thoughts in mind, you won't be doing the best work you can do. You will be distracted, and some percentage of your attention and energy will be misdirected. Doing good work requires joyful immersion, passion, and focus. Most importantly, a belief that you will succeed. Your mind must be on your project and the unique greatness that only you can bring to what you do.


You aren't capable of greatness if you feel beaten down by these conditions. If you see annoying work obstacles as barriers, as opposed to mere hurdles that you are capable of leaping over with creativity and persistence.

So you need to accept them for what they are and let go. Embrace the ambiguity. The good news is it's all going to change anyway. You are eventually going to help usher in that change. So why worry about it? Just take note and let it go; in time it will work itself out and blow away in the best possible way. But only if you do good work. Your best. And not just once. That's never enough. You need to do good work many times. And that's why you need to:

Be Patient

See, your emotion and thought processes have a given metabolism. It's actually a pretty fast metabolism, relatively speaking. But companies, and the systemic problems they experience, have a much slower metabolism. Much slower. So where you see a problem, and perhaps its solution, in the space of a few hours, a day, or a week, to a company that week is a split second - far too fast for it to respond in kind. Companies are big, slow, dumb animals. They lumber. Information has to travel from person to person. Meaning and urgency have to build. Even the smallest, nimblest, most aggressive of companies lumber compared to your individual gnat-like emotions and decisions.

Companies are not individuals that can reason. They are systems - composed of budget plans, contracts, and relationships that must run their course and expire before any given change can occur. So of course real change is a slow process.

So don't fight that, be patient. It just takes time for good work to have an impact. But rest assured - it does.

Young workers often regard one year in a company to be a reasonably long time - a duration within which their working conditions should improve, promotions should be granted, the ability to effect corporate change should arrive, and so on. But here, our young worker is being grossly impatient. In truth, as most of your seasoned mentors will tell you, one year spent in a company is merely the cost of learning enough about it not to say dumb things. Offering truly good ideas requires a deep, intimate understanding of the company, its business, and its inner workings. And this typically takes at least a year. Any employer who expects more from an employee must himself be inexperienced. In the meantime, listen, watch, and do good work. When you do good work a number of things happen around you:

  • good work sits in contrast to mediocre work (which itself usually abounds),

  • good work helps the company, your department, your boss, and the world,

  • good work gets noticed,

  • most importantly, good work causes people (your supervisor and management) to ask questions: "can I have some of that?", "why didn't the last project turn out that well?", "what was different on this project?", "what can we do to make sure we always get this result?", "why has that department been doing such good work all year, and the others not so much?" Etc.

And this is how companies change. This methodical awakening is how they improve.

Sometimes they don't know why the work was better. Maybe that self-promoting worker convinced them the reason the project worked out was because he was involved - even though it was your good work that made it so. Don't worry about this. It all gets resolved in time. This is the power of patience and consistently delivering good work. Good work and patience are a relentless force within the context of corporate nature. And over time there are simply too many opportunities for your good work to slip through the cracks into plain view - and, conversely, for any subverter's weaknesses or negativity to become exposed.


You'll be long onto your next project or two before any of these conversations happen - again, because the corporate metabolism is so much slower than yours. But be patient.

Maybe it will take 3 or 8 really good projects before these questions are asked and your trail is sniffed out. But eventually they will be. It's inevitable. In the meantime you must continue doing good work - that's your trail. Don't worry: you may think you have a boss who takes credit for everything you do, but keep doing good work and be patient, and the trail will stay warm. No such boss has ever been able to maintain the illusion for long.

See, when those questions are asked, you can go back up to that list of corporate crazy-making conditions and every one of them will change under the force of good work and patience.

Doing good work and being patient is how you ensure poor performers get fired or reassigned, it's how necessary systems and incentives get put in place or change, it's how you earn better, more important projects, it's how great people get promoted and recruited, it's how other staff members learn to respect your process and your work, and it's how the company succeeds. It's how you will eventually be consulted to see what can be done to make the company better - and not in some empty, feel-good, "team-building retreat" way either, but the real kind, in a quiet executive office, where decisions get made, and where they will really care, because you do such good work.

Do good work and be patient. It all works out. You just need to embrace the ambiguity of the current condition for a while. Embrace the fact that the company is not right-configured at the moment. It will change. It will. I'm sorry if this sounds horribly tedious and tiresome. But this is the real way - no tricks, sure and steady.

Patiently and consistently doing good work will present you with the opportunities to solve every problem you see today. It's a fact of corporate reality - your good work will make it so. You just can't give up.


The Sequel

THE SOCIAL NETWORK 2: SOCIAL GUESSWORK

The Interactivist has obtained the following pages from the upcoming sequel to The Social Network.

Title: The Social Network 2: Social Guesswork

Scene 27b INT. FACEBOOK HQ CONFERENCE ROOM, DAY.

We see a pair of bloodshot eyes. We ZOOM OUT to reveal Mark Zuckerberg staring into space. ZUCK sits at a huge black conference table surrounded by middle-aged people who probably used to be cool.

On the table in front of him sits an Oculus Rift developer's kit. ...Right behind 37 lines of cocaine.

ZUCK chews his lip nervously. Finally he speaks in a short, quick clip...

ZUCK: That's cool.

The room nods.

MIDDLE-AGED PERSON WHO PROBABLY USED TO BE COOL #1: Very cool.

ZUCK does a quick line of coke - grimaces - and pounds the table. Everyone jumps.

ZUCK: Whooo! Yeah - THIS... (he points at Rift) THIS - is totally awesome.

His eyes dart across the room in spastic jerks.

ZUCK: It's awesome, right?

People nod.

ZUCK: I mean, and I'm just doing my magic here, could you imagine... just imagine... if THIS... was Facebook's "iPhone".

Inhales heard around the room.

RANDOM PERSON: Wow.

ZUCK: Right?

CTO, MIKE SCHROEPFER, sitting across the table, squints disconcertedly.

ZUCK: What!? Shit, seriously? What, Mike? Fuck you're such a downer!

CTO MIKE: I didn't even say anything...

ZUCK: I see your eyes! You don't think I see your eyes getting all squinty and judgmental??

CFO DAVID EBERSMAN: That's not fair.

ZUCK: Oh, you too?!? You're an even bigger downer David!

CFO DAVID: Mark, we're just looking out for the company.

ZUCK: Oh, I'm sorry, so you're not a downer!? Oh ok, let's see, uh, Instagram wasn't the future, and it was too expensive. Paper was a lame app, and it was too expensive. These are your words! QUOTE! WhatsApp is "JUST" another app - it will get replaced by some other app in a year or two and was galactically, monumentally too expensive... and... and - what am I missing?

SHERYL SANDBERG: (snorts line of coke) Facebook has no vision and is randomly grasping to find relevance?

ZUCK: Right Sheryl, thank you! - Facebook has no vision and is randomly grasping to find relevance. Your words, David.

CFO DAVID: Look I'm... I'm being honest. And Mike agrees with me.

MIKE SCHROEPFER looks down at his hands.

ZUCK: What are you even doing here David?

CFO DAVID: I just want to help Facebook Mark.

ZUCK: (stares) ...well you're a fucking downer, David. A complete fucking Debbie Downer.

The room is silent. ZUCK does 8 lines of coke.

ZUCK: Fuck - even the coke doesn't UN-DOWN you guys! OK WHAT!? What's wrong with it?!

CTO MIKE: um... well - I mean it's cool, yes. But it's not a platform, Mark.

All eyes back on ZUCK.

ZUCK: What do you mean it's not a platform!? Have you ever experienced that before??

CTO MIKE: No, but Oculus Rift owns no content; you use this device to interact with someone else's content. The content exists on a computer and probably over the internet. Manufacturing devices like this has nothing to do with creating and owning the experiences people will have in the future, any more than manufacturing headphones has to do with creating and owning the music people listen to. If you want to own the social experience as VR emerges, you need to create the killer software experiences that people will use. Lots of companies will make headsets like these. It's like a DVD player - it's dumb hardware! This headset in no way buys you into the world of VR-enhanced social networking. Oculus Rift is... well, it's just a peripheral. Like headphones and monitors. The content is the experience.

Long silence.

ZUCK'S eyes dart around the room. He looks at some 17-YEAR-OLD-LAWYER-LOOKING-KID who shrugs.

ZUCK: FUCK!! ...Why the fuck didn't you tell me that before I bought it?!

Gasps around the room.

CFO DAVID: WHAT!? You already bought it? Oh God.

ZUCK: Well fuck David, you're always such a downer - I didn't want you in the room. ... I did it this morning.

CFO DAVID: But you only saw the device for the first time yesterday...! Did you talk to anyone??? Oh Christ - how much did you spend this time??!

ZUCK: Less than last time.

CFO DAVID: Mark. Look at me. Last time you bought an iPhone app for the price of a small country. What - did - you - spend?

CFO DAVID looks around the room.

CFO DAVID: WHAT DID HE SPEND??!

17-YEAR-OLD-LAWYER-LOOKING-KID: ...um 2 BMil (unintelligible)

CFO DAVID: What?! 2 what? Million?

17-YEAR-OLD-LAWYER-LOOKING-KID: eh, um no... 2 um... B... billion. 2 Billion.

Several people in the room visibly deflate.

CFO DAVID: (frozen) Good jesus christ.

CTO MIKE slumps in his chair and closes his eyes, visibly shaken. We hear a loud snort and ZUCK sucks up another line of coke.

ZUCK: SHIT YEA! (laughs maniacally - coke all over his nose) AWESOME, RIGHT? FUCK YEA! WE CAN PUT OUR LOGO ON IT MAN! FACEBOOK! RIGHT THERE BRO!

CFO DAVID: ...right where the user won't see it because it's covering his FUCKING EYES, MARK!

ZUCK: You don't think I know that?! I KNOW THAT! And that's why.... (sly smile) we also put advertising... in the fuckin' content, baby!

CTO MIKE: ...in the content. (sighs) Right, um, Mark, the content doesn't... it's not running in this device - it's just showing up there! The content is running on a computer.

ZUCK: (stares, beat) Well why not? We can just make a smaller computer and cram it in there! CRAM - IT - RIGHT - IN! WHOO!

He snorts more coke.

CTO MIKE: (under breath) Jesus christ. (to ZUCK, like talking to a child) Mark, the kind of experiences that people will want to see on a VR device - and there will be many other VR devices on the market to choose from - will, for the foreseeable future, require a lot more processing power than you can cram into this thing. Like in gaming, where the resolution and responsiveness of VR are a moving target. A bigger box will always yield a superior experience. Which is why people will prefer having a cable - connected to a bigger game box that gives them a way more kick-ass experience - than having a self-contained device that runs 10-year-old-looking graphics with laggy response times. Again - this device is not VR. This device is only a peripheral that serves it up. Advertising can exist in the software - and if you, Mark, really have a vision for how Facebook can be enhanced by VR, you should have started making that software - WITHOUT ever having to buy this device.

Long pause.

ZUCK: But it's FUCKING COOL MIKE! NOW WE'LL BE COOL AGAIN, MAN! DON'T YOU GUYS GET IT?  See YOU'RE OLD, and I'M YOUNG!  I HAVE a vision man! I'm gonna hang them all over the place! Sheryl!!!

SHERYL: (finishes a huge line of coke) uhf! Yeah? (closes eyes) Oh Shit.

ZUCK: Everywhere I like to chill with my board homies - I want to see these badasses hung all over the walls - decorate the fuck out of HQ, Sheryl. Shit this is going to be the coolest batch-eh-lor pad in da world holmes! OCULUS RIFT WALLPAPER BABY!

CFO DAVID: Mark...

ZUCK: And YOU! You don't even get one, David. You either, Mike! 'Cause you're totally blowing my high, bitches. (snorts another line of coke) FUCK! I LOVE BUYING SHIT FOR BILLIONS! DON'T YOU JUST FUCKING LOVE BUYING SHIT FOR BILLIONS?? FUCK! C'mon Sheryl, I'm hungry. Let's go buy In-N-Out Burger and Coke.

Scene end.
