Understanding the Pitfalls of Solutionism
How understanding this idea can help improve your thinking on technology generally
Outline
About Solutionism
Champions and Detractors
Shiny Object Syndrome
Thought Experiment: Is Generative AI Solutionism?
About Solutionism
There’s this unforgettable scene in the show Silicon Valley where Dinesh and Gilfoyle spar over the new smart fridge Jian Yang installed. Dinesh is marveling at the wonder. Not surprisingly, Gilfoyle takes the opposing view, explaining: “This thing is addressing problems that don’t exist. It’s solutionism at its worst.” He says this with his typical cryptic, monotone disapproval.
I’d heard the word solutionism before but had never stopped to look into it until that moment. I finally had a name for a thought I’d been having for years.
Solutionism is the belief that technology is the remedy to all our woes. Rather than starting with a problem and then crafting a solution through technology, it begins with the solution (i.e., the technology) and goes looking for a problem to address. In the case of the smart fridge, solutionism assumes consumers’ biggest pain point is that they can’t see what’s inside the refrigerator when they’re away, hence the need for a camera connected to a mobile app. Gilfoyle dryly points out that the fridge has a glass window in the front.
Sometimes our fascination with tech gets the best of us, so we create solutions out of whole cloth or magnify a minor inconvenience into a significant issue. Which is easier: developing a smart fridge, or helping people build the habit of checking their food inventory before they go to the market?
The technology landscape is littered with stories of seemingly great ideas that died on the vine for lack of any real utility. Just think back to the first time you saw someone wearing Google Glass. What problem was this device solving? You know who this technology helped? The Chinese government was ecstatic to start using this sort of augmented reality technology because it improved its ability to monitor and regulate large swaths of its citizenry. AR glasses for American consumers? No. AR glasses for Chinese police and intelligence services? Yes, please - because they have a need. It’s the need that defines the value and, ultimately, the reason people spend money on the application over and over again.
The same story is playing out as early adopters return their Apple Vision Pros faster than Apple can sell them. Anyone who’s spent time in a virtual reality headset will tell you it’s an impressive technology, but when asked, they’re hard-pressed to explain what problem it solves in daily life. It’s that problem-solving element that divides the solutionists from the creative innovators.
Biometrics is a great example of applying an innovative, emerging technology to real-world problem-solving. The exponential growth of our digital lives has produced an equal explosion in the number of usernames and passwords we have to remember. What’s our solution? We reuse the same easy-to-remember username/password combinations over and over again. While this is a simple, common-sense fix, it makes the user easy prey for hackers. The result has been tools like LastPass, Chrome’s built-in password manager, and biometric authentication such as Face ID on your iPhone or the Touch ID fingerprint sensor on your MacBook. And while it’s great to have options, it’s the repetitive nature of the task that gives biometrics the edge. We log into multiple services hourly. What’s easier: opening a third-party service every time you need to log in, or just looking into a camera?
How do I remember all my usernames and passwords?
How do I remember all my usernames and passwords in a way that minimizes vulnerability?
How do I remember all my usernames and passwords in a way that minimizes vulnerability but also maximizes convenience given how many times I have to perform the task every day?
Answering this last question helps us understand why consumers are opting into biometrics at an amazing rate: more than half of device owners routinely use biometric authentication, despite the fact that it has been widely available for less than a decade. In reality, biometrics and similar successful adoption case studies are the exception, not the rule.
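To make the password-reuse risk concrete, here’s a toy sketch (all the services, emails, and passwords are hypothetical) of why reuse is such easy prey for hackers: a single credential leaked from one breached site unlocks every other account that shares it - the attack known as credential stuffing.

```python
# Credentials leaked from a single breached site (hypothetical data).
leaked = {("alice@example.com", "hunter2")}

# The same user's accounts across services.
accounts = {
    "email":    ("alice@example.com", "hunter2"),      # reused
    "banking":  ("alice@example.com", "hunter2"),      # reused
    "shopping": ("alice@example.com", "Tr0ub4dor&3"),  # unique
}

# Every service sharing the leaked credential falls at once.
compromised = [svc for svc, cred in accounts.items() if cred in leaked]
print(compromised)  # → ['email', 'banking']
```

One breach, two accounts gone - which is exactly why password managers and biometrics, which make unique credentials painless, beat the reuse habit.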
Solutionist ideas come and go with such regularity that it’s easy to forget them, or at least let them blend into the scenery - whether it be 3D TV, shared-scooter company Bird Global filing for bankruptcy, or the Xbox Kinect’s machine vision system. To be fair, it’s easy to throw stones and much harder to put pen to paper and come up with a good idea. To that end, solutionism has its champions as well as its detractors.
Champions and Detractors
The solutionism I describe would appear to be a recent evolution of the term. Per Jason Crawford’s 2021 MIT Technology Review article, “Why I’m a proud solutionist”…
“The term ‘solutionism,’ usually in the form of ‘technocratic solutionism,’ has been used since the 1960s to mean the belief that every problem can be fixed with technology. This is wrong, and so ‘solutionism’ has been a term of derision. But if we discard any assumptions about the form that solutions must take, we can reclaim it to mean simply the belief that problems are real, but solvable.”
So it would seem the issue is less solutionism itself than its most recent high-tech iteration. Many agree the idea of techno-solutionism was popularized in 2013 by author Evgeny Morozov in his book To Save Everything, Click Here. Morozov’s dust jacket extends the purported dangers well beyond addle-minded inventions all the way to dystopian futures…
“In the very near future, ‘smart’ technologies and ‘big data’ will allow us to make large-scale and sophisticated interventions in politics, culture, and everyday life. Technology will allow us to solve problems in highly original ways and create new incentives to get more people to do the right thing. But how will such ‘solutionism’ affect our society, once deeply political, moral, and irresolvable dilemmas are recast as uncontroversial and easily manageable matters of technological efficiency?”
Regardless of where you fall on the spectrum (solutionism is good or bad), it’s clear that the topic has both deep historical roots and complex potential futures. But what’s driving all of this?
Shiny Object Syndrome
The idea I keep coming back to as I think through this topic is Shiny Object Syndrome (SOS). We all suffer from it in some form or another - whether it be Elon Musk’s acolytes looking to emulate his success, that look we all got when Steve Jobs turned the first iPhone sideways, or that creepy feeling many of us had when we first saw ChatGPT spitting out content faster than a Vegas dealer handing out cards. It’s that moment when the mundane drudgery of daily life, with all its predictability, is put on hold by something truly magical - something you wouldn’t expect.
It turns out we are biologically wired for novelty. The brain actually has a “novelty center.” Per Science Daily…
“…the major ‘novelty center’ of the brain--called the substantia nigra/ventral tegmental area (SN/VTA)--might be activated by the unexpectedness of a stimulus, the emotional arousal it causes, or the need to respond behaviorally. The SN/VTA exerts a major influence on learning because it is functionally linked to both the hippocampus, which is the brain's learning center, and the amygdala, the center for processing emotional information.”
We are wired to thrive on new things. Many might argue the newer the better. And that wiring sits at the nexus of emotion and learning. When you combine it with the ceaseless content stream we’re all exposed to, it’s no surprise there’s an innate urge among many of us to take that new-new thing and make it our own through development or invention, regardless of any real stated need.
Sadly, the term shiny object syndrome isn’t actually a clinical diagnosis even though it’s on display almost everywhere. I asked my AI assistant Pi about this and here’s what it said:
“While Shiny Object Syndrome (SOS) is a commonly used term within various fields such as management literature, popular psychological literature, and social and computer sciences, it is not a formally recognized clinical diagnosis.
However, some people believe that it shares qualities with Attention Deficit Hyperactivity Disorder (ADHD), such as impulsivity, restlessness, and a fear of missing out (FOMO). SOS can manifest as losing sight of the big picture, getting distracted by minor details, or constantly shifting goals.
It's important to note that while SOS is not a documented psychological condition, its symptoms can be disruptive to both individuals and businesses, leading to unfinished projects, missed timelines, and constantly evolving goals and roadmaps.”
Regardless of its status as a diagnosable condition, it’s clear that Shiny Object Syndrome is at play when thinking about techno-solutionism. We look at a novel application and our novelty center, sitting between our learning and emotional centers, kicks in. “Just think of the possibilities!”
Thought Experiment: Is Generative AI Solutionism?
It’s important to point out how solutionism is in many ways a rearview-mirror test. Per the logic of the critique, if the solution works, it must have been needs-based. After all, people don’t have time or appetite for things they don’t want or need. If it fails, it must have been the result of too much focus on the technology itself at the expense of rationalized utility. But this line of thinking ignores other factors such as timing, product-market fit, and technology maturity. So it’s likely more accurate to say solutionism is one potential reason a technology concept fails. This refined thinking helps us work through the question of whether Generative AI is a giant solutionist boondoggle or the true prophetic future its creators suggest.
Generative AI has its champions and detractors, just like solutionism. You can even put them on a linear scale, with OpenAI’s Sam Altman on the far right and AI entrepreneur and author Gary Marcus on the far left. Altman sees Generative AI as the inevitable conclusion of mankind’s journey. By his own admission, he and others like him are focused on creating the holy grail of Artificial General Intelligence, and he is AGI’s biggest cheerleader. Gary Marcus, on the other hand, sees the current era of Generative AI as a needless hype bubble that does its best to paper over the myriad flaws, inconsistencies, and engineering challenges standing between it and its purported promise. Both men are highly credible, with deep, demonstrated technology and AI expertise. So who do we believe?
Altman on AI via his interview with Time Magazine…
“I think AGI will be the most powerful technology humanity has yet invented” - particularly in democratizing access to information globally.
Marcus, meanwhile, counters in his own writing…
“If all we had was ChatGPT, we could say, ‘hmm, maybe hallucinations are just a bug,’ and fantasize that they weren’t hard to fix.
If all we had was Gemini, we could say, ‘hmm, maybe hallucinations are just a bug.’
If all we had was Mistral, we could say, ‘hmm, maybe hallucinations are just a bug.’
If all we had was LLaMA, we could say, ‘hmm, maybe hallucinations are just a bug.’
If all we had was Grok, we could say, ‘hmm, maybe hallucinations are just a bug.’
Instead we need to wake up and realize that hallucinations are absolutely core to how LLMs work, and that we need new approaches based on different ideas.”
There are other, equally credible experts like Andrew Ng and Michael Wooldridge who sit somewhere closer to the middle, wondering what all the hype is about - wanting to recognize the power and potential of these technologies while tempering overblown expectations. (Full disclosure: I tend to follow these two more than anyone, given their balance of aspiration and sobriety on the topic.)
I don’t know that we will find clues to the answer listening to Altman and Marcus duke it out. But we can find clues in recent headlines. Per cnbc.com’s recent article, AI engineers report burnout and rushed rollouts as ‘rat race’ to stay competitive hits tech industry…
Clue 1: Promise, but no real potential
“In an emailed statement to CNBC, an Amazon spokesperson said, the company is ‘focused on building and deploying useful, reliable, and secure generative AI innovations that reinvent and enhance customers’ experiences,’ and that Amazon is supporting its employees to ‘deliver those innovations.’”
Not to be crass, but this is some grade-A technobiz horseshit. Companies are spending billions of dollars and burning out their teams in the process. But for what? What sits over the horizon? Chatbots? Image generators? Are these the silver-bullet applications we’ve all been pining away for, if only we had the words? The answer is of course no. Ask yourself, without extrapolating: how much economic value do cartoon-like images really create? Granted, co-pilots and intelligent agents purport to reduce or potentially eliminate the need for developers. But per the Bureau of Labor Statistics, developers represent around 1.5% of the U.S. workforce with a median wage of around $90,000. There’s a huge gap between either of these use cases and the idea of a $7 trillion economic gain.
That same article quoted some of the most influential technology business leaders going on about how AI is a trillion-dollar market. Sam Altman believes it’s a seven-trillion-dollar market. It’s very telling how many people can tell you they’re putting all their chips down but come up short when asked where they’re putting them and why. It’s just a race to get chips on the table - a major precursor tell when considering whether Generative AI as a whole is techno-solutionist folly.
Non-business headlines tend to reinforce the view that the market is suffering from a collective dose of SOS. Both the hysterically negative:
Adobe's new generative AI tools for video are absolutely terrifying - I’ve seen it; it is not. In fact, part of the demo froze on stage.
Generative AI Is Coming for Video Games. Here's How It Could Change Gaming
As well as the overly optimistic positive:
“AI is the best thing to ever happen to content creators in the web3 era” - setting aside the fact that Web3, DeFi, NFTs, and crypto could all just as easily be seen as techno-solutionist Hindenburgs.
Generative AI will be designing new drugs all on its own in the near future - ignoring that drug ideas are the easy part; it’s the trial and approval process that takes years.
These headlines tend to obscure more sober takes like:
This Seemingly AI-Generated Car Article On Yahoo Is A Good Reminder That AI Is An Idiot
Perhaps the most telling headline comes from Altman himself when he was recently quoted as saying,
“Whether we burn $500 million a year or $5 billion—or $50 billion a year—I don’t care, I genuinely don’t,” he continued. “As long as we can figure out a way to pay the bills, we’re making AGI. It’s going to be expensive.”
AGI is an endpoint, but it still lacks demonstrated value for a stated need. The idea that he would publicly double down on burning endless capital in pursuit of a deliverable rather than a solution strongly implies that Generative AI, and the resulting Holy Grail of AGI, is a techno-solutionist paradise.
Shiny Object? Check…
Tech-first thinking? Check…
Clue 2: History repeating?
Remember the Google Glass example? Remember how it was a failure for one audience but not another, making the application solutionist for one group but not the other? That could be at play here as well.
Scott Galloway is a well-known and highly regarded voice when it comes to the business of technology. He’s written several books, including The Four: The Hidden DNA of Amazon, Apple, Facebook, and Google. He also writes a blog called No Mercy/No Malice.
He recently penned a post entitled Corporate Ozempic that got a lot of attention. The thrust of the article was that companies are using headcount reduction to cut weight from the corporate body and improve financial performance, much like Ozempic cuts weight from the human body - with AI as the secret ingredient. Per his post…
“If you want to understand how AI is reshaping business, picture it as the other massive innovation of our time: GLP-1 drugs. Both shed weight by suppressing cravings; both exacerbate existing inequities (aka the rich get richer) before generating wider prosperity; and both are having a greater impact than projected as early adopters are hesitant to admit they’re using.”
Following this chain of reasoning, it’s easy to see how the issue isn’t whether Generative AI is techno-solutionist. Rather, it’s so big that it can be different things to different people. Just as Google Glass AR technology was a fail for the average consumer but a win for Chinese authorities, Generative AI might be a fail for the broader market but highly valuable for margin-conscious executives feeling pressure to add a few points to the bottom line. For those executives, Generative AI is a solution to the headcount and operating-margin question.
This may sound cold or critical, but the point isn’t to judge. It’s to clarify, and the picture seems pretty clear: Generative AI is both solutionist and functionally valuable at the same time, depending on which stakeholders you’re talking about.
But these are just my thoughts. What do you think? I’d really like to know.