Software Is Eating Democracy
We have to get smart about the risks of even "good" AI.
Here is a photo of the first controlled flight—it’s the Wright Brothers in 1903:
Here’s where that moment led to in 1969:
That’s 66 years of progress. Imagine you were 10 years old standing around at Kitty Hawk and I told you that before you died, this flimsy, two-wing, half-glider would evolve into a rocket ship that would put a man on the moon. Would you have believed me?
Technology can advance at a terrifying pace.
Not always. We are 114 years into the Age of the Zipper, yet the zipper on your jacket is pretty much the same thing we saw in 1909. But maybe you don’t think of zippers as technology.
Let’s look at cars.
The first car dates to around 1800. It was a steam-powered curio and it took almost a century to get to something approximating a modern car—a mass-produced vehicle with an internal combustion engine.
The modern automobile is significantly different from all of those early vehicles. However, the differences are evolutionary, not revolutionary: bigger, faster, cheaper, with different fuels and advanced safety, sure. But even 220 years in, these vehicles are foundationally the same: powered conveyances designed to carry individuals through two-dimensional space.
Point being: Not all technology undergoes rapid, revolutionary change.
So which one is AI? Is it flight or the automobile?
The first networked computer system was ARPANET, built in 1969. Over the last 54 years we’ve gone from a computing network that could transfer small files between remotely connected terminals to this:
Yes, I know. Having AI mimic Darth Vader is small beer. What I want you to focus on is the speed. We’re only 54 years into the era of networked computers. AI is much more likely to be a revolutionary technology (like controlled flight) than an evolutionary technology (like the car).
And revolutionary technologies move incredibly fast.
Tim talked about AI with Scott Galloway last week and their conversation is very much worth your time.
Galloway is bullish on AI. He believes that AI advances will increase productivity, add value, and (eventually) create more jobs than they destroy. He believes that some sectors will see tremendous benefits from AI—particularly healthcare.
I’m willing to believe much of that. Certainly healthcare is the most promising place for AI. The prospect of AI assisting in radiology, for example, could be extremely valuable. Ditto drug development.
But there are two large-scale risks AI presents to the liberal order.1
The first is labor displacement. Let’s pretend that AI creates another economic revolution. How fast will that revolution proceed?
Pretty damn fast. Probably faster than the Industrial Revolution.
Fast revolutions cause more displacement because they deny existing systems time to adapt. So even if, in the medium-run, AI creates more jobs than it destroys, the short-run is important, too.
Because labor displacement leads to social unrest and political instability. I’m not sure how much slack our society has right now to absorb any more of those.2
The second risk is more specific and Galloway talks about it at some length with Tim: In Q1 and Q2 of 2024, we are likely to see massive efforts from the Russians and Chinese to leverage AI into helping Donald Trump become president again.
It is not clear to me that either the media or the American citizenry is prepared to deal with the impact of AI on a presidential election.
Techno-optimism is one thing. Techno-complacency is another.
A dozen years ago Marc Andreessen observed that “software is eating the world.” What he meant was that computer software was touching every aspect of our lives: from how the internal systems of our cars worked to how we rented hotel rooms, bought books, found mates, or hailed taxis.
Six decades into the computer revolution, four decades since the invention of the microprocessor, and two decades into the rise of the modern Internet, all of the technology required to transform industries through software finally works and can be widely delivered at global scale.
Andreessen’s observation was correct.
What nobody thought to ask was: Will software also eat democracy and/or the liberal order?
Or to put it another way: Why would democratic institutions and the liberal order be immune to the effects of this revolution when literally no other aspect of society has been?
AI is a toy today. It’s the chatbot that answers questions with varying degrees of accuracy. It’s a picture-maker. It’s an audio-processing algorithm. It can mimic Darth Vader.
But we’re in the early stages and the technology is moving fast. We can already see some of the risks. Others will become clear as AI develops and deploys further. Still others won’t reveal themselves until we’re already in the midst of a problem.
Sure, maybe AI won’t eat the world, too.
But I wouldn’t bet against it.
Every day I try to help you see around corners. Things may seem fine today, but we are on a collision course with illiberalism. I’m trying to help us wrap our heads around this reality, right now. Bulwark+ members help sustain our work and make it available to the widest possible audience. Upgrade today to join our community.
You can cancel any time.
3. Gun Guy
ProPublica profiles the man who made the AR-15 into an industry.
When the public asks, “How did we get here?” after each mass shooting, the answer goes beyond National Rifle Association lobbyists and Second Amendment zealots. It lies in large measure with the strategies of firearms executives like [Richard E.] Dyke. Long before his competitors, the mercurial showman saw the profits in a product that tapped into Americans’ primal fears, and he pulled the mundane levers of American business and politics to get what he wanted.
Dyke brought the AR-15 semi-automatic rifle, which had been considered taboo to market to civilians, into general circulation, and helped keep it there. A folksy turnaround artist who spun all manner of companies into gold, he bought a failing gun maker for $241,000 and built it over more than a quarter-century into a $76 million business producing 9,000 guns a month. Bushmaster, which operated out of a facility just 30 miles from the Lewiston massacre, was the nation’s leading seller of AR-15s for nearly a decade. It also made Dyke rich. He owned at least four homes, a $315,000 Rolls Royce and a helicopter, in which he enjoyed landing on the lawn of his alma mater, Husson University.
Although his boasts of military exploits and clandestine derring-do caused associates to roll their eyes, he was actually no gun enthusiast. As a teenager, he dreamed of becoming a professional dancer. Once, when his brother Bruce persuaded him to go deer hunting, Dyke sat in his Jeep reading The Wall Street Journal, rifle out of reach as a deer ambled safely past.
Along the way, Dyke and his team capitalized on the very incidents that horrified the nation. Sales typically went up when a mass killer used a Bushmaster. After a pair of snipers in the Washington, D.C., area murdered 10 people with a Bushmaster rifle in 2002, Dyke’s bankers noted that the shootings, while “obviously an unfortunate incident … dramatically increased awareness of the Bushmaster product and its accuracy.”
1. For the sake of this discussion we’re going to assume that both the forms of AI we have now and any AGI we create won’t be apocalyptic world destroyers. Maybe we shouldn’t stipulate to that! But that’s a different conversation. So just roll with me.
2. We are 30 years past NAFTA, and the political and social instability created by the birth of a globalized economy is still increasing. Not great, Bob.