ChatGPT and OpenAI: We Are *Totally* Forked
Even if it never becomes self-aware, AI is going to screw over the internet and everyone else.
Every week I highlight three newsletters.
If you find value in this project, do two things for me: (1) Hit the Like button, and (2) Share this with someone.
Most of what we do in Bulwark+ is only for our members, but this email will always be free for everyone.
Entirely possible that when we look back on 2022, the most significant event will be the release of ChatGPT. OpenAI might well be the revolution that everyone thought blockchain was.
For instance: Hackers are already using ChatGPT to help them build malware.
Over at Platformer—the best tech newsletter I read—Casey Newton writes about the possibility of using “radioactive data” to save the internet from a flood of AI-generated text:
Bit by bit, text generated by artificial intelligence is creeping into the mainstream. This week brought news that the venerable consumer tech site CNET, where I worked from 2012 to 2013, has been using “automation technology” to publish at least 73 explainers on financial topics since November. . . .
This week the New York Times’ Cade Metz profiled Character A.I., a website that lets you interact with chatbots that mimic countless real people and fictional characters. The site launched last summer, and for the moment leans heavily on entertainment uses — offering carousels of conversations with anime stars, video game characters, and the My Little Pony universe.
Sorry—let’s pause to contemplate what Character AI + Rule 34 will create.
Okay. Moving on:
When you read an article like “What Is Zelle and How Does It Work?,” the text offers no clear evidence that it was generated using predictive text. (The fine print under the CNET Money byline says only that “this article was assisted by an AI engine and reviewed, fact-checked and edited by our editorial staff”; the editor’s byline appears as well.) And in this case, that probably doesn’t matter: this article was created not for traditional editorial reasons but because it satisfies a popular Google search; CNET sells ads on the page, which it generated for pennies, and pockets the difference.
Over time, we should expect more consumer websites to feature this kind of “gray” material: good-enough AI writing, lightly reviewed (but not always) by human editors, will take over as much of digital publishing as readers will tolerate. Sometimes the true author will be disclosed; other times it will be hidden.
The quiet spread of AI kudzu vines across CNET is a grim development for journalism, as more of the work once reserved for entry-level writers building their resumes is swiftly automated away. The content, though, is essentially benign: it answers reader questions accurately and efficiently, with no ulterior motives beyond serving a few affiliate links.
What if it did have ulterior motives, though? That’s the question at the heart of a fascinating new paper I read this week, which offers a comprehensive analysis of how AI-generated text can and almost certainly will be used to spread propaganda and other influence operations — and offers some thoughtful ideas on what governments, AI developers, and tech platforms might do about it.
That said, I am . . . not optimistic?
At the dawn of the internet—or at least the beginning of mass adoption of the internet—there was a lot of worry about how, on the internet, nobody knew you were a dog.
We worried about the disintermediation of gatekeepers. We worried about the spread of misinformation. We worried that having a couple hundred million Americans anonymously shouting at each other would be bad for social cohesion.
And guess what?
Mission Accomplished. Thirty years in, we still haven’t solved these problems.
Did the internet give us good stuff, too? Sure. It has delivered tremendous value to society. But at a not-insignificant cost. On the whole, the internet is (probably) a net good. But that’s not the point. The point is that we saw a bunch of the problems early and even with a lot of brain power and resources thrown at them, we still couldn’t solve them.
OpenAI—even a non-scary, non-apocalyptic AI—seems likely to cause a bunch of problems, too. Many of them foreseeable.
And also maybe not solvable?
2. Freddie deBoer
Freddie deBoer also looked at AI this week, but through a different lens:
[I]t’s important that everyone understand what this kind of AI is and is not doing. Let’s pick one particular issue for AI that must parse natural language: the dilemma put forth by Terry Winograd, professor of computer science at Stanford. (I first read about this in this excellent piece of AI skepticism by Peter Kassan.) Winograd proposed two sentences:
The committee denied the group a parade permit because they advocated violence.
The committee denied the group a parade permit because they feared violence.
There’s one essential step to decoding these sentences that’s more important than any other step: deciding what the “they” refers to. (In linguistics, they call this coindexing.) There are two potential within-sentence nouns that the pronoun could refer to, “the committee” and “the group.” These sentences are structurally identical, and the two verbs are grammatically as similar as they can be. The only difference between them is the semantic meaning. And semantics is a different field from syntax, right? After all, Noam Chomsky teaches us that a sentence’s grammaticality is independent of its meaning. That’s why “colorless green ideas sleep furiously” is nonsensical but grammatical, while “gave Bob apples I two” is ungrammatical and yet fairly easily understood.
But there’s a problem here: the coindexing is different depending on the verb. In the first sentence, a vast majority of people will say that “they” refers to “the group.” In the second sentence, a vast majority of people will say that “they” refers to “the committee.” Why? Because of what we know about committees and parades and permitting in the real world. Because of semantics. A syntactician of the old school will simply say “the sentence is ambiguous.” But for the vast majority of native English speakers, the coindexing is not ambiguous. In fact, for most people it’s trivially obvious. And in order for a computer to truly understand language, it has to have an equal amount of certainty about the coindexing as your average human speaker. In order for that to happen, it has to have knowledge about committees and protest groups and the roles they play. A truly human-like AI has to have a theory of the world, and that theory of the world has to not only include understanding of committees and permits and parades, but apples and honor and schadenfreude and love and ambiguity and paradox….
The punchline is that ChatGPT actually can solve this coindexing test. It can navigate semantics even without a theory of the world.
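To see why deBoer thinks coindexing requires a theory of the world, here is a minimal toy sketch—not anyone’s real system—of what an old-school symbolic resolver would have to do: hard-code the relevant world knowledge (committees fear violence; groups seeking permits advocate it) as an explicit lookup. The `WORLD_KNOWLEDGE` table and `resolve_they` function are hypothetical illustrations, and the point is precisely that this approach doesn’t scale past the facts you bother to write down.

```python
# Toy illustration of Winograd-style coindexing (hypothetical code,
# not a real NLP system). Both sentences are structurally identical;
# syntax alone cannot pick the antecedent of "they." A classical
# symbolic resolver must hard-code facts about the world.

WORLD_KNOWLEDGE = {
    # verb -> which noun phrase "they" plausibly refers to
    "advocated": "the group",    # permit-seekers advocate violence
    "feared": "the committee",   # permit-deniers fear violence
}

def resolve_they(sentence: str) -> str:
    """Pick the antecedent of 'they' from the verb that follows it."""
    clause = sentence.split("because they ", 1)[1]
    verb = clause.split()[0]
    return WORLD_KNOWLEDGE.get(verb, "ambiguous")

s1 = "The committee denied the group a parade permit because they advocated violence."
s2 = "The committee denied the group a parade permit because they feared violence."

print(resolve_they(s1))  # the group
print(resolve_they(s2))  # the committee
```

Every new verb—and every new fact about committees, parades, and permits—would need another hand-written entry, which is deBoer’s point about needing a theory of the world. What’s striking is that ChatGPT resolves these sentences without any such explicit table.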
There’s this old bromide about AI, which I’m probably butchering, that goes something like this: if you’re designing a submarine, you wouldn’t try to make it function exactly like a dolphin. In other words, the idea that artificial intelligence must be human-like is an unhelpful orthodoxy, and we should expect artificial general intelligence to function differently from the human brain.
Ultimately, Freddie doesn’t find this line of argument very compelling. I’m not sure whether I agree.
What do you guys think?
One last bit about AI: Last week Ben Thompson tried to think through what AI will mean to tech’s Big Five: GOOG, AAPL, META, MSFT, and AMZN.
The most interesting case is Google, which Thompson believes is uniquely vulnerable to disruption from AI:
Google invented the transformer, the key technology undergirding the latest AI models. Google is rumored to have a conversation chat product that is far superior to ChatGPT. Google claims that its image generation capabilities are better than DALL-E or anyone else on the market. And yet, these claims are just that: claims, because there aren’t any actual products on the market.
Why is AI dangerous for Google? Because the very idea of “search” could shift from today’s page of links to something that works more like ChatGPT.
For example: You want to know how to change a tire. Today you go to Google and type “how to change a tire” and you get a bunch of links to videos and websites which will tell you how to change tires—along with a bunch of ads that Google is being paid to serve.
In an AI world, you type “how to change a tire” and ChatGPT simply explains how to change a tire to you.
Could Google do this right now? Probably. The problem is: How do you sell ads against that kind of search?
Google’s empire is based—even today, 25 years in—on ad-based search. Advertising still accounts for close to 80 percent of the company’s revenue, and search ads are the biggest piece of it.
Thompson expanded on this danger in a second newsletter:
To what extent should [Google] care about tail risk, and screw up their current business model to respond to something that may never end up being a thing? Go back to the last time Google was thought to be in trouble, when the rise of apps led to widespread predictions that vertical search apps would peel away Google’s market share; Google significantly overhauled Search to deliver vertical specific results for things like local, travel, etc., and introduced answers for common questions. Google also got a lot faster — it’s notable that “I’m feeling lucky” doesn’t actually exist in practice because Google delivers search results now as you type. . . .
I suspect that Google will try to take a similar tack now: it helps that the current iteration of chat interfaces are mostly useful for questions and topics that aren’t particularly monetizable anyways. Google could very well introduce chat-like responses, with the option to go deeper, for the topics that make sense, while still delivering search results for everything else, including the questions that actually monetize. And, frankly, it will probably work. Distribution and habit really matter, and Google dominates both.
And if you find this newsletter valuable, please hit the like button and share it with a friend. And if you want to get the Newsletter of Newsletters every week, sign up below. It’s free.
But if you’d like to get everything from Bulwark+ and be part of the conversation, too, you can do the paid version.