This past weekend, my husband participated in a Berkeley conference sponsored by the LessWrong community, an online group focused on rationality and decision-making. Today he’s sporting a t-shirt he got there, which features the title of a forthcoming book by two leaders in the community, Eliezer Yudkowsky and Nate Soares.
(And what a nifty way to publicize a book – send people out into the world wearing its provocative title!)
The authors are deeply concerned about the prospects for artificial intelligence. Recently it’s become very convenient to use AI to answer quick questions – as long as one then follows up by checking the sources, since today’s AI is known to “hallucinate.” In the future, however, we may be in for some serious problems.
The main issue is that it would be impossible for humans to fully supervise an artificial superintelligence (ASI). It may set its own goals, perhaps even hiding them from us – and there’s no way to guarantee that those goals would be compatible with our continued existence. It may take over all the resources it needs to meet those goals, and any side effects on humans and the rest of the natural world would be irrelevant to it. I like this paragraph from a booklet my husband brought home from the conference:
“Unless it has worthwhile goals, ASI will predictably put our planet to uses incompatible with our continued survival, in the same basic way that we fail to concern ourselves with the crabgrass at a construction site. This extreme outcome doesn’t require any malice, resentment, or misunderstanding on the part of the ASI; it only requires that ASI behave like a new intelligent species that is indifferent to human life, and that strongly surpasses our intelligence.”
Some leaders in the field estimate the extinction risk for humanity at 10% in the coming decades; others think the risk is considerably higher.
Okay.
So I wanted to stop and think for a moment about the emotional impact of the book title. It’s a meta-narrative – a story-like idea about a group (all of humanity) and its status over time (we could all die!). It’s also a highly emotionally laden expression, fully charged with what I call “salience markers” to grab our attention, associate an idea with emotion, and potentially inspire us to act (vote, tell our neighbors, buy a book, attend a rally, change our entire lives).
My concern is that the title is so extreme that it could very well provoke the same type of denial we see with climate change.
As we all know, climate change has been in the news for decades now. Although scientists and activists alike have stressed that we’re in it for the long haul whether we act quickly or procrastinate, many activists have used very strong statements to get attention, such as “We have only 12 years to save the Earth” – otherwise “billions will die.”
The problem is, whenever we encounter something so strongly phrased, our first impulse is to defuse the threat. The simplest way to do that is to discredit it, not to fix the underlying problem. The stronger the statement, the greater the desire for denial.
Factual accuracy is not relevant in this context.
And when the problem is new and strange and something only a very few people know much about, it’s probably even easier to dismiss.
This book title uses salience markers amped up to the max. “All” and “everyone” are extremes, focusing on thoroughness. There’s an us/them binary pitting “us” against “anyone” – a threat that could come from anywhere – which also ties into the family of salience markers that references hidden information. And of course “kill us” and “dies” evoke that highly potent life/death binary. The authors are using language for maximum emotional impact.
That means the motivation to dismiss what they’re talking about as melodrama is also maximized.
One advantage that Yudkowsky and Soares have over the climate situation is that they’re experts themselves. Although concern about climate change was originally bipartisan, many of us learned about it from Al Gore, and he severely damaged the cause simply by being so obviously partisan. He’d been the Democratic vice president, after all, making him a “them” to Republicans who see U.S. politics as an us/them binary. If Gore had made a point of always appearing with a conservative who shared his concerns, like a high-ranking member of the military or a leader from our Christian communities, we’d probably be much further along in addressing the problem by now. We don’t know as much about Yudkowsky and Soares, so we’re much less likely to dismiss them as one of “them” – a group our very identity sets us against.
Another advantage is that superhuman AI requires investment – a lot of work would have to go into making it possible – whereas climate change is what happens if we do nothing. In theory, we could solve the problem simply by stopping work on AI development. Or, at a minimum, we could figure out how to align the interests of future superhuman AI with those of the rest of the planet, including humanity. It’s my understanding that many in today’s AI community are researching that very problem.
A third advantage is that AI is more like CFCs than climate change – if we can get the world’s leaders to agree, regulations could more or less solve the problem for us. We should note, however, that the GOP’s “Big Beautiful Bill” would severely constrain our ability to supervise whatever it is the tech bros are brewing up.
Another complication is the partisan nature of messaging style. Ever since Newt Gingrich, Republicans have understood the importance of using emotionally charged language to engage the public. As George Lakoff has explained at length, however, the Democrats have long emphasized facts and figures and “rational” argument – the kind of discussion that’s perfect for ironing out the details of policy but not for the earlier and at least equally important step of selling their ideas to the public. As long as the Republican party is beholden to its high-tech backers, it’s unlikely to agree to regulate them, and the Democrats aren’t even listening to language like “kill us all” because it’s the very opposite of cut-and-dried.
If the authors of this new book want to sway public opinion, it’s vital that they get their message right. Perhaps they’ve already weighed the risks of appearing overly dramatic – or perhaps their publisher insisted on a title like that, and they decided the gamble was worth it.
In the environmental literature, though, books like The Population Bomb have had problematic histories – sparking a wave of activism and action, but also a backlash in which people fixate on the book’s more extreme speculations and dismiss the entire issue out of hand because “it never came to pass” (as if the book itself hadn’t played a role in averting the crisis). I hope the same thing doesn’t happen to Yudkowsky and Soares.
One of my colleagues (Branden Johnson) is embarking on a study of public views of global catastrophic risks and extinction events. I’ll be interested to hear what he learns and whether the public is yet aware of superhuman AI as a risk.
Meanwhile, we should probably all be educating ourselves, right?
Sources:
“Unless it has worthwhile goals”; extinction risk: Machine Intelligence Research Institute. “The Problem.” 2025.
“12 years”: Wagner, Gernot and Constantine Samaras. “Do We Really Have Only 12 Years to Avoid Climate Disaster?” New York Times, 19 September 2019. At https://www.nytimes.com/2019/09/19/opinion/climate-change-12-years.html
“billions will die”: Lovell, Jeremy. “Gaia scientist Lovelock predicts planetary wipeout.” Reuters, 20 January 2007.