What Peter Thiel gets wrong about existential risk.
Peter Thiel — tech billionaire, libertarian polemicist, Trump donor — recently gave a speech at the Oxford Union, one of the oldest and most prestigious student debating societies in the world, to kick off its 200th year. That’s hardly news — we’ve all heard Thiel’s spiel on campus conformity and how only tech can save us many times before.
But my ears pricked up this time as he specifically criticized my field. I’m an existential risk researcher at Cambridge University, where my colleagues and I study the risks from nuclear and biological weapons, climate change, and emerging technologies such as synthetic biology and artificial intelligence. All of these pose severe risks — we think it’s plausible that one or more of them could lead to civilizational collapse or extinction, affecting everyone alive today. Like many in the effective altruism community, I think tackling these risks is a key priority of our time.
Thiel seems to have had a passing interest in these topics a decade ago, speaking at some conferences and donating some money. But to my knowledge he has not engaged with the existential risk reduction community for as long as I have been involved. Instead, he seems increasingly interested in seasteading and the alt-right.
So why was he criticizing the field of existential risk reduction? Thiel seems to suggest that we in the community are Luddites, bearing some responsibility for the stagnation in real wages and technological progress since the 1970s. He claims a leading cause of this stagnation is that scientists have effectively become too scared of their own technology. He told the Oxford Union audience that “the single answer as to why it is stalled out on the part of the universities is something like science and technology are just too dangerous.”
Existential risk isn’t a hoax
I don’t want to minimize the situation. It really is true that real wages for many workers in the UK and US have been stagnant since the 1970s, especially through the last grueling decade of austerity. And too much technical effort and venture capital have been invested in e-commerce (like PayPal), social media and online advertising (like Facebook), and surveillance (like Palantir). (What’s the connection? All three companies helped to make Thiel’s estimated $8 billion fortune.)
But as someone who spends a fair amount of time encouraging technologists to consider their responsibilities for the technology they create, let me say that an overabundance of fear is not typically what I encounter.
Powerful technologies are often “dual-use”: They can be used to help or to harm people. Take nuclear physics. Nuclear weapons are in many ways the original existential risk, the one that has loomed over the world since 1945. However, nuclear power is also a reliable, zero-carbon source of electricity.
Advances in biotechnology enabled the quickest vaccine rollout in history, as well as further medical breakthroughs. But they also enable “gain-of-function” experiments where scientists purposely try to make diseases more virulent.
Language models like ChatGPT are wonderful, but in the wrong hands they could enable the mass production of disinformation, as we warned in “The Malicious Use of Artificial Intelligence” in 2018. The problem with arguing for “more speed” is that these dual-use technologies are already outpacing our ability to manage them.
Thiel is simply wrong if he thinks that “slow it down” is the only response. The Founders Pledge Climate Change Fund, run by a community of entrepreneurs who pledge to donate a portion of their exit earnings to charity, tries to speed up innovation in low-carbon concrete and steel. Alvea is a biotech startup aiming to speed up vaccine production. Anthropic is an AI safety company trying to speed up interpretability and alignment. We call this “differential technology development”: speeding up safe or defensive tech relative to harmful or offensive tech.
But it’s also clear that the old Facebook motto of “move fast and break things” won’t work. Just speeding up technology won’t be enough to keep us safe. We need sensible domestic regulation: supporting the green transition, raising safety requirements in biological labs, and ensuring that high-risk AI systems go through safety tests. We need international agreements, like the Paris Agreement and the nuclear arms control treaties that Donald Trump — whose presidential campaign Thiel donated to — ripped up.
A libertarian approach to X-risk doesn’t work
But Thiel doesn’t seem to want this. He’s an Ayn Rand libertarian. On ideological grounds, he doesn’t believe that government action can help. He thinks regulation makes things worse. When asked by an Oxford Union audience member about how he “would fix” the UK’s National Health Service (NHS), he said we need to get over our “Stockholm syndrome” as a country and privatize it already. Everyone is entitled to their own opinions, even billionaires. But it’s a fringe ideology. And it’s one that could do a huge amount of harm.
Thiel seemingly adopts this fend-for-yourself mentality in his planning too. He has long had a bolt-hole that he could escape to in case of societal collapse. In 2015, he bought a 193-acre plot of land (bigger than the Disneyland theme park in California) on the South Island of Aotearoa New Zealand for a reported $13.5 million. In May 2022, he was denied planning permission to build a luxury lodge on the plot with space for 24 guests, a theater lounge, a spa, and a “meditation pod.”
A mate of mine was traveling around the South Island a few years ago and had a pint at a pub. One of the locals pointed out a spot on a hill: “See there? That’s Peter Thiel’s house. If anything goes bad, that’s where we’ll go for food and water.”
It is a fantasy to think that existential risk can be reduced — or survived — just with individual action, the invisible hand of the market, and a “go faster” sign. Reducing it also requires collective action, wisdom and patience, and sensible and proportionate regulation.
That’s the approach that the existential risk and effective altruism communities are taking. But that, unfortunately, is the approach that Thiel appears to disagree with.
Haydn Belfield has been academic project manager at the University of Cambridge’s Centre for the Study of Existential Risk (CSER) for the past six years. He is also an associate fellow at the Leverhulme Centre for the Future of Intelligence.