On different worldviews
AI 2027 versus AI as Normal Technology
AI 2027, a research-backed fictional story that predicts rapid AI progress followed by human extinction within a few years, was published a few months ago. It went viral (JD Vance has reportedly read it), and it's fair to say it represents the views of a significant portion of those working on AI safety.
This is in stark contrast to AI as Normal Technology1 (henceforth AIANT), a position that talks about AI development as, you guessed it, normal technology. It may not have gone as viral as AI 2027, but it gained a fair bit of attention.
(On a side note, people from both camps even got together to have a debate, although there wasn't enough time for them to properly talk through their cruxes.)
I've read a fair bit of the writing from both camps, and long story short: I think their difference is less about concrete predictions and more about worldview. What I mean by this is that AI 2027 uses a primarily AI-centric framing, while AIANT takes a framing that doesn't put AI at the centre.
But before I continue, full disclosure: I personally find myself much more aligned with the AIANT camp, so I'm obviously a bit biased.
So what’s the AI 2027 worldview about? It’s the view that AI is a big deal — not just any other big deal, but the biggest of big deals. It’s the thing that’s going to completely change the world. And these changes are going to be unfathomably drastic. AI becomes the centre of the universe.
Hence, the focus is on things like forecasting AI capabilities, model evaluations, and talking about AI takeover scenarios. It's all about the AI (or the AIs). AIs are going to be big and scary, they will develop dangerous capabilities and exhibit scary behaviors, and they will take over control from humans. There is a great deal of (selective?) anthropomorphization: they will deceive, scheme, manipulate, fake alignment, reward hack, you name it. But they are also going to have alien values, misaligned enough to lead to catastrophe.
On the other hand, AIANT thinks of AI as a big deal too, just not in an extraordinary way: it's like the other big deals that have happened throughout history. It's obviously going to change the world, but the world is always changing anyway. The world is what it is, and AI is part of it, just like every other technology.
Hence, the focus is on the diffusion of technology, its impact on society, and improving resilience. It's about the world being the way it is, with AI slowly but surely becoming a bigger part of it.
This kind of difference in worldview is ubiquitous. Discussions on adjacent topics have featured similarly contrasting worldviews. For example, there is an argument that we currently live in the most important century, and there is also a counterargument that we don't. As for worldviews you may come across in everyday life, you could say that those on the political left hold a worldview that centres on injustice towards the oppressed, while those on the political right hold one that centres on threats to social order. Without going into their more detailed arguments, neither is obviously wrong, but each looks at the world through a specific lens and disproportionately focuses on certain things.
So sometimes people look at things through a different lens, using a different framing, with a different worldview, whatever you want to call it. Does it actually change anything, though? Isn’t it just a matter of different people seeing the world through diverse perspectives?
I think a lot changes depending on the worldview. For a start, research agendas are shaped by these worldviews. In line with the AI 2027 worldview, a significant part of the 'AI alignment' agenda is basically about how to prevent superintelligent AIs from going rogue. For example, see Open Philanthropy's research areas for their request for proposals (disclosure: I am a recipient of a similar grant), and UK AISI's Alignment Project's research areas. By contrast, I'm not sure the AIANT worldview drives much of a research agenda: their position is not quite the 'opposite' of, say, AI 2027, but more like the null hypothesis. In this sense, it's not really a position per se; it's more like the absence of a strong position that argues for a specific thing.
So where does this leave us? Honestly, I don’t know. You can’t just get someone to see things from a different worldview overnight. Worldviews are shaped by a lot of things, have all sorts of implicit assumptions, and are reinforced all the time.
But for a start, beyond trying to challenge our beliefs and arguments, we could also try to go a few levels deeper and see what worldviews these hinge upon.
When I first thought of writing this post, I had not yet found their new Substack, also called AI as Normal Technology. In their guide, they discuss how their worldview differs from that of AI 2027. Maybe their post makes mine pointless, but whatever, I thought I should just say what I wanted to say anyway.

