Antidote to Situational Awareness
Context: In June 2024, Leopold Aschenbrenner, a former OpenAI researcher, published an essay called "Situational Awareness" calling for a Manhattan Project to build artificial superintelligence. The essay deeply influenced the national security apparatus and the current administration (it was retweeted by Ivanka Trump), pushing the United States toward a more pro-AI stance. While most of the essay, especially the first half analyzing AI research, is genuinely high quality, the section justifying the Manhattan Project rests on faulty assumptions and reaches the wrong conclusions.
More recently, an essay of similar quality called "AI 2027" was published and pushed in the opposite direction. Despite reach comparable to Situational Awareness, it failed to shift the national security debate because it focused only on the AI takeover scenario and made no attempt to dismantle the race dynamics themselves.
What I want to do is write a new essay (target length 100-140 pages) of similar quality and style, with the main message that creating artificial superintelligence ends in disaster in ANY scenario: both the controlled and the uncontrolled paths end in mass extermination, and a "worthy successor" is impossible with artificial intelligence.
I then dismantle the traditional narrative that frames the race to superintelligence as a Prisoner's Dilemma and propose the game theory model that actually represents the underlying dynamics, one in which permanently stopping superintelligence is a stable equilibrium (see the sketch below). I explain why we race anyway and how to get the great powers to adopt the correct strategy. Finally, I analyze in detail the consequences of delaying action, focusing on the political and economic aftermath from a national security perspective, to convey the urgency once the reader is convinced that stopping superintelligence is inevitable.
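To make the structural point concrete, here is a minimal sketch of the difference between the two framings. The payoff numbers and the coordination-game (Stag Hunt style) framing are my illustrative assumptions for this pitch, not the model the essay will ultimately defend: in a Prisoner's Dilemma, mutual racing is the only pure-strategy equilibrium, while in a coordination game, a mutual stop is itself an equilibrium that neither side can profitably defect from.

```python
# Illustrative sketch only: the payoff numbers and the Stag Hunt-style
# coordination framing are hypothetical assumptions, not the final model.
from itertools import product

def pure_nash_equilibria(payoffs):
    """Return the pure-strategy Nash equilibria of a two-player game.

    payoffs maps (row_action, col_action) -> (row_payoff, col_payoff).
    """
    row_actions = sorted({a for a, _ in payoffs})
    col_actions = sorted({b for _, b in payoffs})
    equilibria = []
    for a, b in product(row_actions, col_actions):
        # (a, b) is an equilibrium if neither player gains by deviating alone.
        row_ok = all(payoffs[(a, b)][0] >= payoffs[(x, b)][0] for x in row_actions)
        col_ok = all(payoffs[(a, b)][1] >= payoffs[(a, y)][1] for y in col_actions)
        if row_ok and col_ok:
            equilibria.append((a, b))
    return equilibria

# The standard framing: racing ("Build") strictly dominates restraint, so
# mutual racing is the only equilibrium and any agreed stop unravels.
prisoners_dilemma = {
    ("Stop", "Stop"): (3, 3), ("Stop", "Build"): (0, 5),
    ("Build", "Stop"): (5, 0), ("Build", "Build"): (1, 1),
}

# A coordination-game framing (hypothetical payoffs): if building
# unilaterally pays less than a verified mutual stop, then stopping is
# itself a stable equilibrium that neither side profits from abandoning.
coordination_game = {
    ("Stop", "Stop"): (4, 4), ("Stop", "Build"): (0, 3),
    ("Build", "Stop"): (3, 0), ("Build", "Build"): (3, 3),
}

print(pure_nash_equilibria(prisoners_dilemma))  # [('Build', 'Build')]
print(pure_nash_equilibria(coordination_game))  # [('Build', 'Build'), ('Stop', 'Stop')]
```

Under these assumed payoffs the coordination game has two pure equilibria: mutual racing, where fear currently keeps the great powers, and mutual stopping, which is self-enforcing once reached. That structural difference, two equilibria instead of one, is what the essay's model is meant to capture: it explains why the race persists while showing that a coordinated stop, once reached, does not unravel.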