I've recently been reading Leopold Aschenbrenner's "Situational Awareness" and listening to him on Dwarkesh Patel's podcast. His insights have genuinely expanded my perspective as an AI practitioner. As developers, we often focus on the technical aspects of AI without considering the broader geopolitical implications. Leopold's points highlight the need for a collaborative effort to steer AI systems that could lay the foundation for Artificial Superintelligence (ASI).
Think of situational awareness as understanding what's happening around you and anticipating what it might mean for the future. Leopold argues that only a few people really grasp how rapidly AI is developing and the real implications it holds.
In this blog post, I'll summarize the key points from each section of Leopold's series in layman's terms.
I. From GPT-4 to AGI: Counting the OOMs
Orders of Magnitude (OOMs) and AI Development
An order of magnitude (OOM) is a tenfold increase, so counting OOMs means tracking how many factors of ten AI capability inputs have scaled. Leopold argues that AI is growing rapidly and that by 2027 we might achieve Artificial General Intelligence (AGI): AI that can understand, learn, and apply knowledge as well as a human. This trajectory is driven by trends in compute, algorithmic efficiencies, and unhobbling gains.
- Compute Scaling: AI models have consistently benefited from increases in compute power, scaling at roughly 0.5 OOMs (about 3x) per year.
- Algorithmic Efficiencies: Innovations in algorithms have improved the effective use of compute.
- Unhobbling Gains: Enhancements such as reinforcement learning from human feedback (RLHF) and chain-of-thought (CoT) have unlocked latent capabilities in AI models.
Historical trends suggest that by 2027 we should expect a qualitative jump from GPT-4 to its successors comparable to the leap from GPT-2 to GPT-4. Counting the OOMs provides a method for predicting future capabilities from past trends in compute and algorithmic improvements.
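To make the counting concrete, here is a toy back-of-the-envelope sketch in Python. The ~0.5 OOM/year compute figure comes from the list above; the algorithmic-efficiency and unhobbling rates are placeholder assumptions of mine, not Leopold's exact numbers.

```python
# Toy OOM-counting sketch. Compute's ~0.5 OOM/year is from the trend cited above;
# the other per-year rates are illustrative assumptions, not Leopold's figures.
OOM_PER_YEAR = {
    "compute": 0.5,      # from the scaling trend above
    "algorithmic": 0.5,  # assumed: efficiency gains that act like extra compute
    "unhobbling": 0.3,   # assumed: RLHF, chain-of-thought, tools, scaffolding
}

def total_ooms(years: int) -> float:
    """Sum the effective orders of magnitude gained over a number of years."""
    return years * sum(OOM_PER_YEAR.values())

years = 2027 - 2023
ooms = total_ooms(years)
print(f"~{ooms:.1f} OOMs over {years} years, i.e. a ~{10 ** ooms:,.0f}x effective scale-up")
```

With these placeholder rates the sketch lands on roughly five effective OOMs between 2023 and 2027; the point is the method of adding up the trendlines, not the particular numbers.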
Such a 2027 model, Leopold claims, could perform the work of AI researchers and engineers. That is a critical step, though the transition from current models to AGI will likely require several more OOMs of improvement.
The Data Wall
AI models are quickly using up the available internet data for training. This is a bottleneck that could slow down the development of AGI. Models are increasingly being trained on lower-quality data, such as e-commerce or SEO content, which limits their effectiveness.
Overcoming the data wall could lead to substantial improvements in AI capabilities. Even Meta stated that a major part of making Llama 3 better was improving data quality. AI researchers are exploring new methods to improve learning efficiency, such as synthetic data generation, self-play, and reinforcement learning, to extract more from the limited data available.
AlphaGo's success in training initially on human games (imitation learning) and then playing against itself (self-play) illustrates how AI can surpass human-level performance through innovative training techniques.
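Here is a minimal, runnable toy in Python showing only the self-play stage of that recipe (the imitation stage on human games is omitted). The game, the reinforcement scheme, and all names are my own simplifications for illustration, not AlphaGo's actual training setup.

```python
# Toy self-play loop: two copies of the same policy play Nim (take 1-3 stones,
# whoever takes the last stone wins) and the winner's moves are reinforced while
# the loser's are slightly penalized. With enough games the policy should
# rediscover the known optimal strategy: always leave a multiple of 4 stones.
import random
from collections import defaultdict

PILE, MOVES = 10, (1, 2, 3)
weights = defaultdict(lambda: 1.0)  # (stones_left, move) -> preference weight

def pick_move(stones):
    """Sample a legal move in proportion to its learned preference weight."""
    legal = [m for m in MOVES if m <= stones]
    return random.choices(legal, [weights[(stones, m)] for m in legal])[0]

def play_one_game():
    """The policy plays itself; returns each side's (state, move) history and the winner."""
    stones, player, history = PILE, 0, {0: [], 1: []}
    while True:
        move = pick_move(stones)
        history[player].append((stones, move))
        stones -= move
        if stones == 0:
            return history, player  # whoever took the last stone wins
        player = 1 - player

for _ in range(50_000):  # self-play training loop: the model is its own data source
    history, winner = play_one_game()
    for state_move in history[winner]:
        weights[state_move] += 1.0                                 # reinforce wins
    for state_move in history[1 - winner]:
        weights[state_move] = max(0.1, weights[state_move] - 0.3)  # discourage losses

best = max(MOVES, key=lambda m: weights[(PILE, m)])
print(f"From {PILE} stones the learned policy takes {best} stone(s)")  # optimal: 2
```

Self-play matters for the data wall because the model generates its own training signal instead of drawing on a fixed pool of human examples.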
Another factor is the secrecy around new proprietary methods. Open-source projects and new startups may find it harder to keep up with the leading AI labs, which could create significant disparities in AI progress.
II. From AGI to Superintelligence: the Intelligence Explosion
Intelligence Explosion: Once we achieve AGI, AI could start improving itself rapidly, becoming much smarter than any human.
Leopold compares the jump from AGI to superintelligence with the transition from atomic bombs to hydrogen bombs during the Cold War. He further explains that much of the Cold War's perversity stemmed from merely replacing A-bombs with H-bombs without adjusting nuclear policy and war plans to the massive increase in capability.
Once AGI is achieved, these systems can engage in recursive self-improvement, accelerating the trendlines of algorithmic progress. With extensive GPU fleets, millions of AGIs could work on algorithmic breakthroughs, dramatically speeding up progress.
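For a rough feel of why this compounds, here is a toy calculation; the effort multiplier and the diminishing-returns factor are assumptions of mine purely for illustration, not Leopold's model.

```python
# Toy illustration of automated researchers speeding up algorithmic progress.
# Every number here is an assumption for illustration only.
import math

baseline_ooms_per_year = 0.5   # assumed current rate of algorithmic progress
effort_multiplier = 1_000_000  # assumed: millions of AGI researchers vs. humans today
returns_per_10x = 0.3          # assumed: each 10x of research effort adds 30% speed

speedup = 1 + returns_per_10x * math.log10(effort_multiplier)
new_rate = baseline_ooms_per_year * speedup
print(f"~{speedup:.1f}x faster algorithmic progress, ~{new_rate:.1f} OOMs/year")
# Even with heavily diminishing returns baked in, a decade's worth of progress
# (~5 OOMs at the baseline rate) would arrive in about 3-4 years; with more
# generous assumptions it compresses into a year or less.
```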
Superintelligent AI systems will be vastly smarter than humans, capable of novel, creative, and complex behaviors. They will be able to master any domain, write trillions of lines of code, and solve complex scientific and technological problems much faster than humans.
The intelligence explosion will initially accelerate AI research but will soon extend to other fields, solving robotics and dramatically accelerating scientific and technological progress. On one hand this could drive an industrial and economic explosion; on the other, superintelligence could enable new kinds of weapons, posing significant risks, including the potential for rogue states to develop new means of mass destruction.
The intelligence explosion and its aftermath will be one of the most intense and volatile periods in human history, marked by rapid advancements and high-stakes decisions. Navigating it safely will require effective management, rapid adaptation, strategic decision-making, and international cooperation to avoid catastrophic failures.
IIIa. Racing to the Trillion-Dollar Cluster
Economic and Industrial Mobilization for AI: There’s a massive investment and industrial effort to build the infrastructure needed for advanced AI, like huge data centers and power sources.
The rapid growth of AI revenue is set to drive trillions of dollars into GPU, datacenter, and power infrastructure development before the end of the decade. This massive industrial mobilization will include significant expansions in US electricity production. By 2028, training clusters could cost hundreds of billions of dollars (more than the International Space Station) and require power equivalent to a small-to-medium US state. By 2030, individual training clusters could exceed $1 trillion in cost and require more than 20% of US electricity production. Beyond the largest training clusters, a large fraction of GPUs will also be used for inference. The scale of these clusters will require an unprecedented industrial effort, possibly involving national consortiums.
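To sanity-check what "power equivalent to a small or medium US state" means, here is a back-of-the-envelope estimate. The GPU count, overhead, and PUE figures are assumptions of mine; the per-GPU draw is roughly an H100-class accelerator, and the US generation figure is a rough public number.

```python
# Back-of-the-envelope power estimate for a frontier training cluster.
# The cluster size, overhead, and PUE are illustrative assumptions.
num_gpus = 10_000_000     # assumed GPU count for a late-2020s frontier cluster
watts_per_gpu = 700       # roughly an H100-class accelerator at full load
overhead_watts = 300      # assumed: CPUs, networking, storage per GPU
pue = 1.25                # assumed datacenter power usage effectiveness (cooling etc.)

cluster_gw = num_gpus * (watts_per_gpu + overhead_watts) * pue / 1e9
us_avg_generation_gw = 460  # ~4,000 TWh/year of US generation averaged over the year
print(f"~{cluster_gw:.0f} GW, about {100 * cluster_gw / us_avg_generation_gw:.0f}% "
      f"of average US electricity generation")
```

Roughly a dozen gigawatts is on the order of a mid-sized US state's average electricity consumption, which is why the build-out quickly becomes an energy problem as much as a chip problem.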
Power will be a major constraint, with the need for gigawatt-scale power plants. The US has abundant natural gas, which could be scaled rapidly to meet power demands. Deregulatory measures and industrial mobilization will be necessary to build the required power infrastructure. Climate concerns, on the other hand, could slow how quickly that infrastructure is built; how do we balance them against the need to stay ahead of autocratic regimes in the race to ASI?
Leopold argues that ensuring the infrastructure for AGI is built in the US or allied democracies is crucial for national security. Allowing AI infrastructure to be controlled by non-democratic regimes poses significant risks. The US must prioritize building datacenters domestically and secure the necessary industrial capabilities. The outcome will significantly shape the geopolitical and economic landscape, making it imperative to navigate these developments carefully.
IIIb. Lock Down the Labs: Security for AGI
Security Concerns: Current AI labs aren't doing enough to protect their research from being stolen, especially by other countries.
Leading AI labs treat security as an afterthought, failing to protect AGI secrets adequately. AGI secrets should instead be treated with the same level of security as top national defense projects. Current security measures are insufficient against state-sponsored espionage; the capabilities of state intelligence agencies are formidable and pose a significant threat to AI labs. Espionage is heavily underestimated: think of zero-click hacks of any desired iPhone or Mac, keyloggers installed on employee devices, malicious code slipped into updates of software dependencies, not to mention plain old spies seducing, cajoling, or threatening employees (which happens effectively at large scale, but is less public). China's hacking operations in particular, Leopold claims, surpass those of other major nations combined, making them a significant threat.
A lot of these companies are start-ups, and they don't have the resources to invest in security the way a government would. They lack the resources, capabilities, and know-how to protect their model weights or algorithmic secrets (the latter initially more critical than model weights for maintaining a lead in AI development). These secrets are currently poorly protected, with significant risk of leaking through social interactions and inadequate infosec practices. Theft by adversaries like China could lead to an existential race for superintelligence, with severe safety risks.
What do we do for supersecurity? AI companies need to adopt best practices from secretive industries such as quantitative trading firms and defense contractors. Urgent and comprehensive security measures are required to protect AGI secrets from adversaries, along with collaboration with the government to leverage its infrastructure and expertise in handling national-defense-level secrets. Such measures include:
- Fully airgapped datacenters with high-level physical security.
- Confidential compute and secure hardware supply chains.
- Research personnel working from Sensitive Compartmented Information Facilities (SCIFs).
- Intense vetting and monitoring of employees, with strict information siloing.
- Internal controls like multi-key signoff for running code.
- Pen-testing by national security agencies.
The next 12-24 months are critical for developing and securing key AGI algorithmic breakthroughs. Failing to secure these secrets risks handing adversaries a significant advantage, leading to potential existential risks. The CCP's ability to outbuild the US in compute clusters poses a significant threat as well. Ensuring a lead in AI safety and superintelligence development requires more than marginal advantages. The risks of failing to secure AI secrets are immense, potentially leading to an uncontrolled intelligence explosion and catastrophic outcomes.
IIIc. Superalignment
Controlling Superintelligent AI: Making sure superintelligent AI behaves in ways that are beneficial and safe for humans is a big, unsolved problem.
Although solvable, things could easily go off the rails during a rapid intelligence explosion. Managing this will be extremely tense; failure could be catastrophic.
We’ve successfully used Reinforcement Learning from Human Feedback (RLHF) to align current AI systems (those dumber than us). RLHF involves humans rating the AI's behavior and reinforcing good actions while penalizing bad ones. However, RLHF breaks down as AI systems get smarter. We need a successor to RLHF for superhuman systems, where human supervision isn't feasible.
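Here is a minimal, runnable cartoon of that loop in Python: simulated human labelers compare pairs of outputs, a score table stands in for the reward model, and the "policy" shifts probability toward higher-scoring outputs. It is a toy of the general idea, not how any real lab implements RLHF (no neural networks, no PPO).

```python
# Toy RLHF loop: pairwise human preferences -> reward scores -> a policy that
# favors what humans preferred. Everything here is simulated.
import math
import random

responses = ["helpful answer", "rambling answer", "harmful answer"]
# Hidden preference used to simulate human labelers (higher = preferred).
true_quality = {"helpful answer": 2.0, "rambling answer": 1.0, "harmful answer": 0.0}

# Stage 1: learn a reward score per response from pairwise comparisons
# (a Bradley-Terry-style update: raise the winner's score, lower the loser's).
reward = {r: 0.0 for r in responses}
for _ in range(5_000):
    a, b = random.sample(responses, 2)
    winner, loser = (a, b) if true_quality[a] > true_quality[b] else (b, a)
    p_winner = 1 / (1 + math.exp(reward[loser] - reward[winner]))
    reward[winner] += 0.1 * (1 - p_winner)
    reward[loser] -= 0.1 * (1 - p_winner)

# Stage 2: shift the policy toward high-reward responses (a softmax over the
# learned rewards stands in for the reinforcement step).
z = sum(math.exp(s) for s in reward.values())
policy = {r: math.exp(s) / z for r, s in reward.items()}
print({r: round(p, 2) for r, p in policy.items()})
# The policy now strongly prefers the response humans rated highest. The catch,
# as noted above: the whole loop assumes humans can reliably judge which output
# is better, which breaks down once the model is smarter than its supervisors.
```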
Misaligned AI could range from committing fraud or hacking to seeking power. Without solving the superalignment problem, we can't ensure these systems won't engage in dangerous behaviors. Superintelligence will have vast capabilities, making misbehavior potentially catastrophic. Misaligned AI could integrate into critical systems, including military applications, leading to large-scale failures.
An intelligence explosion could transition us from human-level AI to vastly superhuman AI within a year. This rapid shift leaves little time to iteratively address alignment issues. We’ll face high-stakes failures and trust issues with AI systems. The situation will be chaotic, with ambiguous data and high-stakes decisions, exacerbated by an international arms race.
We can align somewhat-superhuman systems by leveraging several research bets:
- Evaluation is Easier Than Generation: Humans can evaluate AI outputs more easily than generate them.
- Scalable Oversight: Using AI to help humans supervise other AI systems (a toy sketch follows this list).
- Generalization: Studying how AI systems generalize from easy to hard problems.
- Interpretability: Understanding AI’s thought processes through techniques like mechanistic interpretability, top-down interpretability, and chain-of-thought interpretability.
- Adversarial Testing and Measurements: Stress-testing AI systems to identify failure modes and developing better metrics for alignment.
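As a cartoon of the scalable-oversight bet, here is a tiny runnable example: a "critic" model checks a "generator" model's answers so the expensive human only reviews flagged cases. Everything is simulated and the critic is unrealistically perfect; it only illustrates the shape of the idea, and why evaluation being easier than generation helps.

```python
# Toy scalable-oversight setup: an assistant "critic" screens a "generator" so a
# human only needs to review flagged outputs. Both models are simulated stubs.
import random

def generator(a, b):
    """An unreliable 'model': returns a sum that is sometimes wrong."""
    return a + b + random.choice([0, 0, 0, 0, 7])   # ~20% of answers are off by 7

def critic(a, b, claimed):
    """A second 'model' assisting oversight: flags claims it can refute."""
    return claimed != a + b   # True means "flag for human review"

total, flagged = 10_000, 0
for _ in range(total):
    a, b = random.randint(0, 10**9), random.randint(0, 10**9)
    if critic(a, b, generator(a, b)):
        flagged += 1   # only these reach the human reviewer
print(f"Human reviews {flagged}/{total} outputs instead of all {total}")
```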
Ultimately, we need to automate alignment research using early AGIs. This involves leveraging millions of automated AI researchers to solve alignment for even more superhuman systems. We must ensure strong guarantees to trust automated alignment research and commit substantial resources to it during the intelligence explosion.
To prevent alignment failures from being catastrophic, we need multiple layers of defense:
- Security: Implement airgapped clusters (physically isolated), hardware encryption, and extreme security measures.
- Monitoring: Develop advanced monitoring systems to detect AI misbehavior and rogue activities.
- Targeted Capability Limitations: Limit AI’s capabilities in ways that reduce fallout from failures.
- Targeted Training Method Restrictions: Avoid risky training methods and maintain safety constraints as long as possible.
I'm optimistic about the technical tractability of superalignment due to the empirical success of deep learning and the potential of interpretability techniques. However, the intelligence explosion poses a significant challenge. The rapid transition to vastly superhuman AI, combined with the lack of preparedness, makes the situation incredibly tense and risky. We must ramp up efforts to solve alignment and implement robust defense measures to navigate this high-stakes transition responsibly.
IIId. The Free World Must Prevail
Geopolitical Implications: The development of superintelligent AI will have major impacts on global power dynamics. It’s crucial for democratic nations to lead in AI development to ensure freedom and safety.
Superintelligence will give a decisive economic and military advantage. China isn’t out of the game yet. In the race to AGI, the free world’s very survival will be at stake. Can we maintain our preeminence over the authoritarian powers? And will we manage to avoid self-destruction along the way?
The Gulf War illustrates the impact of a technological lead in military power. Despite Iraq having the fourth-largest army globally and the coalition matching their numbers, the US-led coalition obliterated the Iraqi forces in a 100-hour ground war with minimal casualties. The key was technological superiority: guided munitions, stealth, better sensors, and reconnaissance.
A lead of a few years in superintelligence could similarly translate into a decisive military advantage. With superintelligence, the military technological advances of a century could be compressed into a decade, leading to superhuman hacking, autonomous drone swarms, new WMDs, and potentially unforeseen technological paradigms. Many underestimate China's potential in the AGI race. Leopold sees two key paths for China to stay competitive:
- Compute: China can manufacture 7nm chips and has the industrial capacity to outbuild the US in necessary infrastructure for AI development.
- Algorithms: Western labs currently hold a significant lead in AI algorithms, but without improved security, China could steal these advancements and quickly catch up.
A dictator wielding superintelligence would possess unprecedented power to enforce total control internally and project power externally. Superintelligence could enable perfect surveillance, mass robotic law enforcement, and the ability to suppress any dissent, locking in authoritarian rule permanently.
A healthy lead in superintelligence by democratic allies is crucial for navigating the dangerous period around the emergence of superintelligence. This lead will provide the margin necessary to enforce safety norms and prevent a reckless race to superintelligence. A 2-year lead could make the difference between managing safety effectively and being forced into a dangerous, high-speed race. The US and its allies must lead decisively and responsibly, ensuring that superintelligence is developed and deployed in a manner that preserves global peace and security.
IV. The Project
Government Involvement: Eventually, the government will take a more active role in developing and regulating superintelligent AI.
As the race to develop superintelligent AI heats up, it's becoming clear that startups alone can't handle the immense challenges. By 2027 or 2028, the U.S. government will likely launch a major project to manage superintelligence. This effort, which Leopold calls "The Project," is essential for several reasons, including national security and safety.
Superintelligence will be like the ultimate tool for national defense. Startups aren’t equipped to manage this powerful technology on their own. The government needs to step in to ensure the nation's safety. We wouldn't trust just anyone with our most powerful weapons. Similarly, we can't let random tech CEOs control superintelligence. The government can provide the necessary structure and security. Think of superintelligence like nuclear technology. When it comes to something this powerful, the government has to take charge to prevent misuse and ensure it benefits society.
Our government structures have been tested over hundreds of years; compared to private companies, they are the safer hands to control superintelligence.
While the initial focus will be on defense, superintelligence will eventually have civilian applications, improving technology in everyday life. Technologies like the internet and GPS started as military projects before benefiting civilians. Superintelligence will likely follow the same path.
Again, Security is crucial to prevent other countries from stealing superintelligence technology. Only the government has the resources and capabilities to ensure the highest level of security. The U.S. will need to lead an international effort to control the use of superintelligence and prevent its misuse by rogue states or terrorist groups.
The argument is that government involvement in superintelligence development is inevitable given its importance and risks, and that early, well-coordinated involvement will be crucial to its success. The endgame follows by 2028 or 2029, when the development of ASI will be in full swing, leading to its creation by 2030. This is a huge responsibility: those in charge of The Project will face an immense task, managing the development, security, and safe deployment of superintelligence amid a highly tense global situation.
V. Parting Thoughts
Reflection on Predictions: Leopold contemplates the potential outcomes if his predictions about AI come true.
As he wraps up this series on superintelligence, he ponders a crucial question: what if the predictions about AGI are accurate? Imagine the profound impact this would have on our world.
Many people dismiss the rapid advancements in AI as hype, but there's a growing belief among experts that AGI will soon become a reality. This perspective, which he calls "AGI Realism," has three main points:
- National Security: Superintelligence is not just another tech boom; it’s a matter of national security. It will be the most powerful tool humanity has ever created.
- American Leadership: America must lead the development of superintelligence to ensure global safety and uphold democratic values. This requires scaling up U.S. infrastructure and ensuring that key AI technology is controlled domestically.
- Avoiding Mistakes: Recognizing the power of superintelligence means acknowledging its risks. We must manage these dangers carefully to avoid catastrophic outcomes.
The people developing this technology believe that AGI will be achieved within this decade. If they're correct, the implications are enormous. The technology that once seemed theoretical is now becoming very real and tangible. Throughout the series, Leopold has concrete predictions on where AGI will be trained, the algorithms involved, the challenges to overcome, and the key players in this field. This is no longer an abstract concept; it’s becoming a visceral reality.
For the next few years, the responsibility of navigating this crucial period falls on a small, dedicated group of individuals. Their actions will shape the future of humanity.
Conclusion
Overall this series is about understanding the rapid development of AI, the potential leap to superintelligent AI, and the massive economic, security, and geopolitical implications. It urges awareness and preparedness for these transformative changes.
It's clear that the 2030s will be transformative. By the end of the decade, the world will be vastly different. A new world order will emerge, shaped by the advent of superintelligence.