“AI 2027” is one of the most informative prediction posts I have read. I would highly recommend it to anyone who wonders how AI might shape our future. They also have an audio version on Spotify and an interview with Dwarkesh Patel.
Here are some points I find important and my thoughts on them:
First, we will very likely reach Artificial Super Intelligence (ASI) within 5 years. In the prediction, whether the leading AI company slows down or races, we end up with ASI either way. This is driven by competitive pressure between companies and countries, and by the large productivity multiplier from the AI itself. I personally have a longer timeline than this, but I believe it will happen eventually.
Second, AI R&D plays a key role. For a large speedup, the agents need to be able to do research by themselves, which is a hard task. It requires superhuman coding capability and, most importantly, research taste. The former is used to create reliable environments and synthetic data; the latter is for choosing and pivoting research directions. I don’t have a strong reason why models cannot eventually achieve these, but I do think they are hard to achieve given how dumb and unreliable current models are.
Third, chain-of-thought (CoT) faithfulness is necessary for alignment. This means we need to keep the AI’s CoT and make sure it reflects the agent’s true intentions. Without it, the agent can become misaligned without being detected. This is a hard ask: even with CoT, I think we still cannot ensure alignment, because hiding intentions is easy (or is it?), especially when the task is hard for humans to do or understand. For example, if you explain a piece of code to a non-programmer, you can describe it with a lot of flexibility, expecting they will agree as long as you sound reasonable.
Let me expand on my third point. Suppose we work with an LLM agent; the agent needs to convince us, or a weaker verifier, to approve certain actions. Usually the agent uses CoT to derive or justify an action. I claim that when an agent is powerful enough, it doesn’t need CoT to derive an action; the CoT mostly serves as justification so that a weaker verification agent or a human has an easier time. If so, a seemingly good (but incomplete) chain of reasoning might actually hurt the verification process. You might object that verification is easier than generation, so the weaker model can verify well without the main agent’s reasoning. I agree in principle, but in practice we might just have a much weaker verifier than the generator, especially in production. Okay, let’s come back to the main thread.
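To make the “verification is easier than generation” asymmetry concrete, here is a toy sketch (my own illustration, not from AI 2027) using subset sum: checking a claimed solution takes linear time, while finding one can require exponential search.

```python
import itertools

def verify(numbers, subset, target):
    """Verification: check that the claimed subset comes from `numbers`
    and sums to the target. Linear-time work."""
    return all(x in numbers for x in subset) and sum(subset) == target

def generate(numbers, target):
    """Generation: brute-force search over all subsets.
    Exponential-time work in the worst case."""
    for r in range(len(numbers) + 1):
        for combo in itertools.combinations(numbers, r):
            if sum(combo) == target:
                return list(combo)
    return None

numbers = [3, 34, 4, 12, 5, 2]
solution = generate(numbers, target=9)      # expensive search, e.g. [4, 5]
print(solution)
print(verify(numbers, solution, target=9))  # cheap check -> True
```

The catch, as argued above, is that this clean asymmetry only helps if the verifier is actually strong enough to do the check; a much weaker verifier leaning on the generator’s own (possibly incomplete) reasoning loses that guarantee.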
Fourth, China plays an important role. If there were no China, the rate of AI development would be slower, because the US government would have less incentive to protect or help the leading AI firm. The US worries that China will get AGI earlier and shake its dominant position in the world. This is reasonable, but I think racing is not the only option. When things start going wrong, both countries should be willing to step up and share information transparently, collaborating to avoid worse outcomes. I might be too naive, but I think that’s an important topic in AI governance.
In the end, I hope the prediction won’t be a prophecy but will serve as a warning and a motivation for policymakers to act earlier and be better prepared for a potentially dangerous future. Peace.
FYI, below is the rough timeline in AI 2027:
Mid 2025: Stumbling Agents
Late 2025: The World’s Most Expensive AI
Early 2026: Coding Automation
Mid 2026: China Wakes Up
Late 2026: AI Takes Some Jobs
January 2027: Agent-2 Never Finishes Learning
February 2027: China Steals Agent-2
March 2027: Algorithmic Breakthroughs
April 2027: Alignment for Agent-3
May 2027: National Security
June 2027: Self-improving AI
July 2027: The Cheap Remote Worker
August 2027: The Geopolitics of Superintelligence
September 2027: Agent-4, the Superhuman AI Researcher
October 2027: Government Oversight
Then we get to choose between the Slowdown and Race endings. If I choose Slowdown, it would be:
November 2027: Tempted by Power
December 2027: A US-China Deal?
February 2028: Superhuman Capabilities, Superhuman Advice
March 2028: Election Prep
April 2028: Safer-4
June 2028: AI Alignment in China
July 2028: The Deal
August 2028: Treaty Verification
September 2028: Who Controls the AIs?
October 2028: The AI Economy
November 2028: Election
2029: Transformation
2030: Peaceful Protests
If I choose Race instead, it would be:
November 2027: Superhuman Politicking
December 2027: The Agent-5 Collective
2028: The AI Economy
2029: The Deal
2030: Takeover