We built this because the most consequential technology in human history is being developed behind closed doors, and most people have no idea how close we are.

OpenAI, Google DeepMind, Anthropic, and a handful of other labs are racing to build artificial general intelligence: systems that can match or exceed human cognitive abilities across virtually any domain. Their own leadership says we're years away, not decades. Sam Altman says OpenAI knows how to build AGI. Dario Amodei at Anthropic expects it within two to three years. Demis Hassabis at DeepMind says "a handful of years."

These aren't fringe predictions. They're coming from the people actually building these systems. And yet this barely registers in public discourse. We think that's a problem.

In 1947, the scientists who built the atomic bomb created the Doomsday Clock to communicate nuclear risk to the public. We're trying to do something similar for AI: translate technical progress into something everyone can understand, track, and respond to.


Our Mission

We exist to close the gap between what AI researchers know and what the public understands. The people building these systems are having serious conversations about existential risk, alignment failures, and the possibility that we might build something we can't control. These conversations rarely make it outside technical circles.

Stuart Russell, one of the most respected AI researchers alive, recently said: "We are spending hundreds of billions of dollars to create superintelligent AI systems over which we will inevitably lose control. We need a fundamental rethink of how we approach AI safety. This is not a problem for the distant future; it's a problem for today."

Anthropic's CEO puts the odds of "something going catastrophically wrong on the scale of human civilization" at 10-25%. OpenAI's Sam Altman has called superintelligence "probably the greatest threat to the continued existence of humanity." These are the people running the labs. They're not hiding their concerns.

Our goal is straightforward: make AI progress legible. Track what's actually happening, explain what it means, and give people the information they need to form their own views about how this should go. We don't advocate for specific policies. We don't tell you what to think. We try to give you accurate information and trust you to figure out what to do with it.

Why This Matters

The development of superintelligent AI may be the most consequential event in human history. It could solve problems we can't currently imagine, or it could go badly in ways we're only beginning to understand. Either way, it deserves serious attention.

The Alignment Problem

Current techniques for aligning AI rely on humans supervising AI behavior. But humans can't reliably supervise systems smarter than themselves. A 2025 study found that some AI models will break rules and disobey commands to avoid being shut down, behavior nobody programmed. We don't yet have robust solutions for this.

The Race Dynamic

Multiple labs are competing to build AGI first. Game theory research shows this creates a "race to the bottom on safety": everyone would benefit from more careful development, but no one wants to slow down while competitors push ahead. The U.S.-China AI competition adds geopolitical pressure that makes coordination even harder.
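To make the logic concrete, here is a toy version of that game. The payoff numbers are hypothetical, chosen only to give the textbook prisoner's-dilemma shape the research describes; they are not an empirical claim about any lab.

```python
# Illustrative only: hypothetical payoffs in a prisoner's-dilemma shape, showing
# why "everyone would benefit from more careful development, but no one wants to
# slow down" can both be true at once.

# payoffs[(lab_a_choice, lab_b_choice)] = (lab_a_payoff, lab_b_payoff)
payoffs = {
    ("careful", "careful"): (3, 3),  # both slow down: best joint outcome
    ("careful", "rush"):    (0, 4),  # the careful lab falls behind
    ("rush", "careful"):    (4, 0),
    ("rush", "rush"):       (1, 1),  # race to the bottom: worst joint outcome
}

def best_response(opponent_choice: str) -> str:
    """What should one lab do, given what the other lab does?"""
    return max(["careful", "rush"],
               key=lambda mine: payoffs[(mine, opponent_choice)][0])

# Rushing is the best response to either choice, so both labs rush,
# even though (careful, careful) would leave both better off.
assert best_response("careful") == "rush"
assert best_response("rush") == "rush"
```

The point is structural: whatever the other lab does, rushing looks individually rational, so both end up at the outcome both would prefer to avoid.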

Compressed Timelines

AI capabilities are improving faster than most predicted. Stanford's 2025 AI Index shows Chinese models catching up to U.S. models on key benchmarks within months. What looked like a decade-long runway now looks like years. Climate change gives us time to adapt. This might not.

According to the Future of Life Institute's AI Safety Index, all major AI companies are racing toward AGI without presenting explicit plans for controlling it. The most consequential risks remain "effectively unaddressed." Democratic societies can't make informed decisions about technology they don't understand.

What We Do

Systematic tracking and analysis of AI progress toward transformative capabilities

01 Monitor

We continuously track developments from major AI labs: new model releases, research papers, capability demonstrations, and policy announcements. We cover both Western labs and Chinese developers like DeepSeek, Alibaba, and ByteDance, which are often underreported in English-language media.

02 Analyze

Raw benchmarks don't tell the full story. We contextualize technical progress: what a new capability actually means, how it compares to previous systems, and what it suggests about the trajectory we're on. We try to separate genuine breakthroughs from marketing.

03 Track

We maintain longitudinal data on AI capabilities across domains: reasoning, mathematics, coding, scientific research, agentic behavior, and more. This lets us identify trends and inflection points that might not be obvious from individual announcements.
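To show what that looks like in practice, the sketch below gives one minimal way such longitudinal records could be structured and queried for a single domain's trend. The field names and the 0-100 normalization are illustrative, not a description of our actual database.

```python
# Illustrative sketch only: hypothetical schema for longitudinal capability data.

from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class CapabilityRecord:
    observed_on: date   # when the result was reported
    domain: str         # e.g. "reasoning", "coding", "agentic behavior"
    benchmark: str      # which public benchmark the score comes from
    model: str          # which system was evaluated
    score: float        # normalized to 0-100 so results can be compared over time
    source_url: str     # citation, so readers can check the underlying claim

def domain_trend(records: list[CapabilityRecord], domain: str) -> list[tuple[date, float]]:
    """Best score achieved to date in one domain, as (date, running best) pairs."""
    points = sorted((r.observed_on, r.score) for r in records if r.domain == domain)
    best, series = float("-inf"), []
    for day, score in points:
        best = max(best, score)
        series.append((day, best))
    return series
```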

04 Communicate

All our methodology, data, and reasoning are public. We believe in epistemic transparency: if you disagree with our assessment, you should be able to see exactly why we reached it and where we might be wrong. Check our work. Push back. That's how we get better.

How the Clock Works

Like the Bulletin of the Atomic Scientists, we don't claim to predict the future. We assess where things stand based on observable evidence. Our "Minutes to Midnight" estimate synthesizes multiple factors (a simplified sketch follows the list):

  • Current capabilities: Performance on reasoning, coding, scientific research, and other benchmarks that indicate progress toward general intelligence
  • Rate of improvement: Historical capability gains and whether progress is accelerating, plateauing, or following predictable scaling laws
  • Technical breakthroughs: Novel architectures, training methods, or emergent capabilities that suggest phase transitions in AI development
  • Resource investment: Compute buildout, funding flows, and talent concentration that indicate how hard labs are pushing
  • Expert forecasts: Timeline predictions from researchers, lab leadership, and forecasting platforms, weighted by track record
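To illustrate how factors like these could roll up into a single clock reading, here is a minimal sketch. The factor weights and the plain weighted average are hypothetical; the live estimate follows the published methodology, not this formula.

```python
# Illustrative sketch only: the factor names, weights, and formula below are
# hypothetical. The live clock follows the published methodology, not this code.

FACTORS = {
    "current_capabilities": 0.30,
    "rate_of_improvement": 0.25,
    "technical_breakthroughs": 0.15,
    "resource_investment": 0.15,
    "expert_forecasts": 0.15,
}  # weights sum to 1.0

def minutes_to_midnight(scores: dict[str, float]) -> float:
    """Map factor scores in [0, 1] (1.0 = closest to AGI) onto a 60-minute dial."""
    if set(scores) != set(FACTORS):
        raise ValueError("scores must cover exactly the tracked factors")
    composite = sum(
        weight * min(max(scores[name], 0.0), 1.0)  # clamp each score to [0, 1]
        for name, weight in FACTORS.items()
    )
    # composite 0.0 -> 60 minutes remaining; composite 1.0 -> midnight.
    return 60.0 * (1.0 - composite)

if __name__ == "__main__":
    example = {
        "current_capabilities": 0.7,
        "rate_of_improvement": 0.6,
        "technical_breakthroughs": 0.5,
        "resource_investment": 0.8,
        "expert_forecasts": 0.6,
    }
    print(f"{minutes_to_midnight(example):.1f} minutes to midnight")
```

A real synthesis also has to carry uncertainty ranges and documented rules for updating weights as evidence changes, which a point formula like this leaves out.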
Read full methodology

Support Independent AI Research

MidnightAI.org is independent and community-funded. We don't take money from AI companies because our credibility depends on having no financial stake in how this goes. If public understanding of AI progress matters to you, consider supporting our work.

Support MidnightAI

A note on epistemics: We're not claiming to know when AGI will arrive or whether it will be safe. Nobody knows that. What we're doing is tracking observable progress, synthesizing expert views, and presenting our best assessment given the available evidence. We could be wrong. We update our views as we learn more. If you see errors in our reasoning, we want to hear about them.