In December 2024, Geoffrey Hinton stood before an audience in Stockholm to accept the Nobel Prize in Physics. He used the occasion not to celebrate, but to warn. AI, he said, had already created divisive echo chambers, enabled mass surveillance, and empowered cybercriminals. In the near future, it could be used to create terrible new viruses and autonomous weapons. And beyond all of that, there was what he called “a longer-term existential threat”—the prospect that we might create digital beings more intelligent than ourselves, with no guarantee that we could stay in control.
This was not a fringe activist. This was the person whose foundational research made modern AI possible, speaking at the most prestigious scientific ceremony on Earth.
He is not alone. Yoshua Bengio, the most-cited living computer scientist and Hinton’s fellow Turing Award winner, has described AI labs as “playing dice with humanity’s future.” The CEOs of OpenAI, Google DeepMind, and Anthropic—the companies actually building the most powerful AI systems—have all signed a statement placing AI extinction risk alongside pandemics and nuclear war as a global priority.
The Gap
There is an extraordinary gap between what AI researchers know (or fear) and what the general public understands. There is an equally dangerous gap between what policymakers need to make informed decisions and what they currently receive.
On one side, technical safety research is published in venues that few non-specialists will ever read. On the other, public discourse about AI oscillates between breathless hype and dismissive skepticism, with very little territory in between.
The Canary Institute exists to occupy that territory.
What We Are
We are an independent, non-partisan, not-for-profit research and advocacy organization. We are not affiliated with any AI company, political party, or ideological movement. We have one interest: ensuring that the development of advanced AI proceeds safely and in the public interest.
Our work has three pillars. First, we conduct and synthesize technical and economic analysis of AI capabilities and risks. Second, we translate that analysis into language accessible to the public and to policymakers—without sensationalism and without false reassurance. Third, we advocate for evidence-based policy that keeps pace with the technology.
What We Are Not
We are not anti-technology. AI has the potential to be enormously beneficial. We are not doomsayers peddling fear. We are not affiliated with any company seeking regulatory advantage.
We are, simply, the people who believe that when Nobel laureates, Turing Award winners, and the leaders of major AI labs say there is a serious risk requiring urgent action, the appropriate response is to take that seriously—and to build institutions capable of acting on it.
What Comes Next
In the coming months, we will publish our first technical analyses, including evaluations of frontier AI capabilities and projections of near-term economic impacts. Each technical paper will be accompanied by a public-facing summary here on this blog, because we believe that the people whose lives will be affected by these developments have a right to understand them.
The canary in the coal mine does not cause the danger. It detects the danger and communicates it early enough for people to act. That is what we intend to do.