David Krueger, AI professor at the University of Montreal and former research director at the UK AI Security Institute, reveals why we're not on track to solve AI alignment and what extinction-level risks lie ahead. He explains how concentration of power, human obsolescence, and the creation of destructive technologies pose threats even if alignment is achieved, and why the current AI race amounts to humanity racing to replace itself. Learn about Evitable, his nonprofit advocating for an internationally coordinated pause on advanced AI development, and discover the hardware-focused "third option" between uncontrolled AI progress and mass surveillance.
AI Safety Researcher & Professor | University of Montreal & MILA (Quebec AI Institute) David Krueger is an assistant professor specializing in machine learning and AI safety; his research focuses on preventing catastrophic risks from advanced AI systems. He trained in deep learning under Yoshua Bengio beginning in 2013 and has been instrumental in raising awareness of AI alignment challenges within the academic community.
Former Research Director | UK AI Security Institute (Founding Team) David served as a research director on the founding team of the UK AI Security Institute, helping establish one of the world's first government-led organizations dedicated to advanced AI safety. His work focused on developing frameworks for evaluating and mitigating risks from frontier AI systems.
Founder & Director | Evitable (Nonprofit Organization) David founded Evitable, a nonprofit organization dedicated to informing and organizing the public around AI extinction risks and advocating for coordinated international action. The organization focuses on building a mass movement to pause advanced AI development and advocates for hardware-based solutions to prevent uncontrolled AI proliferation.
AI Risk Advocate | Center for AI Safety David initiated the Center for AI Safety Statement on AI Risk, which brought together leading AI researchers and experts to publicly acknowledge extinction-level threats from artificial intelligence. This statement played a significant role in elevating AI safety concerns within both the technical community and public discourse.
Alignment Not Solved | We're not on track to solve AI alignment, and that's the most important thing for people to know.
Beyond Technical Problems | Even solving alignment doesn't address concentration of power, destructive technologies, or human obsolescence risks.
The Replacement Race | We're racing to replace humanity with AI, and that's not a sensible thing to be doing right now.
Nobody Was Thinking | Most people in the AI field just weren't thinking about what happens when we succeed at building superhuman intelligence.
Human Labor Obsolete | Once human labor is no longer valuable, we don't have a plan for what happens next.
The Third Option | There's an alternative to YOLO development and mass surveillance: get rid of the AI chips.
Concentrated Supply Chain | Manufacturing advanced AI chips depends on just a few key companies, such as TSMC and ASML.
International Pause Needed | We need globally coordinated action because if one country stops, others won't necessarily follow.
Fifty Percent Chance | Tech companies are taking a calculated gamble: a coin flip between extinction and godlike power.
Mass Movement Required | Building political pressure across all walks of life is essential to winning against powerful AI lobbies.
00:00 - 03:30 | Will Humanity Solve AI Alignment in Time
03:31 - 07:00 | From Deep Learning Student to AI Safety Advocate
07:01 - 10:30 | The Collective Action Problem of Dangerous AI Systems
10:31 - 14:00 | Why Most AI Researchers Aren't Thinking About Success
14:01 - 17:30 | Human Obsolescence and the End of Valuable Labor
17:31 - 21:00 | Concentration of Power and Novel Destructive Technologies
21:01 - 24:30 | The Fifty Percent Gamble Tech Companies Are Taking
24:31 - 28:00 | Why Hardware Control Is the Key to Enforcement
28:01 - 31:30 | International Coordination and the TSMC/ASML Bottleneck
31:31 - 35:30 | Building a Mass Movement Through Evitable Nonprofit
"I don't think we're on track to solve it, so I don't think it's likely to be solved on the current trajectory, and I think that's probably the most important thing for people to know."
"Even if we do solve alignment to the extent that a lot of people are imagining it being solved, I'm still quite concerned about what the future with AI looks like."
"If there's an AI system that just really shouldn't exist, it's too dangerous to exist, if anyone builds it, everyone dies. If such a system could exist, how do you make sure that nobody builds it?"
"A lot of people building it see this risk and are kind of taking a calculated gamble, hoping to get all of the power and not have it destroy everybody."
"Most people in the field just weren't thinking about what happens if and when we succeed at the grand goal of making human level and superhuman artificial intelligence."
"Once human labor is no longer valuable, and once humans are not only not able to make money from our labor, but also our participation in society, in politics, all of our intellectual and emotional labor becomes obsolete, I don't think we have a plan for that."
"It's a race to replace ourselves, a race to replace humanity. It's just not a sensible thing that we're doing right now."
"I see a third option, which is get rid of the AI chips. If we need billions and trillions of dollars of investments in data centers to build this thing, that's actually a great situation to be in because we just don't have to invest those billions and trillions of dollars."
"This is basically the most advanced technology in existence, the hardware itself, not even the AI software, but the hardware."
"If you don't want AI, we don't have to have it. We can solve this problem. It takes some work, but the first step is to get your government to prioritize this."