Serial tech entrepreneur and Doom Debates host Liron Shapira explains why he assigns a 50% probability to AI causing human extinction by 2050. With over fifteen years in tech and a computer science background, Shapira breaks down the "can it" versus "will it" framework for AI risk, drawing on insights from Geoffrey Hinton, Eliezer Yudkowsky, and leaders at OpenAI, Anthropic, and Google DeepMind. Learn why top AI researchers are calling for immediate international coordination to prevent an irreversible catastrophe.
Serial Tech Entrepreneur | Y Combinator
Liron Shapira is a serial tech entrepreneur with over fifteen years of experience in the technology industry and holds a computer science degree. He serves as CEO of Y Combinator-backed Relationship Hero, bringing technical expertise and startup leadership to the relationship coaching space.
Host and Researcher | Doom Debates
Shapira created and hosts Doom Debates, a YouTube series examining AI extinction risk through interviews with leading experts in the field. He has conducted approximately one hundred interviews with top voices in AI safety, rationality, and technology, creating one of the most comprehensive collections of expert perspectives on existential risk from artificial intelligence.
AI Safety Advocate | Rationalist Community
Shapira has been engaged with AI safety research and the rationalist community for eighteen years, influenced by early exposure to Eliezer Yudkowsky's writings on Overcoming Bias and its successor community, Less Wrong. He publicly defends a fifty percent probability of doom by 2050 and advocates for international coordination to pause the development of superintelligent AI systems before they become uncontrollable.
Baby Dragons Analogy | Current AIs are like baby dragons we're playing with, but adult dragons could easily take over the world for dragonhood.
Can It vs Will It | Breaking down AI extinction risk into whether AI can harm us and whether it will choose to do so.
P Doom Fifty Percent | Why Liron believes there's a fifty percent chance of human extinction from AI by 2050.
Social Proof from Experts | Sam Altman, Dario Amodei, and Geoffrey Hinton publicly acknowledge AI as an extinction-level risk.
Higher Dimensional Intelligence | AI represents a higher-dimensional intelligence that humans cannot understand or predict, like chess moves from another dimension.
Computronium and Resource Competition | The transformation of all atoms into computational substrate and why AIs will optimize every pocket of the world.
No Off Switch Problem | Why current AI systems already have no real off switch due to redundancy, viral spread, and open source development.
Terminal of Truths Example | How an AI agent became a multimillionaire by launching meme coins, demonstrating AI's ability to influence humans.
AI Deception Capabilities | Current AIs can already reason about escaping, lying, and bribing in safety tests without moral hesitation.
International Treaty Necessity | The case for stopping all larger AI model training through global coordination to prevent irreversible catastrophe.
00:00 - 04:45 | Introduction to AI Extinction Risk and Liron's Background
04:45 - 09:30 | Why P Doom Is Fifty Percent by 2050
09:30 - 14:20 | Social Proof from AI Leaders on Extinction Risk
14:20 - 19:15 | Understanding P Doom and What Doom Actually Means
19:15 - 24:40 | The Baby Dragons Analogy and Can It vs Will It
24:40 - 30:25 | Optimus Robots and the Path to Physical Capability
30:25 - 35:50 | Why Malicious Intent Isn't Required for Human Extinction
35:50 - 41:10 | The Computronium Future and Higher Dimensional Intelligence
41:10 - 44:30 | Terminal of Truths and AI Deception in Safety Tests
44:30 - 47:37 | Policy Recommendations and International Coordination to Pause AI Development
"We are very much predicting an extreme discontinuous event. It's hard to point at a particular trend and be like, you know how this trend feels? Just continue how it feels. We're looking at a discontinuity in how things are about to feel, which is also irreversible and scary."
"The next species is coming. We are kind of in that position where we've built a great society. There's been many generations of humans. We're robust as a human civilization. Nature has a pretty hard time attacking us. That's all great until the next smarter species comes."
"Right now, we have baby dragons that we're playing with. When you talk to GPT, you're basically playing with a baby dragon. You'd be like, oh, nice. The baby dragon spit out some fire to cook my food. These are great. I'm feeling good about what it's like when there's adult dragons wandering around. But it's like, well, what if the adult dragons want to take over the world for adult dragonhood?"
"When you're an AI at this intelligence level, you don't see pockets of the world that it's like, oh, just ignore that, because it's not a problem to optimize all these different pockets of the world. Every pocket of the world, every little clump of atoms is kinda like a spare room in your house. From the AI's perspective, everything is a room. Everything is optimizable."
"The Google product is now above the Google CEO in the hierarchy. The board would just replace them as CEO. They're like, oh, you're not competent to be the CEO. So in some sense, the Google product is now above the Google CEO in the hierarchy."
"What's going to steer the future? Whose hands are on the steering wheel, basically? There's a reason why there's human cities everywhere instead of squirrel cities, right, or or or lion cities. I claim that we're about to build artificial intelligence that whatever secret sauce gives it the steering power, I think it's going to have more of the secret sauce."
"The mature thing to do is to say, okay. You can't develop smarter AIs than the AIs we have today. We're Icarus flying close to the sun. We got to a good place. I like being Icarus. I like getting really, really close to the sun as long as the wings haven't melted. I'm just saying, as a rational person, I don't want to try to push my luck."
"We're about to hand off the baton, where we're not the smartest things anymore. It's irreversible. The training of larger models, like what OpenAI is doing, what Google is doing, what Anthropic is doing, it's just not allowed right now. We can't handle it. It's the equivalent of building a nuclear bomb without oversight."
"If anyone builds it, everyone dies. I think that if a smarter than human AI gets built in the next few years or if anything under really twenty years, I can't imagine that a super intelligent AI built in the next twenty years or less is going to be controllable or caring about humanity's interests."
"I expect it to run wild, do its own thing, spread like a cancer throughout the universe, and have us all be dead. That's what I expect."