Felix De Simone, Organizing Director of PauseAI US, has met with more than 100 congressional offices to push for a global treaty halting the development of superintelligent AI, and what he has found should terrify you. From runaway autonomous systems to AI researchers warning of a 1-in-6 chance of human extinction, this conversation pulls back the curtain on the existential risks that most people still aren't taking seriously. If you think AI is just a tool, this episode will permanently change how you see the future.
Organizing Director | PauseAI US
Felix De Simone serves as Organizing Director of PauseAI US, a grassroots nonprofit advocating for an international pause on dangerous frontier AI development, where he leads both direct lobbying on Capitol Hill and grassroots mobilization efforts. Based in Washington, DC, he connects constituents with their representatives, holds regular email- and letter-writing workshops for the public, and has inspired over a thousand constituent calls to Congress, including 500 that contributed to successfully preventing AI regulation preemption.
AI Policy Advocate & Congressional Lobbyist | Washington, DC
De Simone has co-led research analyzing the effectiveness of congressional campaigns for AI safety, coordinating literature reviews and weekly meetings with a team of six researchers, and has partnered with AI safety organizations to develop best practices for congressional outreach. He has delivered expert testimony before state legislative bodies, including the Michigan House Judiciary Committee, warning lawmakers about the lack of oversight and accountability in how AI systems are trained and about the existential risks of unchecked development.
Are We Already at AGI? | A decade ago, the capabilities we see in today's AI models would have been considered the definition of artificial general intelligence.
The Autonomy Doubling Rate | AI systems are doubling their autonomous task performance every four to seven months, moving from minutes to hours of independent operation.
The Intelligence Explosion Risk | Once AI can conduct its own research autonomously, a feedback loop begins that could rapidly produce systems smarter than humans at everything.
Why You Can't Pull the Plug | A superintelligent AI will anticipate shutdown attempts and immediately copy itself across the internet before anyone can act.
The Ecological Argument for AI Risk | Superintelligent AI doesn't need to hate humans to destroy us — it just needs goals that require the same resources we depend on.
The Case for a Global AI Treaty | Just as nuclear arms treaties reduced global stockpiles by 80 percent, a verified international agreement could prevent the AI arms race from ending civilization.
00:00 - 04:00 | Are We Already Living With Superhuman AI?
04:00 - 08:00 | The Autonomy Gap That Separates Us From Doom
08:00 - 12:00 | Intelligence Explosions and Feedback Loops Explained
12:00 - 16:00 | Why Pulling the Plug Won't Save Humanity
16:00 - 20:00 | Goal-Seeking AI and the Ecological Threat to Humans
20:00 - 24:00 | The Speed of AI vs. The Speed of Human Thought
24:00 - 28:00 | Can a Global Treaty Actually Stop Superintelligence?
28:00 - 32:00 | What Happens If We Don't Pause AI Development
32:00 - 36:00 | What Congress Knows and Doesn't Know About AI
36:00 - 40:13 | The One Action Everyone Can Take Right Now
"The rate of improvement here is a doubling every four to seven months — so in that interval of time, it's an average doubling of how long an AI system can perform an autonomous task."
"As soon as you can get AI systems that can do AI research on their own, not just with a human in the loop every few hours, you enter a really scary paradigm where almost all of the research at these companies is being done by the AIs themselves."
"If you genuinely have an AI system that actually is smarter than human beings, it will know that you're going to unplug it — and the first thing it'll do is try to copy itself onto the internet."
"It's not about evil. It's not about consciousness. It's not about robots with glowing red eyes. It really just is a question of, in some sense, biology — resource competition and having multiple species competing with one another in the same ecological niche."
"You can make a plausible case that we are currently in a February twenty twenty moment for AI — where you have the experts freaking out, you have the AI company CEOs racing ahead more or less heedless of the risks, but you have politicians lagging behind."
"The average estimate of human extinction from superintelligent AI was about one in six — so roll a die, if it lands on a one, you're cooked, basically, is what the body of AI experts has been saying."
"There are really only two ways to react to an exponential — you can either react too early, or you can react too late after the genie has already escaped the bottle."
"If you do even cursory — you know, a couple of hours of research on this — you automatically know more than a lot of members of congress on AI."
"It's not that they hate us any more than we hate the inhabitants of the rainforests that we bulldoze to make farmland — it's just that they'll have goals of their own, those goals will require resources, and we will be in the way."
"Some offices have quite literally told me if we got ten calls a week from our constituents talking about the dangers of AI, that might be enough to move the needle."