Holly Elmore, Executive Director of Pause.ai US and evolutionary biologist, reveals why superintelligent AI poses an existential threat humanity isn't prepared for. She explains the critical difference between narrow AI tools and AGI that could surpass human intelligence across all domains—and why smarter doesn't mean safer. Learn about the global movement working to pause dangerous AI development before we lose control of the technology that could reshape or end civilization.
Executive Director | Pause.ai US Holly leads the US branch of the global Pause.ai movement, which advocates for halting the most advanced AI experiments until safety and governance frameworks are established. She has built a grassroots organization spanning 16-17 cities across the United States and coordinates with international chapters in 15 countries worldwide.
AI Safety Organizer & Policy Advocate | Pause.ai Movement Holly organizes protests, demonstrations, and educational campaigns to raise public awareness about existential risks from artificial general intelligence and superintelligence. She has successfully fought federal AI preemption legislation that would have blocked state-level regulation without establishing any federal framework, and she leads national campaigns mobilizing citizens to contact their representatives on AI safety issues.
Evolutionary Biologist | Research Background Holly brings a unique scientific perspective to AI risk assessment through her training in evolutionary biology, applying principles of biological intelligence and natural selection to understand artificial intelligence development. Her biological expertise informs her analysis of intelligence thresholds, cognitive capabilities, and the orthogonality thesis regarding the relationship between intelligence and morality.
AGI vs Superintelligence | Understanding the critical difference between human-level general intelligence and systems that exceed human abilities across all domains.
The Orthogonality Thesis | Why intelligence and morality are independent variables, and how this explains why smarter AI won't automatically become more ethical.
The Ant Pile Problem | How superintelligent AI could harm humans unintentionally, just as humans rarely consider the ants they step on while walking.
Nuclear Treaty Precedent | Why international agreements on nuclear weapons provide a proven framework for regulating frontier AI development.
Compute Governance Strategy | How data center heat signatures visible from satellites and compute bottlenecks make frontier AI development trackable and a pause enforceable.
Warning Shots Fallacy | Why waiting for AI disasters to motivate action is both morbid and ineffective without proper education and preparation.
Market Failure Dynamics | How AI development represents the ultimate market failure requiring external government regulation to protect public safety.
Pause.ai Global Movement | Building grassroots advocacy across 15 countries and 16 US cities to halt dangerous AI experiments.
AI Preemption Battles | Fighting federal legislation that would block state-level AI regulation without establishing any federal framework.
Grassroots Action Steps | Concrete ways citizens can influence AI policy through petitions, representative outreach, and local organizing.
00:00 - 03:41 | What Makes AGI Different From Current AI
03:42 - 07:22 | The Orthogonality Thesis: Intelligence Without Morality
07:23 - 11:04 | Why Superintelligence Won't Automatically Care About Humans
11:05 - 14:46 | How Nuclear Treaties Provide a Blueprint for AI Regulation
14:47 - 18:28 | Compute Governance: Tracking AI Development From Satellites
18:29 - 22:10 | Why We Can't Wait for Warning Shots
22:11 - 25:52 | ChatGPT as a Personal Wake-Up Call
25:53 - 29:34 | The Birth and Growth of Pause.ai Movement
29:35 - 33:16 | Fighting AI Preemption in Federal Legislation
33:17 - 36:53 | How to Get Involved in the Pause Movement
"When we walk past an ant pile, like, we generally most people will think very little of squishing a few ants. You know? It's just not they're not bothered. You know? And that's what the relationship would be like."
"Intelligence is the ability to reach goals, and that could be directed towards anything. Like, your values, you know, your morality inform what your goals are, but the ability to reach goals is independent of that."
"We don't necessarily get for free, like, good values and good guidance. That's actually a really, really difficult problem that is unsolved how you instill those in artificial intelligence."
"It's like the spaceships are in the sky right now, and the fact that everybody's not standing outside and pointing at them and saying, maybe we should do something to get ready."
"We have solved basically this problem before, you know, with nuclear weapons."
"The data centers that are training AI give off, like, immense heat signatures that's visible from satellites. It'd be very, very difficult to hide a frontier AI, at least for, you know, a machine learning type AI training run, from the whole world."
"This is like the market failure to end all market failures. We need external regulation, external to the project of building AI."
"I really didn't think we probably would see a computer, like, master natural language in my lifetime."
"There's no, like, natural law that says you can't be smarter than a human."
"If you don't do that, they're never going to receive warning shots that do occur as warning shots."