Where We Stand on Existential Risk from AI
Why we believe long-term AI safety matters - even as we focus on protecting children today
At The Safe AI for Children Alliance (SAIFCA), our primary mission is to protect children from the real and growing risks posed by artificial intelligence. We focus on near-term harms - from unsafe AI companions to algorithmic manipulation - and on helping children thrive in an AI-shaped world.
But we also recognise a broader reality:
Many leading AI researchers and institutions warn that advanced AI may pose catastrophic or even extinction-level risks to humanity.
We believe that ignoring these credible expert concerns about the long-term consequences of advanced AI would be a disservice to future generations.
This page outlines where SAIFCA stands on that issue – and why our work remains vital no matter what the future holds.
(You may find it helpful to read our Theory of Change too.)
🌍 Why We Take Existential Risk Seriously
Turing Award and Nobel Prize winners, along with hundreds of respected researchers, have signed public statements warning of the possibility of human extinction from unaligned, autonomous general intelligence.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
CAIS Statement on AI Risk
We do not believe such warnings should be dismissed.
Based on these expert warnings and our own research, SAIFCA regards existential risk from advanced AI as both real and credible. To avoid stating this openly would be, in our view, a disservice to children and to the principle of transparency we aim to uphold.
🎓 Respecting Diverse Views
We acknowledge that not everyone agrees. Some credible experts argue that such risks are exaggerated or very far in the future. While we respectfully disagree, we welcome reasoned debate.
We do not, however, support reckless AI acceleration without adequate safeguards or accountability - an approach we consider ethically indefensible.
🌱 Why Our Work Matters Either Way
Whether or not advanced AI becomes an existential threat, our mission remains meaningful and urgently needed.
If existential risk is real (as we believe):
- We help normalise the conversation, enabling better public discourse and grassroots support for regulation.
- We amplify and support organisations focused on existential risk mitigation.
- We inspire and equip future safety-focused leaders, expanding the talent pool for technical and governance roles.
- We advocate for stronger regulation, which benefits both children today and society in the long term.
If existential risk does not materialise:
- Our work still protects children from present-day dangers - including manipulation, exploitation, and developmental harms.
- Our strategy continues to strengthen digital literacy, critical thinking, and societal preparedness.
- Nothing is lost. Everything still matters.
⏳ What About Timelines?
There is no consensus on how soon we may face these risks. Some say decades. Others suggest much sooner. We don’t take a fixed position on timing.
Instead, we focus on what we can do – right now.
- If timelines are short (1–5 years), our greatest contribution may be to strengthen and support existing safety efforts.
- If timelines are longer, we can grow a generation of informed, ethical, and capable leaders who will shape AI’s development and governance.
Even in the Worst Case
We do not believe catastrophic outcomes from AI are inevitable. But a few experts suggest that humanity may already be near – or even beyond – a “point of no return.”
💡 And even in that case, we would continue our work.
Protecting children from harm - in any timeline and under any circumstances - is always a cause worth fighting for.
➕ Want to Know More?
- Read our Theory of Change to see how our work links short-term actions to long-term impact
- Visit our Mission page for a quick overview of our goals and principles
- Explore the Center for AI Safety's expert statement for more on this global concern, and read more about the most catastrophic risks
If you'd like to support our important work, please consider making a donation. Every contribution helps us expand our reach and impact.