SAIFCA Theory of Change

The 'Why' Behind What We Do

Guiding Purpose

We want children to have a future – and for that future to be an objectively good one.

This principle underpins our belief that protecting children from AI-related harms today contributes not only to their immediate safety and well-being, but also to humanity’s long-term ability to manage the risks posed by advanced AI.

Ultimate Impact

Children are protected from AI-related harms and equipped to help steer AI towards beneficial outcomes for humanity.

Long-Term Outcomes

  • Public opinion shifts significantly, particularly among parents and educators, creating political pressure for stronger AI regulation.
  • Regulation and governance frameworks evolve to protect children from current and near-term AI risks (e.g. unsafe chatbots, AI-generated CSAM).
  • Societal awareness grows around catastrophic AI risks, contributing to broader preparedness and support for alignment efforts.
  • A larger future talent pool emerges, as more young people are encouraged to develop AI expertise informed by ethical and safety principles.
  • A visible and active alliance of contributors, advocates, and advisors forms, increasing momentum and shared influence in public discourse and policy.

Medium-Term Outcomes

  • Parents and educators become engaged advocates, spreading awareness and demanding change.
  • SAIFCA becomes a widely trusted voice, influencing school policy, public discourse, and consultation processes.
  • School policies begin to reflect new AI-related risks, supported by SAIFCA guides and recommendations.
  • Public conversations about catastrophic AI risk become more normalised and connected to concerns about children’s futures.
  • Alliance members begin influencing their own networks, amplifying SAIFCA’s reach and credibility.

Short-Term Outcomes

  • SAIFCA builds credibility and visibility as a go-to resource on AI risks to children.
  • Parents and educators gain awareness of how current AI systems may harm or manipulate children.
  • A growing network of advocates, contributors, and experts strengthens the alliance’s legitimacy and reach.
  • Funders and stakeholders begin to recognise SAIFCA’s potential to shape policy and public awareness.

Outputs

  • Trusted articles and reports (e.g. 'AI Risks to Children')
  • School guides and educational tools
  • Interviews and contributions from AI experts
  • Website and alliance platform development
  • Public commentary and consultation responses
  • Networking and community-building among aligned experts and advocates

Activities

  • Writing, research, and publishing
  • Public education and school outreach
  • Building and supporting a strong alliance
  • Hosting expert interviews and external contributions
  • Advocating for child-centred regulation and safeguards
  • Supporting awareness of existential risk reduction efforts

Inputs

  • Strategic leadership and vision
  • Experience in intelligence, governance, and AI safety
  • Time, dedication, and passion
  • Financial support and expert collaboration
  • Online platform and communication tools

Assumptions

  • Raising awareness can drive social and political change
  • Parents and educators are a powerful but under-mobilised advocacy group
  • Regulation will follow sustained public pressure
  • Children can become future AI safety leaders if supported early
  • Awareness of catastrophic and existential risk can be normalised through trusted, child-focused messaging
  • Public discourse around AI risks may face resistance, but a focus on children’s safety can cut through polarisation and build consensus

Catastrophic Risk Context (Assumptions, continued)

  • Unaligned autonomous generalised intelligence does present an existential, or at least catastrophic, risk
  • The type of AI capable of presenting such a risk may emerge within the next 5–10 years
  • The development of these systems is not inevitable and can be influenced through responsible governance, global coordination, and public pressure
  • Even if timelines prove shorter or longer than expected, preventing AI-related harm to children is a worthwhile cause over any time horizon

Read more about our full stance and reasoning on catastrophic and existential risk here.

External Factors

  • Pace and direction of AI development
  • Political appetite (or resistance) to regulation
  • Media coverage and high-profile incidents
  • Emergence of related movements (e.g. online safety, child rights)
  • Broader technological, social, or economic shifts

If you'd like to support our important work, please consider making a donation. Every contribution helps us expand our reach and impact.