December 10 / 2025 / Reading Time: 4 minutes

Child dignity in the AI era

Professor Elena Martellozzo and Professor Ernesto Caffo explain why the newly signed 'Declaration of Child Dignity in the Artificial Intelligence Era' comes at a crucial time for child safety

We live in a digital age that is putting children’s safety at risk. While artificial intelligence brings undeniable opportunities, the harm it is causing – particularly to children – is consistently overlooked.

Take AI-generated child sexual abuse material (CSAM), for example. New research by Childlight shows that between 2023 and 2024 alone, AI-generated CSAM increased by 1,325%.

This exponential rise should concern us all. According to the Internet Watch Foundation, in 2024, nearly 40% of all AI‑generated material fell into Category A under UK law — the most serious classification. This material depicted rape, the sexual torture of a child, or abuse involving animals.

There is also a gendered nature to this abuse. Of all AI-generated images and videos where the child’s sex was recorded, 98% depicted girls. 

Eight years ago, attendees of the World Congress on Child Dignity in the Digital World launched the Rome Declaration on Child Dignity in the Digital World. It was the first global call to protect children online, uniting faith, science and society around a shared moral purpose, and was presented to Pope Francis in October 2017.

Yet the rapid development of AI tools in less than a decade was unimaginable then, let alone the harm these tools now inflict on children.

In November 2025, leaders from government, academia, technology and faith came together for a high-level meeting on ‘Child Dignity in the Artificial Intelligence Era’. Organised by Fondazione Child ETS in collaboration with the Child Dignity Alliance, the meeting aimed to critically address AI’s impact on the rights of children, placing their wellbeing front and centre. As Professor Ernesto Caffo, Founder of Fondazione Child, said in his opening statement, “Children grow up in a digital ecosystem that knows everything about their behaviour yet nothing about their soul.”

The meeting led to the production of the new Declaration of Child Dignity in the Artificial Intelligence Era, which was subsequently presented to Pope Leo XIV as a sign of united commitment.

Signed by multiple organisations including Childlight - Global Child Safety Institute, this new declaration reaffirms every child’s right to live, grow, and dream in safety. It outlines six pillars for action:

  • Protection and safety: Establish robust safeguards and mechanisms that seek to prevent and respond to online and AI-facilitated abuse, exploitation, and manipulation of children.
  • Education and empowerment: Promote digital and emotional literacy so that children, parents, and educators can navigate technology with awareness, wisdom, and resilience.
  • Ethical and transparent technology: Encourage the development of AI systems grounded in human values — transparency, accountability, fairness, and compassion.
  • Global governance and accountability: Strengthen international cooperation to ensure that laws, standards, and institutions uphold children’s rights across nations and digital platforms.
  • Research and innovation for good: Foster collaboration among academia, civil society, and the private sector to harness AI for child well-being, health, education, and inclusion.
  • Interfaith and international solidarity: Unite voices across generations and beliefs to renew our shared commitment to protect and nurture the humanity of every child.

Participants of the high-level meeting organised by Fondazione Child ETS meet with Pope Leo XIV to present the newly signed Declaration of Child Dignity in the Artificial Intelligence Era

While we focus on the risks posed by AI, we know that it can also be a force for positive innovation – but this is only possible if we all act in good faith and make a conscious effort to do no harm, particularly towards vulnerable groups such as children. As Baroness Joanna Shields OBE, CEO of Precognition and a life peer in the UK House of Lords, stated during the high-level meeting in Rome, “each generation is judged not by its discoveries, but by what it chooses to do with its innovations.”

We must choose to do the right thing now. Thankfully, some are. 

Just last week, the Council of the European Union agreed a position on a regulation to prevent and combat child sexual abuse. Once the law is adopted, digital companies across the European Union will be obligated to prevent the dissemination of CSAM and the solicitation of children, and national authorities will have the power to oblige companies to remove and block access to content and delist search results.

Additionally, the Council agreed to set up a new EU agency – the EU Centre on Child Sexual Abuse – to support the implementation of the regulation.

Now that a position has been reached, the Council can commence negotiations with the European Parliament with a view to agreeing on the final regulation. Europe is on the right track; now it needs to follow through and set a positive example for the world.

Because children can’t wait.

--- 

About the Authors: 

Professor Elena Martellozzo is Director of the European Hub at Childlight, University of Edinburgh, and Professor of Child Sexual Exploitation and Abuse Research. She is a world-leading expert in cybercrime, focusing primarily on online harms and online safety, and is involved in policy debates at the intersection of technology and human behaviour.

Professor Ernesto Caffo is Founder and President of Fondazione S.O.S - Il Telefono Azzurro Onlus and President of Fondazione Child. He is Chair Professor of Child and Adolescent Psychiatry at the University of Modena and has authored several books and articles on child psychiatry, child abuse and children’s rights. Professor Caffo is also a board member of the International Centre for Missing & Exploited Children (ICMEC) and of Missing Children Europe. 


If you have been affected by exploitation or abuse and need support, please visit