April 2025 / Reading Time: 3 minutes

STUDY F: Legal challenges in tackling AI-generated child sexual abuse material within Australia and New Zealand - REPORT

This report critically reviews the regulatory frameworks of Australia and New Zealand concerning accountability for child sexual abuse material (CSAM) created using generative artificial intelligence (gen-AI).

In Australia, these issues were examined at both the national and the state and territory levels, whereas in New Zealand they were examined only at the national level, as no relevant legislation exists on a devolved basis. Australia enacted the Online Safety Act 2021 (OSA), which imposes duties on online service providers to protect Australians online, particularly children and vulnerable adults.

Australia's eSafety Commissioner, established in 2015 (then the “Office of the Children’s eSafety Commissioner”), serves as an independent regulator for online safety, with powers to require the removal of unlawful and seriously harmful material, implement systemic regulatory schemes and educate people about online safety risks. Under the OSA, enforceable codes and standards are currently in force which apply to AI-generated CSAM, with civil penalties for services that fail to comply; in particular, the “Designated Internet Service Standard” applies to generative AI services as well as model distribution services.

The Australian Government has also recently consulted on the introduction of mandatory guardrails for AI in high-risk settings, including a proposed guardrail ensuring that generative AI training data does not contain CSAM. No such legislation exists in New Zealand, although there are ongoing discussions and law reform proposals around the potential introduction of similar legislation there. In both Australia and New Zealand, the existing definitions of CSAM (or similar terminology) used in criminal legislation are broad enough to capture AI-generated CSAM.

As a result, and despite the limited case law owing to the novelty of gen-AI technologies, sentencing decisions have emerged in the Australian states of Victoria and Tasmania involving offenders who produced gen-AI CSAM. In New Zealand, no cases have yet been identified in which offenders have been sentenced for offences involving AI-generated CSAM; however, press reports suggest that offenders have been charged in relation to such material. In addition, there are reports of the New Zealand Customs Service seizing gen-AI CSAM, suggesting that it considers itself to have jurisdiction to do so.

No cases have been identified in Australia or New Zealand in which AI software creators, or holders of datasets used to train AI, have been held criminally liable in relation to the production of CSAM using their platforms, or have faced any other such charges. In New Zealand, certain pieces of legislation (e.g. the Crimes Act 1961 and the Harmful Digital Communications Act 2015) do not appear to apply to gen-AI CSAM that portrays purely fictitious children.

This is to be expected to an extent, as both statutes require harm to be inflicted upon an identifiable natural person, which is not the case where AI-generated CSAM depicts purely fictitious children. Neither Australia nor New Zealand has pending reforms to extend criminal accountability for gen-AI CSAM to AI software creators and dataset holders. Given that the definitions of CSAM in existing criminal legislation appear broad enough to capture AI-generated material, this is not surprising.

