New regulations for AI companions in Australia
- Henry Fraser
- Sep 9

The e-Safety Commissioner Julie Inman Grant today registered an industry code for AI companion chatbots and generative AI systems. Drafted by industry and approved by the Commissioner, the code will require providers of these systems to implement risk assessment, mitigation, and reporting measures to address the risk of children being served harmful or inappropriate content. The code will come into effect in six months and will be enforceable with penalties of up to $9.9 million. It’s one of six industry codes registered today that apply to a range of online service providers.
This code applies to 'designated internet services', a miscellaneous category of internet services that don’t fit into better-defined categories like search engines and social media. Along with generative AI models and chatbots, the code will also apply to porn websites, gore websites, pro-suicide websites, and end-user-managed hosting sites like Dropbox. Interestingly, it also singles out AI model distribution platforms, requiring them to use their position as gatekeepers to enforce the rules against AI model providers.
World-leading regulation
This code is one of the first pieces of regulation in the world to directly and clearly set out child safety requirements for generative AI systems, including AI companion chatbots, which are singled out in the definition of ‘high impact AI designated services’. These services may previously have been indirectly captured by e-Safety regulation dealing with extreme content such as child sexual abuse material, but the code marks a shift toward more technology-specific regulations dealing with the potential for generative AI to serve vulnerable users (children) with harmful outputs.
Both providers of underlying AI models and providers of user-facing chatbot applications will be captured, and will therefore need to implement risk assessments, mitigations and reporting in relation to serving children restricted categories of information such as pornography and self-harm material. When you add to that the regulation of model distribution platforms, the code reaches several steps up the AI value chain.
Risk assessment
Where genAI systems, including companion chatbots, are capable of producing content in a restricted category (porn, violence, self-harm, etc.), the code requires providers to do risk assessments. The provider must assess whether there is a high (tier 1), medium (tier 2) or low (tier 3) risk that their chatbot will generate:
• porn,
• sexually explicit material,
• self-harm material,
• high impact violence material,
• violence instruction,
or will give children access to porn or self-harm material from another online source.
Risk assessments must be planned and recorded in writing, and carried out by a person with relevant skills, experience and expertise. The burden is on the provider of the chatbot to show that their risk assessment methodology is based on reasonable criteria. The code states that risk assessments should consider, among other things, the ages of end users, the number of child users, and the likelihood that a child will generate, access or be exposed to porn or self-harm material. It explicitly requires consideration of design features and controls that are deployed to mitigate these risks. And it requires risk assessments to take into account safety-by-design guidance issued by government.
Risk mitigation
Where there is a high risk (tier 1) that a genAI system or chatbot will generate a restricted category of content (like porn, self-harm material etc.), providers are required to implement age assurance and access control measures, and to test and monitor the effectiveness of these measures over time. If the risk is medium (tier 2), there are mandatory safety-by-design requirements, with an option of age-gating or other measures to reduce the risk of children being served inappropriate content.
Reporting and other measures
For tier 1 and tier 2 risks, providers must give clear information, and enforce clear terms and conditions, about whether the system is permitted to be used to produce restricted categories of material. Other requirements include:
• Personnel to oversee trust and safety
• Timely referral of complaints to e-Safety
• Timely response to communications from e-Safety
• Updates to e-Safety
• Reporting on compliance to e-Safety.
Model distribution platforms
The inclusion of model distribution platforms in the code represents a significant step in regulating the ‘value chain’ of AI. These are online platforms such as Hugging Face and GitHub where users can download open source generative AI models, as well as cloud AI marketplaces like Microsoft Azure. These platforms are obliged to adopt and enforce policies or terms of use that require providers to comply with applicable Australian laws and regulations, including the industry code. They have to create mechanisms for end-users to complain to them about models available on the platform, and they have to report to e-Safety on their own compliance. In other words, the code enrols these platforms as regulatory intermediaries, recognising their capacity to perform a watchdog or gatekeeper function.
Future directions
It’s hard to overstate what a significant shift this is in the regulation of generative AI and chatbots in Australia. There has of course been significant controversy about the enforceability of age restrictions for online services, including privacy concerns about age verification. But for those who pay attention to online harms, it does make sense to put AI chatbots on a similar footing to other online services, given how easily these systems serve children with very disturbing and inappropriate content, and given the number of recent reports of chatbot-mediated self-harm, psychosis and violence.
While this code is a very important step in establishing more responsible and safe practice among providers of generative AI tools and chatbots, there are still some important areas where further regulation may be necessary. The code, understandably given the e-Safety Commissioner's remit, focuses on specific categories of harmful content. It does not address risks such as chatbot addiction and dependency (by design), or the particular vulnerability of adults suffering mental ill health to chatbot-mediated delusions and self-harm content. Nonetheless, it is a welcome signal - and more than just a signal - from the Commissioner that irresponsible and lawless deployments of powerful AI technologies are not acceptable.