May 21, 2024 · 3 min read
OpenAI Takes Action: New Team Formed to Address Child Safety Concerns
Learn how OpenAI is prioritizing child safety. With a newly established dedicated team, the company is actively addressing concerns and working toward a safer environment for young users.

Facing increased scrutiny from activists and parents, OpenAI has launched a dedicated Child Safety team to explore measures that prevent the misuse or abuse of its AI tools by children. The move comes amid growing awareness of the risks associated with children accessing AI-generated content.
A recent job listing on OpenAI's career page unveiled the formation of the Child Safety team, tasked with collaborating with platform policy, legal, and investigations groups both within and outside the organization. The team's primary focus is on managing processes, incidents, and reviews related to underage users.
Currently, OpenAI is in search of a child safety enforcement specialist who will be responsible for implementing the company's policies regarding AI-generated content, particularly in the context of sensitive material relevant to children.
As technology companies face increasing regulatory scrutiny, compliance with laws like the U.S. Children's Online Privacy Protection Rule becomes paramount. OpenAI's decision to bolster its child safety efforts aligns with industry standards and expectations, especially considering its potential future user base among minors. (OpenAI's current terms of use mandate parental consent for children aged 13 to 18 and prohibit use for those under 13.)
The establishment of the Child Safety team follows OpenAI's recent collaboration with Common Sense Media to develop guidelines for kid-friendly AI usage. Additionally, the company's partnership with its first education customer underscores its commitment to addressing the needs and concerns of young users.
The rise of AI tools as resources for academic and personal purposes among children and teenagers has raised concerns about potential risks. Instances of using AI tools like ChatGPT to cope with mental health issues or social conflicts have become increasingly common. However, this trend has also prompted apprehension about the potential negative impacts, including plagiarism and misinformation.
A poll conducted by the Center for Democracy and Technology found that 29% of children had used ChatGPT to cope with anxiety or mental health problems, while 22% had turned to it for issues with friends and 16% for managing family conflicts.
In response to these concerns, OpenAI has provided documentation and guidance for educators on utilizing ChatGPT responsibly in classroom settings. Acknowledging the potential for inappropriate content generation, the company advises caution when exposing children, even those meeting age requirements, to AI tools.
The call for comprehensive guidelines on the use of AI among children has gained traction, with organizations like UNESCO advocating for government regulations to safeguard against potential harm. While recognizing the potential benefits of AI in education, UNESCO emphasizes the need for public engagement and regulatory frameworks to ensure responsible and safe usage.
As OpenAI and other tech companies navigate the evolving landscape of AI usage, addressing child safety concerns remains a critical priority in fostering a safer, more responsible digital environment for young users.