Anthropic now lets kids use its AI tech — within limits | TechCrunch

AI startup Anthropic is changing its policies to allow minors to use its generative AI systems — at least under certain circumstances.

Announced in a post on the company's official blog on Friday, Anthropic will begin letting teens and tweens use third-party apps (but not its own apps) powered by its AI models, so long as the developers of those apps implement specific safety features and disclose to users which Anthropic technologies they're leveraging.

In a support article, Anthropic lists a number of safety measures that developers building AI-powered apps for minors should take, including age verification systems, content moderation and filtering, and educational resources on "safe and responsible" AI use for minors. The company also says it may make available "technical measures" intended to tailor AI product experiences to minors, such as a "child-safety system prompt" that developers targeting minors would be required to implement.

Developers using Anthropic's AI models must also comply with "applicable" child safety and data privacy regulations such as the Children's Online Privacy Protection Act (COPPA), the U.S. federal law that protects the online privacy of children under the age of 13. Anthropic says it plans to "periodically" audit apps for compliance, suspending or terminating the accounts of developers who repeatedly violate the requirements, and will mandate that developers "clearly state" on public-facing sites or in documentation that they're in compliance.

"There are certain use cases where AI tools can offer significant benefits to younger users, such as test preparation or tutoring support," Anthropic writes in the post. "With this in mind, our updated policy allows organizations to incorporate our API into their products for minors."

Anthropic's change in policy comes as kids and teens are turning to generative AI tools not only for help with schoolwork but also with personal issues, and as rival generative AI vendors, including Google and OpenAI, explore more use cases aimed at children. This year, OpenAI formed a new team to study child safety and announced a partnership with Common Sense Media to collaborate on kid-friendly AI guidelines. And Google made its chatbot Bard, since rebranded as Gemini, available to teens in English in select regions.

According to a poll from the Center for Democracy and Technology, 29% of kids report having used generative AI like OpenAI's ChatGPT to deal with anxiety or mental health issues, 22% for issues with friends and 16% for family conflicts.

Last summer, schools and colleges rushed to ban generative AI apps, in particular ChatGPT, over fears of plagiarism and misinformation. Since then, some have reversed their bans. But not everyone is convinced of generative AI's potential for good, pointing to surveys like one from the U.K.'s Safer Internet Centre, which found that over half of kids (53%) report having seen people their age use generative AI in a negative way, for example by creating believable false information or images used to upset someone (including pornographic deepfakes).

Calls for guidelines on kids' use of generative AI are growing.

Late last year, the United Nations Educational, Scientific and Cultural Organization (UNESCO) pushed for governments to regulate the use of generative AI in education, including implementing age limits for users and guardrails on data protection and user privacy. "Generative AI can be a tremendous opportunity for human development, but it can also cause harm and prejudice," Audrey Azoulay, UNESCO's director-general, said in a press release. "It cannot be integrated into education without public engagement and the necessary safeguards and regulations from governments."
