Several popular AI products flagged as unsafe for kids by Common Sense Media

An independent review of popular AI tools has found that many, including Snapchat’s My AI, DALL-E, and Stable Diffusion, may not be safe for kids. The new reviews come from Common Sense Media, a nonprofit advocacy group for families that’s best known for providing media ratings for parents who want to evaluate the apps, games, podcasts, TV shows, movies, and books their children are consuming. Earlier this year, the organization said it would soon add ratings for AI products to its resources for families. Today, those ratings went live, offering so-called “nutrition labels” for AI products like chatbots, image generators, and more.

The organization first announced in July that it aimed to build a ratings system assessing AI products across a number of dimensions, including whether the technology follows responsible AI practices and whether it is suitable for children. The move was prompted by a survey of parents gauging interest in such a service: 82% of parents said they wanted help evaluating whether new AI products, like ChatGPT, were safe for their kids to use, but only 40% said they knew of any reliable resources that would help them make those determinations.

That led to today’s launch of Common Sense Media’s first AI product ratings. The products it assesses are rated across several AI principles, including trust, kids’ safety, privacy, transparency, accountability, learning, fairness, social connections, and benefits to people and society.

The organization initially reviewed 10 popular apps on a 5-point scale, including apps used for learning, AI chatbots like Bard and ChatGPT, and generative AI products like Snap’s My AI and DALL-E. Not surprisingly, the latter category fared the worst.

“AI is not always correct, nor is it values-neutral,” remarked Tracy Pizzo-Frey, Senior Advisor of AI at Common Sense Media, in a summary of the ratings. “All generative AI, by virtue of the fact that the models are trained on massive amounts of internet data, host a wide variety of cultural, racial, socioeconomic, historical, and gender biases – and that is exactly what we found in our evaluations,” she said. “We hope our ratings will encourage more developers to build protections that limit misinformation from spreading and do their part to shield future generations from unintended repercussions.”

In TechCrunch’s own tests, reporter Amanda Silberling found Snapchat’s My AI generative AI features generally tended to be more weird and random than actively harmful, but Common Sense Media gave the AI chatbot a 2-star rating, noting that it produced some responses that reinforced unfair biases around ageism, sexism, and cultural stereotypes, along with occasional inappropriate responses and inaccuracies. It also stored personal user data, which the organization said raised privacy concerns.

Snap pushed back on the poor review, noting that My AI is an optional tool and that Snapchat makes clear it is a chatbot and advises users about its limitations.

“By default, My AI displays a robot emoji. Before anyone can interact with My AI, we show an in-app message to make clear it’s a chatbot and advise on its limitations,” said Snap spokesperson, Maggie Cherneff. “My AI is also integrated into our Family Center so parents can see if and when teens are chatting with it. We appreciate the feedback in the review as we continue to improve our product,” she added.

Other generative AI models, like DALL-E and Stable Diffusion, carried similar risks, including a tendency toward objectification and sexualization of women and girls and a reinforcement of gender stereotypes, among other concerns. (Requests for comment were not immediately returned.)

Like any new medium on the internet, these generative AI models are also being used to produce pornographic materials. Sites like Hugging Face and Civitai have grown popular not only as resources for finding new image models, but also for making it easier to find models that can be combined with one another to make porn using someone’s likeness (like a celebrity’s). That issue came to a head this week, as 404 Media called out Civitai’s capabilities, though the debate over who bears responsibility, the community aggregators or the AI models themselves, continued on sites like Hacker News in the aftermath.

In the mid-tier of Common Sense’s ratings were AI chatbots like Google’s Bard (which just yesterday officially opened to teens), ChatGPT, and Toddle AI. The organization warned that bias may occur in these bots as well, particularly for users with “diverse backgrounds and dialects.” They could also produce inaccurate information, or AI hallucinations, and reinforce stereotypes. Common Sense warned that the false information AI produces could shape users’ worldviews and make it even more difficult to separate fact from fiction.

The only AI products to receive good reviews were Ello’s AI reading tutor and book delivery service, Khanmigo (from Khan Academy), and Kyron Learning’s AI tutor, all three of which are designed for educational purposes. They’re less well-known than the others (and, as some kids may argue, less fun). Still, because the companies designed them with kids’ usage in mind, they tended to follow responsible AI practices and focus on fairness, diverse representation, and kid-friendly design considerations. They were also more transparent about their data privacy policies.

Common Sense Media says it will continue to publish ratings and reviews of new AI products on a rolling basis, which it hopes will help inform not only parents and families, but also lawmakers and regulators.

“Consumers must have access to a clear nutrition label for AI products that could compromise the safety and privacy of all Americans—but especially children and teens,” said James P. Steyer, founder and CEO of Common Sense Media, in a statement. “By learning what the product is, how it works, its ethical risks, limitations, and misuses, lawmakers, educators, and the general public can understand what responsible AI looks like. If the government fails to ‘childproof’ AI, tech companies will take advantage of this unregulated, freewheeling atmosphere at the expense of our data privacy, well-being, and democracy at large,” he added.
