Tech companies pledge to combat production of AI-generated child sexual abuse images


Creating and distributing AI-generated images of child sexual abuse must become much more difficult. At the urging of the American advocacy groups All Tech Is Human and Thorn, a child-protection organization, Amazon, Google, Meta, OpenAI and Microsoft, among others, have pledged to take concrete action.

Robbert Hoving of the Dutch Child Pornography Reporting Center welcomes the initiative, but notes that damage has already been done: “The police also recently warned about the increase in AI child pornography. We are not receiving many reports yet, but we are holding our breath.”

Training materials

The leading makers of AI tools now promise to handle the datasets used to train their models more carefully. If images of child sexual abuse accidentally end up in that training material, the image-generation models become capable of creating child pornography.

Such a scenario is not hypothetical. Late last year, research by the Stanford Internet Observatory showed that LAION-5B, a well-known AI training dataset of more than five billion images, contained at least a thousand instances of child abuse imagery. AI companies can use LAION free of charge to train their models. The best-known example is Stable Diffusion, whose maker, Stability AI, has also signed the agreements.

Blocking

All major American AI companies and France’s Mistral AI are participating in the new agreements. At the same time, the web is teeming with sites and apps that generate and publish AI images of sexual abuse, including of children. That is why Thorn and All Tech Is Human also want search engines and social media platforms to block links to these kinds of sites.

Even legal pornography can be a problem, says Thorn, because AI models can combine different sources and fragments into something new, or artificially make the faces of adults look younger.

“Generative AI makes creating large amounts of material easier than ever before,” Thorn writes, and that includes child sexual abuse material. Offenders can rework original abuse images and videos into new material, revictimizing the child.

Manipulation

Another application, at least as problematic, is the use of AI to manipulate innocent images of children into sexualized content. This is not confined to shady internet forums: The New York Times reported earlier this month on the large number of AI-generated nude images of teenage girls, created by their own schoolmates.

Fully synthetic photos and videos depicting child abuse, showing people who do not exist, are also a growing problem. Such images are flooding internet forums and private messaging apps, according to research published this week by the Stanford Internet Observatory.

The resulting flood of reports of online child abuse puts extra pressure on investigative authorities, crowding out reports of physical child abuse.

The tech companies’ voluntary measures come as the US Congress prepares a series of bills to better protect children online. The Kids Online Safety Act, for example, would impose strict requirements on tech companies.
