Several big tech companies have committed to removing nude images from AI training datasets. The commitment comes in response to the Biden administration's call for action on image-based sexual abuse.
Companies including Adobe, Anthropic, Cohere, Microsoft, and OpenAI have pledged to implement what they call responsible sourcing practices and to incorporate feedback loops that guard against the output of image-based sexual abuse. Specifically, they have pledged to remove nude images from AI training datasets "when appropriate and depending on the purpose of the model."
The White House reports an alarming increase in image-based sexual abuse. Platforms have become breeding grounds for the creation, dissemination, and monetization of deepfakes, alongside a concerning rise in sextortion cases.
The 2023-24 school year saw a wave of deepfake incidents in schools worldwide, with teenage girls the primary targets of explicit content created and shared by their peers.
The AI companies' commitments specifically target the prevention of nonconsensual intimate images of adults and child sexual abuse material.
However, critics argue that voluntary efforts are insufficient to tackle the complex challenges posed by AI-generated sexual content. Mary Anne Franks, president of the Cyber Civil Rights Initiative, emphasized that had tech companies adopted responsible and accountable practices earlier, the current crisis could have been prevented.
Another group of tech giants, including Bumble, Discord, Match Group, Meta, and TikTok, also announced a set of voluntary principles aimed at preventing image-based sexual abuse.