The AI research community acts against child abuse images.
Reading
Artificial intelligence researchers have removed over 2,000 web links to inappropriate images of children from an AI training dataset. The dataset, called LAION, has been widely used to train image-generator AI tools such as Stable Diffusion and Midjourney.
Last year, a study found that the dataset contained links to explicit images of children. That finding prompted LAION to take down its dataset and later release a cleaned-up version.
To clean up the dataset, LAION worked with researchers at Stanford University and anti-abuse groups in Canada and the United Kingdom. David Thiel of Stanford said more still needs to be done, such as withdrawing AI models that can still create harmful images.
Some problematic AI models remained available until recently. For instance, Runway ML only recently removed an older version of Stable Diffusion that had been cited for producing explicit content.
Governments around the world are increasingly scrutinizing how technology is misused to create or share illegal images. Legal action is also being taken, such as a recent lawsuit in San Francisco against websites that promote AI-generated explicit content.
Questions
What did researchers remove from the AI training dataset?
Researchers removed over 2,000 web links to inappropriate images of children.
What prompted LAION to delete its dataset?
A study reported that the dataset included links to explicit images of children.
What are governments doing about the misuse of tech for illegal images?
Governments are scrutinizing the misuse and taking legal actions, like lawsuits against websites promoting AI-generated explicit content.
Discuss
Discuss this article with your AI tutor to practice. Your tutor will help you with vocabulary and grammar.
Discuss it with AI at lingolette.com