Big News: Leading AI firms pledge to uphold child safety principles amid deepfake concerns

Following a string of highly publicized incidents involving deepfakes and child sexual abuse material (CSAM) that have plagued the artificial intelligence sector, major AI companies have banded together to curb the spread of AI-generated CSAM.

Thorn, a charity that develops technology to combat child sexual abuse, said Tuesday that Meta, Google, Microsoft, CivitAI, Stability AI, Amazon, OpenAI, and numerous other firms had agreed to new guidelines the group developed to address the issue.


At least five of the companies have already responded to allegations that their products and services were used to facilitate the creation and distribution of sexually explicit deepfakes depicting children.

AI-generated CSAM and deepfakes have become a contentious subject in Congress and elsewhere, with reports describing cases of teenage girls being targeted at school with AI-generated sexually explicit images made from their likenesses.

NBC News previously revealed that sexually explicit deepfakes featuring real children’s faces appeared in the top search results for terms such as “fake nudes” on Microsoft’s Bing, as well as in Google search results for specific female celebrities and the word “deepfakes.” NBC News also discovered an ad campaign running on Meta platforms in March 2024 for a deepfake app that claimed to “undress” a 16-year-old actress.

The new “Safety by Design” principles, which the companies agreed to incorporate into their technology and products, include practices that a number of them have already struggled with.

One principle calls for developing technology that lets companies determine whether an image was created by AI. Many early implementations take the form of watermarks, which are typically easy to remove.
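To make the fragility point concrete, here is a minimal sketch, using Python’s Pillow library, of the simplest form such labeling can take: a provenance tag stored in image metadata. The tag name, file names, and detector are hypothetical illustrations, not any company’s actual scheme.

```python
# Minimal sketch (illustration only, not any company's actual scheme) of the
# simplest kind of AI-provenance mark: a text tag stored in image metadata.
# Assumes the Pillow library; the tag and file names are hypothetical.
from PIL import Image, PngImagePlugin

# Stand-in for an AI-generated image, tagged with a hypothetical provenance label.
image = Image.new("RGB", (256, 256), color="gray")
metadata = PngImagePlugin.PngInfo()
metadata.add_text("ai_provenance", "generated-by-example-model")
image.save("tagged.png", pnginfo=metadata)

def looks_ai_generated(path: str) -> bool:
    """Detector that only checks for the hypothetical metadata tag."""
    return "ai_provenance" in Image.open(path).info

print(looks_ai_generated("tagged.png"))  # True
# Fragility: metadata like this does not survive a routine re-encode
# (for example, re-saving the file in another format), which is why
# marks of this kind are described as easy to remove; more robust
# watermarks embed the signal in the pixels themselves.
```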


Another guideline is that CSAM will not be used in training datasets for artificial intelligence models.

In December 2023, Stanford researchers uncovered more than 1,000 images of child sexual abuse in a prominent open-source dataset of images used to train Stability AI’s Stable Diffusion 1.5, a version of one of the most popular AI image generators. The dataset, which Stability AI neither developed nor controlled, was taken down at the time.

Stability AI told NBC News that its models were trained on a “filtered subset” of the dataset in which the child sexual abuse images were found.

“In addition, we subsequently fine-tuned these models to mitigate residual behaviors,” the company said in a statement.

Thorn’s new guidelines also say that companies should release models only after they have been reviewed for child safety, that they should host their models responsibly, and that they should provide assurances that their models will not be used for abuse.

It is unclear how the different companies will apply the criteria, and some have already faced significant criticism for how their products operate and the communities they serve.

CivitAI, for example, provides a marketplace where anyone can commission “bounties,” or deepfakes, of real or fictional people.


At the time of publication, the “bounties” page had received numerous requests for deepfakes of prominent women, some of which sought sexually explicit results. CivitAI states that “content depicting or intended to depict real individuals or minors (under 18) in a mature context” is forbidden. Some of CivitAI’s pages showcasing AI models, AI-generated images, and AI-generated videos contained sexually explicit depictions of young females.


In its announcement of the new “Safety by Design” standards, Thorn also acknowledged the structural burden that AI places on an already beleaguered law enforcement sector. According to a report released Monday by the Stanford Internet Observatory, only 5% to 8% of reports to the National Center for Missing and Exploited Children about child sexual abuse imagery result in arrests, and AI opens the door to a flood of new, AI-generated child sexual abuse content.

Thorn develops technology used by tech companies and law enforcement to detect child exploitation and sex trafficking. Its technology and work have been praised by tech companies, many of which have collaborated with the group to integrate its tools into their platforms.

Thorn has also faced criticism for its work with law enforcement. One of its key products collects online sex solicitations and makes them available to authorities, a practice that, according to Forbes, has sparked concern among countertrafficking experts and sex workers.
