AI tools are illegally training on photos of real children, including for explicit material

AI’s unstoppable quest for training data is hoovering up growing amounts of increasingly questionable content, including personal details of children whose use in AI training breaks the law, researchers have found.

At least 170 links to photos and personal details of children in Brazil have been scraped from the internet and used to train AI systems without parental consent or knowledge, Human Rights Watch said in a report this week. Some of those AI systems have generated explicit and violent images of children, HRW said.

Brazilian law prohibits the processing of children’s personal data without the consent of a child’s guardian, Hye Jung Han, a children’s technology rights researcher and the author of the report, told Fortune. 

The links to the photos were scraped from personal blogs and social media sites into a large dataset called LAION-5B, which has been used to train popular image generators such as Stable Diffusion. The figure of 170 photos is likely a “significant undercount,” HRW said, since the group reviewed only 0.0001 percent of the 5.8 billion images captured in LAION-5B.

“My wider concern is that this is the tip of the iceberg,” Han told Fortune. “It’s likely that there’s many more children and many more Brazilian children’s images in the data set.”

LAION-5B scraped photos of children dating as far back as 1994, many of which were clearly posted with an expectation of privacy, Han said. One of the photos features a 2-year-old girl meeting her newborn sister, and the photo’s caption includes not only both girls’ names but also the name and address of the hospital where the baby was born.

That kind of information was available in the URLs or the metadata of many of the photos, Han said. Children’s identities are often easily traceable from the photos, either from the caption or through information about their whereabouts when the photo was taken.

Young children dancing in their underwear at home, students giving a presentation at school, and high schoolers at a carnival are only a few examples of the personal photos that were scraped. Many of them were posted on mommy blogs or were screenshots taken from personal family YouTube videos with small view counts, Han said. The photos “span the entirety of childhood,” the report found.

“It’s very likely that these were personal accounts, and [the people who uploaded the images] just wanted these videos shared with family and friends,” Han added. 

All publicly available versions of LAION-5B were taken down last December after a Stanford investigation found that it had scraped images of child sexual abuse. Nate Tyler, a spokesperson for LAION, the nonprofit that runs the dataset, said that the organization is working with the Internet Watch Foundation, the Canadian Centre for Child Protection, Stanford, and Human Rights Watch to remove all known references to illegal content from LAION-5B.

“We are grateful for their support and hope to republish a revised LAION 5B soon,” Tyler said. 

He added that since LAION-5B is built from links to images rather than the images themselves, simply removing the links from the LAION dataset won’t remove any illegal content from the web.

However, the links themselves still contain identifying information about minors, Han said. She told Fortune she has asked LAION to do two things: first, prevent future ingestion of children’s data, and second, regularly remove children’s data from the dataset.

“[LAION] has not responded or committed to either of those things,” Han said. 

Tyler did not directly address this criticism, but underscored the nonprofit’s commitment to addressing the issue of illegal material in the database.

“This is a larger and very concerning issue, and as a nonprofit, volunteer organization, we will do our part to help,” Tyler said.

Much of LAION-5B’s data is sourced from Common Crawl, a data repository that copies swaths of the open internet. However, Common Crawl’s executive director, Rich Skrenta, previously told the Associated Press that it is LAION’s responsibility to filter what it takes before making use of it.

Potential for harm

Once their photos are collected, children face real threats to their privacy, Han said. AI models, including those trained on LAION-5B data, have notoriously regurgitated private information – such as medical records or personal photographs – when prompted.

AI models can now generate convincing clones of a child from just one or two images, according to the report.

“It is pretty safe to say that the photos that I found absolutely contributed to the model being able to produce realistic images of Brazilian kids, including sexually explicit imagery,” Han said. 

More maliciously, some users have turned to text-to-image AI sites to generate child pornography. One such site, Civitai, trains its models on LAION-5B and is overrun by requests for explicit content: 60% of images generated on the platform are considered lewd. Some users asked for and were provided with images matching prompts such as “very young girl” and “sex with dog,” an investigation from 404 Media, a tech journalism company, found.

Civitai, upon request, even generated lewd images of girls that specifically did not look “adult, old” or “have big breasts,” 404 Media revealed.

After the investigation was released, Civitai’s cloud computing provider, OctoML, dropped its partnership with the company. Civitai now includes an NSFW filter, much to the dismay of some users, who said the platform will now be like “any other,” according to 404 Media.

A spokesperson for Civitai told Fortune that it immediately bans anyone who produces NSFW content involving minors, and that it has introduced a “semi-permeable membrane,” referring to the filter that blocks inappropriate content.

Deepfake technology has already begun to affect young girls, Han said. At least 85 Brazilian girls have faced harassment from classmates who used AI to create sexually explicit deepfakes of them, based on photos taken from their social media profiles, according to the report. Han said she started investigating the topic because of the consistency and realism of these deepfakes.

“I started looking at what was it about this technology that was able to produce such realistic imagery, horrific imagery, of Brazilian kids, and that investigation led me to the training data set,” Han added. 

The U.S. has seen a number of similar incidents. At least two high schools have faced scandals in which boys generated deepfake nude images of dozens of their female classmates.

Some states, including Florida, Louisiana, South Dakota, and Washington, have begun banning the creation of deepfake nudes of minors, and other states are considering similar bills. However, Han thinks lawmakers should go further and block children’s data from being scraped into AI systems altogether, as a way to “futureproof” against the technology.

“The burden of responsibility should not be placed on children and parents to try and protect kids from a technology that’s fundamentally impossible to protect against,” Han said. “Parents should be able to post photos of their kids to share with families and friends and not have to live in the fear that those photos might one day be weaponized and used against them.”

Source : https://fortune.com/2024/06/11/ai-models-training-real-children-explicit-materials-brazil/

Date : 2024-06-11 21:35:00
