A Pornhub Chatbot Stopped Millions From Searching for Child Abuse Videos
For the past two years, millions of people searching for child abuse videos on Pornhub’s UK website have been interrupted. Each of the 4.4 million times someone has typed in words or phrases linked to abuse, a warning message has blocked the page, saying that kind of content is illegal. And in more than half of those cases, a chatbot has also pointed people to where they can seek help.
The warning message and chatbot were deployed by Pornhub as part of a trial program, conducted with two UK-based child protection organizations, to find out whether people could be nudged away from looking for illegal material with small interventions. A new report analyzing the test, shared exclusively with WIRED, says the pop-ups led to a decrease in the number of searches for child sexual abuse material (CSAM) and prompted scores of people to seek support for their behavior.
“The actual raw numbers of searches, it’s actually quite scary high,” says Joel Scanlan, a senior lecturer at the University of Tasmania, who led the evaluation of the reThink Chatbot. During the multiyear trial, there were 4,400,960 warnings in response to CSAM-linked searches on Pornhub’s UK website—99 percent of all searches during the trial did not trigger a warning. “There’s a significant reduction over the length of the intervention in numbers of searches,” Scanlan says. “So the deterrence messages do work.”
Millions of images and videos of CSAM are found and removed from the web every year. They are shared on social media, traded in private chats, sold on the dark web, or in some cases uploaded to legal pornography websites. Tech companies and porn companies prohibit illegal content on their platforms, although they remove it with varying degrees of effectiveness. Following a damning New York Times report, Pornhub removed around 10 million videos in 2020 in an attempt to eradicate child abuse material and other problematic content from its website.
Pornhub, which is owned by parent company Aylo (formerly MindGeek), uses a list of 34,000 banned terms, across multiple languages and with millions of combinations, to block searches for child abuse material, a company spokesperson says. It is one way Pornhub tries to combat illegal material, the spokesperson says, and is part of the company’s broader user-safety efforts after years of allegations that it has hosted child exploitation and nonconsensual videos. When people in the UK have searched for any of the terms on Pornhub’s list, the warning message and chatbot have appeared.
The chatbot was designed and created by the Internet Watch Foundation (IWF), a nonprofit that removes CSAM from the web, and the Lucy Faithfull Foundation, a charity that works to prevent child sexual abuse. It appeared alongside the warning messages a total of 2.8 million times. The trial counted the number of sessions on Pornhub, which means individual people could be counted multiple times, and it did not attempt to identify individuals. The report says there was a “meaningful decrease” in searches for CSAM on Pornhub, and that the chatbot and warning messages appear, at least “in part,” to have played a role.
Source: https://www.wired.com/story/pornhub-chatbot-csam-help/
Date: February 29, 2024