“Video Unavailable”: Social Media Platforms Remove Evidence of War Crimes
In recent years, social media platforms have been taking down online content more often and more quickly, often in response to the demands of governments, but in a way that prevents the use of that content to investigate people suspected of involvement in serious crimes, including war crimes. While it is understandable that these platforms remove content that incites or promotes violence, they are not currently archiving this material in a manner that is accessible for investigators and researchers to help hold perpetrators to account.
Social media content, particularly photographs and videos, posted by perpetrators, victims, and witnesses to abuses, as well as others, has become increasingly central to some prosecutions of war crimes and other international crimes, including at the International Criminal Court (ICC) and in national proceedings in Europe. This content also helps media and civil society document atrocities and other abuses, such as chemical weapons attacks in Syria, a security force crackdown in Sudan, and police abuse in the United States.
Yet social media companies have ramped up efforts to permanently remove posts from their platforms that they consider to violate their rules, community guidelines, or standards under their terms of service, including content they consider to be “terrorist and violent extremist content” (TVEC), hate speech, organized hate, hateful conduct, and violent threats. According to the companies, they no longer only take down material that human content moderators have classified for removal; increasingly, they also use algorithms to identify and remove content so quickly that no user sees it before it is taken down. In addition, some platforms use filters to prevent content identified as TVEC and other relevant content from being uploaded in the first place. Governments globally have encouraged this trend, calling on companies to take down content as quickly as possible, particularly since March 2019, when a gunman livestreamed his attack on two mosques in Christchurch, New Zealand that killed 51 people and injured 49 others.
Companies are right to promptly remove content that could incite violence, otherwise harm individuals, or jeopardize national security or public order. But the social media companies have failed to set up mechanisms to ensure that the content they take down is preserved, archived, and made available to international criminal investigators. In most countries, national law enforcement officials can compel the companies to hand over the content through the use of warrants, subpoenas, and court orders, but international investigators have limited ability to access the content because they lack law enforcement powers and standing.
Law enforcement officers and others are also likely to be missing important information and evidence that would have traditionally been in the public domain because increasingly sophisticated artificial intelligence systems are taking down content before any of them have a chance to see it or even know that it exists. There is no way of knowing how much potential evidence of serious crimes is disappearing without anyone’s knowledge.
Independent civil society organizations and journalists have played a vital role in documenting atrocities in Iraq, Myanmar, Syria, Yemen, Sudan, the United States, and elsewhere – often when there were no judicial actors conducting investigations. In some cases, the documentation of organizations and the media has later triggered judicial proceedings. However, they also have no ability to access removed content. Access to this content by members of the public should be subject to careful consideration, and removal may be appropriate in some cases. But when the content is permanently removed and investigators have no way of accessing it, this could hamper important accountability efforts.
Companies have responded to some civil society requests for access to content either by reconsidering its takedown and reposting it, or by saying that it is illegal for them to share the content with anyone. Human Rights Watch is not aware of any instances where companies have agreed to provide independent civil society and journalists access to such content if it was not reposted.
It is unclear how long the social media companies save content that they remove from their platforms before deleting it from their servers, or even whether the content is, in fact, ever deleted from their servers. Facebook states that, upon receipt of a valid request, it will preserve the content for 90 days following its removal, “pending our receipt of [a] formal legal process.” Human Rights Watch knows, however, of instances in which Facebook has retained taken-down content on its servers for periods much longer than 90 days after removal. In an email to Human Rights Watch on August 13, 2020, a Facebook representative said, “Due to legislative restrictions on data retention we are only permitted to hold content for a certain amount of time before we delete it from our servers. This time limit varies depending on the abuse type… retention of this data for any additional period can be requested via a law enforcement preservation request.”
In an email to Human Rights Watch on August 4, 2020, Twitter said it “retains different types of information for different lengths of time, and in accordance with our Terms of Service and Privacy Policy.” In at least one instance that Human Rights Watch is aware of, YouTube restored content two years after it had taken it down.
Holding individuals accountable for serious crimes may help deter future violations and promote respect for the rule of law. Criminal justice also assists in restoring dignity to victims by acknowledging their suffering and helping to create a historical record that protects against revisionism by those who will seek to deny that atrocities occurred.
However, both nationally and internationally, victims of serious crimes often face an uphill battle when seeking accountability, especially during situations of ongoing conflict. Criminal investigations sometimes begin years after the alleged abuses were committed. It is likely that by the time these investigations occur, social media content with evidentiary value will have been taken down long before, making the proper preservation of this content, in line with standards that would be accepted in court, all the more important.
International law obligates countries to prosecute genocide, crimes against humanity, and war crimes. In line with a group of civil society organizations that have been engaging with social media companies since 2017 on improving transparency and accountability around content takedowns, Human Rights Watch urges all stakeholders, including social media platforms, to engage in a consultation to develop a mechanism to preserve potential evidence of serious crimes and ensure it is available to support national and international investigations, as well as documentation efforts by civil society organizations, journalists, and academics.
The mechanism in the US to preserve potential evidence of child sexual exploitation posted online provides important lessons for how such a mechanism could work. US-registered companies operating social media platforms are required to take down content that shows child sexual exploitation, but also preserve it on their platforms for 90 days and share a copy of the content, as well as all relevant metadata—for example, the name of the content’s author, the date it was created, and the location—and user data, with the National Center for Missing and Exploited Children (NCMEC). The NCMEC, a private nonprofit organization, has a federally designated legal right to possess such material indefinitely, and, in turn, notifies law enforcement locally and internationally about relevant content that could support prosecutions.
A mechanism to preserve publicly posted content that is potential evidence of serious crimes could be established through collaboration with an independent organization that would be responsible for storing the material and sharing it with relevant actors. An upcoming report from the Human Rights Center at the University of California, Berkeley, “Digital Lockers: Options for Archiving Social Media Evidence of Atrocity Crimes,” examines possible archiving models, creating a typology of five and assessing the strengths and weaknesses of each.
In parallel with these efforts, social media platforms should be more transparent about their existing takedown mechanisms, including through the increased use of algorithms, and work to ensure that they are not overly broad or biased and provide meaningful opportunities to appeal content takedowns.
In letters to Facebook, Twitter, and Google sent in May 2020, Human Rights Watch shared links to taken-down content that it had cited in its reporting and asked the companies whether it could regain access for archival purposes. Human Rights Watch also asked a series of other questions related to how the companies remove content. The full response from Twitter is included as an Annex in this report. At the time of writing, Human Rights Watch had not received a response from Google, and had received from Facebook only a brief email that did not address most of the questions raised in the letter.
However, given the quantity of content that could be flagged, including potentially hundreds of thousands of reposts, platforms announced a new approach involving the use of machine-learning systems. YouTube, which is owned by Google, said in August 2017 that it was implementing “cutting-edge machine-learning technology” designed to identify and remove content it classified as TVEC. The new system has yielded results: YouTube took 6,111,008 videos offline between January and March 2020 for violating its Community Guidelines, according to the most recent transparency report available at the time of writing. The company removed 11.4 percent of the content because it was “violent or graphic,” 1.8 percent because it was “hateful and abusive,” 4.2 percent because it was a “promotion of violence and violent extremism,” and 37 percent because it was “spam, misleading, or scams.” Automated systems flagged 93.4 percent of all the content that the platform took down. Of this, 49.9 percent was taken down before any user saw it, the report said.
Until recently, YouTube said that it removed content flagged by its automated systems as TVEC only after human reviewers confirmed that it fit the company’s definitions of “terrorist” or “violent extremist” material. However, YouTube announced on March 16, 2020, that in response to the Covid-19 pandemic, it “will temporarily start relying more on technology to help with some of the work normally done by reviewers. This means automated systems will start removing some content without human review.”
During that same time period, between January and March 2020, Facebook took down 6.3 million pieces of “terrorist propaganda,” 25.5 million pieces of “graphic violence,” 9.6 million pieces of “hate speech,” and 4.7 million pieces of “organized hate” content, and disabled 1.7 billion “fake accounts.” 99.3 percent, 99 percent, 88.8 percent, 96.7 percent, and 99.7 percent of this content, respectively, was automatically flagged before users reported it. During that period, the company said it was able to remove most content it considered to be terrorist before users saw it. Users appealed takedowns for 180,100 pieces of “terrorist propaganda” content, 479,700 pieces of “graphic violence” content, 1.3 million pieces of “hate speech” content, and 232,900 pieces of “organized hate” content. Upon appeal, Facebook restored access to 22,900 pieces of “terrorist propaganda” content, 119,500 pieces of “graphic violence” content, 63,600 pieces of “hate speech” content, and 57,300 pieces of “organized hate” content. Facebook reported that it also restored content that had been taken down without any appeal in 199,200 cases involving “terrorist propaganda,” 1,500 cases involving “graphic violence,” 1,100 cases involving “hate speech,” and 11,300 cases involving “organized hate” content.
Between January and June 2019, 5,202,281 Twitter accounts were reported for “hateful conduct” and 2,004,344 Twitter accounts were reported for “violent threats.” Upon receiving these reports, Twitter “actioned” 584,429 accounts for “hateful conduct” and 56,219 accounts for “violent threats.” The email from Twitter’s Public Policy Strategy and Development Director outlined a slightly different approach to identifying content to take down, based on the behavior of the account putting out the content rather than a review of the substance of the content:
Twitter’s philosophy is to take a behavior-led approach, utilizing a combination of machine learning and human review to prioritize reports and improve the health of the public conversation. That is to say, we increasingly look at how accounts behave before we look at the content they are posting. Twitter also employs content detection technology to identify potentially abusive content on the service, along with allowing users to report content. This is how we seek to scale our efforts globally and leverage technology even where the language used is highly context specific.
In certain situations, behaviour identification may allow us to take automated action – for example, accounts clearly tied to those that have been previously suspended, often identified through technical data. However, we recognise the risks of false positives in this work and humans are in the loop for decisions made using content and where signals are not strong enough to automate. Signals based on content analysis are part of our toolkit, but not used in isolation to remove accounts and we agree with concerns raised by civil society and academics that current technology is not accurate enough to fully automate these processes. We would not use these systems to block content at upload, but do use them to prioritise human review.
Similar to the approach adopted to address child sexual exploitation content, in December 2016, the founding member companies of the Global Internet Forum to Counter Terrorism (GIFCT) – Facebook, Microsoft, Twitter, and YouTube – committed to creating a shared industry database of hashes, later called the “Hash Sharing Consortium,” for its members. In addition to the four founders, GIFCT’s current members include Pinterest, Dropbox, Amazon, LinkedIn, Mega.nz, Instagram, and WhatsApp. If members identify a piece of content on their platform as terrorist content according to their respective policies, they can assign it a hash or unique digital “fingerprint,” which is entered into the shared database. A “hash sharing consortium” that includes GIFCT members and some other tech companies can then use filtering technology to identify hashed content and block it from being uploaded in the first place.
If the content has been edited in any way (sped up, slowed down, or shortened, for example), or if contextual information has been added, it would bypass hash filtering and not be automatically blocked from being uploaded. According to the GIFCT, as of July 2020 its database contained over 300,000 unique hashes of “terrorist” content.
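At a high level, the matching step works like comparing fingerprints of files against a shared blocklist. The sketch below is a simplified, hypothetical illustration of that idea, not GIFCT’s actual system; the names `known_hashes`, `fingerprint`, and `should_block_upload` are invented for this example, and production systems generally rely on perceptual hashes that tolerate minor alterations rather than the exact digest used here.

```python
# Hypothetical sketch of hash-based upload filtering; not GIFCT's actual implementation.
# Production systems generally use perceptual hashes (robust to small edits), whereas
# this example uses an exact digest to show why altered copies evade matching.
import hashlib

# In practice this set would be populated from the shared industry database.
known_hashes: set[str] = set()

def fingerprint(data: bytes) -> str:
    """Return a digest of the file; any change to the bytes changes the digest."""
    return hashlib.sha256(data).hexdigest()

def should_block_upload(data: bytes) -> bool:
    """Block the upload only when its fingerprint matches a known entry."""
    return fingerprint(data) in known_hashes

# A re-encoded, trimmed, sped-up, or watermarked copy of a flagged video produces a
# different digest, so it passes the filter; this is the gap described above.
```

This is also why a piece of content must first be identified and hashed by one of the participating platforms before copies of it can be blocked elsewhere.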
For years, civil society organizations have engaged with social media companies, independently and as part of the GIFCT, on the need for increased transparency in how and why “terrorist” content is taken down and to warn against the human rights harms of opaque, cross-platform coordination. Most recently, in July 2020, 16 civil society organizations wrote to Nicholas Rasmussen, the executive director of the GIFCT, reiterating concerns that groups had raised in February 2020 with Facebook, Google, Microsoft, and Twitter about the GIFCT’s growing role in regulating content online. The letter raises concerns around a serious risk of unlawful censorship from government involvement in GIFCT; lack of genuine and balanced engagement with civil society; lack of clarity over the terms “terrorism,” “violent extremism,” “extremism,” and support for or incitement to them; increasing scope and use of a shared hash database without either transparency or remedy for improper removals; and persistent lack of transparency around GIFCT activity.
Despite these and other efforts, there is still little public information on what criteria social media platforms use when assessing whether content is TVEC or hate speech and should be taken down. Additionally, there is little visibility to anyone outside of the GIFCT member companies as to what content is represented in the hash database as TVEC, and whether it meets the platforms’ own definitions of terrorist content. According to the GIFCT, its member companies “often have slightly different definitions of ‘terrorism’ and ‘terrorist content.’” But for the purposes of the hash-sharing database, the companies decided to define terrorist content “based on content related to organizations on the United Nations Security Council’s consolidated sanctions list.”
Dia Kayyali, a program manager and advocacy lead at WITNESS, warns that GIFCT may ultimately become a “multi-stakeholder forum where human rights experts are brought in as window-dressing while government and companies work closely together… in the mad rush to ‘eliminate’ poorly defined ‘terrorist and violent extremist content.’”
Compounding concerns over the disappearance of potentially valuable online evidence, a growing number of governments, as well as Europol, have created law enforcement teams known as internet referral units (IRUs) that flag content for social media companies to remove, with little transparency about the criteria they use, scant opportunity to appeal, and no indication of how much of the removed material they archive, if any.
Each company retains the right to make individualized decisions about whether to remove any particular post from its service on the basis of its own community guidelines and terms of service, but little is known about how companies choose to act when using the shared hash database. In practice, it is likely that small companies will use the database to automatically remove content in the database because they do not have the resources to carry out individualized reviews. The GIFCT has categorized the hashes it has entered into its database as falling into at least one of the following categories: “Imminent Credible Threat,” “Graphic Violence Against Defenseless People,” “Glorification of Terrorist Acts,” “Radicalization, Recruitment, Instruction,” and “Christchurch, New Zealand, attack and Content Incident Protocols”.
Importantly, though, the proposed European Union regulation on preventing the dissemination of terrorist content online is likely to require platforms to preserve content they remove for six months, in order to allow access by law enforcement. This could help put pressure on social media companies to create an independent mechanism to preserve and archive content.
Recognizing that companies engaged in content moderation are not operating with sufficient transparency and accountability, a group of organizations, academics, and advocates developed in February 2018 the Santa Clara Principles on Transparency and Accountability in Content Moderation. The Principles provide a set of baseline standards or initial steps that companies should take to provide meaningful due process to impacted individuals and better ensure that the enforcement of their content guidelines is fair, unbiased, proportional, and respectful of users’ rights.
Photos, videos, and other content posted on social media have increasingly supported accountability processes, including judicial proceedings, for serious international crimes, both at the national and international level. Human rights workers and journalists have also analyzed such content when investigating serious crimes. This material can be used to corroborate witness testimony and to confirm specific details about an incident, including the exact time and location, identities of the perpetrators, and how the crimes were carried out or their aftermaths. Such content can be especially valuable when researchers and investigators do not have access to the location where alleged crimes were committed due to security concerns or restrictions imposed by local authorities.
National Proceedings
Human Rights Watch is aware of at least 10 cases where prosecutors in Germany, Finland, the Netherlands, and Sweden secured convictions against individuals linked to war crimes in Iraq and Syria in cases that involved videos and photos shared over social media.
In one example, in 2016, Swedish authorities investigated Haisam Omar Sakhanh, a Syrian man who was seeking asylum there, for lying on his asylum application about a previous arrest in Italy.
Through their investigation, they discovered a video that the New York Times had published in September 2013 that showed a Syrian non-state armed group opposed to the government extrajudicially executing seven captured Syrian government soldiers in Idlib governorate on May 6, 2012. Sakhanh was seen in the video participating in the executions. A Swedish court convicted him of war crimes and sentenced him in 2017 to life in prison. In a 2019 Dutch case, a man was convicted of a war crime for posing next to a corpse on a crucifix in Syria and then posting the photograph on Facebook. In this case, the content was germane not only to the charge of association with a terrorist organization, but also to the charge of participating in an outrage on personal dignity, adding to the determination that a war crime had been committed.
As one European law enforcement officer working on war crimes investigations put it: “Social media content is absolutely crucial to our work, especially to preliminary investigations and in ongoing conflicts and countries where we can’t go.”
International Courts and Internationally Mandated Investigations
For International Criminal Court (ICC) investigators and United Nations-mandated investigations or inquiries, open source information is particularly helpful given that these bodies do not have national law enforcement powers.[31] As such, investigators cannot rely on subpoenas and search warrants to access privately held information. ICC investigations and other internationally mandated inquiries take place when national authorities have been unable or unwilling to address serious crimes, which sometimes means there can be years between the alleged crime and the gathering of evidence, making it even more important for the court to have access to material that captured incidents as they took place.
On August 15, 2017, the ICC issued an arrest warrant for Mahmoud al-Werfalli, linked to the armed group known as the Libyan Arab Armed Forces (LAAF) under the command of Khalifa Hiftar, for the war crime of murder. He was wanted by the court for his alleged role in the killing of 33 people in seven incidents that took place in and around Benghazi between June 2016 and July 2017. On July 4, 2018, the ICC issued a second arrest warrant for al-Werfalli for crimes committed in another incident. The ICC issued the arrest warrants largely on the basis of seven videos of the killings posted on social media, some of which were posted by the unit that al-Werfalli commanded. Al-Werfalli remains a fugitive of the court.
This is the first instance at the ICC where such videos played a key role in the triggering of an investigation into a particular series of alleged crimes. The case is also unique in that investigators were documenting the alleged crimes as they occurred, thus enabling them to identify and preserve the relevant content. However, because of the ICC’s role as a court of last resort, investigators usually undertake investigations into alleged crimes long after they have taken place.
Facebook in Myanmar
Myanmar’s military committed extensive atrocities against its Rohingya population, including murder, rape, and arson, during its late 2017 campaign of ethnic cleansing, forcing more than 740,000 Rohingya to flee to Bangladesh. As security forces perpetrated crimes against humanity starting in August 2017, they used Facebook as an echo chamber to foster the spread of incendiary commentary that served to dehumanize the Rohingya and incite violence. The UN Independent International Fact-Finding Mission on Myanmar, established by the UN Human Rights Council in March 2017, reported on Facebook’s role in enabling the spread of discrimination and violence against Rohingya and called for Myanmar’s military generals to be investigated for genocide, crimes against humanity, and war crimes.
Facebook has since admitted that it failed to prevent its platform from being used to “foment division and incite offline violence.” In an effort to address this, from August to December 2018, Facebook took down 490 pages, 163 accounts, 17 groups, and 16 Instagram accounts for “engaging in coordinated inauthentic behavior,” and banned 20 individuals and organizations tied to the military “to prevent them from using our service to further inflame ethnic and religious tensions.” In these cases, it stated on its website that it took the content down but has preserved it. On May 8, 2020, Facebook said it took down another three pages, 18 accounts, and one group, all linked to the Myanmar police. Facebook defines coordinated inauthentic behavior as “when groups of pages or people work together to mislead others about who they are or what they are doing.”
A Myanmar expert told Human Rights Watch that it likely took Facebook so long to identify these posts because it did not have enough Burmese-speaking content moderators, and its algorithm was unable to detect the Burmese language font Zawgyi because it is not machine-readable. For this reason, takedowns of content in response to the August 2017 violence were primarily manual as opposed to automatic. A Unicode conversion by Facebook has now enabled computer-initiated takedowns of content written in Zawgyi.
The International Fact-Finding Mission recommended that all social media platforms “retain indefinitely copies of material removed for use by judicial bodies and other credible accountability mechanisms addressing serious human rights violations committed in Myanmar in line with international human rights norms and standards, including where such violations amounted to crimes under international law.”
An individual with knowledge of the UN’s Independent Investigative Mechanism for Myanmar (IIMM) – which is mandated to collect evidence of serious crimes and prepare files for criminal prosecution, making use of the information handed over to it by the International Fact-Finding Mission – said that the content posted on Facebook is crucial to its own investigations and that losing access to it could completely halt some of those investigations. ICC prosecutor Fatou Bensouda, in her request to open an investigation into the deportation of Rohingya into Bangladesh and related crimes, also cited Facebook posts by military officials as evidence of discriminatory intent.
Some investigators and researchers identified and saved Myanmar-related content involving state-sponsored hate speech, incitement to violence, and misinformation before Facebook took it down, and this has also provided important documentation related to the commission of grave crimes. Human Rights Watch, the UN, and others have gathered a wide range of information from Facebook pages belonging to the Myanmar military, including, for example, evidence that demonstrates command responsibility and identifies units involved in attacks, as well as evidence that demonstrates the role of authorities in promoting threats, discrimination, and incitement; the military’s intent to force Rohingya from the country, and possibly its intent to destroy them; and the pre-planning of grave crimes. Relevant documentation included frequent updates on the military’s “clearance operations” posted on the Facebook pages of the commander-in-chief and other military officials and organizations that were later removed.
In November 2019, Gambia brought a case to the International Court of Justice (ICJ) alleging Myanmar’s violation of various provisions of the Genocide Convention. In a major ruling, on January 23, 2020, the ICJ unanimously adopted “provisional measures” ordering Myanmar not to commit and to prevent genocide, and to take steps to preserve evidence while the case proceeds on the merits. The legal team representing Gambia relied on content that Myanmar state officials had posted on Facebook, as identified by the International Fact-Finding Mission, as important evidence of genocidal intent. On June 8, 2020, Gambia filed a suit against Facebook in the United States to compel the social media company to provide documents and communications from Myanmar officials’ profiles and posts that the platform had taken down, as well as materials from internal investigations that led to the takedowns.
On August 4, 2020, Facebook filed an objection to the request from Gambia to compel the company to provide documents and communications from Myanmar officials’ profiles and posts that the social media platform had taken down. Facebook said that the proposed discovery order would violate the Stored Communications Act, a US law that prohibits providers of an “electronic communications service” from disclosing the content of user communications, and urged the US District Court for the District of Columbia to reject the request.
Gambia is seeking this information under 28 U.S.C. Section 1782, which enables parties to litigation outside of the US to seek evidence in the US for their case. While foreign litigants do not have the power to compel potentially relevant evidence in all circumstances, there is a question as to whether the Stored Communications Act poses too high a barrier, particularly in relation to public communications that the company removed.
On August 26, 2020, Facebook announced that it had lawfully provided the IIMM with data it had preserved in 2018.
The situation in Myanmar shows that, in some cases, it is important for social media companies like Facebook to act quickly to remove content that may incite violence from their public platforms, and at the same time that preserving such content is critical so it can be used for accountability purposes.
Civil Society and Media Documentation
The value of social media content extends beyond judicial mechanisms and internationally mandated investigations to the work of civil society organizations and investigative journalists.
Between January 1, 2007 and February 11, 2020, Human Rights Watch in its public reports linked to at least 5,396 pieces of content on Facebook, Twitter, and YouTube that supported allegations of abuse in 4,739 reports, the vast majority of which were published in the last five years. When reviewing these links in April 2020, Human Rights Watch found that the content in at least 619 links (or 11 percent) was no longer available online, meaning it had presumably either been removed by the social media platforms, or the users who posted the material had removed it or made it private. It was generally not clear from the error messages why the content was unavailable; some messages were extremely vague, including “Please try your request again later,” which Human Rights Watch did numerous times over an extended period, and “Video unavailable.” Human Rights Watch is now developing a comprehensive archiving system to preserve all content that it links to in its reports going forward.
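The kind of link review described above can be partly automated. The sketch below is a simplified, hypothetical example of a bulk availability check, not the system Human Rights Watch used; the function name `check_links` and the output file are invented for illustration, and because platforms often return an ordinary page that merely displays a “Video unavailable” notice, HTTP status codes alone are not conclusive and results still require manual review.

```python
# Hypothetical sketch of a bulk link-availability check; not Human Rights Watch's method.
import csv
import requests

def check_links(urls, outfile="link_status.csv"):
    """Request each URL and record the HTTP status (or network error) for later review."""
    with open(outfile, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["url", "status"])
        for url in urls:
            try:
                response = requests.get(url, timeout=30, allow_redirects=True)
                # A 200 response does not guarantee the content itself is still there.
                writer.writerow([url, response.status_code])
            except requests.RequestException as err:
                writer.writerow([url, f"error: {err}"])

# Example usage with a placeholder URL:
# check_links(["https://www.youtube.com/watch?v=example"])
```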
In some cases, the content posted on social media alerted Human Rights Watch researchers to an alleged violation that they were not previously aware of and prompted deeper inquiry. In other cases, researchers discovered relevant social media content in the course of their research and used it to corroborate important details from other sources. These include videos from 2017 showing Iraqi forces executing ISIS suspects in Mosul and a 2020 video from Niger showing soldiers in a military vehicle running over and killing two alleged Boko Haram fighters. In China, Human Rights Watch researchers found content posted on public WeChat accounts that provided evidence of mass police surveillance and abuse of ethnic Uyghurs and other Turkic Muslims in Xinjiang, as well as of gender discrimination in employment.
Investigative journalists have also relied on social media content in their reporting on apparent war crimes and laws-of-war violations. On September 21, 2015, for example, the US-led coalition against ISIS in Iraq posted a video on its YouTube channel titled “Coalition Airstrike Destroys Daesh VBIED Facility Near Mosul, Iraq 20 Sept 2015.” The video, filmed from an aircraft, showed the bombing of two compounds that its caption identified as a car-bomb factory. However, an Iraqi man who saw the video, Bassim Razzo, recognized that the sites being bombed were actually his home and the home of his brother. The attack on September 20, now known to have been a joint US-Dutch airstrike, killed four members of Bassim Razzo’s family.
The video was a key part of an extensive New York Times Magazine investigation into over 100 coalition airstrikes, showing that the US-led coalition was killing civilians at much higher rates than it claimed. The journalists preserved a copy of the video of the bombing, which was fortunate because the coalition took it down in November 2016 after one of the journalists, Azmat Khan, contacted the coalition about the strike in the video. Based on the evidence gathered, the coalition eventually offered Bassim Razzo compensation for the bombing. Razzo refused it because of the paltry sum offered, a mere fraction of his medical and property damage costs, without even factoring in the loss of life.
Khan said that starting on January 8, 2017, the coalition began removing all of its airstrike videos from YouTube. Chris Woods, the founder and director of Airwars, an independent civilian casualties monitor, said that as of 2016, amid mounting allegations of mass civilian casualties during US-led coalition airstrikes, the coalition also began aggressively removing any social media content seemingly linked to ISIS within minutes. Platforms like YouTube and Facebook followed suit, taking down profiles, groups, or pieces of content they identified as linked to ISIS or other extremist armed groups. These pages sometimes included videos and photos from the sites of coalition actions that, when preserved quickly enough, were vital resources for Airwars to gain insight into the impact of airstrikes on civilians in areas under ISIS control. Airwars developed a system to download and archive content where it could. “On at least one occasion, the coalition actually contacted us to get its hands on a deleted ISIS video showing the civilian harm from one of its strikes, which we had luckily saved,” Woods said.
Between 2013 and 2018, Human Rights Watch and seven other independent, international organizations researched and confirmed at least 85 chemical weapons attacks in Syria – the majority perpetrated by Syrian government forces. The actual number of chemical attacks is likely much higher. The Syrian Archive is the open source project of a nonprofit organization called Mnemonic that collects, verifies, and analyzes visual documentation of human rights violations in Syria. It has been documenting suspected attacks by collecting data from the media and from civil society and other organizations, totaling over 3,500 sources. Its “Chemical Weapons Database” contains 28 GB of documentation, from 193 sources, of what it says amount to 212 chemical weapons attacks in Syria between 2012 and 2018. This content includes 861 videos, most of which were posted on YouTube by citizen and professional journalists, medical groups, humanitarian organizations, and first responders. The organization said that out of 1,748,358 YouTube videos in its entire archive that it had preserved up until June 2020, 361,061, or 21 percent, were no longer available online. Out of 1,039,566 Tweets that it had preserved, 121,178, or 11.66 percent, were no longer available online.
Mnemonic’s Yemeni Archive project has similar findings.[67] Of the 444,199 videos from YouTube that the Yemeni Archive has preserved as of June 2020, 61,236 videos or 13.79 percent are no longer publicly available online. Of the 192,998 Tweets that the Yemeni Archive has preserved, 15,860 Tweets or 8.22 percent are no longer available online.
Because Mnemonic’s Syrian Archive and Yemeni Archive projects have their own systems of saving and archiving material, the group has retained copies of the content subsequently taken down either by platforms or by users themselves. If it had not, these takedowns could have had a real impact on potential accountability in the future.
For years, Russia has used its veto as a permanent member of the UN Security Council to quash efforts brought by other member states to hold those responsible for chemical weapons and other attacks on civilians in Syria to account. This has made investigations by independent organizations all the more important. In 2017, Russia vetoed the renewal of the main investigation mechanism for chemical weapons attacks in Syria, the Joint Investigative Mechanism (JIM). Shortly after, in 2018, the Organization for the Prohibition of Chemical Weapons (OPCW) created a new team, the Investigation and Identification Team (IIT), responsible for identifying the perpetrators of the use of chemical weapons in Syria. The creation of this team significantly expanded the OPCW’s remit to identifying perpetrators, whereas before it was limited to determining whether attacks had occurred, through the Fact-Finding Mission (FFM). In April 2020, the IIT published its first report, in which it found that chemical weapons attacks in March 2017 occurred following orders at the highest level of the Syrian Armed Forces. Videos posted online of the incidents were part of the evidence used in the investigation.
Bellingcat, an investigative journalism outlet that specializes in fact-checking and open-source intelligence, was the first to uncover the link between a Russian Buk missile launcher from Russia’s 53rd air defense brigade and the downing of Malaysia Airlines Flight MH17. Much of its investigation was based on materials that had been posted online. According to Eliot Higgins, the founder of Bellingcat, on more than one occasion lawyers working on cases related to Flight MH17 asked the group to provide the results of its work. When trying to compile the material, Higgins realized that much of the content Bellingcat had relied on had been taken offline. The content included videos and photographs hosted on sites such as Facebook, Twitter, YouTube, and the Russian social media platform VKontakte. As a result, Bellingcat had to spend a significant amount of time finding alternative copies of links and online archived copies of images and pages to substantiate its conclusions. Ultimately, the Dutch-led Joint Investigation Team, with which Bellingcat shared its material, issued arrest warrants for three Russians and one Ukrainian, who were put on trial in absentia in the Netherlands in March 2020.
Nick Waters, an open source investigator at Bellingcat, investigated an airstrike on June 18, 2015, in Sabr Valley in northern Yemen, which Amnesty International concluded had killed at least 55 civilians. He told Human Rights Watch that on July 28, 2019, he discovered a video of the airstrike on YouTube that was much clearer than any he had previously seen. It was in high definition and showed the munition falling from the sky, children on the ground who were killed in the strike, and the topography of the area, which allowed for geolocation. He had seen photos of the children elsewhere, but not at the site itself. Waters said the video had been online for several years. Waters shared the link to the video with a colleague who watched it, but when Waters tried to watch it again the following day, it had been taken down. “Maybe us watching it triggered the algorithm that took it down?” he wondered.
Waters said an acquaintance inquired with YouTube on his behalf to try to understand why the video had suddenly been taken down. His acquaintance later told him that he could not “disclose any details about the mechanism,” but added that “it’s certainly not completely random—that would fly in the face of logic.” Waters was unable to get a copy of the content from YouTube; however, in 2020, he found out that, fortunately, Mnemonic had preserved a copy through its Yemeni Archive initiative.
In another example from Yemen, in April 2015, the September 21 YouTube channel, a media outlet linked to the Houthis, an armed group in control of northern Yemen, uploaded a video with no audio of an apparent cluster bomb attack. The video (which has since been taken down) showed numerous objects attached to parachutes slowly descending from the sky, and then zoomed out to show mid-air detonation and several black smoke clouds from other detonations. By matching visible landmarks in the video to satellite imagery and topographic (three-dimensional) models of the area, Human Rights Watch determined that the video was recorded in the village of al-Shaaf in Saqeen, in the western part of Saada governorate. Human Rights Watch also determined the specific weapon likely used in the attack by matching the distinctive parachute design and detonation signatures visible in the video to technical videos of the CBU-105 Sensor-Fuzed munitions manufactured by Textron Systems Corporation, and supplied to Saudi Arabia and the United Arab Emirates by the US.
The video raised concerns about the Saudi-led coalition’s use of US-supplied cluster munitions in areas inhabited by civilians. Neither the US, Saudi Arabia, nor the United Arab Emirates has signed the 2008 Convention on Cluster Munitions, which bans their use. However, US policy on cluster munitions at the time was detailed in a June 2008 memorandum issued by then-Secretary of Defense Robert Gates. Under the Gates policy, the US could only use or export cluster munitions that “after arming do not result in more than 1 percent unexploded ordnance across the range of intended operational environments,” and the receiving country had to agree that cluster munitions “will only be used against clearly defined military targets and will not be used where civilians are known to be present or in areas normally inhabited by civilians.” By verifying the location of the attack in the video, Human Rights Watch was able to conclusively demonstrate that the munitions had been used in an area inhabited by civilians.
As a result of this and other evidence of attacks using this weapon, in June 2016, the US Department of State suspended new deliveries of CBU-105 Sensor Fuzed Weapons to Saudi Arabia. In August 2016, Textron announced that it would discontinue production of the CBU-105s, which were the last cluster munitions to be manufactured in the US. This represented an important step in minimizing cluster munition attacks in Yemen by the Saudi-led coalition.
In another example, in April 2018 Human Rights Watch published a report that included a video with footage from a journalist broadcasting on Facebook Live showing Nicaragua’s government brutally cracking down on demonstrators at ongoing protests. The organization used this evidence to call for those responsible for the abuses to be held to account, including Francisco Diaz, the deputy chief of the national police. Diaz was among a group of officials subsequently subjected to targeted sanctions by the European Union, the United Kingdom, and Canada.
As shown in these cases, documentation by civil society organizations and the media, which often rely at least in part on content posted on social media, can play a crucial role in spurring national and international prosecutions or other forms of accountability and redress.
National Law Enforcement
Currently, each social media company sets its own procedures for law enforcement to request user information and content that is no longer available online. In most cases, companies require the law enforcement body to present a valid subpoena, court order, or search warrant. Facebook states that, upon receipt of a valid request, it will preserve the content for 90 days, “pending our receipt of [a] formal legal process.” However, Human Rights Watch knows of instances in which Facebook has retained taken-down content for much longer. Facebook also states that it will not process “overly broad or vague requests.”
In January 2020, Google announced that it would begin charging US law enforcement fees for responding to search warrants and subpoenas. In at least one instance that Human Rights Watch is aware of, YouTube restored content two years after it had taken it down.
The email from Twitter’s Public Policy Strategy and Development Director said the company “retains different types of information for different lengths of time, and in accordance with our Terms of Service and Privacy Policy.” Twitter’s Privacy Policy states that it keeps Log Data for a maximum of 18 months, but Human Rights Watch was not able to determine from the policy how long other types of information might be retained.
A European law enforcement officer investigating war crimes told Human Rights Watch that “content being taken down has become a daily part of my work experience” and that he is “constantly being confronted with possible crucial evidence that is not accessible to me anymore.” He and another European law enforcement officer said that, as a general principle, they secure copies of all content they come across during their investigations in a forensically sound manner, using a system that downloads the webpage of interest, timestamps the download, and automatically applies a cryptographic hashing algorithm and a cryptographic digital signature to these files, all in order to authenticate when, where, and by whom the material was archived. Where content seems to constitute an important piece of evidence, they contact the platform and ask it to preserve the content and all the relevant attached data, even if it is taken down.
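The sketch below illustrates, in simplified form, the kind of workflow the officers described: downloading a page, recording a timestamp, hashing the download, and signing the hash. It is a hypothetical example, not the officers’ actual tool (the function and file names are invented), and a real forensic system would also capture full metadata and maintain a documented chain of custody.

```python
# Hypothetical sketch of forensically preserving a web page; not the officers' actual system.
import hashlib
import json
from datetime import datetime, timezone

import requests
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def preserve_page(url: str, key: Ed25519PrivateKey) -> dict:
    """Download a page, timestamp the capture, hash the content, and sign the hash
    so the record attests when and by whom the copy was made."""
    response = requests.get(url, timeout=30)
    captured_at = datetime.now(timezone.utc).isoformat()
    digest = hashlib.sha256(response.content).hexdigest()
    signature = key.sign(digest.encode())  # binds the archivist's key to this capture

    with open("capture_body.bin", "wb") as fh:      # the preserved page itself
        fh.write(response.content)
    record = {
        "url": url,
        "captured_at": captured_at,
        "sha256": digest,
        "signature": signature.hex(),
    }
    with open("capture_record.json", "w") as fh:    # the authentication record
        json.dump(record, fh, indent=2)
    return record

# Example usage with a placeholder URL:
# record = preserve_page("https://example.org/post", Ed25519PrivateKey.generate())
```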
The procedures developed to facilitate access to content for law enforcement are premised on authorities knowing about the content that was taken offline, in order to make a request. They do not address how law enforcement could access content taken down so rapidly that no authority knows of its existence. One of the law enforcement officers said that, with content that was posted, viewed, and then taken down, even if he did not see the content while it was online, it almost always leaves a trace, with people online referencing it.[88] This allows him to know what to request access to from the companies. When content is blocked from being uploaded or comes down so quickly it does not leave a trace, this potentially ends his ability to pursue a case, he said.
International Courts and Internationally Mandated Investigations
The International Criminal Court (ICC) and internationally mandated investigations, such as the Independent Investigative Mechanism for Myanmar (IIMM), do not have the power to compel evidence from private companies outside their jurisdiction, and this has been a significant obstacle in their ability to obtain content from social media companies. According to one UN investigator, each social media platform has a law enforcement focal point and when UN investigators have contacted them, the focal points have had to decline their requests for the simple reason that these types of requests are not backed by court orders or subpoenas.
Some of these investigative teams have developed a work-around, with the ICC, for example, requesting that law enforcement officials in a country that is a party to its statute obtain a court order or subpoena and make a company request on its behalf.[90] Two investigators told Human Rights Watch that the data they wanted from companies like Facebook included the content itself as well as data on the users who had posted it, which was vital to their investigations. As such, simply saving the content they identified was insufficient for the purposes of their investigations.
Civil Society and Media
Facebook and Google, two of the three companies contacted for this report, did not respond to Human Rights Watch queries as to whether they had created any mechanism to allow the media or civil society organizations to request content that has been removed.
In an emailed response to Human Rights Watch’s letter to Twitter requesting access to taken-down content for archival purposes, the company’s Public Policy Strategy and Development Director said the company could not provide content data without an appropriate legal process:
Pursuant to the U.S. Stored Communications Act (18 U.S.C. 2701 et seq.), Twitter is prohibited from disclosing users’ content absent an applicable exception to the general bar on disclosure. This law allows U.S. law enforcement to compel disclosure of content with a valid and properly scoped search warrant, but there is no such mechanism for disclosure to entities who are unable to obtain a warrant (whether governmental or non-governmental).
Unfortunately, this means we cannot provide copies of the content you have identified for archival purposes. However, Twitter is supportive of efforts through the Global Internet Forum to Counter Terrorism (GIFCT)’s working group on legal frameworks to consider potential avenues to allow greater access to content for appropriate uses…
Companies, UN entities, and governmental authorities can draw useful lessons about managing content classified as TVEC from the field of child sexual exploitation content, often referred to as child sexual abuse material (CSAM). There, a similar imperative exists for companies both to take down content and to preserve it for law enforcement and investigative purposes.
There is no universal legal definition of either child sexual exploitation material or terrorist content. However, in a key distinction between the two categories, child sexual abuse material is better suited to automated takedowns based on hashes than content that social media companies classify as TVEC. Because most countries criminalize the simple possession of CSAM regardless of intent to distribute, CSAM is not considered protected speech under the law. This means that once such material is identified, it can be taken down without the need to examine its context or intent. When identifying terrorist or violent extremist content, by contrast, contextual factors are extremely important in determining alleged support for or glorification of terrorism, nuances that automated systems are notoriously bad at catching. Using hashes as the basis for taking down content strips away this context.
Most platforms identify child sexual exploitation content based on hashes in various child sexual exploitation hash databases that different governments and organizations have developed. Some of the organizations running these databases, such as the Internet Watch Foundation in the United Kingdom, require a person to vet each piece of content before it is hashed and the hash is added to the database. New content is identified by users who flag it or by algorithms.
In the United States, the National Center for Missing and Exploited Children (NCMEC), a private nonprofit organization with a federally designated legal right to possess such material indefinitely, also uses the copy of the content to produce a hash, which it enters into its database to share with platforms. NCMEC’s authorizing statutes in some ways make it a hybrid entity that exercises special law enforcement powers, and they mandate its collaboration with law enforcement authorities.
One child protection worker raised concerns that the NCMEC has sometimes added hashes of content to its database that do not meet the definition of child sexual exploitation, and has even at times wrongfully notified law enforcement. When this happens, a user can come under surveillance from state authorities for content that is not illegal. This reinforces the importance of ensuring such a mechanism is appropriately resourced and regulated, the expert said.
In the US, once the content is identified, companies have a statutory obligation to take it down but preserve it on their servers for 90 days. After that period, the US government requires companies to delete the content. The US government also requires companies to share a copy of each piece of content, as well as all relevant metadata and user data, with the NCMEC. The NCMEC, in turn, notifies law enforcement nationally and internationally of the content. Some other jurisdictions have similar independent arrangements, including with the Internet Watch Foundation in the United Kingdom.
Both CSAM and TVEC are sometimes processed on servers located outside of the country from which the content originated. For both types of content, the definition of what counts as child sexual exploitation material or terrorism varies from country to country. While Facebook, Twitter, and YouTube are obligated to comply with national laws, they have also developed internal standards for content they consider TVEC or CSAM that they apply globally.
In 1999, the NCMEC developed the Safeguard Program, designed to address vicarious trauma, secondary trauma, and compassion fatigue in staff and assist them in developing the healthy coping skills necessary to maintain a positive work-life balance. Similar support would be essential in any system created to preserve content classified as TVEC or otherwise relevant for evidentiary purposes of serious international crimes.
In line with recommendations made by a coalition of civil society organizations aimed at increasing transparency and accountability around content takedowns, Human Rights Watch believes it is vital that all relevant stakeholders jointly develop a plan to establish an independent mechanism to take on the role of liaising with social media platforms and preserving publicly posted content they classify as TVEC, as well as other removed material that could be evidence of serious international crimes, including content taken down because it was associated with accounts showing “coordinated inauthentic behavior.” The independent mechanism should then be responsible for sorting and granting access to the content for archival and investigative purposes in a manner that respects privacy and security concerns.
This mechanism could serve a role similar to that of existing archives that are legally privileged to hold child sexual exploitation content, but it should be nongovernmental and allow more stakeholders to access the content, including international, regional, and local civil society organizations, journalists, and academics, in addition to national law enforcement officials and investigators with internationally mandated investigations. The system should require legal authorization to retain such content, but in contrast to groups such as the National Center for Missing and Exploited Children (NCMEC), it would not be statutorily linked to any particular government. It also would not have a duty to automatically notify particular law enforcement agencies of the removed content.
This body should function akin to a restricted-access research library. Some international tribunals, including the International Criminal Tribunal for Rwanda, have established archives holding physical and digital records, including audio and video recordings of the tribunal’s work. Because these records are often sensitive yet vitally important repositories documenting historical narratives, the United Nations has developed policies to ensure that people can request access to them based on a classification system that prioritizes security and privacy. A distinction with these materials is that they have already been used in criminal investigations, so they have demonstrated evidentiary value.
Decisions around the publication of national archives that reveal the abuses by prior governments in different country contexts could help inform methods for protecting privacy rights while upholding the collective right to information about human rights abuses. For example, in 2005, after broad national consultations, the Guatemalan government decided to make public the Historical Archive of the National Police, which consists of nearly five linear miles of documents, photographs, videotapes, and computer disks. Most of the documents have been digitized, and the public has access to records that include the names, photographs, and details of individuals arrested by the police from 1881 to 1997. Kate Doyle, senior analyst at the National Security Archive, said,
“When it comes to uncovering archives of repression, privacy rights have to be seriously considered, but they’re not absolute. The right of an individual to privacy may be overcome by the right of an entire society to know its own terrible history. You are talking about the right of future generations to read and fully comprehend once-secret records documenting State violence – how it functioned, why it was used, and specifically who it targeted. In that sense the identities of victims become part of the puzzle of a repressive past.”
Broad consultations would be key to ensuring the mechanism correctly identifies and preserves material that could be relevant for investigations into serious crimes.
Efforts are already underway to create a limited archive of content that social media companies remove as “terrorist.” One such example, intended for different purposes than Human Rights Watch’s proposed mechanism, is led by Tech Against Terrorism (TAT), a project launched and supported by the UN Security Council’s Counter-Terrorism Committee Executive Directorate (CTED).
This archive is the Terrorist Content Analytics Platform (TCAP). According to TAT’s executive director, Adam Hadley, the TCAP:
[W]ill be a secure online platform that hosts terrorist material including verified terrorist content (imagery, video, PDFs, URLs, audio) collected from open-sources and existing datasets. Content on the TCAP will be verified by terrorist content specialists. The purpose of the TCAP is to facilitate secure information sharing between platforms, academia, and data scientists. As well as archiving historical content to support academic analysis and the development of improved content classifiers, the TCAP will provide a real-time alert service to inform smaller internet platforms of public content [that TAT identifies as terrorist] discovered on their services. Furthermore, the TCAP dataset will support third party data scientists in developing more accurate and transparent algorithmic / analytical efforts that can be deployed to support smaller internet platforms.
Hadley added that to address any privacy concerns, “Only tech companies, researchers, and civil society will be allowed access to the platform. We will also ensure that personal identifiable information (PII) of users is not traceable on the platform.”
The mechanism that Human Rights Watch proposes would serve a different function than TCAP and would not be overseen by an entity launched by the UN Security Council or any UN counterterrorism body. Nevertheless, the questions TCAP has grappled with regarding how to securely and legally store and provide access to archived content may help inform discussions on the mechanism that Human Rights Watch is calling for.
In April 2020, the GIFCT launched six working groups made up of representatives from social media companies, civil society organizations, academia, and governments. One of the working groups is focused on “understanding the challenges and constraints of existing legal frameworks; incorporating the risks and opportunities of greater data-sharing; and identifying opportunities for clarification and reform.” Two members of the working group told Human Rights Watch that the group will be tackling, among other topics, the issue of content takedowns and the legal framework needed to preserve removed content for evidentiary purposes.
Human Rights Watch urges social media companies and other relevant stakeholders to launch a consultation process to determine the contours of an independent mechanism to preserve content and its metadata that may serve as evidence of serious international crimes.
These consultations should prioritize inclusion of internationally mandated investigators, human rights researchers, civil society organizations, journalists, academics, and national law enforcement representatives. Such consultations should address the following issues:
Nature of the content to be archived, and the manner in which it would be stored:
What content would need to be preserved in this mechanism for evidentiary and research purposes, ensuring that the selection of content is based on narrow criteria and meets international standards on free expression, privacy, and data protection;
How content would be archived so that the archiving process is not too onerous but removed material can still be located relatively easily, without jeopardizing privacy (a purely illustrative sketch of one possible record format appears after this list of issues);
Determining who would have access to the content:
Developing clear criteria and rights-based principles to guide who could access archived material, and measures to avoid either unrestricted access and dissemination or unreasonable restrictions on nongovernmental access for research;
Developing clear guidelines and conditions for accessing the content in accordance with privacy and data protection rights standards;
Developing accreditation standards that would govern access by individuals, governments, and organizations to restricted content for approved purposes.
Determining what content would be accessible and for what purposes:
Establishing under what circumstances various types of applicants would be able to access not just content but also related data, such as metadata and user data, and for what purposes and under what conditions;
Ensuring that there are appropriate protections when access is sought to private information, including security safeguards to ensure sensitive content is not leaked and to prevent unlawful sharing of the archived material;
Ensuring that there are appropriate notification and appeals processes;
Establishing the terms of use of the content, once accessed, including ways to ensure the privacy and security of the individuals featured in preserved content, as well as those who posted or captured the content;
Securing ways to ensure those granted access to archived material do not violate the terms of use.
Requests for accessing the content:
Determining how specific requests would need to be, in light of the need to ensure access to content that has been taken down before any human has seen it;
Finding ways to legally share content for approved purposes with those seeking access from jurisdictions where viewing it is a criminal offense or is otherwise prohibited by law.
Safeguarding the mechanism, including preventing abuse and misuse, by:
Implementing measures to ensure maximum transparency around the functioning of the mechanism, while respecting privacy, data protection, and due process rights in compliance with international law;
Ensuring that the archived material is stored in a secure manner in accordance with privacy and data protections standards under international law and that sufficient information security controls and auditing are put in place to securely protect all data contained in the mechanism;
Finding ways to adequately fund the mechanism and preserve its independence, both to sort and securely preserve content and review access applications, and to carry out outreach and educational activities to help ensure that all those wishing to apply to access archived content know how to do so;
Developing and funding a program to address vicarious trauma, secondary trauma, and compassion fatigue experienced by those analyzing distressing material and, to the extent possible, by others accessing the material;
Basing the mechanism in a jurisdiction where it can operate without government interference;
Implementing a regular audit of the mechanism to ensure fairness and accuracy.
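To make the archiving and access questions above more concrete, the following is a minimal, purely illustrative sketch in Python of what a preserved-content record and a basic access check might look like. Every field name, requester category, and rule in it is an assumption introduced here for illustration only; it does not represent a design endorsed by Human Rights Watch, the platforms, or any existing archive.

# Hypothetical sketch of a preserved-content record and a simple access check.
# All names below are illustrative assumptions, not a proposed design.
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum


class RequesterType(Enum):
    INTERNATIONAL_INVESTIGATOR = "internationally mandated investigator"
    NATIONAL_LAW_ENFORCEMENT = "national law enforcement"
    ACCREDITED_RESEARCHER = "accredited researcher, journalist, or civil society group"


@dataclass
class ArchivedItem:
    content_hash: str                     # cryptographic hash of the preserved file
    platform: str                         # platform that removed the content
    removal_reason: str                   # e.g. the platform's TVEC or hate speech policy
    removed_at: datetime                  # when the platform took the content down
    preserved_at: datetime                # when the mechanism received and stored it
    metadata: dict = field(default_factory=dict)  # upload time, account, location tags, etc.
    allowed_requesters: set = field(
        default_factory=lambda: {RequesterType.INTERNATIONAL_INVESTIGATOR}
    )                                     # requester categories cleared to view this item


def may_access(item: ArchivedItem, requester: RequesterType) -> bool:
    # Grant access only to requester categories explicitly cleared for the item;
    # a real mechanism would also log requests and apply case-by-case review.
    return requester in item.allowed_requesters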
In advance of the creation of an independent mechanism to liaise with social media platforms and preserve online material classified as TVEC and other relevant content, Human Rights Watch urges social media companies to take the following steps:
Put in place a process whereby internationally mandated investigators, including those from the International Criminal Court (ICC) and UN-mandated investigations, can request access to removed content and its metadata, without having to go through national law enforcement agencies;
Make public the full process by which the company identifies and removes content, including the roles of human moderation and artificial intelligence, how a hash is added to the Global Internet Forum to Counter Terrorism (GIFCT) hash database, how the company uses hashes from other companies in its own content moderation processes, how long the company stores content it has taken down, what measures have been put in place to decide when to delete it, and how quickly the deletion occurs (a simplified illustration of hash-based matching appears after this list of steps);
Improve transparency and accountability in content moderation to ensure takedowns are not overly broad or biased. This includes implementing the standards in the Santa Clara Principles on Transparency and Accountability in Content Moderation, namely to clearly explain to users why their content or their account has been taken down, including the specific clause of the Community Standards that the content was found to violate and how the content was detected, evaluated, and removed (for example, by users, automation, or human content moderators), and to provide a meaningful opportunity for timely appeal of any content removal or account suspension;
Review and modify overly broad definitions of “terrorist and violent extremist content” to ensure they comport with international human rights norms including the right to free expression.
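The hash database referenced above works, at a high level, by comparing digital fingerprints of newly uploaded files against fingerprints of content that companies have already removed. The following is a simplified sketch in Python of that basic flow. It uses an exact cryptographic hash (SHA-256) for clarity; real hash-sharing systems such as the GIFCT database rely on perceptual hashes designed to also match near-duplicate images and videos, which this sketch does not attempt to reproduce.

# Simplified illustration of hash-based matching against a shared database.
# Exact-match hashing only; real systems use perceptual hashing.
import hashlib


def fingerprint(file_bytes: bytes) -> str:
    # A hex digest that identifies an exact copy of a file.
    return hashlib.sha256(file_bytes).hexdigest()


# Hypothetical shared database of digests contributed by participating companies.
shared_hash_database = set()


def flag_upload(file_bytes: bytes) -> bool:
    # True if the uploaded file exactly matches previously hashed content.
    return fingerprint(file_bytes) in shared_hash_database


# One company contributes a hash of removed content; another checks an upload against it.
removed_video = b"placeholder bytes of a removed video"
shared_hash_database.add(fingerprint(removed_video))
assert flag_upload(removed_video)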
Until a mechanism is created, it will be important for human rights researchers to improve their own ability to preserve and archive the material they rely on in their documentation efforts. Donors and human rights organizations should invest in developing and maintaining the necessary technical infrastructure and in building the skills of those who do not currently have the capacity to preserve and archive material.
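As one small example of the kind of preservation practice such investment could support, the sketch below, in Python, saves a copy of a publicly accessible page together with the capture time and a file hash that can later help show the copy has not been altered. The file layout and record fields are illustrative assumptions; dedicated archiving tools also handle media files, authentication, and secure, takedown-resistant storage, which this sketch does not.

# Minimal sketch of preserving a public web resource with basic integrity information.
import hashlib
import json
import urllib.request
from datetime import datetime, timezone
from pathlib import Path


def preserve(url: str, out_dir: str = "archive") -> Path:
    # Download the raw response and store it alongside a small capture record.
    with urllib.request.urlopen(url) as response:
        payload = response.read()

    digest = hashlib.sha256(payload).hexdigest()
    folder = Path(out_dir) / digest[:16]
    folder.mkdir(parents=True, exist_ok=True)

    (folder / "content.bin").write_bytes(payload)
    record = {
        "url": url,
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "sha256": digest,
    }
    (folder / "record.json").write_text(json.dumps(record, indent=2))
    return folder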
This report was researched and written by Belkis Wille, crisis and conflict division senior researcher. Ida Sawyer, acting crisis and conflict director, edited the report.
Julie Ciccolini, research technologist, provided research support. Shayna Bauchner, Asia division assistant researcher, Deborah Brown, senior researcher and advocate on digital rights, Hye Jung Han, children’s rights division researcher and advocate, Gabriela Ivens, head of open source research, Balkees Jarrah, international justice program associate director, Sara Kayyali, Syria researcher, Linda Lakhdhir, Asia division legal advisor, Nicole Martin, senior manager of archives and digital systems, Manny Maung, Asia researcher, Hanan Salah, senior Libya researcher, Param-Preet Singh, associate director of the international justice program, Joe Stork, deputy Middle East and North Africa director, and Letta Tayler, crisis and conflict division senior researcher, provided specialist review. Dinah Pokempner, general counsel, provided legal review, and Tom Porteous, associate program director, provided programmatic review. Crisis and conflict associate Madeline de Figueiredo, photography and publications coordinator Travis Carr, and administrative manager Fitzroy Hepkins prepared the report for publication.
Human Rights Watch would like to thank the many experts who were generous with their time and insights in speaking with researchers about this topic, particularly the civil society representatives who have been engaging with social media companies and victims’ communities on this issue for years.