Google’s general counsel Kent Walker on Sunday outlined four measures the company is taking to identify and remove terror-related videos and other content from its online properties, and to prevent such material from spreading.
As a first measure, the company will devote more engineering resources and video analytics technology to remove and prevent extremist content from its services. As part of the effort, Google will apply its most advanced machine learning capabilities to train its automated systems to identify potentially offensive material for removal, Walker said in a blog post.
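Walker’s post does not describe those systems in any detail. As a rough, hypothetical sketch of the general technique he refers to, the example below trains a toy text classifier on video titles and routes anything scoring above a threshold to human review rather than removing it automatically. The training data, function names and threshold are all invented for illustration; Google’s actual models, features and scale are not public.

```python
# Hypothetical illustration only: Google has not disclosed its models or
# training data. This toy classifier scores video *metadata* text for human
# review using scikit-learn; the real systems reportedly analyze the videos
# themselves at far larger scale.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny made-up training set: 1 = potentially violating, 0 = benign.
titles = [
    "join the fight against the infidels",
    "martyrdom operation footage",
    "cute cat compilation 2017",
    "how to bake sourdough bread",
]
labels = [1, 1, 0, 0]

# TF-IDF features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(titles, labels)

# Anything scoring above the threshold is queued for human review,
# not removed automatically.
REVIEW_THRESHOLD = 0.5
for title in ["fight footage from the front", "bread baking tips"]:
    score = model.predict_proba([title])[0][1]
    if score >= REVIEW_THRESHOLD:
        print(f"flag for review: {title!r} (score={score:.2f})")
    else:
        print(f"no action: {title!r} (score={score:.2f})")
```

The key design point, consistent with Walker’s description, is that automation identifies candidates while final removal decisions still involve people.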
Google will also “greatly increase” the number of independent human experts in its YouTube Trusted Flagger program who will be available to manually review videos that are identified as being potentially, but not clearly, offensive, he said. Google will add 50 non-governmental organizations to the 60 NGOs it already has in place to conduct such manual reviews.
As a third measure, starting this week, Google will take a tougher stance on videos that do not blatantly violate its content policies but that nonetheless contain potentially inflammatory or supremacist content. Going forward, Google will tag such videos with a warning about the nature of the content. It will also prevent advertisements from being placed automatically next to such content and will disable comments and endorsements on those videos.
The fourth measure is to use Jigsaw, Google-parent Alphabet’s technology incubator, to create social campaigns against hate and radicalization, according to Walker. The goal is to find a way to reach potential terror recruits and redirect them to anti-terror messages designed to change their minds.
The new measures add to the multiple mechanisms and processes that Google already has in place to deal with inflammatory content and videos that violate its policies.
The company, for instance, already engages thousands of people from around the world to inspect and review potentially extremist and terrorist-related videos. Google has, for some time now, been using image-matching technology to prevent people from re-uploading content that was previously flagged and removed. The company also already uses what it describes as content-based signals to identify new videos for removal and has existing partnerships with counter-terrorism organizations and expert groups around the world to help it manage the problem.
Google’s new measures reflect the growing pressure on the company—and others such as Facebook, Twitter and Microsoft—to do more to prevent extremists and hate groups from using their online platforms to recruit and proselytize their cause.
Many have blamed the four companies for not being proactive enough in ensuring their channels are not misused to spread terror propaganda. After a March terror attack in the United Kingdom that killed four people, UK home secretary Amber Rudd said Google and the other companies needed to work more aggressively not only to remove terror propaganda from their sites, but also to prevent it from showing up in the first place.
Google, Twitter and Facebook face pressure on another front as well. The families of victims of various terror attacks in the U.S. and elsewhere have filed multiple lawsuits in recent months accusing the three companies of providing material support to terrorists through their platforms.
Google also has been under mounting pressure from major advertisers to deal with the issue. In recent months, many large companies have suspended or pulled their ad campaigns from Google after the company’s automated ad placement system mistakenly placed their ads next to extremist videos.
The growing concerns had already prompted the four companies to announce a partnership last December in which they agreed to share a common database of “hashes” that uniquely identify terrorist images and videos removed from their respective services. The goal of sharing the information is to ensure that content removed from one platform does not resurface on another.
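The companies have not published the details of that database, but the underlying idea is simple: fingerprint removed content and let every participant check uploads against the shared fingerprints. The sketch below illustrates it with invented function names and a cryptographic hash standing in for whatever fingerprinting scheme the companies actually use; in practice a robust, perceptual hash would be needed so that re-encoded copies still match.

```python
# Simplified illustration of the shared-hash idea. SHA-256 only catches
# byte-identical files; the real system would need hashes that survive
# re-encoding, cropping and other edits.
import hashlib
from pathlib import Path

# Stand-in for the industry-shared database of hashes of removed content.
shared_hash_db: set[str] = set()

def content_hash(path: Path) -> str:
    """Return a hex digest of the file's bytes."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def register_removed(path: Path) -> None:
    """Called when one platform removes a video: publish its hash."""
    shared_hash_db.add(content_hash(path))

def is_known_removed(path: Path) -> bool:
    """Checked by every platform at upload time."""
    return content_hash(path) in shared_hash_db

if __name__ == "__main__":
    video = Path("example_upload.mp4")
    video.write_bytes(b"fake video bytes for the demo")
    register_removed(video)          # platform A removes it and shares the hash
    print(is_known_removed(video))   # platform B can now block the re-upload: True
```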