YouTube to Up Number of Workers Rooting Out Improper Content

In a continued effort to crack down on inappropriate videos, YouTube is expanding its content moderation staff, bringing the total number of people working across Google to address the issue to more than 10,000 in 2018.

The effort includes removing content that is not appropriate for young viewers. In an open letter on YouTube’s official blog, CEO Susan Wojcicki noted that human reviewers remain vital both to removing content and to training the company’s machine-learning systems, which prompted the decision to expand the team working on the issue.

“We have begun training machine-learning technology across other challenging content areas, including child safety and hate speech,” Wojcicki said in the letter.

“In the last year, we took actions to protect our community against violent or extremist content, testing new systems to combat emerging and evolving threats,” Wojcicki wrote. “We tightened our policies on what content can appear on our platform, or earn revenue for creators. We increased our enforcement teams. And we invested in powerful new machine-learning technology to scale the efforts of our human moderators to take down videos and comments that violate our policies.

“Now, we are applying the lessons we’ve learned from our work fighting violent extremism content over the last year in order to tackle other problematic content. Our goal is to stay one step ahead of bad actors, making it harder for policy-violating content to surface or remain on YouTube.”

Wojcicki noted that since June, YouTube’s trust and safety teams have manually reviewed nearly 2 million videos for violent extremist content, helping to train its machine-learning technology to identify similar content in the future; over the same period, the company has removed more than 150,000 videos for violent extremism. YouTube is also working to moderate comments on videos.

YouTube is also looking to improve transparency. To that end, it will publish a regular report in 2018 offering more data about the flags it receives, as well as the actions taken to remove videos and comments that violate its guidelines.