YouTube plans to increase its total number of content reviewers to more than 10,000 people, and is also working on advanced machine-learning technology that could automatically flag suspect content for review. "Human reviewers remain essential to both removing content and training machine learning systems because human judgment is critical to making contextualized decisions on content," says CEO Susan Wojcicki. "Since June, our trust and safety teams have manually reviewed nearly 2 million videos for violent extremist content, helping train our machine-learning technology to identify similar videos in the future." Now, she says, 98 percent of videos removed for violent extremism are first flagged by algorithms. Nearly 70 percent of violent extremist content is removed within eight hours of upload, and nearly half within two hours.
Source: Forbes December 05, 2017 10:41 UTC