Facebook removes 3.2 billion fake accounts
Facebook said it removed 3.2 billion fake accounts between April and September this year, along with millions of posts depicting child abuse and suicide, according to its latest content moderation report released on Wednesday.
That more than doubles the number of fake accounts taken down during the same period last year, when 1.55 billion accounts were removed, according to the report.
The world’s biggest social network also disclosed for the first time how many posts it removed from popular photo-sharing app Instagram, which disinformation researchers have identified as a growing area of concern for fake news.
Proactive detection of violating content was lower across all categories on Instagram than on Facebook’s flagship app, where the company initially implemented many of its detection tools, the company said in its fourth content moderation report.
For example, the company said it proactively detected content affiliated with terrorist organizations 98.5% of the time on Facebook and 92.2% of the time on Instagram.
It removed more than 11.6 million pieces of content depicting child nudity and sexual exploitation of children on Facebook and 754,000 pieces on Instagram during the third quarter.
Law enforcement is concerned that Facebook’s plans to provide greater privacy to users by encrypting the company’s messaging services will hamper efforts to fight child abuse.
Last month, FBI Director Christopher Wray said the changes would turn the platform into a “dream come true for predators and child pornographers.”
Facebook also added data on actions it took around content involving self-harm for the first time in the report. It said it had removed about 2.5 million posts in the third quarter that depicted or encouraged suicide or self-injury.
The company also removed about 4.4 million pieces involving drug sales during the quarter, it said in a blog post.
Reuters