Facebook reports increased posts of graphic violence in Q1 2018

An estimated 3 to 4 percent of Facebook accounts were fake, the company said.

The company said that in the first quarter it took action on 837 million pieces of content for spam, 21 million for adult nudity or sexual activity, and 1.9 million for promoting terrorism.

"Of every 10,000 content views, an estimate of 22 to 27 contained graphic violence, compared to an estimate of 16 to 19 last quarter", Xinhua quoted the report as saying.

Facebook's vice president of product management, Guy Rosen, said in a blog post Tuesday about the newly released report that nearly all of the 837 million spam posts taken down in the first quarter of 2018 were found before anyone had reported them.

To distinguish the many shades of offensive content, Facebook separates it into categories: graphic violence, adult nudity/sexual activity, terrorist propaganda, hate speech, spam and fake accounts.

The social network's global scale - and the extensive efforts it undertakes to keep the platform from descending into chaos - was outlined Tuesday in its first ever transparency report. While the company seems to be very proficient at removing nudity and terrorist propaganda, it's lagging behind when it comes to hate speech.

Facebook's detection technology "still doesn't work that well" in the hate speech arena and needs to be checked by the firm's review workers, Mr Rosen said.

Facebook took down 3.4 million pieces of graphic violence during the first three months of this year, almost triple the 1.2 million during the previous three months.

Had the company not removed those fake accounts, its monthly user base would have swelled beyond its current 2.2 billion. The company attributed the quarter-on-quarter decline in fake-account removals to the "variability of our detection technology's ability to find and flag" fakes.

Only 38 percent of the deleted posts containing hate speech and xenophobia were discovered by Facebook's AI; the rest had to be reported by users first.
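That 38 percent figure is a proactive detection rate: the share of removed posts that Facebook's systems flagged before any user reported them. A small sketch of the arithmetic, using hypothetical counts since the article gives only the rate:

```python
def proactive_rate(machine_flagged: float, total_removed: float) -> float:
    """Percent of removed posts flagged by automated systems before a user report."""
    return machine_flagged / total_removed * 100

# Hypothetical counts for illustration only; the article reports the 38% rate, not raw numbers.
print(proactive_rate(0.95e6, 2.5e6))  # -> 38.0
```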

"We believe that increased transparency tends to lead to increased accountability and responsibility over time, and publishing this information will push us to improve more quickly too", he said.

While the removal of 583 million fake Facebook accounts is certainly noteworthy, it does little to address concerns regarding actual user privacy.

"Today's report gives you a detailed description of our internal processes and data methodology".

Facebook's head of global policy management, Monika Bickert, said the company had kept its commitment to recruit 3,000 more staff, lifting the number of people dedicated to enforcing its standards to 7,500 at the start of this year. This means the social media company still relies on its users and human reviewers to catch hate speech, and it will take some time for its AI to learn sarcasm and detect abusive speech reliably.
