One in every 1,000 posts viewed on Facebook is hate speech. In 2018, for the first time, a social platform was singled out in a report by the UN Human Rights Council, which found that Facebook had played a key role in spreading violent rhetoric against the Rohingya in Myanmar. The situation of this Muslim minority has been described as genocide by various organizations.
The social network presented this Thursday, November 19, 2020, its latest report on compliance with community standards on the platform, in which it stated that 95% of hate speech is proactively removed without users needing to report it.
The Menlo Park company (California, USA) explained that three years ago, in 2017, artificial intelligence proactively detected only 23.6% of hate messages; the rest were removed only after a user reported the content, which increased its exposure.
However, between July and September of this year, of the tens of millions of pieces of hate content deleted from Facebook and Instagram (which Facebook owns), 95% were proactively removed upon detection by artificial intelligence systems, significantly reducing their exposure time.
Specifically, the company removed 22.1 million Facebook posts for this reason in those three months (a similar amount to the previous quarter) and 6.5 million on Instagram (double the previous quarter).
It should be noted that the period covered by this Thursday's report includes part of the US presidential campaign, but not the home stretch (October), polling day itself, or the several days the vote count lasted in November.
The company led by Mark Zuckerberg included, for the first time, data on the "prevalence" of hate messages: the percentage of times people view content that violates the platform's community standards.
To quantify this variable, the company takes a random sample of content visible on Facebook (that is, content that has not been deleted) and analyzes which of those posts are hate messages (those attacking groups of people because of their race, gender, culture, religious beliefs, sexual orientation, etc.).
According to this method, the prevalence of hateful content between July and September was between 0.10% and 0.11%, meaning that out of every 10,000 times a user viewed content on Facebook, ten or eleven views involved hateful messages.
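The arithmetic behind the prevalence metric can be illustrated with a minimal sketch: take a random sample of viewed content, label each item as hate speech or not, and compute the labeled fraction. The function name, labels, and sample figures below are illustrative assumptions, not Facebook's actual methodology or code.

```python
# Illustrative sketch of a prevalence estimate: the fraction of sampled
# content views that were labeled as hate speech (1 = hate, 0 = not).
# All names and numbers here are hypothetical examples.

def estimate_prevalence(sample_labels):
    """Return the fraction of sampled views labeled as hate speech."""
    if not sample_labels:
        return 0.0
    return sum(sample_labels) / len(sample_labels)

# Hypothetical sample: 10,000 content views, 10 of them labeled hateful,
# matching the roughly 0.10% prevalence reported for July-September.
sample = [1] * 10 + [0] * 9_990
print(f"Estimated prevalence: {estimate_prevalence(sample):.2%}")  # 0.10%
```

A sample of that size only yields an estimate, which is why the company reports a range (0.10%–0.11%) rather than a single figure.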
In a phone call with a group of journalists, including EFE, to answer questions about the report, Facebook's vice president of integrity, Guy Rosen, defended the company's decision to bring its content moderators back to the office despite the covid-19 pandemic.
Rosen was responding to the letter published by 200 of these moderators, who asked to continue working from home to avoid the risk of infection. He explained that they are being asked to return to the office because some very sensitive material cannot be reviewed from home.