
The prevalence of hate speech on Facebook has halved in the last year

Enrique R
6 min read

The company has worked to reduce the visibility of this kind of content on the social network, with good results.

When hate on social media comes up, most people think of Twitter first. But what really characterizes Twitter is constant confrontation (something its algorithms tend to stimulate).

When it comes to hate speech that is expressed and made visible, it is Facebook that has had the most problems. However, the measures taken over the last 12 months show encouraging results, especially considering the reach and penetration the social network has worldwide, across all social strata.

Key mission: reduce the prevalence of hate speech

Since 2016, Facebook has run a fairly aggressive campaign to respond to complaints about discriminatory content, hateful language, and material that violates community standards, including content that broke the laws of the countries where it appeared. But it always acted after complaints were filed by users who felt affected by that kind of content.


The problem at that time was that the company was always chasing infractions, that is, acting only after hate messages had already gained some visibility on the platform.

That may seem irrelevant at an individual level, but we are talking about Facebook: millions of hate messages were getting wide visibility, and something had to be done.

Between 2016 and 2020, a lot of water passed under the bridge. Facebook now had a plan: proactive detection of hate speech. These are algorithmic changes that can remove such content before it is even published.

The anti-hate-speech policy brought new rules to Facebook

Many users have recently complained about a "level of censorship" that prevents them from posting certain content. Very likely it is just the algorithm doing its job; clearly it will never be perfect, but the results are plain to see.

In some cases, the content does not warrant a violation or a ban, but may be flagged as likely to contain hateful language.

In those cases, the algorithm keeps the post from being shown like ordinary content, reducing how often this kind of language actually reaches people's feeds.
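As a rough illustration of this remove / demote / allow logic (not Facebook's actual system; the classifier stub, thresholds, and action names below are assumptions made for the sketch):

```python
# Hypothetical sketch of the decision logic described above. The thresholds
# and the classifier stub are illustrative assumptions, not Facebook's code.

REMOVE_THRESHOLD = 0.95  # assumed: near-certain policy violation
DEMOTE_THRESHOLD = 0.60  # assumed: "likely hate speech", not provably violating

def hate_score(post_text: str) -> float:
    """Stand-in for a trained hate-speech classifier returning P(hate)."""
    return 0.99 if "<some slur>" in post_text.lower() else 0.10

def moderate(post_text: str) -> str:
    score = hate_score(post_text)
    if score >= REMOVE_THRESHOLD:
        return "remove"   # blocked before (or right after) publication
    if score >= DEMOTE_THRESHOLD:
        return "demote"   # stays up, but excluded from recommendations
    return "allow"        # distributed like any other content

print(moderate("hello world"))  # -> allow
```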

This work also involves a human team that reviews mountains of content, not just the algorithm, so it is fair to say Facebook has invested considerable resources in fighting violent and discriminatory language on its platform.


What numbers show the reduced visibility of hate speech on Facebook?

Anyone can talk, but Facebook has delivered verified reports on the visibility of this content for July 2020 and June 2021, nearly a year of constant evaluation. The results:

— July 2020. Prevalence of hate speech: between 0.10% and 0.11%

— June 2021. Prevalence of hate speech: 0.05%

What do these percentages refer to?

It is a measurement standard defined by Facebook. The figures come from calculating the visibility of content flagged as hate speech within a sample of 10,000 posts on the platform.

Out of a sample of 10,000 posts in July 2020, 10 to 11 were classified by the algorithm as hate speech; repeating the same test in June 2021, only 5 posts were classified as such.
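In other words, the prevalence figure is simply the flagged posts divided by the sampled posts, expressed as a percentage. A minimal sketch of that arithmetic (function and variable names are my own, not Facebook's):

```python
# Prevalence as described above: flagged posts per sample, as a percentage.
def prevalence_pct(flagged: int, sample_size: int = 10_000) -> float:
    return 100 * flagged / sample_size

print(prevalence_pct(11))  # July 2020 upper bound -> 0.11
print(prevalence_pct(5))   # June 2021 -> 0.05
```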

These results were made possible by the successful roll-out of pre-publication control with proactive detection. Many criticize it as censorship, and perhaps it partly is, but what interests Facebook, like any large company, are the results and what it can guarantee to its users and to society.

Working against hate speech is much more complex than removing content

With millions of posts uploaded to the platform every day, Facebook has had to be very careful with how it handles this issue.

That is why the company works to reduce the visibility of content that is potentially violent or discriminatory but cannot be conclusively shown to violate the rules.


One reason a gigantic human team sits behind millions of reviews is that a lot of content about hate speech is shared as analysis meant to combat the problem, as accounts from affected users, or even as posts calling on the social network to act against this kind of content.

If detection were purely keyword-based, an absurd amount of this content would be removed. Human review filters everything that gets published, and even content that is not truly dangerous often ends up with limited visibility: certain profiles, posts, or pages get excluded from recommendations (which obviously affects their reach).
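A toy example of why pure keyword matching fails, using a purely hypothetical filter: a post denouncing a slur triggers it exactly like a post using the slur as an attack, which is why human review of context matters.

```python
# Hypothetical naive keyword filter: it cannot tell attack from counter-speech.
BANNED = {"<slur>"}  # placeholder token; real lists are larger and contextual

def naive_flag(post: str) -> bool:
    return any(term in post.lower() for term in BANNED)

print(naive_flag("<slur> people do not belong here"))             # True: real hate
print(naive_flag("someone called me a <slur> today, please act")) # True: a victim's report, wrongly flagged
```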

Constant work on the proactive detection of violent language

Progress has been steady, and the social network keeps analyzing all the numbers to optimize its service.

One comparison they make is the share of content detected proactively by the algorithm, out of 100% of the content ultimately removed after human verification and decision-making (although in extreme cases the algorithm can remove content directly).

Results: when proactive detection was first applied, the algorithm accounted for 23.6% of all content eventually removed by human teams. By the latest measurement, that figure is close to 98%.
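That "proactive rate" is just the content found by the algorithm before any user report, divided by everything ultimately removed. A minimal sketch (the counts below are invented, scaled only to reproduce the article's percentages):

```python
# Proactive detection rate: share of removed content the algorithm found
# on its own, before any user report.
def proactive_rate_pct(found_by_algorithm: int, total_removed: int) -> float:
    return 100 * found_by_algorithm / total_removed

print(proactive_rate_pct(236, 1000))  # -> 23.6 (early measurements)
print(proactive_rate_pct(980, 1000))  # -> 98.0 (latest measurement)
```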

Working with a scientific methodology, the company keeps its focus on the final outcome: the content that is actually seen, including what slips past the applied controls. That is why the reports speak in percentages of hate speech viewed; since that visibility is the very thing under attack, it is the rate that must keep coming down.

Facebook itself handles plenty of reports, figures, and statistics, enough to present and validate its findings. But as such a controversial social network, it always turns to large, globally known auditing firms to validate the data and results of its measurements. In this case, as in many others, it has been assisted by the global audit firm EY.

Acknowledging that there may still be expressions of hate that its algorithms and human team fail to detect, the company continues working to make the experience safer and healthier for users.
