Facebook founder Mark Zuckerberg finally appeared before the US Senate to answer a multitude of questions about the issues his social media network has confronted in recent months. One of the most important takeaways from his Senate testimony is his commitment to fight hate speech and fake news on Facebook.
Facebook’s Plan to Fight Fake News
Zuckerberg admitted that Facebook failed to do enough to address the problem of fake news, conceding that the company did not take a broad enough view of its responsibility. He apologized and took responsibility for what has been happening on his near-monopolistic social media platform.
Zuckerberg said that changes in the way things are done are already underway. He acknowledged that it is not enough to simply enable connections among people; these connections need to be positive. Likewise, he noted that it’s not sufficient to give Facebook users control over their information. Something has to be done to make sure that developers (with whom Facebook users share their data) are compelled to protect the shared information. In essence, the Facebook CEO emphasized his commitment to protecting users’ information, something he admitted the company failed to do in its dealings with Cambridge Analytica.
These points on protecting Facebook users’ data had to be mentioned because fake news is largely targeted. For it to work, it has to reach the right audience. Zuckerberg wants to cut off this targeting by ensuring that users’ data does not fall into the hands of fake news operators.
When it comes to removing fake news content already posted online, none of the senators asked questions that explored the matter in great detail. However, Facebook has already introduced a number of solutions. At one point, Facebook provided the option to put indicators or flags on disputed or dubious content to warn readers about its credibility. This system, however, was eventually replaced with links to related articles. The change was meant to let readers decide on their own whether the posts they are reading are credible, by providing links to other articles that could serve as references for fact-checking. Then, early this year, Facebook floated the idea of letting users decide which sites to trust.
Suffice it to say, Facebook does not have a foolproof or fully dependable strategy for getting rid of fake news. It has changed methods a number of times over the past months, and none of these methods has won universal approval. The most recent change, letting users decide which sites to trust, is hailed by Facebook as the most objective, but critics call it an attempt to shift the burden away from the social media giant.
Facebook’s Plan to Address Hate Speech
Zuckerberg conceded that hate speech is harder to address than fake news, because it is difficult to determine precisely what constitutes hate speech. It’s more difficult than identifying terrorist propaganda, Zuckerberg opined. There are linguistic nuances that need to be carefully taken into account, and the artificial intelligence (AI) technology Facebook has developed so far is not good enough to handle these nuances accurately. Zuckerberg said it would take around five to ten years for AI to become accurate enough at finding and removing hate speech.
In the immediate future, Facebook’s plan focuses on hiring more people to moderate content on the social media network. Zuckerberg said that by the end of 2018, the company will have hired 20,000 staff around the world who natively speak the local languages of their respective countries. This emphasis on fluency in the native tongue reflects the need for people who can more competently and accurately identify hate speech. After all, it makes no sense to rely on staff who don’t understand the language of a country to identify and remove supposed hate speech there.
Hate Speech and Fake News are Language and Culture Specific
It can be said that hate speech and fake news are language- and culture-specific. Different countries with different languages have their own forms and variants of hate speech. Hate speech is spread purposefully, so its creators understandably have a target audience in mind. To identify this target audience, purveyors of hate speech need to take language into account. They also have to consider culture and sociopolitical trends to craft the most effective hate content to release.
Going back to Facebook’s plan for fighting hate speech, it certainly makes sense to hire locals to address hate speech locally. Facebook needs people who can precisely determine whether a certain post is indeed hate speech or simply a malice-free expression of opinion. Moreover, it’s not enough for the new Facebook moderators to be well-versed in the local language. They also need to understand the local culture and have a good grasp of local sociopolitical dynamics to competently decide whether something should be removed as hate speech or acknowledged as an exercise of freedom of speech.
Zuckerberg, in his Senate testimony, raised the point that hate speech is very language-specific. He cited the situation in Myanmar as an example of how Facebook is trying to properly deal with the problem. Facebook has been hiring dozens of Burmese-language content moderators because it needs the competence of locals in properly identifying hate speech. He explained that hate speech can be racially coded to incite violence and locals are the most qualified people to decode this hate speech “coding.”
For those unaware, Facebook has been blamed for helping fuel the Rohingya genocide in Myanmar. UN investigators have concluded that Facebook played a major role in inciting hate against the Rohingya people.
Why Facebook Needs Translators to Fight Hate Speech and Fake News
Mark Zuckerberg didn’t exactly say that he wants translators to help Facebook fight fake news and hate speech. As mentioned, the plan is to hire more moderators who have a good grasp of the local language of the country where moderation is being implemented. The case for translators rests on the fact that human Facebook moderators in different countries are not always objective. They can be so partisan that, instead of objectively eliminating fake news and hate speech, they may simply weed out opposing views.
One good example is the case of the Philippines. The Philippine president is highly popular on Facebook. Many Filipino Facebook users (although there are claims that many of these are troll accounts) habitually share positive posts about the current administration. Unfortunately, a lot of these positive posts turn out to be fake news, deception, false claims, or credit-grabbing. There are ridiculous claims like the one about Marawi City having been rehabilitated and transformed into a beautiful European-looking city. There are posts claiming international praise for the president, like the supposed kind words from the Queen of England. Some government officials also habitually post misleading and borderline credit-grabbing claims, as in a couple of instances when a high-ranking public works official attributed the completion of bridges to the present government when they were in fact projects of the previous administration.
And, of course, hate speech against critics abounds. Supporters of the current Philippine president routinely propagate false allegations against personalities or groups that criticize the present administration. A lady senator suffered humiliating memes after being falsely linked to a sex scandal video. Another senator who is a critic is regularly called an id*ot, useless, and a nuisance every time he comments unflatteringly on the government’s actions. Activists, journalists, opposition politicians, church leaders, and celebrities who express views unfavorable to the government are promptly harassed, insulted, or threatened. This is not to say that critics are never involved in hate speech, but its use is more evident among pro-government Facebook users.
Unfortunately, Facebook moderation in the Philippines does not seem to work as intended, especially when it comes to hate speech aimed at government critics. Ironically, there have been a number of instances when the pages or accounts of critics and journalists were suspended despite not violating Facebook’s community guidelines. At least one of these cases was in fact raised in a Philippine Senate hearing. There are allegations that the staff handling Facebook in the Philippines favor the government, something that cannot be totally dismissed considering that Philippine government officials have visited Facebook’s office a couple of times.
This is why Facebook needs translators. Hiring locals to evaluate and remove hate speech and fake news posts may be a good idea, but in some countries it is not easy to set aside partisanship or political leanings. Facebook can use competent translators to translate posts objectively and accurately. These translated posts can then be impartially evaluated by moderators who are disinterested parties to the sociopolitical issues of a specific country. Professional translators, even when they come from the same country whose Facebook posts they translate, can be expected to observe objectivity and accuracy, lest they lose their jobs.
Facebook has its own translation function: posts can be instantly translated through the “See Translation” option right below posts or comments. However, it is not that accurate; the translations often disregard context. Facebook can add more credibility to its efforts against fake news and hate speech by partnering with a language service provider like Day Translations.
This idea of using translators would entail more costs for Facebook, but it’s worth considering. If Facebook is really serious about preventing its social media network from becoming a tool for modern strongmen, it should willingly take on the added costs and take pride in hampering vile intentions. To reduce costs, translators can be deployed selectively depending on the sociopolitical situation of the countries where Facebook content is being moderated.
While Facebook is still in the process of perfecting its AI system to fight fake news and hate speech, relying on people to moderate posts is the logical thing to do. However, it’s important for Facebook to acknowledge that people will always have political leanings or a subjective reading of content that relates to them. Hiring local moderators for their language proficiency and sociocultural familiarity is not a bad idea; such moderators can better evaluate fake news and hate speech and decide what to do with it. However, local moderation can be a bane in countries where emerging strongmen enjoy public support and are adept at using various forms of propaganda to exploit the gullibility of their citizens.
Day Translations, Inc. is a global provider of expert language services committed to delivering the highest quality translation, interpretation, localization, and other related services. Through native speakers based in different parts of the world, the company provides services in over 100 languages, all performed by professional human translators. It serves businesses and organizations of all sizes and is open 24/7 throughout the year to help clients with all kinds of language service needs. Clients can reach Day Translations through its contact form, by telephone at 1-800-969-6853, or through its official app, Terpy, which can be downloaded for free from Google Play and iTunes.
Image Copyright: ximagination / 123RF Stock Photo