Tuesday, September 1, 2020

Facebook touts beefed up hate speech detection ahead of Myanmar election

Facebook has offered a little detail on extra steps it’s taking to improve its ability to detect and remove hate speech and election disinformation ahead of Myanmar’s election. A general election is scheduled to take place in the country on November 8, 2020.

The announcement comes close to two years after the company admitted a catastrophic failure to prevent its platform from being weaponized to foment division and incite violence against the country’s Rohingya minority.

Facebook now says it has expanded its misinformation policy with the aim of combating voter suppression and will remove information “that could lead to voter suppression or damage the integrity of the electoral process”, giving the example of a post that falsely claims a candidate is a Bengali, not a Myanmar citizen, and thus ineligible to stand.

“Working with local partners, between now and November 22, we will remove verifiable misinformation and unverifiable rumors that are assessed as having the potential to suppress the vote or damage the integrity of the electoral process,” it writes.

Facebook says it’s working with three fact-checking organizations in the country — namely: BOOM, AFP Fact Check and Fact Crescendo — after introducing a fact-checking program there in March.

In March 2018 the United Nations warned that Facebook’s platform was being abused to spread hate speech and whip up ethnic violence in Myanmar. By November of that year the tech giant was forced to admit it had not stopped its platform from being repurposed as a tool to drive genocide, after a damning independent investigation slammed its impact on human rights.

On hate speech, which Facebook admits could suppress the vote in addition to leading to what it describes as “imminent, offline harm” (aka violence), the tech giant claims to have invested “significantly” in “proactive detection technologies” that it says help it “catch violating content more quickly”, albeit without quantifying the size of its investment or providing further details. It only notes that it “also” uses AI to “proactively identify hate speech in 45 languages, including Burmese”.

Facebook’s blog post offers a metric to imply progress — with the company stating that in Q2 2020 it took action against 280,000 pieces of content in Myanmar for violations of its Community Standards prohibiting hate speech, of which 97.8% were detected proactively by its systems before the content was reported to it.

“This is up significantly from Q1 2020, when we took action against 51,000 pieces of content for hate speech violations, detecting 83% proactively,” it adds.
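For context, those percentages imply a split between content Facebook’s systems caught before anyone reported it and content that users flagged first. The back-of-the-envelope calculation below is our own arithmetic on the figures Facebook cites, not a breakdown the company has published:

```python
# Back-of-the-envelope arithmetic on the Myanmar hate speech figures Facebook cites.
# The proactive vs user-reported split is implied by those figures, not published by Facebook.

quarters = {
    "Q1 2020": {"actioned": 51_000, "proactive_rate": 0.83},
    "Q2 2020": {"actioned": 280_000, "proactive_rate": 0.978},
}

for quarter, stats in quarters.items():
    proactive = round(stats["actioned"] * stats["proactive_rate"])
    reported = stats["actioned"] - proactive
    print(f"{quarter}: ~{proactive:,} caught proactively, ~{reported:,} actioned after user reports")

# Q1 2020: ~42,330 caught proactively, ~8,670 actioned after user reports
# Q2 2020: ~273,840 caught proactively, ~6,160 actioned after user reports
```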

However, without greater visibility into the content Facebook’s platform is amplifying, including country-specific factors such as whether hate speech posting is increasing in Myanmar as the election gets closer, it’s not possible to understand what volume of hate speech is passing under the radar of Facebook’s detection systems and reaching local eyeballs.

In a more clearly detailed development, Facebook notes that since August, electoral, issue and political ads in Myanmar have had to display a ‘paid for by’ disclosure label. Such ads are also stored in a searchable Ad Library for seven years — in an expansion of the self-styled ‘political ads transparency measures’ Facebook launched more than two years ago in the US and other western markets.

Facebook also says it’s working with two local partners to verify the official national Facebook Pages of political parties in Myanmar. “So far, more than 40 political parties have been given a verified badge,” it writes. “This provides a blue tick on the Facebook Page of a party and makes it easier for users to differentiate a real, official political party page from unofficial pages, which is important during an election campaign period.”

Another recent change it flags is an ‘image context reshare’ product, launched in June, which Facebook says alerts a user when they attempt to share an image that’s more than a year old and could be “potentially harmful or misleading” (such as an image that “may come close to violating Facebook’s guidelines on violent content”).

“Out-of-context images are often used to deceive, confuse and cause harm. With this product, users will be shown a message when they attempt to share specific types of images, including photos that are over a year old and that may come close to violating Facebook’s guidelines on violent content. The warning that the image they are about to share could be harmful or misleading will be triggered using a combination of artificial intelligence (AI) and human review,” it writes, without offering any specific examples.
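Facebook doesn’t spell out how that check works. As a purely illustrative sketch of the decision logic it describes (warn when an image is over a year old and has been flagged, by AI or human review, as coming close to violating violent-content rules), the function names and threshold below are our own assumptions, not Facebook’s systems:

```python
from datetime import datetime, timedelta

# Illustrative sketch of the reshare-warning logic Facebook describes:
# warn when an image is more than a year old AND AI or human review has
# flagged it as coming close to violating violent-content guidelines.
# All names and the score threshold are assumptions, not Facebook internals.

ONE_YEAR = timedelta(days=365)

def should_warn_on_reshare(image_first_seen: datetime,
                           near_violation_score: float,
                           human_flagged: bool,
                           score_threshold: float = 0.8) -> bool:
    is_old = datetime.utcnow() - image_first_seen > ONE_YEAR
    looks_risky = human_flagged or near_violation_score >= score_threshold
    return is_old and looks_risky

# An image first shared two years ago, with a high "near-violation" score
# from the classifier, would trigger the context warning before resharing.
print(should_warn_on_reshare(datetime(2018, 8, 1), 0.9, False))  # True
```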

Another change it notes is a limit on message forwarding to five recipients, which Facebook introduced in Sri Lanka back in June 2019.

“These limits are a proven method of slowing the spread of viral misinformation that has the potential to cause real world harm. This safety feature is available in Myanmar and, over the course of the next few weeks, we will be making it available to Messenger users worldwide,” it writes.
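Facebook doesn’t explain the mechanics, but the intuition behind forwarding caps is straightforward: limiting the fan-out of each forward shrinks how quickly a message’s maximum audience can grow. A rough illustration, using arbitrary fan-out figures rather than any Messenger data:

```python
# Why capping forwards slows viral spread: if every recipient forwards a
# message once, the maximum audience grows roughly as fanout ** hops.
# The fan-out values below are arbitrary illustrations, not Messenger data.

def max_reach(fanout: int, hops: int) -> int:
    return sum(fanout ** h for h in range(1, hops + 1))

for fanout in (5, 20):
    print(f"fanout {fanout}: up to {max_reach(fanout, 4):,} people after 4 hops")

# fanout 5: up to 780 people after 4 hops
# fanout 20: up to 168,420 people after 4 hops
```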

On coordinated election interference, the tech giant has nothing of substance to share — beyond its customary claim that it’s “constantly working to find and stop coordinated campaigns that seek to manipulate public debate across our apps”, including groups seeking to do so ahead of a major election.

“Since 2018, we’ve identified and disrupted six networks engaging in Coordinated Inauthentic Behavior in Myanmar. These networks of accounts, Pages and Groups were masking their identities to mislead people about who they were and what they were doing by manipulating public discourse and misleading people about the origins of content,” it adds.

In summing up the changes, Facebook says it’s “built a team that is dedicated to Myanmar”, which it notes includes people “who spend significant time on the ground working with civil society partners who are advocating on a range of human and digital rights issues across Myanmar’s diverse, multi-ethnic society” — though clearly this team is not operating out of Myanmar.

It further claims engagement with key regional stakeholders will ensure Facebook’s business is “responsive to local needs”, something the company demonstrably failed to be back in 2018.

“We remain committed to advancing the social and economic benefits of Facebook in Myanmar. Although we know that this work will continue beyond November, we acknowledge that Myanmar’s 2020 general election will be an important marker along the journey,” Facebook adds.

There’s no mention in its blog post of accusations that Facebook is actively obstructing an investigation into genocide in Myanmar.

Earlier this month, Time reported that Facebook is using US law to try to block a request from the West African nation of The Gambia for information related to Myanmar military officials’ use of its platforms.

“Facebook said the request is ‘extraordinarily broad’, as well as ‘unduly intrusive or burdensome’. Calling on the U.S. District Court for the District of Columbia to reject the application, the social media giant says The Gambia fails to ‘identify accounts with sufficient specificity’,” Time reported.

“The Gambia was actually quite specific, going so far as to name 17 officials, two military units and dozens of pages and accounts,” it added.

“Facebook also takes issue with the fact that The Gambia is seeking information dating back to 2012, evidently failing to recognize two similar waves of atrocities against Rohingya that year, and that genocidal intent isn’t spontaneous, but builds over time.”

In another recent development, Facebook has been accused of bending its hate speech policies to ignore inflammatory posts made against Rohingya Muslim immigrants by Hindu nationalist individuals and groups.

The Wall Street Journal reported last month that Facebook’s top public-policy executive in India, Ankhi Das, opposed applying its hate speech rules to T. Raja Singh, a member of Indian Prime Minister Narendra Modi’s Hindu nationalist party, along with at least three other Hindu nationalist individuals and groups flagged internally for promoting or participating in violence — citing sourcing from current and former Facebook employees.



from Social – TechCrunch https://ift.tt/3gSzmEx
