Facebook reported on August 15, 2018, that its efforts to deal with misinformation, fake news, and hate speech in Myanmar had been slow and inadequate:
The ethnic violence in Myanmar is horrific and we have been too slow to prevent misinformation and hate on Facebook.
It cited some technical issues and other reasons why it failed to decisively address misinformation in Myanmar:
The rate at which bad content is reported in Burmese, whether it’s hate speech or misinformation, is low. This is due to challenges with our reporting tools, technical issues with font display and a lack of familiarity with our policies.
During his appearance before the United States Senate in April 2018, Facebook Chief Executive Officer Mark Zuckerberg touted his company's progress in curbing the spread of hate speech in countries like Myanmar.
But six civil society groups signed a letter disputing Zuckerberg's claim and highlighting the “inherent flaws” in Facebook's ability to respond to emergencies. Zuckerberg was quick to apologize and vowed to do more to stop groups from using Facebook to promote religious violence and discrimination in Myanmar.
Facebook usage has surged in Myanmar over the past several years, but this has also led to the widespread dissemination of fake news, hate speech, and other forms of misinformation targeting the country’s Muslim minority, especially the stateless Rohingya population.
Hardline Buddhist groups were accused of fomenting hatred and bigotry against the Rohingya, which led to violent clashes, the displacement of Muslim residents in Rakhine State, and the intensification of online persecution against minorities.
The government of Myanmar refuses to recognize the Rohingya as one of the country’s ethnic groups and considers them illegal immigrants.
Even before Zuckerberg's testimony in the United States Senate, United Nations officials blamed Facebook for its failure to prevent hate speech in Myanmar.
Marzuki Darusman, chairperson of the Independent International Fact-Finding Mission on Myanmar, reported on March 12:
[H]ate speech and incitement to violence on social media is rampant, particularly on Facebook. To a large extent, it goes unchecked.
Yanghee Lee, Special Rapporteur on human rights in Myanmar, told members of the 37th session of the Human Rights Council:
[T]he level of hate speech, particularly on social media, has a stifling impact on asserting sensitive and unpopular views.
A recent update from Facebook reviews the mechanisms it has implemented over the past five months to address hate speech and misinformation, including the following programs carried out by the company:
In the second quarter of 2018, we proactively identified about 52% of the content we removed for hate speech in Myanmar.
As of this June, we had over 60 Myanmar language experts reviewing content and we will have at least 100 by the end of this year.
We proactively identified posts that indicated a threat of credible violence in Myanmar. We removed the posts and flagged them to civil society groups to ensure that they were aware of potential violence.
The Facebook update was issued a day after Reuters published a special feature about the ‘meager’ resources allotted by the tech company to resolve complaints relating to hate speech in Myanmar. Reuters also identified around 1,000 posts with hate speech content which could still be accessed on Facebook during the first week of August.
Now that Facebook recognizes the link between online hate speech and the violence inflicted on Myanmar’s minority groups, it remains to be seen to what extent the company's actions will halt the dissemination of hateful content. This development, however, should embolden civil society groups and other human rights advocates to place greater pressure on Facebook and other digital platforms to prevent the publication and broadcasting of misinformation in Myanmar and around the world.