

In 2015, six of the 10 websites in Myanmar getting the most engagement on Facebook were from legitimate media, according to data from CrowdTangle, a Facebook-run tool. A year later, Facebook (which recently rebranded to Meta) offered global access to Instant Articles, a program publishers could use to monetize their content.

One year after that rollout, legitimate publishers accounted for only two of the top 10 publishers on Facebook in Myanmar. All the engagement had instead gone to fake news and clickbait websites. In a country where Facebook is synonymous with the internet, the low-grade content overwhelmed other information sources.

It was during this rapid degradation of Myanmar’s digital environment that a militant group of Rohingya, a predominantly Muslim ethnic minority, attacked and killed a dozen members of the security forces in August of 2017. As police and military began to crack down on the Rohingya and push out anti-Muslim propaganda, fake news articles capitalizing on the sentiment went viral. They claimed that Muslims were armed, that they were gathering in mobs 1,000 strong, that they were around the corner coming to kill you.

It’s still not clear today whether the fake news came primarily from political actors or from financially motivated ones. But either way, the sheer volume of fake news and clickbait acted like fuel on the flames of already dangerously high ethnic and religious tensions. It shifted public opinion and escalated the conflict, which ultimately led to the death of 10,000 Rohingya, by conservative estimates, and the displacement of 700,000 more.

In 2018, a United Nations investigation determined that the violence against the Rohingya constituted a genocide and that Facebook had played a “determining role” in the atrocities. Months later, Facebook admitted it hadn’t done enough “to help prevent our platform from being used to foment division and incite offline violence.”

Over the last few weeks, the revelations from the Facebook Papers, a collection of internal documents provided to Congress and a consortium of news organizations by whistleblower Frances Haugen, have reaffirmed what civil society groups have been saying for years: Facebook’s algorithmic amplification of inflammatory content, combined with its failure to prioritize content moderation outside the US and Europe, has fueled the spread of hate speech and misinformation, dangerously destabilizing countries around the world.

But there’s a crucial piece missing from the story. Facebook isn’t just amplifying misinformation. An MIT Technology Review investigation, based on expert interviews, data analyses, and documents that were not included in the Facebook Papers, has found that Facebook and Google are paying millions of ad dollars to bankroll clickbait actors, fueling the deterioration of information ecosystems around the world.

But the move also conveniently captured advertising dollars from Google. Before Instant Articles, articles posted on Facebook would redirect to a browser, where they’d open up on the publisher’s own website. The ad provider, usually Google, would then cash in on any ad views or clicks. With the new scheme, articles would open up directly within the Facebook app, and Facebook would own the ad space. If a participating publisher had also opted in to monetizing with Facebook’s advertising network, called Audience Network, Facebook could insert ads into the publisher’s stories and take a 30% cut of the revenue.

Instant Articles quickly fell out of favor with its original cohort of big mainstream publishers. For them, the payouts weren’t high enough compared with other available forms of monetization. But that was not true for publishers in the Global South, which Facebook began accepting into the program in 2016.
