Fake news. Says who?

Social media companies confront the ‘infodemic’ during COVID-19

By Terrance J. Mintner • July 1, 2020



The coronavirus pandemic has just passed 10 million infections worldwide, and more than 500,000 people have died. Meanwhile, governments around the world are swamped as they try to stem the rapid spread of the virus and the economic fallout it has caused.

On top of all that, they’ve had to deal with another menace. A tsunami of false and misleading information about COVID-19 has flooded social media. The European Commission warned last month that “foreign actors and certain third countries, in particular Russia and China” are likely behind the disinformation campaigns.

“Infodemic” is the new buzzword to describe this explosion of fake news. 

Here is just a small sample of the toxic ideas in circulation: “Drinking bleach can cure the virus” (Facebook); “Hydroxychloroquine is the miracle cure” (Twitter); “Washing hands does not help” (Facebook and Twitter); and “By wearing a mask, the exhaled viruses will not be able to escape and will concentrate in the nasal passages, enter the olfactory nerves and travel into the brain” (Facebook and Twitter). Health experts are trying to keep these and other half-baked notions at bay.

Given the spread of false information over matters of life and death, it’s no surprise that governments feel compelled to step in. They’re demanding much more vigilance from social media companies.

Last month, the European Commission required tech firms to provide officials with detailed monthly reports on how they are fighting fake news. Members of the British government have suggested the tech giants should audit their algorithms. New Zealand and Brazil, among other countries, want to subject these companies to tougher regulations.

To their credit, social media companies are complying. In March, several of them, including Facebook, Google, Twitter and Microsoft, publicly pledged to cooperate in the fight against fake news during COVID-19.

A few weeks ago, a Facebook spokesperson said the company took down “hundreds of thousands” of harmful coronavirus posts. In other cases, it opted to place warning labels on “90 million pieces of misinformation.”

But these efforts are hardly enough. A peer-reviewed study published June 9 in the journal Psychological Medicine showed that people who consume news on social media are more likely to disregard official health guidelines. Specifically, it stated that “there was a very strong negative relationship between holding one or more conspiracy beliefs and following all health-protective behaviors.”

Based on surveys conducted in Britain in April and May, the BBC reported that “56% of people who believe that there’s no hard evidence the coronavirus exists get a lot of their information from Facebook, compared with 20% of those who reject the conspiracy theory.”

Speaking of conspiracy theories, the surveys found that 60% of those who believe in a causal link between 5G and COVID-19 “get a fair amount or great deal of their information on the virus from YouTube.”

The study in Psychological Medicine, conducted by a research team at King’s College London, concludes that social media in the UK are “largely unregulated.” “One wonders how long this state of affairs can be allowed to persist while social media platforms continue to provide a worldwide distribution mechanism for medical misinformation,” the researchers added.

But how can social media firms keep up with a gazillion posts per day? They might take some down or put warning labels on others, but many slip through.

Plus, there is a larger issue here. Big tech firms have already promised to do more. This means they will more closely police the borders between news, legitimate opinion, and fake or misleading information. That’s a tall order. Why do they have the right to control, filter and channel public opinion?


“This is kind of like the search for the Holy Grail,” Dr. Tal Pavel, an educator, entrepreneur and researcher with expertise in cyberspace, tells Spectory. Pavel founded CyBureau, an organization that equips internet and technology users with cybersecurity tools and solutions.

“The value of Facebook’s shares has collapsed in the past week because of claims the company failed to act on cases of fake news or hate speech,” he adds.

Facebook has indeed become something of a whipping boy of late. In the past few days, hundreds of businesses have joined a boycott of Facebook ads, seeking to pressure a company they say can do much more to tackle hate speech and misinformation. The boycott may also be a convenient move for them, as ad budgets have fallen drastically during the pandemic.

The stats seem to justify their concerns. A study published last month by the Center for Countering Digital Hate found that hundreds of Facebook and Twitter posts spreading misinformation about the coronavirus were left online even after users flagged them: “90% remained visible online afterwards without any warnings attached.”

Facebook could counter this, Pavel says, by reiterating that it’s just a platform. “On the other hand, when users do not act according to Facebook’s regulations and rules, the company will shut down their accounts immediately. So, it’s not just a platform.”


Some critics argue that this “we’re just a platform” status should be revoked or updated. In the U.S., the tech giants are protected by Section 230 of the Communications Decency Act (1996). The legislation stipulates that they cannot be held legally liable for user-generated content, because the law does not treat them as publishers of that content the way it would a traditional newspaper.

The law also lets them engage in “good Samaritan” moderation of “objectionable” content without taking on a publisher’s liability.

Last month U.S. President Donald Trump issued an executive order attempting to curb these protections. He was steaming mad after Twitter appended fact checks to several of his tweets about mail-in voting.

But that’s politics. Maybe the pandemic will force tech firms to take a more hands-on approach to misinformation.

“If there will be enough pressure on Facebook’s management to change its attitude toward fake news – its ability to pinpoint and exclude false information from its platform – perhaps it will succeed. But I do not think this will amount to long-term change,” Pavel says. 

“Facebook will shift temporarily according to the pressure, but the company will not stop profiting from our data. There’s no turning back to a time in which we are less exposed online, with more privacy and anonymity. The same is true with regard to fake news.”

Furthermore, he explains, Facebook might not have the technical ability to expose different kinds of digital manipulation, including deepfake videos. For example, real content from the past can be republished to sow confusion, and real images and videos can be subtly manipulated, making it hard to determine what’s real and what’s not.

In the end we’re left with the bigger question: Why should we give tech companies all this power to decide what constitutes “fake news” and what comes through our screens?

“Due to [their] widespread reach and viral posts, social media platforms can shape how we analyze and perceive reality,” Pavel concludes.

That’s a power regulators will need to work hard to scale back.

