Social media platforms aren’t doing enough to fight COVID-19 untruths

If ever there was a time when the best and worst of social media were on display, the pandemic may be it. As much of the world's population has hunkered down at home for extended periods to slow the spread of COVID-19, social media platforms have experienced an enormous surge in usage. As a means of staying connected while physically distancing, social media has proved an invaluable source of information and community.

But the pandemic has also revealed a more disturbing side of social media. The 2016 election of Donald Trump provided a wake-up call as to how digital platforms can become a forum for fomenting large-scale discord through misinformation.

Facebook CEO Mark Zuckerberg vowed to crack down on misinformation during the pandemic. Credit: AP

Facebook went on the front foot early when the pandemic hit. It placed a coronavirus information centre at the top of its News Feed with verified information, committing itself to restricting the distribution of false information about vaccines and having fact-checkers place warnings on posts considered dubious.

It recently boasted that from April to June it had applied warning labels to 98 million pieces of COVID-19 misinformation, had removed 7 million pieces of content that could lead to imminent harm and had directed more than 2 billion people to resources from credible health authorities.

But according to a global human rights group, misinformation about vaccines and other health topics on Facebook was viewed an estimated 3.8 billion times during the past year, peaking in April, soon after Facebook introduced its new measures to stem the flow. The study also found that more than 80 per cent of the suspect posts it examined carried no warning label from fact-checkers. The findings raise serious concerns about Facebook's ability to monitor and control such large amounts of information.

The World Health Organisation has also expressed growing alarm over the rampant spread of misinformation across many nations. "We're not just battling the virus," WHO director-general Tedros Adhanom Ghebreyesus recently said. "We're also battling the trolls and conspiracy theorists that push misinformation and undermine the outbreak response."

The WHO says it is collaborating with more than 50 digital companies and social media platforms, including TikTok, Google, Viber, WhatsApp, and YouTube, to ensure credible information is shown when people go searching for news.

Closer to home, last week The Age's Facebook page was hit by a co-ordinated spam attack that forced the removal of an article about suicide rates during the pandemic. The page was flooded with thousands of comments including threats, abuse, defamation and conspiracy theory material.

The attack was led by an Australian group that believes the severity of the coronavirus is exaggerated, or in some cases that the illness is not real and is a ploy to allow governments to assert more control over the lives of citizens. What has frustrated some media organisations is that Facebook will not allow comments to be turned off, a simple way for such attacks to be stopped.

It is no great revelation that social media is awash with bogus information. But during a pandemic, such falsehoods and deception can have real consequences. Irresponsible individual actions can have dire outcomes for the collective.

While much of the economy is in a pandemic-induced coma, big tech companies are enjoying healthy profits. There is no excuse for them not to be doing more to ensure social media platforms are a force for good during such times.

