Contagion, A Digital Remix
Social Media Misinformation Practices Around COVID-19

Mishaela Robison, Catherine Kim, Shruti Juneja, and Joyce Er

July 5, 2020

[Image: the ultrastructural morphology exhibited by coronaviruses.]

I. Introduction

     The coronavirus pandemic has disrupted lives in unprecedented ways at the individual, national, and international levels. These disturbances – and subsequent reactions to them – have not only tested the strengths of institutions, but also exposed and exacerbated their weaknesses. Our flawed understanding and regulation of social media misinformation is a clear demonstration of these weaknesses. The widespread misinformation surrounding the 2016 U.S. presidential election exemplified the effects of such technological power. In the context of the pandemic, technology’s negative influence has only grown: the Harvard Gazette recently reported that “nearly two-thirds of Americans said they have seen news and information about the disease that seemed completely made up.” The following briefing explores how coronavirus misinformation has become so widespread and influential.



     To better understand the methods of disinformation dissemination, we analyzed 10 major social media sites: Facebook, Instagram, Twitter, LinkedIn, YouTube, Reddit, Tumblr, WhatsApp, Snapchat, and TikTok. We chose these sites due to their popularity and user demographics. By examining the different affordances of these sites and the impact of these affordances on the spread of disinformation, we hope to demonstrate how false information spreads through a variety of formats.


II. Platform Structures

     The 10 sites we examined varied based on their user norms and the affordances that they provide. Based on these factors, we divided them into four different categories: (1) text-based personal one-to-many, (2) image-based personal one-to-many, (3) anonymized public one-to-many, and (4) private communication.


Text-Based Personal One-to-Many

     Sites such as Facebook, Twitter, and LinkedIn employ an aggregated feed that presents content based on users’ curated relations (friends, subscribed accounts, and connections). Although algorithms determine which posts make up a feed and in what order, the content itself comes from a user’s chosen connections, which typically correspond to individuals the user knows, or knows of, in an offline context. Accounts on these sites typically contain warrants that tie to an offline self, such as a user’s full name, personal images, and public friends list. For this reason, these sites tend to lack anonymity.

     Users can post in a variety of formats (e.g. text, images, videos, articles), but text is the default, and uploads in other mediums often include written additions, prompting this category’s designation as text-based. The broadcast nature of these sites means that posts are published on a “one-to-many” basis, with a single user’s content presented to their connections as soon as they hit send. (This framework excludes direct communication features such as Facebook Messenger, Twitter Direct Messaging, and LinkedIn Messaging.) Although privacy settings can allow for more direct (one-to-one or one-to-select-few) transmissions, the default setting remains the same.


Image-Based Personal One-to-Many

     Similar to their text-based counterparts, Instagram, TikTok, and YouTube allow for content broadcasting at the one-to-many level in a personal manner, connecting each online account to an offline self. On each of these platforms, every post must include some form of image or video that typically conveys the creator’s main message, with text added as a supplement. The image-driven nature of these platforms serves as an additional connection to the offline self, making anonymity, again, rare.


Anonymized Public One-to-Many

     Sites such as Reddit and Tumblr share the aggregated news feed of the prior categories, but they differ in their norm of relative anonymity. While posts and comments on these sites are tied to the poster’s username, accounts do not typically provide many warrants to offline identities unless users actively choose to self-disclose. Accounts on these sites rarely contain a user’s actual name, for example; instead, users refer to one another by their usernames, unless they have developed enough of a relationship to know each other’s names or preferred pseudonyms. Similarly, images containing the user are neither required nor normally posted. Instead, users reveal aspects of themselves through the information they share or the communities (e.g. subreddits and hashtags) in which they spend their time. Although some users may adopt more revealing practices (e.g. posting a name, uploading an image of themselves, describing their families), those cases are the exception, not the rule. Despite the lack of personally revealing information on these sites, they still operate through a broadcast one-to-many framework, allowing a user’s post to reach many others.


Private Communication

     Unlike the other categories, Snapchat and WhatsApp center around personal, one-to-one communication. Users on these sites tend to personally know those with whom they are in contact, as evidenced by the need for personal information (e.g. a username or a phone number) to contact them. Though these apps can allow for one-to-many communication (Snapchat stories or group chats), their central focus is singular interpersonal communication, as evidenced by features that incentivize consistent one-on-one interactions (e.g. Snapchat streaks, best friends) and the comparative lack of incentives for one-to-many communication.


III. What Does Misinformation Look Like?

     Since the coronavirus outbreak, false information has influenced many social media users. Identified types of COVID-19 misinformation include (1) fake cures and preventions, (2) downplaying the impact of the virus, and (3) conspiracy theories about the “origin” or “intention” of the virus. Users share false statements as articles, posts, or videos on popular social media sites, often accompanied by popular hashtags.


Fake Cures and Preventions 

     False “cures and preventions” cause people to underestimate the threat of COVID-19 and can even lead to irreversible damage to individuals’ health. On Facebook, for instance, users posted and shared articles claiming that drinking bleach or garlic water could cure COVID-19, and that the hot air from a hairdryer could protect against infection. On Facebook and YouTube, a documentary named “Plandemic” features a discredited doctor, Judy Mikovits, claiming that wearing masks increases the likelihood of contracting the virus.

The documentary “Plandemic” was later taken down from both platforms, and Facebook eventually promised to take more action against COVID-19 misinformation. However, a study of the Facebook “infodemic” conducted by Avaaz (a U.S.-based nonprofit organization focused on global activism) found that “it can take up to 22 days for (Facebook) to downgrade and issue a warning on such (misinformation) content, giving ample time for it to go viral.”


Downplaying the impact of the virus 

     Despite the growing number of global COVID-19 infections and reports of overwhelmed hospitals and healthcare workers, some individuals still deny the severity of the pandemic. On TikTok, the hashtag #filmyourhospital went viral, encouraging people to film their local ERs to show how uncrowded they were. Containing the outbreak requires cooperation, and coronavirus denialism holds back COVID-19 response efforts, particularly when individuals fail to follow measures advocated by immunologists and virologists.


Conspiracy theories 

     Meanwhile, conspiracy theories are on the rise. Many of them focus on “explaining” the origins and intentions behind COVID-19’s “creation.” For example, some videos associated with the hashtag #billgates on TikTok “promote the idea that the COVID-19 pandemic is a front to adopt a cryptocurrency system patented by Microsoft in order to institute governmental mind control.” Other popular conspiracy theories claim that COVID-19 is a bioweapon created by a lab in Wuhan, China – a claim many scientists have debunked.


Misinformation comes in different forms

     COVID-19 misinformation does not just come from the aforementioned sources. While some fake news, such as the claim that drinking bleach treats coronavirus, is obviously false, other misleading information comes from seemingly legitimate sources. Since the lockdown of Wuhan, China, scientists across the world have dedicated themselves to studying the new virus and sharing their knowledge and findings online, especially on Twitter.

Among these “scientific” voices, Dr. Eric Ding, a self-proclaimed Harvard “Epidemiologist & Health Economist,” has gained hundreds of thousands of followers – including New Jersey Governor Phil Murphy – with his coronavirus-related commentary. However, other virologists and epidemiologists soon found out that Ding earned his doctorate in Nutrition and has no background in infectious-disease research. They criticized Ding for his uninformed and misleading analysis of COVID-19 data. Nevertheless, Ding remains popular on Twitter today and still shares his “insights” on COVID-19’s current state. Ding has even blocked some epidemiologists and virologists who confronted him about his misinformed tweets. In other words, the blocking feature, designed to protect users’ well-being, is being used by a fake expert to shield misleading content from valid criticism.


IV. Mixed Policies

     Each platform we analyzed was susceptible to a degree of misinformation, with the specific norms and affordances influencing what exactly that misinformation looks like. When it comes to technological issues, policy tends to be reactive rather than proactive, and social media sites have belatedly scrambled to address and prevent the spread of false news. The nature of each platform’s structure influenced how misinformation spread and who it reached, and the policies adopted by each company reflected these affordances. Social media platforms have not issued a unified response to battle misinformation; instead, their policies have ranged from passively telling users to beware of false news, to intentionally boosting information from trusted sources like the WHO, to actively labeling or taking down inaccurate information. The variance in approaches and lack of uniformity has exacerbated the presence of misinformation across social media platforms.


Text-Based Personal One-to-Many 

     Text-based personal one-to-many sites arguably received the biggest backlash for their complacency in allowing the spread of misinformation surrounding the COVID-19 pandemic, which in turn forced them to mount the most prominent responses. The LinkedIn Newsfeed structure is unique because it is curated by editors, who have created a feed with the latest verified information about the pandemic. LinkedIn also updated its policies to emphasize that “information contradicting guidance from leading global health organizations and public health authorities is also not allowed on the platform.” In addition to creating a regularly updated blog containing its misinformation policies, Twitter is adding labels to tweets that contain false information. However, the company has admitted that it is “NOT attempting to address all misinformation. Instead, we prioritize based on the highest potential for harm, focusing on manipulated media, civic integrity, and COVID-19.” While Twitter does not take action against unverified claims, it is comparatively more proactive against misleading information and disputed claims: moderately harmful misleading information receives a label, while severely harmful misleading information is subject to removal; moderately harmful disputed claims likewise receive a label, while severely harmful ones are issued a warning.

On another note, Facebook has received much criticism regarding its more hands-off content moderation approach. Initially, the platform “temporarily banned ads and commerce listings for masks on our apps to help protect against scams, misleading medical claims, medical supply shortages, inflated prices and hoarding,” but rolled this ban back since “many health authorities now advise wearing non-medical masks.” An update acknowledged that “Connecting people to credible information is only half the challenge. Stopping the spread of misinformation and harmful content about COVID-19 on our apps is also critically important.” To do this, the company has “directed over 2 billion people to resources from the WHO and other health authorities through our COVID-19 Information Center” and is working with “over 60 fact-checking organizations that review and rate content in more than 50 languages around the world.”


Image-Based Personal One-to-Many 

     Image-based personal one-to-many platforms generally attract younger audiences. TikTok has added a sub-category relating to COVID-19 for users reporting “Misleading Information.” However, although various tech leaders and lawmakers have expressed their view that TikTok is not a legitimate news source, this perhaps makes it all the more dangerous: Mark Andrejevic, professor of Media Studies at Monash University, explains that TikTok’s “style and rhythm of online content [...] lend itself to the affective charge of conspiracy theory [which doesn’t] work well in contemplative, deliberative contexts.” Meanwhile, Instagram has added stickers to promote accurate information and has removed COVID-19 accounts from recommendations unless they belong to a credible health organization. YouTube has declared that it “doesn't allow content about COVID-19 that poses a serious risk of egregious harm,” and users who violate this policy will have their content taken down. If a user accumulates three strikes, their channel will also be terminated. The platform has also promoted trusted content by organizing health panels that “show in search results for COVID-19 related searches” and link to self-assessments based on CDC guidelines.


Anonymized Public One-to-Many 

     Anonymized public one-to-many applications face a unique challenge regarding the anonymity and accountability of users. Tumblr took a largely hands-off approach, making an announcement to its users about how to seek accurate information, but doing little to actively police misinformation. On Reddit, the easiest way to discover whether a source is reliable is by reading other users’ comments, which has prompted some users to take matters into their own hands. Officially, Reddit created a “dedicated AMA® [“Ask Me Anything”] series in response to public health concerns about coronavirus.” The company also announced that it may “apply a quarantine to communities that contain[…] hoax or misinformation content. A quarantine will remove the community from search results, warn the user that it may contain misinformation, and require an explicit opt-in.” The policies these companies adopt are clearly crucial; a survey from streaming media service Flixed found that the people who used Reddit as their primary social platform for COVID-19 news reported the worst decline in mental health.


Private Communication

     Private communication sites generally tend to be less susceptible to widespread disinformation because each individual reaches a limited audience, so these companies have taken much more relaxed approaches to monitoring content. For example, Snapchat’s official policy states, “Our content platform, Discover, is curated and we work closely with only a select set of partners, including some of the most trusted news organizations around the world. We do not offer an open news feed where unvetted publishers or individuals have an opportunity to broadcast misinformation, and our guidelines prohibit Snapchatters and our partners from sharing content that deceives or deliberately spreads false information.” However, this insusceptibility to disinformation is not universal: WhatsApp has been particularly vulnerable to the spread of rumors and hoaxes, and its messages are much more difficult to monitor because of end-to-end encryption. WhatsApp’s guidelines warn users: “If you aren’t sure something’s true, don’t forward it.” However, this is a mere suggestion, not an enforceable policy, and many users have likely never read these guidelines. WhatsApp has also explored more creative solutions, allowing the WHO to launch a chatbot that texts verified, up-to-date information to users.


[Table: platform-by-platform summary of COVID-19 misinformation responses.]
*Warns users about COVID-19 misinformation: The platform has a dedicated page explaining the dangers of COVID-19 misinformation to its users.

**Boosts content from trusted sources: The platform elevates/promotes information from trusted sources, such as the WHO, CDC, etc.

***Labels / takes down false information: The platform either labels information as fake/misleading, or takes it down entirely.


V. Who Gets to Decide?/What Should Be Done?

     In this era of rampant misinformation and intentional disinformation, who can you trust? Some say that the social media platforms that occupy so much of our time and attention have a unique responsibility to guide their users to trustworthy information and reduce the visibility of misleading user-generated content. Accordingly, sites like LinkedIn have tasked their editorial staff with the laborious job of curating “official updates” from “trusted, official sources” for their users. The sources cited in these updates come almost entirely from the World Health Organization. Similarly, many social media companies have elevated governmental agencies’ websites as sources of truth: Instagram, for example, redirects users to their local health authorities’ websites. In other words, companies largely acknowledge the importance of guiding their users to trusted information, provided primarily by governmental bodies.

     However, such measures elide the difficulties of establishing ground truths in the face of intra- and inter-governmental disagreements around the world. For example, this tension was particularly evident in Chinese diplomat Lijian Zhao’s suggestion that the U.S. military might have introduced the novel coronavirus to Wuhan before the first COVID-19 case was reported. The U.S. has promoted similarly hostile political dialogue around the pandemic, with Secretary of State Pompeo backing the ungrounded rumor that the coronavirus originated in a Wuhan lab. In spite of the lack of supporting evidence, confusion quickly ensued on both sides, contributing to already-mounting tensions between the U.S. and China and fueling conspiracy theories about putative man-made origins of COVID-19. As new information emerges about the virus each day, the lack of stable, incontrovertible scientific evidence has left ample room for politically motivated speculation.

     Efforts by companies and governments notwithstanding, the responsibility ultimately falls on individuals themselves to become more discerning readers of the news. Because social media platforms essentially hand a megaphone to anyone with an Internet connection, sifting the wheat from the chaff in today’s digital landscape is increasingly difficult. Individuals must carefully calibrate the content they consume, steering clear of crackpot theories while also avoiding echo chambers of like-minded fanatics. Individuals should remain firmly rooted in scientific fact and evidence to avoid succumbing to an infodemic.


VI. Conclusion

     False information spread online follows the “rules” of each platform, growing popular through the same mechanisms as any other content. This briefing should not be considered a conclusive report, but rather an invitation for further exploration into the intricacies of social media disinformation. As we move through this era of turmoil, however, people should keep one crucial message in mind: the coronavirus pandemic has made it easy for politicians to brand social media as the ultimate threat to society and to impose highly restrictive policies that demagogically garner the support of a wider public. To achieve truly meaningful and constructive change in the world of social media policy, we must instead encourage greater interaction and cooperation between technology companies and governments to establish lasting, informed, and sustainable policies. In other words, it “is crucial that we critique these policies now and that they are not carried forth into other times and contexts without renewed attention.” The problem lies not with social media sites themselves, but with the incentives that allow false information to thrive. Knowledge is indeed power in the modern day, and it is now every individual’s responsibility to put that potential power to good use.
