Jessica Morley

Can Misinformation Harm Public Health?

This is a summarised version of my paper "Recognising the infosphere as a social determinant of health." Awareness of the impact of poor-quality information on people's health is rapidly increasing, most recently illustrated by former President Obama claiming that misinformation and disinformation are killing people.


Since 2016, social media companies and news providers have come under pressure to tackle the spread of political mis- and disinformation (MDI) online and have begun taking some (admittedly relatively ineffective) steps to do so. However, despite evidence that online health MDI (on the web, on social media, and within mobile apps) also has negative real-world effects, there has been a lack of comparable action by either online service providers or state-sponsored public health bodies. As a consequence, those actively seeking health advice and those browsing the web, social media, or even app stores for other reasons are faced with an almost constant barrage of medical news stories, social media posts, spurious website results, direct-to-consumer drug and medical adverts, and hospital and digital-health service marketing messages - much of which is wildly inaccurate.


This is what Ioannidis and others refer to as the “medical misinformation mess.” Studies of vaccine-related internet content, for instance, have consistently shown that most of this content is misleading and that false messages are more likely to be liked and shared than accurate ones (more on this later). Additionally, myriad online communities promoting self-harm, anorexia, and homeopathy now exist; un-evidenced and unregulated apps are freely available for download; and the reckless promotion of fad diets and unproven wellness trends by celebrities on unregulated social media platforms is leading to the spread of various dangerous behaviours.


The consequences of this 'mess' are well summarised by Perakslis and Califf:


“A child who needlessly experiences disabilities caused by measles, an adult who dies after stopping a statin despite having high-risk coronary heart disease, and a patient with cancer who ceases chemotherapy in favour of a bogus alternative are all victims of misinformation that is being promulgated on social media and other internet platforms.”


Overall, it seems that - despite World Health Organization (WHO) Director-General Tedros Adhanom Ghebreyesus stating in February 2020 (in relation to COVID-19) that “We’re not just fighting an epidemic; we’re fighting an infodemic” - public health bodies have continued to underestimate the capacity of the web and social media to exert serious and potentially dangerous influence over health-related behaviour.


This raises the following crucial questions:

  1. Why has so little been done to date to control the flow of, and exposure to, online health MDI?

  2. In what circumstances is more robust action justified?

  3. What specific newly justified actions are needed to curb the flow of, and exposure to, online health-related misinformation and disinformation (OHMDI)?


To tackle question 1: the reasons for the lack of intervention are complicated, but there are two primary ones.


First, there were early efforts to control health information online. Back in the mid-1990s, when the web came to be portrayed by the medical community as a dangerous space lacking the gatekeeping functions necessary to protect naïve health consumers, the WHO submitted a proposal to the Internet Corporation for Assigned Names and Numbers (ICANN) for the creation of a sponsored top-level .health domain.


The proposal suggested that the WHO would, through consultation, develop a set of quality and ethical criteria that would-be .health sites would have to meet. The intention was not to police or regulate all health information on the web but to offer a reliable go-to domain to support users who wanted to narrow their search field to include credible sources only. However, it was successfully opposed by stakeholders who argued that the web could not be policed, users were already sophisticated enough to recognize quackery, and no one body should assume the right to veto many thousands of websites. No public health body has attempted to bid for the domain name since. Indeed, in June 2011, the domains .health, .care, .diet, .doctor, .healthcare, .help, .hospital, and .med all went to the highest private bidder.


Second, there has been growing social resistance to public health interventions in general (think of the major resistance to masks as a protective measure against COVID-19), with various stakeholders arguing that mandatory interventions designed to control the behaviour of individuals are antithetical to autonomy and overly draconian.


In this context, it has become ethically and politically difficult to argue in favour of tougher online health information controls. A website, social media post, or mobile app can be written by one person and read, commented on, shared, downloaded, and edited by thousands. Intervening by, for example, automatically removing or flagging MDI would be perceived as a paternalistic (or even censorious) restriction on individual autonomy, particularly when the current overarching health policy paradigm is heavily infused with the (misguided) belief that information automatically leads to individual empowerment. Furthermore, regulation of online health information is likely to be accused of conflicting with the right to freedom of speech.


As a consequence, although platforms including Twitter and Instagram do block specific hashtags (for example, #proana, an abbreviation of “pro-anorexia,” is not searchable on Instagram), public health bodies have thus far managed to justify only non-coercive state-level interventions focused on educating citizens.


So can intervention ever be justified?


The short answer is yes, according to the following arguments:

  1. Education is necessary but insufficient: those who spread and accept MDI are unlikely to be persuaded by evidence, facts, and reasoning, and there is also evidence that relying on education alone is becoming less effective over time due to the nature of the environment it is trying to control.

  2. Precedent: Internet filtering that targets the websites of insurgents, extremists, and terrorists generally garners wide public support, as does the filtering of content that is perceived to be antithetical to accepted societal norms, such as pornographic content or hate speech.

  3. Justice: Protecting those who are more susceptible to MDI (often those who have defining characteristics also associated with poorer health outcomes) is more about meeting the other aim of public health interventions—reducing health inequalities—than it is about paternalistically deciding what “is best” for society.

Taken together, these arguments justify working to overcome the ethical concerns associated with state-led intervention in the infosphere in the name of public health.


The justice argument is particularly compelling, especially when combined with knowledge of how social media platforms operate.


To understand this, we must first understand that, although trust in a source of health information is determined by a complex set of interacting factors, a particularly influential factor is perceived credibility.


In the offline world, credibility is largely controlled by the gatekeeping function performed by clinicians. In an online world, however, this gatekeeping function is removed. This means that a far greater burden is placed on individual internet users to make their own judgements about credibility and to determine which sources they trust.


Several studies, including a particularly compelling one by Chen and colleagues, have shown that individuals with lower eHealth literacy lack the skills necessary to accurately determine the credibility of a source and are, therefore, more likely to place their trust in inappropriate sources of information, such as social media posts, than in more traditional and 'verified' sources of information. One potential reason for this is that social endorsement (i.e., likes and shares) acts as a signal of perceived trustworthiness to those with lower eHealth literacy (see here for a more detailed explanation as to why this might be).


This is problematic because we know that the algorithms driving both search engine results and social media feeds prioritize posts or websites that generate greater engagement. Human nature means that these are often posts and sites that are consistent with already-held beliefs, emotive, or controversial. OHMDI is considerably more likely to meet these “criteria” than scientific, evidence-based medical information, meaning that OHMDI is far more likely to benefit from algorithmic amplification than content produced by reputable health sources. In other words, mis/disinformation is more likely to get 'liked' on social media and is therefore more likely to be perceived as credible by those with lower levels of eHealth literacy. Agents who deliberately try to manipulate or confuse debates about health care are well aware of this phenomenon and exploit it to their advantage.
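
To make the amplification mechanism concrete, here is a minimal sketch of engagement-based ranking. It is not any platform's actual algorithm; the post fields, weights, and example content are hypothetical, chosen only to illustrate how scoring on engagement alone can surface an emotive, inaccurate post above an accurate but less 'shareable' one.

```python
# Illustrative sketch of engagement-weighted ranking (hypothetical fields and
# weights, not any real platform's algorithm). Accuracy plays no part in the
# score, which is the point being illustrated.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int
    is_accurate: bool  # known to us for the example, invisible to the ranker

def engagement_score(post: Post) -> float:
    # Shares and comments are weighted more heavily than likes because they
    # generate further exposure; note that accuracy is never consulted.
    return post.likes + 3 * post.shares + 2 * post.comments

feed = [
    Post("Miracle cure doctors don't want you to know about!", 900, 400, 250, False),
    Post("Systematic review finds vaccine side effects are rare.", 120, 15, 10, True),
]

# Ranking purely by engagement pushes the inaccurate, emotive post to the top,
# where its visibility (and apparent social endorsement) then grows further.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):>6.0f}  accurate={post.is_accurate}  {post.text}")
```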


We know that eHealth literacy levels, and other factors that contribute to varying trust in the credibility of different sources of information, vary between population groups, as do the effects of so-called 'filter bubbles'. This effectively creates a scenario in which some population groups are exposed to a worse-quality 'infosphere' (i.e. an informational environment more rife with mis/disinformation) than others. We can therefore argue that the infosphere is a new social determinant of health, giving rise to the 'Onlife Determinants of Health' model:




The Onlife Determinants of Health Model adapted from Dahlgren and Whitehead (1991). New elements are in italics.


In this model, the determinants at the top (general socioeconomic, cultural, environmental, and now informational conditions) are those over which public health bodies have the greatest degree of influence. In contrast, the determinants at the bottom (age, sex, and genetic factors) are those over which public health bodies have little to no influence. Thus, the model describes the remit of public health bodies and anticipates the range of activities these bodies might decide are necessary to improve the public’s health.


We can now conclude not only that it is possible to overcome the ethical concerns regarding individual autonomy versus group-level protection and so justify government-led control of online health information, but also that doing so falls squarely within the remit of public health bodies.


Having reached this conclusion, we must now consider what public health bodies can actually do to promote the development of 'healthy' online environments. To do this, we must consider what public health bodies already do to protect the public's health.


Typically, public health bodies operating at both national and international scales conduct monitoring activities that enable them to identify public health threats, such as poor air quality or pathogens, that have the potential to cause harm. Depending on the threat level, responses can include issuing advice on how the public can keep themselves well or putting in place emergency measures, such as the closing of airports to stop the spread of infectious disease, in keeping with the 2005 International Health Regulations.

In short, almost all public health activities fall into one of the following three categories:

  1. Prevention: reducing the incidence of ill health by supporting healthier lifestyles

  2. Protection: surveillance and monitoring of infectious disease, emergency responses, and vaccinations

  3. Promotion: health education and commissioning services to meet specific health needs, for example, occupational health programs that promote self-care

If we apply this logic to the infosphere instead of, say, the biosphere, we get the following range of potential interventions (sketched in code after the list):

  1. Actions that lead to the automatic blocking of content classed as posing the highest risk to public health are preventative.

  2. Actions that lead to the monitoring of content on social media or the wider web and the subsequent removal of potentially harmful information are protective.

  3. Actions that improve access to and visibility of high-quality information are promotional.
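
As a purely illustrative sketch of this mapping (the risk scores, threshold, and 'verified source' flag below are hypothetical assumptions, not part of the paper), the triage logic might look something like this:

```python
# Hypothetical sketch mapping infosphere content triage onto the three public
# health intervention categories. Risk labels, threshold, and actions are
# illustrative assumptions, not an operational moderation policy.
from enum import Enum

class Intervention(Enum):
    PREVENT = "block automatically (highest-risk content)"
    PROTECT = "monitor, and remove if confirmed harmful"
    PROMOTE = "boost visibility of high-quality sources"

def triage(risk_score: float, from_verified_health_source: bool) -> Intervention:
    """Assign an intervention category to a piece of online health content.

    risk_score is assumed to come from some upstream harm-assessment step (0-1).
    """
    if from_verified_health_source:
        return Intervention.PROMOTE   # promotional: amplify good information
    if risk_score >= 0.9:
        return Intervention.PREVENT   # preventative: block before exposure
    return Intervention.PROTECT       # protective: surveillance and removal

print(triage(0.95, False).value)  # block automatically (highest-risk content)
print(triage(0.40, False).value)  # monitor, and remove if confirmed harmful
print(triage(0.10, True).value)   # boost visibility of high-quality sources
```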

With this in mind, we can argue that public health bodies should, in keeping with the foundational values of public health ethics (transparency, confidentiality, and community consent), develop programs of work focused on the following four areas:

  1. Defining the prevalence and trends of health MDI and identifying content for removal (protective monitoring)

  2. Understanding what health MDI is shared and how it spreads so that it is possible to intervene earlier (preventive action)

  3. Evaluating the reach and influence of high-risk health MDI (protective monitoring)

  4. Developing and testing promotional responses

Collectively, these programs of work would enable public health bodies to monitor the most prevalent content being shared online, identify weaknesses in any current strategies, and detect new sources and causes of MDI before they result in significant harm. In essence, these kinds of actions would help public health bodies minimise both the harms of poor infosphere conditions and the ethical risks associated with public health policy.


This does not, however, exempt online service providers themselves from taking action to protect public health from the negative impacts of OHMDI. They should take responsibility for what is in the infosphere and regulate it. Specifically, they should discharge this responsibility by identifying, detecting, responding to, and recovering from OHMDI, and by protecting accurate information.

In short, it is only by taking proactive, coordinated measures that online service providers (OSPs), public health bodies, and app-store providers will be able to stay one step ahead of the rapidly evolving conditions of the infosphere and play their role in protecting public health.

If this joint response can be coordinated effectively, and the infosphere is appropriately recognized as a social determinant of (public) health and, therefore, a public good, then the twin goals of protecting the public’s health and reducing health inequalities can be supported. Identifying and implementing the most appropriate and efficacious interventions that fall within this framework may not be easy, but we should not let the scale of the challenge become a deterrent. Decisive action is needed, and it is needed as soon as possible.







