Disinformation is not a new problem, but digital media have radically changed the dynamics through which it is created, distributed, and consumed. In recent years, online disinformation has been implicated in the global resurgence of vaccine-preventable diseases, the subversion of national politics, and the amplification of social divisions.
In this context, there is a widespread belief that disinformation is a pressing problem. There is far less clarity, however, about how it should be addressed. Researchers at Dublin City University are tackling this urgent question.
The icon on this article reflects the research's contribution to UN Sustainable Development Goal 4: Quality Education. The Sustainable Development Goals are 17 objectives adopted by the United Nations to serve as a shared blueprint for peace and prosperity for people and the planet.
Countermeasures
Developing effective countermeasures for online disinformation is an urgent goal, but it is also a challenging one that presents conceptual, practical, and regulatory difficulties. Conceptual difficulties arise because definitions of the problem vary considerably and the boundaries between disinformation, opinion, and other kinds of problematic content, such as hate speech, are often unclear.
Practical issues stem from the huge volume of content that flows through online platforms, which makes it difficult to develop and apply fair and consistent moderation principles. From a regulatory perspective, restricting free expression has legal, ethical, and democratic implications, as does granting platforms unaccountable power to determine what is acceptable.
Cross-cutting model
Writing in a special issue of the journal European Psychologist, Jon Roozenbeek (University of Cambridge), Eileen Culloty and Jane Suiter (DCU FuJo) provide researchers and policymakers with an overview of which individual-level interventions are likely to be effective in reducing disinformation.
The team reviewed the evidence for the effectiveness of four categories of interventions: boosting (psychological inoculation, critical thinking, and media and information literacy); nudging (accuracy primes and social norms nudges); debunking (fact-checking); and automated content labelling.
Building on this work, a paper by Lala Muradova, Eileen Culloty and Jane Suiter in Political Communication investigated whether deliberative minipublics can be used to correct disinformation. The team theorise that, when expertise is politicised and polarised, endorsement of expert information by a minipublic can legitimise the expert correction and render it more persuasive in the eyes of individuals.
Minipublic experiment
A minipublic is a randomly selected group of citizens who meet with experts and advocates to make recommendations on matters of public concern. There are different types of minipublics, but the core characteristics are the same. Members are randomly chosen to form a microcosm of society at large. They meet over several days to learn from expert evidence. In small groups, they then discuss and weigh different arguments about the issue, before making specific recommendations and voting on specific policy proposals. In some cases, such as the Irish Citizens' Assemblies, those recommendations are submitted to politicians for discussion about advancing specific policies.
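The random-but-representative selection step can be illustrated with a short simulation. The sketch below is not drawn from the paper or from any real assembly's sortition procedure: the volunteer pool, quota categories, and panel size are invented assumptions, used only to show how stratified random sampling produces a small panel that mirrors a population profile.

```python
# Illustrative sketch of the stratified random selection behind a minipublic.
# All data here (pool, quotas, panel size) are hypothetical assumptions;
# real assemblies use more elaborate sortition procedures.
import random

random.seed(7)

# Hypothetical volunteer pool: each entry is (id, gender, age_band).
pool = [(i, random.choice(["F", "M"]), random.choice(["18-34", "35-54", "55+"]))
        for i in range(10_000)]

# Target quotas mirroring (assumed) population shares for a 100-member panel.
quotas = {("F", "18-34"): 15, ("F", "35-54"): 18, ("F", "55+"): 17,
          ("M", "18-34"): 15, ("M", "35-54"): 18, ("M", "55+"): 17}

panel = []
for (gender, age), n in quotas.items():
    # Restrict to the stratum, then draw members at random within it.
    stratum = [p for p in pool if p[1] == gender and p[2] == age]
    panel.extend(random.sample(stratum, n))

print(f"Selected {len(panel)} members mirroring the assumed population profile")
```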
In the experiment, respondents were asked to watch a public-information video containing corrective information. There were three experimental conditions: a control, an expert correction, and a minipublic endorsement of the expert correction. The misperception under study was the idea that the coronavirus was man-made and released from a lab. Respondents in the expert-correction condition watched a clip of a virology expert correcting this misperception. In the minipublic-endorsement condition, respondents were exposed to information about a minipublic and its conclusions.
The stimulus was a video in which brief introductory information about the minipublic was given, followed by the same expert evidence, a demonstration of small-group deliberation and, finally, endorsement by the minipublic. One participant acted as a spokesperson, affirming that, as a result of the deliberations, participants concluded that COVID-19 was not human-made. The team found that minipublic endorsement significantly increases the uptake of expert information among (non-participating) citizens.
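The logic of a three-arm design like this can be sketched in a few lines of code. The simulation below is not the authors' analysis and uses no values from the paper: the sample size, effect sizes, and the 0-1 "belief in the misperception" outcome are assumptions chosen only to illustrate how condition effects would be estimated by comparing each treatment arm against the control.

```python
# Minimal sketch of analysing a three-arm experiment of this shape.
# Sample size and effect sizes are illustrative assumptions, not
# values reported in the Political Communication paper.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
N_PER_ARM = 500  # hypothetical respondents per condition

# Assumed mean belief in the misperception (0 = reject, 1 = accept),
# chosen only to illustrate the predicted ordering:
# control > expert correction > minipublic-endorsed correction.
true_means = {"control": 0.55, "expert": 0.45, "minipublic": 0.35}

# Simulate a bounded continuous outcome for each respondent.
outcomes = {
    arm: np.clip(rng.normal(mu, 0.2, N_PER_ARM), 0.0, 1.0)
    for arm, mu in true_means.items()
}

# Each treatment effect is estimated as a difference in means versus
# the control, with a two-sample t-test for significance.
for arm in ("expert", "minipublic"):
    diff = outcomes[arm].mean() - outcomes["control"].mean()
    t, p = stats.ttest_ind(outcomes[arm], outcomes["control"])
    print(f"{arm:>10} vs control: effect = {diff:+.3f}, t = {t:.2f}, p = {p:.4f}")

# Key comparison: does minipublic endorsement add anything beyond
# expert correction alone?
t, p = stats.ttest_ind(outcomes["minipublic"], outcomes["expert"])
print(f"minipublic vs expert: t = {t:.2f}, p = {p:.4f}")
```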
Impactful research
The DCU FuJo Institute has led interdisciplinary efforts to deliver freely available insights and tools for understanding and responding to disinformation. DCU FuJo has coordinated EU projects including the Ireland hub of the European Digital Media Observatory (EDMO); H2020 Provenance, which developed user-facing tools to help people evaluate online content; and ITN-JOLT, which harnessed digital and data technology for journalism.
DCU FuJo is also a partner on EUComMeet, investigating deliberation and disinformation, and RedMed, on increasing the resilience of European media. FuJo researchers also lead national projects investigating disinformation, including a prestigious IRC Laureate award on climate-change disinformation and ongoing work for the media regulator on monitoring the EU Code of Practice on Disinformation.
In addition to academic publications, FuJo researchers regularly contribute to public debates, seminars, and practical initiatives, including, for example, an EDMO hub briefing report on disinformation for policymakers and regulators and the working group for the National Counter Disinformation Strategy.
Read the full paper and citation here: Misperceptions and Minipublics: Does Endorsement of Expert Information by a Minipublic Influence Misperceptions in the Wider Public?