Initial User Research to counteract misinformation on social networks


Introduction

The Zappa project is funded by a grant from the Culture of Solidarity fund to support cross-border cultural initiatives of solidarity in times of uncertainty and "infodemic". The aim of this initial user research was to help guide development of a custom Bonfire extension to empower communities with a dedicated tool to deal with online misinformation.

Methodology

For the purposes of this research, we used a simple definition of misinformation: information which is either incorrect or misleading, but which is presented (or re-shared) as fact. Disinformation sits on the same spectrum as misinformation, but is created or shared with the deliberate intention to deceive.

We found the spectrum in the graphic below, from First Draft, useful in helping frame the discussion with our user research participants.

Our user research participants were volunteers recruited through our networks: directly through an existing relationship, via a referral from a contact, or in response to a general request sent out via the Fediverse.

We outlined the kinds of people we wanted to speak with through some initial brainstorming, which we refined after talking to our first couple of user research participants. The board below shows who we wanted to talk with and the themes we wanted to cover.

Over the four-week research period we conducted 10 user research interviews via video conference; in addition, one participant preferred to answer our questions asynchronously via email.

As the screenshot of our board above demonstrates, we aimed to talk to a wide range of stakeholders and to cover a number of themes. The boxes are prioritised from top to bottom under each grey box, with, for example, “people who have a practical interest in the area” being given greater priority than “people with a technical interest in the project”.

The green boxes indicate the groups and themes we believe we covered during the user research process, while the yellow boxes indicate those we do not believe we fully covered.

Findings

Our research opened our eyes to important work being done by individuals and organisations, primarily in Europe and North America. Two user research participants also discussed work being done in South America, and one mentioned a project with an organisation based in Asia.

The mindmap below shows some of the most relevant findings from our research, detailing problems around misinformation and who is affected by it, as well as what works, what might work, and what doesn’t work. We were particularly grateful to one user research participant for suggesting a categorisation based on technical, relational, and procedural solutions, which they use in their own work.
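To make that categorisation concrete, here is a small sketch in TypeScript of how a catalogue of interventions could be tagged and grouped. The category names come from the participant’s framing, while the example interventions are our own hypothetical illustrations, not suggestions from the research:

```typescript
// Hypothetical sketch of the technical / relational / procedural
// categorisation of anti-misinformation interventions. The example
// entries are illustrative only.

type InterventionCategory = "technical" | "relational" | "procedural";

interface Intervention {
  name: string;
  category: InterventionCategory;
}

const catalogue: Intervention[] = [
  { name: "Clearly label bot accounts", category: "technical" },
  { name: "Build trust between instance moderators", category: "relational" },
  { name: "Agree community guidelines for re-sharing", category: "procedural" },
];

// Group the interventions by category for display alongside the mindmap.
const byCategory: Record<InterventionCategory, string[]> = {
  technical: [],
  relational: [],
  procedural: [],
};
for (const i of catalogue) {
  byCategory[i.category].push(i.name);
}

console.log(byCategory);
```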

Broadly speaking, our main findings were that:

Recommendations

Based on this initial user research, we recommend the following actions:

A) Use a multi-pronged approach

As one user research participant mentioned explicitly, and several alluded to, there is no way to ‘solve’ or ‘fix’ misinformation, either online or offline. Consequently, approaching the problem holistically with a range of interventions is likely to work best. These interventions may cover ‘technical’, ‘relational’, and ‘procedural’ approaches, and include:

B) Ensure bot accounts are clearly identified

Users interact with bots differently than with accounts they know to have humans behind them. By ensuring that bot accounts are labelled as such, users can make informed decisions about the range of actions available to them, such as unfollowing, muting, blocking, and reporting.
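On the Fediverse there is already a convention that can support this: ActivityPub publishes each account as an ‘actor’ document, and Mastodon, among other servers, gives automated accounts the actor type Service rather than Person. Below is a minimal sketch in TypeScript, illustrative only and not Bonfire’s actual implementation, of how a client could use that field to label bots:

```typescript
// Illustrative sketch: detect and label bot accounts from their
// ActivityPub actor document. Mastodon and some other Fediverse servers
// publish automated accounts with the actor type "Service".

interface ActivityPubActor {
  id: string;
  type: string; // e.g. "Person", "Service", "Application", "Group"
  preferredUsername?: string;
}

function isLikelyBot(actor: ActivityPubActor): boolean {
  // "Service" and "Application" conventionally indicate automated accounts.
  return actor.type === "Service" || actor.type === "Application";
}

function displayName(actor: ActivityPubActor): string {
  const name = actor.preferredUsername ?? actor.id;
  // Append a visible label so users can make an informed choice about
  // following, muting, blocking, or reporting the account.
  return isLikelyBot(actor) ? `${name} [bot]` : name;
}

// Example: a hypothetical actor document for an automated news-mirroring account.
const actor: ActivityPubActor = {
  id: "https://example.social/users/newsbot",
  type: "Service",
  preferredUsername: "newsbot",
};

console.log(displayName(actor)); // "newsbot [bot]"
```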

C) Experiment with user-controlled machine learning

The notion of a ‘personal AI’ is a powerful one and could be based on a number of variables. This could include the personal AI suggesting, for example:
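As a loose illustration of what ‘user-controlled’ could mean in practice, the sketch below scores posts against weights that the user, rather than the platform, owns and can adjust. All of the signal names, weights, and thresholds are hypothetical assumptions, not findings from the research or features of Bonfire:

```typescript
// Hypothetical sketch of a user-controlled scoring model. Every signal
// and weight here is an illustrative assumption.

interface PostSignals {
  accountAgeDays: number;        // how long the posting account has existed
  linksToTrustedSource: boolean; // links to a domain the user marked trusted
  resharesPerHour: number;       // how fast the post is spreading
}

interface UserWeights {
  newAccount: number;
  untrustedSource: number;
  rapidSpread: number;
  cautionThreshold: number; // user-chosen cut-off for flagging
}

function cautionScore(post: PostSignals, w: UserWeights): number {
  let score = 0;
  if (post.accountAgeDays < 30) score += w.newAccount;
  if (!post.linksToTrustedSource) score += w.untrustedSource;
  if (post.resharesPerHour > 100) score += w.rapidSpread;
  return score;
}

// The user can tune any weight, or set it to 0 to ignore that signal entirely.
const myWeights: UserWeights = {
  newAccount: 0.4,
  untrustedSource: 0.3,
  rapidSpread: 0.3,
  cautionThreshold: 0.6,
};

const post: PostSignals = {
  accountAgeDays: 3,
  linksToTrustedSource: false,
  resharesPerHour: 250,
};

if (cautionScore(post, myWeights) >= myWeights.cautionThreshold) {
  console.log("Personal AI suggests: treat this post with caution");
}
```

The design point is that the user can inspect, adjust, or switch off every signal, in contrast to opaque, platform-controlled ranking systems.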
