Digital Wildfire (Nov 2014 to Nov 2016) was an ESRC funded project that investigated the spread of harmful content on social media and identified opportunities for the responsible governance of digital social spaces. Our collaborative team of computer scientists, social scientists and ethicists investigated the impacts that content such as rumour, hate speech and malicious campaigns can have on individuals, groups and communities; we also examined social media data to identify forms of ‘self-governance’ through which social media users can manage their own and others’ online behaviours. The project drew on the perspectives of other key players such as social media companies, legislators, the police, civil liberties groups and educators to explore ways in which the spread of harmful social media content might be prevented, limited or managed.
Our key conclusions related to:
1) The scale and breadth of the ‘problem’ of harmful content spreading on social media
Through our interviews, observations and surveys we have found that a very wide range of agencies are now having to deal with rapidly spreading social media content that is in some way inflammatory, antagonistic or provocative. This includes the police, councils, news agencies, anti-harassment organisations, anti-bullying groups and schools.
2) The complexities and limitations of current governance
Various mechanisms currently exist to deal with social media content and/or its impact but these tend to have practical limitations. For instance, the law and governance mechanisms enacted by social media platforms (removing posts, suspending accounts etc.) are mostly retrospective – dealing with content after it has already spread and caused harm. They also tend to act on individual posts or users, rather than the multiple posts and users associated with a digital wildfire.
3) The potential value of counter speech and user self-governance
In contrast to other governance mechanisms, we find that user self-governance has some capacity to be prospective and to limit the spread of harmful content in real time. The posting of counter speech to disagree with an inflammatory comment or unsubstantiated rumour can encourage others to reflect carefully before sharing or forwarding content. It also upholds rather than undermines freedom of speech. Our analysis of social media content (involving qualitative and computational approaches) suggests that multiple voices of disagreement in a Twitter conversation can function to quell hate speech.
4) The value of education and engagement
When we ask respondents to tell us what they feel are appropriate ways forward for the responsible governance of social media, they frequently emphasise the idea of communities working together and the value of fostering responsibility on social media through education.
In addition to reporting our findings to academic audiences, we have written articles for The Conversation and EmergencyJournalism.net. In January 2016 we held a showcase workshop in which we presented some of our project findings and invited a series of speakers to explore issues relating to the spread of harmful content on social media and the responsible governance of digital social spaces. Our artist-in-residence Barbara Gorayska has produced two paintings designed to promote a creative understanding of digital wildfires among broad audiences. Recognising the value of education, we have focused much of our project impact activity on engaging with and providing resources for educators and young people. We have run two youth panel competitions and produced two sets of educational materials for secondary schools, focusing on e-safety and digital citizenship. We have also co-produced two video animations: #TakeCareOfYourDigitalSelf and Keeping Social Media Social.