
Opinion: First rule of social media? Don’t engage with The Horrible People

Aoife Gallagher says it’s tempting to share content that horrifies you, but it only serves to amplify the hate.

Last update: 18 Jun 2021

AT THIS STAGE, the sequence of events is predictable. A known Horrible Person says a Horrible Thing on a livestream broadcast to only a handful of people online.

The Horrible Thing may involve speaking in a disgusting way about migrants, or a minority group, or sharing dangerous lies that have no foundation in reality. An understandably outraged person sees the Horrible Thing and shares it in a flash of anger and disgust. Before too long, the Horrible Person’s name is trending on Twitter.

This reaction is what I call shock sharing. People often feel the need to call out bad behaviour, and according to a 2004 study, our brains actually reward us with a hit of dopamine when we do. Social media has provided the ultimate tool for us to do this to our heart’s content.

Shock sharing in order to call out hate or lies is an understandable reflex that is hardwired into our brains. For the most part, those doing it believe that they are helping by raising awareness and fighting back. Sharing a Horrible Thing like this, however, causes a chain reaction of further outrage and further shock sharing, while also spreading the Horrible Person’s message to a much wider audience than was initially possible.

Studies have shown that, of all emotions, rage is the one most likely to drive reaction online. If we see something that gets us fired up, we’re more likely to share it. Many people are aware of this, from marketers to news editors to disinformation purveyors, but none more so than the social media platforms themselves.

Shocking sells

The business model employed by these platforms is centred on keeping people’s eyes on screens for as long as possible in order to target them with advertising. The algorithms, which platforms market to their users as “personalised feeds”, are therefore built to serve that business model by surfacing the most engaging content, often content with shock value, to keep people scrolling for longer.

A lack of transparency around the algorithms means that we’re not quite sure how they work. Can they tell the difference between quality journalism and the uninformed ramblings of a Horrible Person, or do they simply prioritise engagement over any other metrics?
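To make that concern concrete, here is a deliberately simplified sketch, in Python, of what a purely engagement-driven ranker could look like. Every name, signal and weight below is invented for illustration; no platform publishes its actual ranking code.

    from dataclasses import dataclass

    @dataclass
    class Post:
        post_id: str
        predicted_clicks: float    # hypothetical model scores between 0 and 1
        predicted_shares: float
        predicted_comments: float

    def engagement_score(post: Post) -> float:
        # Every reaction counts as engagement: nothing here distinguishes
        # an outraged quote-share from an approving one, or truth from lies.
        return (post.predicted_clicks
                + 2.0 * post.predicted_shares
                + 1.5 * post.predicted_comments)

    def rank_feed(posts: list[Post]) -> list[Post]:
        # Highest predicted engagement first, whatever the content.
        return sorted(posts, key=engagement_score, reverse=True)

If anything like this sits in the pipeline, shock sharing feeds the system exactly the signals it rewards.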

Either way, engaging with content that is hateful, or downright untrue, only serves to boost that content and push it into more people’s feeds.

This cycle of rage and engagement leading to free publicity is especially problematic if the Horrible Person has been booted off mainstream social media platforms for being, well, horrible.

By resharing their content on platforms where they have been explicitly banned, you are giving them undeserved oxygen and a veneer of relevance, when in an ideal world the Horrible Person would be consigned to fringe platforms and left there in their own echo chamber.

People and groups who have an interest in spreading disinformation and hateful rhetoric are more than aware of the power of shock sharing and they often produce content with this exact phenomenon in mind.

Pawns in the game

At a pro-Repeal rally in Dublin in 2018, marchers were deceived into carrying placards emblazoned with the symbol of the British Union of Fascists. A user on 4chan, a platform commonly associated with white supremacist and neo-Nazi movements, boasted that he and “some mates” had printed the placards and handed them out, and that “the morons took the bait”.

The user shared photos from the march and encouraged others to “spread this shit”. The images went viral after campaigners on the anti-Repeal side called for condemnation of the rally-goers.

A year earlier, a video recorded at another pro-Repeal event, and produced by a Canadian far-right news outlet, also went viral. The video was a selectively edited series of vox pops in which a right-wing activist used provocative questions to elicit answers that made those in attendance seem uninformed and extreme in their beliefs. That activist has since admitted that the video was deliberately produced to provoke outrage and feed the recommendation algorithms.

The real-world consequences of outrage-induced shock sharing have only become more evident in recent years. The storming of the US Capitol building in January was a direct result of a flood of false, deceptive and seductive content being propagated online to make people believe, among other things, that the election had been stolen. Anti-vaxx movements jeopardise the pandemic recovery every day by using false scare stories to spread hesitancy and fear over the vaccine.

We live in the era of the never-ending scroll, where we often have no control over what comes up next on our screens. It can be hard to resist the pull of shock sharing while our Stone Age brains play catch-up with the online world.

We’re in the very early, precarious days of our relationship with the internet, and people need to become more mindful of the ripple effects of sharing content that is designed to outrage.

That’s not to say that Horrible Things should never be talked about. Hate and ignorance need to be called out, but in a way that doesn’t give undue promotion to those responsible, especially when their only weapon is stirring up outrage online. If we really want to punish the Horrible People, ignoring them when they say Horrible Things on the internet is the best course of action.

Aoife Gallagher is a research analyst with the Institute for Strategic Dialogue, a counter-extremism think tank, where she investigates how conspiracy and far-right networks use online platforms to spread their messaging.
