
The features of online hate speech in Ireland and why it's often not flagged

A DCU report said that there were two types of people who posted hate speech: those “versed” in it and those who just reproduce it.

AN ACADEMIC REPORT that tracked the features of hate speech on Facebook and Twitter has found that there is a tendency to circumvent social media community rules by using slang, irony, and pseudo-scientific references as a defence.

The report references Ibrahim Halawa and the stabbing in Dundalk as “trigger events” for hate speech, but notes that there is a “constant undercurrent of racially-toxic contents in circulation at any given time”.

It highlighted “a link between racist harassment and hateful speech on Twitter, with Irish Twitter users being directly harassed, attacked, and bullied online by other accounts based in the US, UK, Australia and other locations”.

Called Hate Track: Tracking and Monitoring Racist Speech Online, the report outlined features of hate speech:

Variants of racist discourses include whataboutery (e.g. ‘what about our own’), narratives of elsewhere (e.g. ‘look at Sweden’), use of bogus statistics (e.g. ‘80% of Africans are unemployed’), and metonymies (substituting a word with something closely related, here used ironically, e.g. ‘religion of peace’ to refer to Islam, typically with a view to associating Islam with violence).

The report says that a discourse of what it means to be Irish is constructed “as exclusively White and Christian”, and said that calling out racism “leads to accusations of being over-sensitive, ‘playing the race card’, or ‘being racist against white people’”.

This includes comments such as “stop blaming everything on whites”, and “the fact is it was white people who ended slavery for all”.

The report also said that online racism manifests itself in a “variety of more or less ‘coded’ discourses, which often do not make explicit reference to ‘race’, narrowly intended as a descriptor of skin colour or observable features, such as hair or eye colour”.

In relation to immigration, hate speech against migrants and refugees revolves “mainly around three inter-related tropes: access to welfare and housing; moral deservedness; and the good versus bad immigrant”.

Typically, Traveller and Roma people are targeted as undeserving, ‘uncivilised’, thugs and criminals; they can further be targeted with dehumanising language.

On who is posting the comments, it said there were two main groups: “people versed in a particular ideological and political language and discourse, and those who merely reproduce” those same comments.

The report also highlighted why online hate speech is underreported in Ireland.

The barriers to reporting that we identified here include a kind of ‘first amendment absolutism’, which suggests a poorly understood notion of what constitutes freedom of speech/expression in Europe; a position that such contents are better dealt with by the broader community, who will identify and appropriately shut down the ‘idiots’; a view of the reporting process as pointless in the face of extremely large volumes of online racism; and a ‘bystander’ effect, in which responsibility is diffused because there are many others exposed to the same contents.

Methodology

Definitions of what constitutes online toxicity vary greatly: some definitions refer to toxic language as “language that is uncivil, aggressive or rude, while others focus on the demeaning or stereotyping content of a message irrespective of the language used”.

The research used a tool for identifying and tracking hate speech on certain social media channels; it takes a preliminary look at material collected over a period of three months, and it explores the reporting barriers and cultures that feed into decisions to report or not report online hate speech.

The tool is intended to determine the current state of the digital public sphere, as opposed to being a censorship or removal tool.

Participants pointed out that social media users and trolls in particular have become more and more skilled at evading possible accusations of racism as well as circumventing hate speech community rules by using slang, circumlocutions, irony, and ambiguity.

At the same time, crude racism seems to be making a comeback, this time supported by pseudo-scientific references to genetics. Another informant, a Traveller activist, mentioned the case of an anti-Traveller Facebook page:

This page was just putting up everything negative on Travellers, just like racist debates ‘are Travellers even human?’, ‘are Travellers Neanderthals?’… all this kind of stuff… and debates about DNA stuff and genetics… like our brains are not able to absorb information and you know… all this kind of stuff and you get a message from Facebook saying ‘it doesn’t breach our community standards’.

The conclusion stated that “it should also be noted that Facebook and Twitter are not a representative window into Irish society so our analysis of online racist discourses can only shed limited light on the broader dynamics of racism. Social media are not used evenly by different groups; it is likely that socioeconomically disadvantaged groups are the least represented on these platforms”.

