Hate Crime Apathy. Do We Have A Responsibility To Intervene?
In recent years, a number of people have been prosecuted for posting hatred online, yet others have argued that policing what we say online limits our autonomy, self-expression and freedom of speech. Has the internet opened up a whole new platform for abuse and hatred? And what is our role and responsibility when faced with online hatred on our newsfeeds? Is simply re-posting, or even just being an online bystander to, something that could be viewed as offensive, sexist or homophobic just as culpable as being the perpetrator who created it?
Arguments that our police force should focus more on face-to-face crimes taking place in the “real world”, coupled with sweeping statements like “words don’t matter”, are at best naive and at worst dangerous, because dismissing online abuse allows it to develop and have a greater impact. Words, our primary method of communication, help to shape and define who we are and build the ideals and views we hold about ourselves and our communities. Words online, therefore, can radically change how people think and, more dangerously, how they act.
A big concern with social media is that words, blogs, images and videos have the potential to reach a very large number of people incredibly quickly. The internet easily allows like-minded people to find each other and come together. It is not only the quantity of people viewing hateful posts that is worrying; it is also the lack of control we have over who will be exposed to such views and on whose timelines these opinions will materialise. Do these negative views prey on the vulnerable, or on people who already hold strong views? Alarming statistics reveal that 1 in 4 children experience upsetting social media content, and last year over 12,000 ChildLine counselling sessions concerning children’s online experiences were recorded (NSPCC, 2017). Saying “words don’t matter” is a hollow argument when it is words that are used to organise hate events with the intention of promoting and spreading hate further afield.
Over time, online hatred and violence move the boundaries of what society considers acceptable. Online bystanders become desensitised to the words and images they are exposed to day in and day out as they scroll through their newsfeeds. This shifts the cultural paradigm and brings with it the danger of lost empathy. Do we respond to online discrimination and abuse in the same way we would if we saw the same content and messages in the real world? Or does constant exposure in our virtual world leave us unshockable?
Emboldened by online approval of hate, will perpetrators begin to express their views away from social media and out into communities? This would most certainly have implications for integration and inclusion within society, and for the safety of individuals. Exposure to hate on the internet can potentially be a catalyst for many of the hate crimes committed outside the arena of social media.
In response, this year the Crown Prosecution Service has stated that online crimes will be treated as seriously as abuse committed face-to-face, and has set out new legal guidance for prosecutors deciding whether to charge suspects with offences. Cases motivated by online hostility towards people on the grounds of race, religion, sexuality, gender or disability are now being pursued with the same robust and proactive approach used for offline offending.
Amber Rudd, Home Secretary, who recently initiated a national police hub to tackle hate crime, stated, “What is illegal offline is illegal online, and those who commit these cowardly crimes should be met with the full force of the law”.
Whether it is shouted in the street or tweeted in 140 characters or so, the impact and consequences of hateful abuse are equally devastating for the victim, and it remains everyone’s responsibility to report it. Surely we do not want to be among the bystanders who do nothing, allowing the haters to propagate their malevolence.