Online hate speech: if you're not part of the solution you must be part of the problem

Hate speech and trolling have become synonymous with social media – and this can have a knock-on effect on brand reputation. So how can organisations turn this around, safeguard their social spaces and start using their platforms for good?

Joshua Gornell, Director, The Social Practice at MediaCom, explains how brands can make a positive impact through technology.

There are many wonderful things that have been made possible through social media. Baby and animal memes; connecting loved ones during the COVID-19 pandemic; creating like-minded online communities; globally empowering the #blacklivesmatter and #metoo movements. The list goes on.

But there are also the highly publicised, negative effects of social media on society. 

Trolling and hate speech are among the most prominent impacts, particularly given the high-profile targets abused by those hiding behind the anonymity of social media profiles: Bukayo Saka, Jadon Sancho, and Marcus Rashford were all subjected to racist abuse after England’s 2021 Euro final loss, and the hate messages have yet to completely recede.

And while it is these high-profile occurrences shining a spotlight on trolling and hate speech, they are not isolated or unique; racism, homophobia, transphobia, profanity, and threats are just a few of the themes that remain rife.

The state of social media 

Hateful content feels omnipresent. A 2021 Ofcom survey found that more than half of all children aged 12 to 15 in the UK have been subjected to hateful content online. The situation is improving, but users and brands still do not have complete control.

Where toxic comments break the law, they should be taken down, but there are grey areas and delays which are continuously exploited. Common evasion tactics include using emojis to convey inappropriate sentiments, or swapping letters for symbols in colloquialisms and slang – “u$ing”, for instance – to avoid detection.

Shockingly, despite women being the most common target for hateful content online, misogyny is still not a hate crime under UK legislation. Misogynistic comments are therefore considerably less likely to be addressed: removing content that is not strictly illegal would see social platforms cross the dreaded line from platform to publisher.

As a result, comments with ill intent often remain present through trickery or oversight.
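To make the evasion tactics above concrete, here is a minimal sketch – purely illustrative, not any platform’s actual implementation – of how a moderation filter might normalise common symbol substitutions (like the “u$ing” example) before matching a comment against a blocklist:

```python
# Illustrative only: a toy filter that undoes common character
# substitutions before checking a comment against a blocklist.
SUBSTITUTIONS = str.maketrans({
    "$": "s",  # "u$ing" -> "using"
    "@": "a",
    "0": "o",
    "1": "i",
    "3": "e",
})

BLOCKLIST = {"slur"}  # placeholder term for illustration


def contains_blocked_term(comment: str) -> bool:
    """Return True if the normalised comment contains a blocklisted word."""
    normalised = comment.lower().translate(SUBSTITUTIONS)
    return any(term in normalised for term in BLOCKLIST)


print(contains_blocked_term("that's a $lur"))           # True
print(contains_blocked_term("perfectly fine comment"))  # False
```

Real systems go much further – handling emoji sentiment, spacing tricks, and context – but even this toy example shows why naive exact-match filters are so easily sidestepped.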

Karen Blackett OBE, GroupM UK CEO and WPP UK Country Manager, often talks about “using our voice for good” and driving “positive change”. GroupM agencies are invited to ask themselves “how can I make a difference?” and “what could, and should, we be doing as an organisation?”

More action, fewer words

Advertising on social media is one of the countless scenarios in which unacceptable comments are posted and subsequently remain live. Working alongside some of the most recognised brands globally, we asked ourselves what “can and should” be done, within our scope of influence, to protect users and brands from exposure to hateful comments on social media.

There is a duty of care towards clients and users alike, the former spending millions of pounds on social platforms, the latter a significant amount of time. These hateful comments pose a genuine, brand-damaging risk that is often outside of anyone’s control.

Warren Buffett famously said, “it takes twenty years to build a reputation and five minutes to lose it”. 

This seems particularly pertinent in paid social advertising, considering 47 percent of UK adults say the points of view appearing in the comments on a brand’s posts are an indication of that brand’s values and what it stands for.

Change is coming – via the Online Safety Bill in the UK and the Digital Services Act in Europe – but it will not happen overnight. Brands have a role to play in taking responsibility for the actions that happen on these platforms. And agencies must ensure they are actively working against hateful comments and content, and not passively watching it continue.

Responding in unison 

This is why MediaCom UK has partnered with Respondology, a collection of digital veterans whose mission is to do something about the hate and damage produced by social media users – an ambition mirroring our own. 

The technology puts the control back into brands’ hands and allows them to decide what they deem as acceptable on their posts. 

Exclusion lists of words, phrases, emojis, or combinations of these are tailored to clients’ needs and tolerance thresholds; any comment containing them is hidden discreetly from everyone except the person who posted it and their immediate contacts, all in less than a second. It essentially allows brands to remove unwanted conversations before they gain an audience.
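The hide-from-everyone-but-the-commenter mechanic described above can be sketched as follows. This is a hypothetical illustration of the concept, not Respondology’s actual API; the names and data structures are invented for clarity:

```python
# Hypothetical sketch of exclusion-list moderation with selective
# visibility: a hidden comment remains visible only to its author
# and the author's immediate contacts.
from dataclasses import dataclass


@dataclass
class Comment:
    author: str
    text: str
    hidden: bool = False


# Words, phrases, or emoji combinations the brand has excluded.
EXCLUSION_LIST = {"hateful phrase", "🐍🐍"}


def moderate(comment: Comment) -> None:
    """Flag the comment as hidden if it matches the exclusion list."""
    lowered = comment.text.lower()
    comment.hidden = any(term in lowered for term in EXCLUSION_LIST)


def visible_to(comment: Comment, viewer: str,
               contacts: dict[str, set[str]]) -> bool:
    """Everyone sees a clean comment; a hidden one only its author
    and that author's contacts see."""
    if not comment.hidden:
        return True
    return viewer == comment.author or viewer in contacts.get(comment.author, set())
```

In a real deployment, something like `moderate` would run at posting time, which is what makes the hiding feel instantaneous: the wider audience never sees the comment at all, while the commenter notices nothing.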

Crucially, it denies unacceptable and hateful comments the further exposure of paid promotion, which has the potential to put them in front of millions of active social media users.

Taking the first step…

Of course, there is a fine line between censorship and moderation, which must be carefully trodden. It is important to work closely with partners, clients and legal teams to ensure any technology – much like Respondology’s – is being used in an ethical, responsible way.

And while partnerships are a piece of the puzzle, they are not the whole solution.

It is up to media agencies, global brands, and social media networks to work together and alongside independent industry bodies to protect users online. As an industry, we must get to the point that we all want – where there is no hateful content directed at anyone. 

Until then, it’s necessary to guide clients on how they can use technology and work with like-minded organisations to make a positive impact.

Joshua Gornell
Director, The Social Practice at MediaCom