In an effort to combat claims that it spreads misinformation, Twitter has added a ‘community notes’ feature to its platform.
Aimed at creating “a better informed world by empowering people on Twitter to collaboratively add context to potentially misleading Tweets”, the feature allows contributors - Twitter users who sign up to write and rate notes - to leave additional comments on any Tweet. If enough contributors from different points of view rate a note as helpful, it is shown publicly on the Tweet.
The social media platform claims that it has attempted to make the process as open and transparent as possible. The feature’s algorithm is open source and publicly available on GitHub, along with the data that powers it, “so anyone can audit, analyse or suggest improvements”.
“Ungovernable” user-base may cause problems
Although at first glance this may appear a progressive move from Twitter, as it looks to bolster its advertising credentials after hiring Linda Yaccarino as CEO, marketers are concerned that Twitter’s “increasingly ungovernable” user-base could use this tool irresponsibly.
Late last week Ben & Jerry’s announced it was cutting ties with the platform, and this new feature has some marketers warning of a brand-safety issue, believing it has the potential to backfire.
Twitter’s “content moderation continues to slide away from any form of centralised responsibility”
Dan Moseley, Managing Director at Automated Creative, believes this latest move from Twitter is “naive at best” and fails to account for the attitude of its user-base.
He said: "More sceptical minded marketers should be concerned; the platform’s stance on content moderation continues to slide away from any form of centralised responsibility and hands control to its – now well-documented – increasingly ungovernable user-base.
“Having already removed the guardrails around what accounts are blue tick verified, Twitter handing power to ‘the people’ begs the questions - can its community really tackle the challenge of misinformation and disinformation? Do we trust the community to not be manipulated by AI or power users gaming the system? And should it even be their responsibility at all?"
Already, an AI-generated image of an explosion at the Pentagon has gone viral on the platform, as have AI-created images of Pope Francis wearing Balenciaga streetwear - both of which tricked many users, including some prominent accounts, into believing they were real.
2024 US Election and ‘Big Social’
Moseley continued: "An even more concerning note is that this is being announced in the run-up to the 2024 election cycle. Big Social’s previous issues in this space (eg. the Cambridge Analytica scandal) will look like child's play compared to the incendiary mix of open user verification, AI-powered disinformation and ever-increasingly extremist behaviour online.
"If you’re comfortable for your media spend to appear next to content that could be manipulated or trolled into some form of community verification, more power to you - we’re seeing major advertisers continue to keep their dollars away from the platform and this move should deepen, not thaw that long freeze."