Google, Meta, Microsoft, TikTok and Twitter have reportedly agreed to revisions of the EU's anti-disinformation code aimed at tackling more sophisticated disinformation, such as deepfake videos.
A confidential report obtained by the Financial Times includes the details of an updated code of practice that some of the world’s largest tech companies have signed up to.
Advertising is a major focus of the code, with participating companies required to tackle disinformation in adverts and improve transparency, especially around political promotions.
The EU has been on a mission to crack down on fake news and propaganda, and the new code will be backed up by the Digital Services Act.
According to the report, Facebook, Twitter, Google, Microsoft and TikTok will now be forced to disclose how they remove, block or curb harmful content in advertising and in the promotion of content.
Platforms will also have to share data with specific countries when requested.
“Disrupting the incentives that create disinformation”
With the rise of AI and machine learning, fake news and propaganda have moved beyond blogs to deepfakes deployed for malicious purposes.
Commenting on the move, Csaba Szabo, Managing Director, EMEA, Integral Ad Science (IAS), said: “Initiatives that aim to tackle groups behind the dissemination of fake news are a positive sign. Tackling disinformation, and disrupting the incentives that create it, is a moral, reputational and commercial consideration.
“It also feeds into a much broader narrative about responsibility within our online ecosystem. There were already very positive steps taken from the big tech players last year, such as Google’s blanket ban on running any ads on its platforms – including YouTube – pertaining to climate change misinformation.
6% fines on global turnover
The EU is updating its code of practice as one of many steps to crack down on fake news.
The updated code has also taken on added relevance amid Russia's ongoing invasion of Ukraine and could help address Russian disinformation.
Once in effect, companies that fail to fulfil their new obligations will face fines of up to 6% of their global turnover. Signatories will have six months to implement measures to combat disinformation on their platforms.
“Direct ad spend towards ad space that doesn’t damage”
Szabo continued: “Brands must also take responsibility into their own hands by ensuring their protocols, partners and brokers are able to proactively avoid bidding for ads on sites which commercialise inaccuracies and distort facts. Thus, it’s a concerted effort that relies on both human expertise and AI to identify ‘flags’ that can be combined to give an accurate assessment of a web page or domain. The ultimate goal, of course, is to direct ad spend towards ad space that doesn’t damage, but enhances a brand’s image, the outcome of which will fund quality journalism, protect users and defund bad actors.
“Reducing misinformation and creating a safer, better trusted experience on the internet is critically important across the globe, as is ensuring that advertising budgets do not support misinformation agendas. This work matters now more than ever.”