The Global Alliance for Responsible Media is announcing a series of new guidelines as it marks its third anniversary at the Cannes Lions International Festival of Creativity.
The updates will be featured during a dedicated panel hosted by GARM on the WPP Beach at Cannes on June 21.
The three new updates are as follows:
Misinformation guidelines
Adjacency standards framework
Embedding brand safety in the Metaverse
The WFA-led coalition of multinational advertisers, agencies and platforms will be releasing guidelines on misinformation, new standards on ad placements, and an outline of first steps to make the metaverse safe for advertising.
GARM is also working with the WFA’s National Associations Councils to ensure the whole framework, including the new misinformation guidelines, is applied at a local level through national advertiser associations.
“While digital media owners tend to be global players, it’s vital that we also take these important controls down to a local market level. Brand safety standards also need to be applied by all digital media owners so that advertisers can be sure that their messages aren’t funding bad actors or appearing against content that damages their standing in the eyes of their customers,” said Stephan Loerke, CEO of the WFA.
Misinformation Guidelines
The new misinformation guidelines, introduced in the wake of the COVID-19 pandemic and the war in Ukraine, will form an essential addition to the existing GARM Brand Safety Floor and Suitability Framework.
They have been designed to provide a structure for demonetising misinformation, and to build on the success that the framework has already delivered in changing how brands set strategies, how media agencies build media schedules and how platforms and ad tech partners structure their tools.
In the GARM tradition of “uncommon collaboration”, the misinformation guidelines have been developed in coordination with the European Commission and in consultation with NGO partners such as Consumers International, Reporters Without Borders, the ADL and the NAACP.
Marc Pritchard, Chief Brand Officer of Procter & Gamble, said: “GARM has achieved much in a short space of time – more aligned definitions of harmful content, enhanced measures, and the introduction of adjacency controls. But more still needs to be done. Broadening definitions to include misinformation, introducing adjacency standards and taking a proactive approach to monetizing the Metaverse are important next steps in ensuring that our brands can safely reach the diverse consumers we serve.”
Adjacency Standards Framework
As an additional effort, GARM is announcing a new Adjacency Standards Framework to ensure that media placement in safe but sensitive content is handled in a more controlled way by advertisers, agencies, and platforms alike. The new framework defines, for the first time, minimum standards and an approach for managing ad placement relative to sensitive content within News Feeds, Stories, In-stream Video, In-stream Audio, and Display overlays.
Rankin Carroll, Global Chief Brand & Content Officer, Mars Wrigley, said: “Building on the success of the GARM Brand Safety Floor and Suitability Framework, the Adjacency Standards Framework will provide advertisers and platforms with much-needed transparency, together with a common language to better manage the proximity of advertising to sensitive content. This is a significant and meaningful step for the industry, moving from a lack of visibility, control and significant brand risk to a world where brands feel confident to invest in platforms that actively strive to provide a safe, transparent and effectively managed environment.”
The goal of these standards is to give ad buyers and ad sellers a common framework for managing ad placements next to sensitive but suitable content, such as coverage of death, injury or military conflict, allowing brands to restrict or allow where their messages appear. Further work on delivering similar standards for Livestream formats continues, with the initial priority being to define a minimum safety standard for monetization, given the abuse of the format in a recent mass shooting event.
Embedding Brand Safety in the Metaverse
GARM is also starting work to help industry stakeholders better understand brand safety principles and requirements within new metaverse spaces. The goal is to help identify appropriate opportunities in these new environments that bring together content and behaviours.
Rob Rakowitz, GARM Co-Founder and Initiative Lead, said: “GARM has been set up to ensure that the business model for advertising doesn’t fund harm and we did this in a reactive way – after business practices took shape in digital social media. We must help the industry understand safety requirements before commercialization begins in the metaverse. We’re being asked by our members to start on this journey as new spaces emerge. We must ensure that advertising is aligned with sustainable and responsible growth models.”
These new initiatives follow the publication of Volume 3 of the Aggregated Measurement Report, which tracks the progress of all platform members of GARM in delivering against the organisation’s eight key metrics.
Published earlier this month, the report details measured improvements in performance by the major platforms, including YouTube’s continued MRC accreditation.
Since the first report was published in April 2021, the number of platforms covered has risen from seven to eight, with the addition of Twitch. The report has led to improved application of measurement best practices, with Authorised Metrics submissions increasing from 26 to 36 since launch.
The latest report illustrates continued improvement and areas of opportunity for further action, including:
· Enforcement in the GARM categories Spam & Malware and Adult & Explicit Sexual Content continues to be the largest in scale and the most automated, given that these content types are the most easily identifiable by technology;
· Moderation of highly nuanced content such as Crime & Harmful Acts to Individuals and Society is heavily reliant on context and remains the most manual. This area will continue to require an understanding of behaviour and intent; and
· Areas such as Misinformation and Self-Harm are becoming priorities for enforcement and reporting, given the highly sensitive nature of these types of content.