Brian Demitros, Vice President of Analytics at Merkle, looks at how brands can set up their marketing operations to deliver AI that does no harm.
As artificial intelligence (AI) permeates more aspects of marketing, there are growing efforts to ensure that it’s not operating in biased, irresponsible ways. Those efforts and the supporting frameworks and principles are called “ethical AI,” which aims to bring explainability to AI through human oversight and, as needed, intervention.
Much of the ethical AI conversation to date has revolved around data and modeling, speaking largely to analytics practitioners who control the AI-based processes. However, ethical AI has important applications in the media world, too, particularly with audiences and targeting. Media marketers aim to speak to the right person, in the right place, with the right messaging – but what if they’re only talking to certain people, missing out on entire relevant audiences because of how the segmentation model was built? What if the “right message” is rooted in data that only considers the needs of certain audiences, or relies on immutable characteristics instead of actual interests and behaviors?
The media industry has made important strides in diversity, equity, and inclusion over the past few years, using more diverse creative and redirecting budget to BIPOC publishers and vendors. Ethical AI is the next step, bringing equity and responsibility to the behind-the-scenes processes. But it’s not just about doing the right thing – implementing ethical AI principles can be good for business. Let’s look at some techniques brands can apply to start acting in accordance with ethical AI.
How to reduce bias in media
The first step is to limit the use of protected class attributes, such as age, gender, and ethnicity, in targeting. While there are some products or services for which targeting on these criteria makes sense, in most cases they’re used out of habit rather than necessity. To avoid losing audience relevance, introduce new attributes that capture interest in the product or service being marketed. Work with your agency to source this data and incorporate it into audience targeting algorithms.
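As a minimal sketch of what that separation could look like in practice, the snippet below keeps protected attributes out of the targeting inputs entirely and builds the modeling frame from behavioral signals instead. Every file and column name here is hypothetical, not a prescribed schema.

```python
import pandas as pd

# Hypothetical audience table; file and column names are illustrative only.
audience = pd.read_csv("audience_features.csv")

# Protected class attributes to keep out of targeting inputs.
PROTECTED = ["age", "gender", "ethnicity"]

# Interest- and behavior-based signals sourced with your agency.
INTEREST_SIGNALS = ["site_visits_90d", "category_purchases_180d", "content_affinity_score"]

# Targeting inputs: behavioral signals only; protected attributes never feed the model.
targeting_inputs = audience[INTEREST_SIGNALS]

# Protected attributes are kept aside strictly for bias validation, not for modeling.
validation_only = audience[PROTECTED]
```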
Implementing a closed-loop validation process also helps ensure no unintended bias is present. This involves validating the data used in the algorithms to check for potential proxies for protected attributes – variables that strongly correlate with protected classes. The process should also include a review of algorithm outputs to confirm all protected classes are equally represented as targets in each media campaign.
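A rough sketch of that closed loop, continuing the hypothetical frames above: the first block flags behavioral inputs that correlate suspiciously with protected classes, and the second compares targeting rates across groups once the algorithm has marked who it would target. The 0.6 correlation threshold and the boolean `targeted` column are assumptions for illustration.

```python
import pandas as pd

# Continuing the hypothetical frames above; assumes the audience algorithm has
# written a boolean "targeted" flag back onto the table.

# 1. Proxy check: flag behavioral inputs that correlate strongly with protected
#    classes (e.g., a geo-based score quietly standing in for ethnicity).
protected_dummies = pd.get_dummies(audience[["gender", "ethnicity"]], dtype=float)
combined = pd.concat([targeting_inputs, protected_dummies], axis=1)
cross_corr = combined.corr().loc[INTEREST_SIGNALS, protected_dummies.columns]
suspect_proxies = cross_corr[cross_corr.abs() > 0.6]  # threshold is a judgment call
print("Potential proxies for protected attributes:\n", suspect_proxies.dropna(how="all"))

# 2. Output check: review representation of each group among targeted individuals.
for attr in ["gender", "ethnicity"]:
    print(f"Targeting rate by {attr}:\n", audience.groupby(attr)["targeted"].mean())
```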
Other considerations for ethical AI in media
There are a few common questions that arise when discussing ethical AI from an audience targeting perspective:
My targeting skews heavily toward a certain demographic because of the nature of my products – does that mean my audiences don’t adhere to ethical AI principles?
Not necessarily. Let’s take a baby diaper company as an example. The audiences for this product’s marketing efforts likely skew heavily toward parents – as they should. There’s nothing inherently wrong with a strong skew toward a certain group. The keys with ethical AI are:
- Understanding why your advertising and targeting skews so heavily toward those groups; and
- Ensuring that the algorithms are not causing harm by entirely excluding other potential customers, or by making decisions based solely on inputs related to that majority audience (a simple exclusion check like the sketch below can surface the former).
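One way to operationalize that second point is an exclusion report: compare each group’s share of the eligible population with its share of the targeted audience, and flag any group whose targeted share drops to zero. The sketch below assumes a hypothetical `household_type` attribute and the same boolean `targeted` flag as earlier.

```python
import pandas as pd

# Illustrative exclusion report; "household_type" and the "targeted" flag are
# hypothetical stand-ins for whatever attribute and model output you review.
def exclusion_report(audience: pd.DataFrame, attr: str) -> pd.DataFrame:
    """Compare each group's share of the eligible population with its share of
    the targeted audience; a zero targeted share means total exclusion."""
    report = pd.DataFrame({
        "population_share": audience[attr].value_counts(normalize=True),
        "targeted_share": audience.loc[audience["targeted"], attr].value_counts(normalize=True),
    }).fillna(0.0)
    report["fully_excluded"] = report["targeted_share"] == 0.0
    return report

print(exclusion_report(audience, "household_type"))
```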
I can work to uncover and address biases in my own audiences, but what about the audience and optimization decisions that my media/publishing partners make?
To ensure all of your audiences and targeting are held to ethical AI standards, your brand can take look-alike and propensity modeling in-house to better control targeting and audience inputs, then push those audiences into the appropriate platforms. This practice is becoming more common already as brands increasingly rely on first-party data.
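A minimal sketch of what in-house propensity modeling might look like, using scikit-learn and the same hypothetical behavioral features as earlier; the file names, columns, and 0.7 score cutoff are all placeholders rather than a prescribed setup.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Both hypothetical tables contain behavior/interest features only, no protected attributes.
customers = pd.read_csv("customers.csv")    # includes a "converted" label
prospects = pd.read_csv("prospects.csv")    # includes a "prospect_id" column

feature_cols = ["site_visits_90d", "category_purchases_180d", "content_affinity_score"]

# Propensity model trained on your own first-party conversion data.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(customers[feature_cols], customers["converted"])

# Score the prospect pool and keep the highest-propensity slice as the audience
# you push into the platforms, so you control the inputs end to end.
prospects["propensity"] = model.predict_proba(prospects[feature_cols])[:, 1]
audience_to_push = prospects.loc[prospects["propensity"] >= 0.7, "prospect_id"]  # cutoff is a business call
```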
However, there’s still the question of campaign optimization algorithms once the audiences are in-platform. For this, you (or your agency) should lean on your relationships with partners: ask platform partners what signals their optimization algorithms use and, if any of those signals are concerning, consider ways to avoid relying on that type of algorithm. For example, a cost-per-thousand-impressions (CPM) algorithm that looks to maximize impressions may be less prone to bias than a cost-per-acquisition (CPA) model that optimizes for conversions.
Conclusion
AI is an important tool for creating and targeting audiences, but it needs to be monitored. By combining the power of humans and technology, and enacting some of the principles outlined here, brands can move toward more responsible, sustainable marketing practices that are good for consumers and the business alike.
By Brian Demitros
Vice President of Analytics