Marketing, AI and Ethics

by Ben Keith | 19 Nov 2020 | Data


What does a scandal look like before it’s a scandal? Ben Keith of Simple, the Intelligent Marketing Platform, explores AI in financial services marketing and reminds us of some important ethical and compliance considerations.

The UK PPI scandal

In the late 90s the UK’s ‘Which?’ magazine, a provider of independent consumer advice, warned that Payment Protection Insurance (PPI) was poor value, and that UK banks were using inappropriate sales practices to push what was for them a very profitable product. Two decades later, the UK’s PPI scandal had cost the financial sector a staggering £37.5bn. Lloyds was hit the hardest, having to pay over £20bn in compensation to mis-sold customers; Barclays was next with a compensation bill of some £10bn. Very large numbers indeed. But were there warning signs, and why might they be relevant to a discussion about AI ethics in banking?

The written evidence provided by ‘Which?’ to the UK Govt panel on mis-selling and cross-selling gives an insight. Among the factors highlighted as root causes, the following bank behaviours stand out as particularly relevant:

  • A tick-box approach to compliance
  • Poor product design
  • Disregard of warning signs

Before getting into these three, and how we might be walking into the next mis-selling scandal in the financial services industry (FSI), a quick backgrounder on AI in marketing.

AI in Marketing

Marketing’s embrace of computing has been going on for as long as there have been computers to use and data to analyse. Computing offers the promise of enlightenment: greater insight into markets, into where a company might do better or try something new or different, and ultimately a deeper understanding of us as consumers, of our fundamental needs and aspirations.

It’s probably fair to say that Marketing’s embrace of computing started at the top of the funnel, informing strategic planning and awareness, but has steadily advanced down through to the conversion end as both computing power and the range of digitally enabled channels have increased exponentially.

We are now very much in a place where consumers brush up against or interact directly with digital marketing channels almost constantly, channels that enable personalised marketing instantaneously and autonomously.

It’s perhaps hard to believe it’s been two decades since the now long-gone CRM start-up E.piphany offered a self-learning, multi-channel, real-time marketing decision engine based on Bayesian modelling. This is old hat now. The huge increases in computing power and data availability enabled by cloud computing have allowed AI not just to make ever more sophisticated marketing decisions, but to learn autonomously from those decisions in the service of making even better ones.
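To make the idea of a self-learning, real-time decision engine concrete, here is a minimal sketch in the same Bayesian spirit. It is purely illustrative (the offer names, conversion rates and class design are assumptions for the example, not any vendor’s actual product): each offer’s unknown conversion rate is modelled as a Beta distribution, and Thompson sampling picks which offer to show, learning from every response.

```python
import random

class ThompsonSamplingEngine:
    """Toy 'next best offer' engine: pick an offer, learn from the outcome.

    Each offer's unknown conversion rate is a Beta posterior; Thompson
    sampling draws from every posterior and shows the offer with the
    highest draw, trading off exploration against exploitation.
    """

    def __init__(self, offers):
        # Beta(1, 1) prior: no opinion yet about any offer's conversion rate.
        self.posteriors = {offer: [1, 1] for offer in offers}

    def choose(self):
        # Sample a plausible conversion rate per offer; show the best draw.
        draws = {
            offer: random.betavariate(a, b)
            for offer, (a, b) in self.posteriors.items()
        }
        return max(draws, key=draws.get)

    def learn(self, offer, converted):
        # Autonomous learning step: fold the outcome into the posterior.
        a, b = self.posteriors[offer]
        self.posteriors[offer] = [a + 1, b] if converted else [a, b + 1]


random.seed(42)
engine = ThompsonSamplingEngine(["credit_card", "loan", "savings"])

# Simulated traffic where one offer genuinely converts better than the rest.
true_rates = {"credit_card": 0.05, "loan": 0.02, "savings": 0.10}
for _ in range(5000):
    offer = engine.choose()
    engine.learn(offer, random.random() < true_rates[offer])

shows = {o: a + b - 2 for o, (a, b) in engine.posteriors.items()}
print(shows)
```

Note what the sketch also demonstrates about governance: nothing in the loop explains *why* an offer was chosen, which is exactly the transparency gap discussed below.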

Along with better decisions have come big improvements in both cost and ease of use. Implementing an AI model is relatively straightforward; no understanding of the underlying mathematics is required. The potential efficiency gains are large, the costs are low and the technology is accessible. Combine this with increased returns from improved decision making and it might all start sounding like a marketer’s utopian fever dream, but it might also be a very big problem.

Why are we discussing ethics?

Consider the sort of decisions FSI organisations are asking of AI:

  • Insurance policy pricing
  • Insurance claim classification and liability
  • Product offer recommendations
  • Fraud detection and product eligibility

All of these decisions carry important ethical and compliance considerations, and all have the potential to significantly impact the brand. Consider what might happen if the AI involved in any of the above were found to be discriminating on the basis of gender, race, economic group or even language skills. And what about legal requirements for decision transparency?

And AI going off-track is already happening. An often-cited example is Tay, Microsoft’s AI Twitter bot, which after 16 hours of tutelage by the internet became an unacceptable hate monger. A more recent example is this month’s decision by the UK government to abandon its ‘racist algorithm’, introduced to streamline visa applications but built on ‘decades of institutionally racist policies’.

Is marketing accountable?

Marketing is the custodian of the brand, and as such is responsible for understanding and mitigating risks to it, so this issue falls squarely under its purview. Yet in many organisations, marketing is at arm’s length from the AI to which it has (perhaps unknowingly) delegated marketing decisions. Maybe AI belongs to IT or the data science team? Maybe they’ve got it covered?

It’s starting to sound a lot like those root causes identified by ‘Which?’. Is this also a new generator of revenue that happens to be ungoverned? Three questions follow:

  • Compliance: Who’s responsible? Where’s the audit trail?
  • Product design: Have ethical and legal requirements been considered? Who signed it off?
  • Warning signs: What are they? Who’s monitoring them? What’s the plan if it goes wrong?

Governance is a desperately unfashionable topic. The team that says no. Innovation’s handbrake. However, we’d suggest the idea of governance and innovation being mutually exclusive is lazy thinking. The best innovation welcomes constraints. The best governance is inherent to the process, not a bolt on at the end. In design and operation AI governance must:

  • Ensure the application of design standards and regulatory compliance
  • Be consistent with brand values
  • Be programmatically applied
  • Provide transparency and accountability
  • Provide clear indicators of potential out-of-bound performance
  • Be integral to marketing’s overall governance
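As one concrete example of a “clear indicator of potential out-of-bound performance”, a governance process could continuously audit decision logs for disparity between groups. The sketch below is an assumption-laden illustration, not a production monitor: it flags any group whose approval rate falls below 80% of the best-performing group’s rate (the classic “four-fifths” disparate-impact heuristic); the group labels, log format and threshold are all invented for the example.

```python
from collections import defaultdict

def disparity_alerts(decisions, threshold=0.8):
    """decisions: iterable of (group, approved) pairs.

    Returns the groups whose approval rate is below `threshold` times
    the highest group's approval rate -- a simple warning sign that an
    AI decision process may be drifting out of bounds.
    """
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1

    rates = {g: approvals[g] / totals[g] for g in totals}
    best = max(rates.values())
    return sorted(g for g, r in rates.items() if r < threshold * best)

# A toy audit log: group B is approved far less often than group A.
log = [("A", True)] * 80 + [("A", False)] * 20 \
    + [("B", True)] * 40 + [("B", False)] * 60

print(disparity_alerts(log))  # → ['B']
```

A check like this is cheap to apply programmatically across every AI-driven decision stream, which is precisely what “be programmatically applied” in the list above asks for.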

So, what’s next? Will AI in marketing be the next PPI? As basic as it sounds, the answer lies in us getting our governance on.

You can read more content from Simple at https://simple.io/simple-insights/

This article was first published at medium.com. Photo by Morgan Housel on Unsplash.
