Two tech optimists on the intersection of data privacy and AI

January 30, 2024 | By Maggie Sieger

It’s no secret that artificial intelligence has the potential to transform … everything. That’s the good news. AI is poised to multiply what industries can do, from finding new cures for diseases to stopping financial fraud before it starts. But AI is powered by data, making information a unique currency deserving of strong safeguards and constant vigilance.

Both Mastercard and IBM have been at the forefront of AI innovation for years — Mastercard, for example, has been using AI to make its fraud tools far more accurate and effective, and IBM has used it to address productivity concerns, bolster customer experiences and improve education.

Caroline Louveaux, Mastercard’s privacy and data responsibility officer, and Christina Montgomery, IBM’s chief privacy and trust officer, sat down with the Mastercard Newsroom during Data Privacy Week last week to share their perspectives on the evolving landscape of data privacy, AI regulation and the power of trust.

Governments around the world have been looking at ways to ensure AI doesn’t encroach on privacy, limit opportunities or create bias. This year EU member state ambassadors are expected to formally adopt the world’s first comprehensive AI regulation. How are your own AI governance practices helping you prepare for these regulations?

Montgomery: The regulatory landscape in the privacy space has exploded since Europe’s General Data Protection Regulation, or GDPR, took effect in 2018. We have earned the trust of our clients for over 100 years, and we want to maintain that trust. We established principles around AI: that AI should augment human intelligence, that it should be transparent, explainable and fair, and that data belongs to its creator.

Louveaux: It’s interesting that IBM and Mastercard are in two different industries but actually hold very similar positions, including what’s at the core of our own data responsibility and tech principles: the individual owns their data. We believe they should control it and benefit from it while we protect it.

In addition to establishing these principles to guide how Mastercard handles data and technology, we joined IBM as pioneers in the Data & Trust Alliance. DTA is a nonprofit consortium of leading businesses and institutions across multiple industries dedicated to learning, developing and adopting responsible data and AI practices.

Montgomery: It’s an example of how our respective companies are very externally engaged in helping make progress happen in the space of trusted data, beyond the four walls of our companies. We really believe that part of our role is not just to develop practices, advocate for policies consistent with those practices and be responsible stewards, but to help find solutions for the world and to share them externally. The Data & Trust Alliance is in the middle of creating data provenance standards, establishing standards for the world, not just for our company. It’s about data lineage: a consistent methodology for how data is tagged so that it can be traced from beginning to end across the data ecosystem. It would be a game changer if we could align on a common set of data standards.
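
To make the idea of tagging and tracing data concrete, here is a minimal sketch in Python of what a provenance record with an end-to-end lineage trail might look like. The field names and structure are illustrative assumptions for the sake of example, not the Data & Trust Alliance’s published data provenance standards.

from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative sketch only: these fields are assumptions, not the
# Data & Trust Alliance's actual data provenance standards.

@dataclass
class LineageStep:
    actor: str       # who touched the data (team, system, vendor)
    action: str      # what was done (collected, cleaned, joined, shared)
    timestamp: str   # when it happened, in UTC ISO-8601

@dataclass
class ProvenanceRecord:
    dataset_id: str
    source: str                # where the data originated
    collection_method: str     # how it was gathered (consented form, sensor, purchase)
    lineage: list[LineageStep] = field(default_factory=list)

    def add_step(self, actor: str, action: str) -> None:
        """Append a traceable step so the dataset's history reads end to end."""
        self.lineage.append(
            LineageStep(actor, action, datetime.now(timezone.utc).isoformat())
        )

# Usage: tag a dataset once at ingestion, then record every hand-off.
record = ProvenanceRecord("txn-2024-01", source="merchant feed",
                          collection_method="contracted data share")
record.add_step("data-engineering", "deduplicated and anonymized")
record.add_step("fraud-models", "used for model training")
for step in record.lineage:
    print(step.actor, step.action, step.timestamp)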

Louveaux: If you don’t have a trusted way to organize your data and understand it in real time, you end up with a patchwork of requirements about how data can be collected, used and shared, making innovation really difficult.

What benchmarks are used to measure success in this area?

Montgomery: This is where it gets a little bit challenging. You could look at things like data breaches or how rapidly you're responding to data subject rights requests. On the AI side of things, we look at compliance with new regulations (i.e., how many days to compliance?), how quickly we're moving through the processes or how responsive we are to businesses. But much of the critical stuff is not measurable. How do you maintain the trust of very well-established brands like Mastercard and IBM? That’s a much harder thing to measure.

What keeps you up at night in terms of data privacy regulation?

Louveaux: Anything — whether it’s true or not — that puts the reputation of Mastercard at risk keeps me awake at night. Because, as we know, once trust is undermined, it is really difficult to get it back.

Montgomery: Trust is built in drops and lost in buckets. It is really easy to lose trust. And it’s especially challenging to keep the standards and expectations high in a landscape that is evolving this quickly.

With technology and regulations constantly evolving, you can’t rely on what you knew yesterday — you have to adjust every single day. So when it comes to data privacy and AI, what do you look for in terms of talent?

Montgomery: I really think the most important skill is having that growth mindset. I came into this job without knowing what AI ethics was, right? Hardly anyone did; there was only a very small group of people who knew what this was at the time. So anybody can learn it. We’re at the ground floor, but you have to want to learn it, and it does take a lot of time to continually stay on top of what’s happening and to be thoughtful about it.

Louveaux: We also prioritize diversity of everything: disciplines, backgrounds, culture, geography, you name it. We have a lot of engagement with local players, because being locally connected and understanding what these new AI policies mean and how they are going to be interpreted is vitally important. I think we can also learn from our technology, business and security teams. We are not trying to limit ourselves to the legal and regulatory issues. If you want to be good at those, you need to understand the rest and how it all fits into the broader context.

What opportunities for growth do you see on the horizon?

Louveaux: There are going to be so many overlaps among regulations in AI, online safety, content moderation and the digital space. When GDPR came into play, we organized workshops for our customers, because a lot of them had a really hard time understanding what was expected. Now, as AI regulations emerge, it’s going to be critical for us to understand what’s required, find commonalities among them, and build the programs and the baseline that will let us comply with new regulations as they take effect, and then help our customers do the same.

Montgomery: I couldn’t agree with that more. I’ve never felt more optimistic about our ability to play such a significant role in the future of our companies and policymaking. Everyone wants to learn about AI. Policymakers need to learn about AI, because if they don’t, then they’re going to make mistakes in terms of the policy recommendations. There are huge opportunities for us to make a positive impact.


Maggie Sieger, Contributor