AI is growing fast. How we’re growing with it
December 20, 2023 | By Caroline Morris

AI was the “It” technology of 2023. Generative AI software in particular grabbed headlines, with stories ranging from think pieces on the future of creativity to scandals about writers who don’t exist.
In truth, AI has been hard at work in payments for years, and generative AI, a branch of deep learning, is already beginning to change how we live for the better, from identifying payment scams in real time to improving customer experiences in the e-commerce marketplace, supporting frictionless digital purchases and personalizing the user experience. And AI is building trust, teaching computers how to spot fraudsters and misinformation to make the digital ecosystem safe for all.
But even with the potential for advancement in AI, industry leaders must approach the technology with great caution and care. AI that operates off incomplete, inaccurate or biased data can create far-reaching negative consequences. Innovators must keep in mind what technology is meant to serve: human good.
Here are some of the ways AI moved us forward in 2023, as well as some insights on how to keep the technology on the side of positive impact.
Gen AI is poised to transform how we sell goods and services. Already presenting clear use cases, this technology will be revolutionary for nearly every sector, including large enterprises, small businesses, banking, retail and travel. AI-enabled specialized solutions will help us oversee complex, multi-party payments. We may also see customized AI applications that use a single personalized AI bot to orchestrate other bots for AI-to-AI commerce, coordinating purchases, deliveries and payments with little to no human intervention.
Improving security – and ourselves?
Many shoppers no longer pay in cash. They’re armed with payment cards and mobile wallets. But these innovations in payments give bad actors new avenues for committing fraud. AI, however, is built to stop scams in their tracks. On the Mastercard podcast “What’s Next In,” Amyn Dhala, the chief product officer for Brighterion, a Mastercard company, explains how AI has been helping financial institutions for years. “We have models which are actually personalized for your card, which indicate to us what your behavior with that card is,” he says. This personalization can flag scams faster and reduce friction when it comes to real purchases, with the power of AI making purchasing safer and easier for customers and businesses alike.
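To make Dhala’s point concrete, here is a minimal sketch, not Brighterion’s actual model, of how a per-card behavioral score might work. The transaction fields, history and threshold are hypothetical, and a production system would use far richer features than purchase amounts alone.

```python
from statistics import mean, stdev

def fraud_risk_score(card_history, transaction):
    """Score a new transaction against this card's own spending pattern.

    card_history: past transaction amounts for one card (hypothetical schema).
    Returns a z-score-style risk value; higher means less typical for this cardholder.
    """
    if len(card_history) < 5:           # too little history to personalize
        return 0.0
    mu = mean(card_history)
    sigma = stdev(card_history) or 1.0  # avoid division by zero on flat histories
    return abs(transaction["amount"] - mu) / sigma

# Usage: flag only purchases far outside this card's normal behavior,
# so genuine purchases sail through with no added friction.
history = [12.50, 40.00, 9.99, 25.00, 31.75, 18.20]
txn = {"amount": 950.00, "merchant": "example-electronics"}
if fraud_risk_score(history, txn) > 3.0:  # threshold is illustrative
    print("Flag for review")
```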
More recently, Nitendra Rajput, head of Mastercard’s AI Garage, joined the podcast to discuss other ways the company is accelerating the adoption of AI, the guardrails it has developed to ensure responsible innovation, and how AI may even be able to help us understand ourselves better.
“The whole concept of generative AI is that the human brain has this amazing potential of being able to make sense out of data, even if we have only few points of the data available to us,” Rajput says. By studying how AI can similarly make sense of data despite missing pieces, we may be able to understand better how our own brains work, and, he adds, “perhaps we can use it for much more powerful uses as well.”
Who can we trust? What can we believe? Those questions get more complicated when it comes to digital sources, whether that’s when interacting with someone on social media or making a financial request online. And humans aren’t the only ones who struggle with this. “If a computer system is getting new information, it should be able to give reasons for what it concludes is true, rather than just hunches,” says Aaron Hunter, director of the Centre for Cybersecurity at the British Columbia Institute of Technology and the university’s first Mastercard Chair in Digital Trust.
And when your data is in the hands of a computer, you want it to know what to trust. “My lab is trying to develop technologies that will flag when [deception is] about to happen, using traditional AI or machine learning tools to find patterns in previous fraudulent interactions,” Hunter says. These tools can fortify trust in digital banking and e-commerce.
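As a rough illustration of the pattern-finding Hunter describes, the sketch below trains an off-the-shelf classifier on a handful of made-up labeled interactions. The features, data and choice of scikit-learn are assumptions for illustration, not his lab’s actual tooling.

```python
# Minimal sketch: learn patterns from previously labeled fraudulent interactions
# and score new ones. Features and rows are illustrative, not a real dataset.
from sklearn.ensemble import RandomForestClassifier

# Each row: [message_length, links_in_message, requests_credentials (0/1), sender_account_age_days]
past_interactions = [
    [120, 3, 1, 2],    # known fraud
    [80,  2, 1, 5],    # known fraud
    [300, 0, 0, 900],  # legitimate
    [150, 1, 0, 400],  # legitimate
]
labels = [1, 1, 0, 0]  # 1 = fraudulent, 0 = legitimate

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(past_interactions, labels)

new_interaction = [[95, 4, 1, 1]]
print(model.predict_proba(new_interaction)[0][1])  # probability the interaction is deceptive
```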
Among the inherent risks associated with AI are bias in datasets, insufficient privacy protections for people’s data after it’s fed into AI models and “hallucinations,” outputs that present false or fabricated information as fact. Strong data responsibility principles and practices are the foundation of any generative-AI governance, says JoAnn Stonier, the Mastercard Fellow specializing in responsible AI and data. At Mastercard, the guidance includes running queries multiple times to make sure outcomes are accurate and equitable.
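One way to picture that repeated-query guidance, purely as an illustrative sketch rather than Mastercard’s actual process, is to run the same prompt several times and measure how often the answers agree. The ask_model callable and the agreement threshold below are placeholders for whatever client and policy a team actually uses.

```python
from collections import Counter

def consistency_check(ask_model, prompt, runs=5):
    """Run the same generative AI query several times and measure agreement.

    ask_model is any callable that sends `prompt` to a model and returns text;
    it stands in for a real model client.
    """
    answers = [ask_model(prompt).strip().lower() for _ in range(runs)]
    counts = Counter(answers)
    top_answer, top_count = counts.most_common(1)[0]
    agreement = top_count / runs
    return top_answer, agreement  # low agreement signals the output needs human review

# Usage: treat anything below a chosen agreement threshold as unreliable.
# answer, agreement = consistency_check(my_client.complete, "Summarize this dispute case")
# if agreement < 0.8:
#     escalate_for_review(answer)
```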
So how should companies get started once they’ve established clear AI guardrails? Identify the technology, tools and capabilities needed to scale; take stock of the proposed concepts across product teams; and experiment through internal hackathons and innovation challenges, says Mohamed Abdelsadek, Mastercard’s executive vice president of data, insight and analytics. “We want to put together the right infrastructure and right processes to avoid some of the risks that come with generative AI,” he says. “At the same time, we want to ensure we’re enabling innovation across the organization.”
The more we engage thoughtfully with new technology, the more equipped we will be to use it well. Or as Rohit Chauhan, Mastercard’s executive vice president of AI for Cyber & Intelligence, says, “The biggest risk of AI is not using it.”