In the age of AI, how do computers decide what to believe?

October 24, 2023 | By Christine Gibson

Humans have always built artificial intelligence in their own image. From midcentury attempts at mimicking thought to modern neural networks, AI research echoes concepts in psychology and cognitive science.

As AI develops and we live more of our lives online, this connection is only growing stronger. That means computer systems and people will increasingly face the same challenges: With so much information available on the internet, what should we believe? As we interact more and more with people we can’t see, how do we know whom to trust?

These questions are at the heart of Aaron Hunter’s work. Hunter, director of the Centre for Cybersecurity at the British Columbia Institute of Technology and the institute’s recently appointed Mastercard Chair in Digital Trust, is exploring the intersection of AI and information security. Through BCIT’s partnership with Mastercard via the Mastercard Global Intelligence and Cyber Centre of Excellence in Vancouver, he leads faculty and student research on how AI can solve real-world problems in digital banking and e-commerce.

Hunter recently sat down with the Mastercard Newsroom to share his thoughts on trust, fighting fraud and improving diversity in the field.

One of your research topics is belief change. What is that?

Hunter: It’s amending what you believe based on new information. If somebody tells me they’ve got a bird, I assume it can fly. But what if they add that it’s a penguin? I should retract that belief, because now I have better information.

As psychologists will tell you, people are bad at this. They hold on to beliefs that they shouldn’t and jump to irrational conclusions. But if a computer system is getting new information, it should be able to give reasons for what it concludes is true, rather than just hunches.
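To make the penguin example concrete, here is a minimal Python sketch of this kind of revision. The BeliefBase class, its single default rule, and the facts are illustrative assumptions for this article, not code from Hunter’s lab.

```python
# A minimal sketch of belief revision on the bird/penguin example.
# The class, the default rule, and the facts are illustrative
# assumptions, not code from Hunter's lab.

class BeliefBase:
    """Holds a set of beliefs and revises it as new facts arrive."""

    def __init__(self):
        self.beliefs = set()

    def revise(self, fact):
        """Add a fact, then re-derive default conclusions."""
        self.beliefs.add(fact)
        # Default rule: birds fly, unless we learn of an exception.
        if "bird" in self.beliefs and "penguin" not in self.beliefs:
            self.beliefs.add("can_fly")
        else:
            self.beliefs.discard("can_fly")  # better information wins

kb = BeliefBase()
kb.revise("bird")
print(kb.beliefs)    # {'bird', 'can_fly'}
kb.revise("penguin")
print(kb.beliefs)    # {'bird', 'penguin'} -- 'can_fly' was retracted
```

The point of the sketch is that the update is non-monotonic: learning more can shrink the set of conclusions, and the system can point to the rule and the exception as its reasons.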

What factors does a computer system rely on to decide whether to act on information?

Hunter: Whether you’re a human or a machine, it basically works the same way — you’re constantly calculating how much faith to put in online information or financial requests. This is what digital trust is all about: developing confidence that our online transactions are safe and secure. When people talk about trust, they usually focus on honesty: “I don’t trust this guy because he’s a liar.” Some real-life security problems do breach trust by purposefully instilling belief in something false, like convincing a person or computer system that I am somebody I’m not.

But there are other factors to consider, such as expertise and access to information. Maybe my doctor says, “You’ve got a viral infection. By the way, that’s a fake diamond in your necklace.” I should believe her about my health because she’s an expert in that subject, but not necessarily about the jewelry.

Those interactions are subtle. You’re also balancing your counterpart’s knowledge against their motives. Are they trying to trick you into doing something or divulging something?
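A toy way to picture that idea is to scope trust to a subject area rather than assigning one score per source. The sources, domains, and numbers below are invented for the example.

```python
# Illustrative sketch: trust as a relation between a source and a
# subject area, not one global score. All names and numbers are
# made up for this example.

trust_scores = {
    ("doctor", "medicine"): 0.95,  # domain expert
    ("doctor", "jewelry"):  0.30,  # outside her expertise
}

def should_believe(source, domain, threshold=0.8):
    """Believe a claim only if trust in this source, on this topic,
    clears the threshold; unknown pairs default to a neutral 0.5."""
    return trust_scores.get((source, domain), 0.5) >= threshold

print(should_believe("doctor", "medicine"))  # True: trust the diagnosis
print(should_believe("doctor", "jewelry"))   # False: verify the diamond
```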

So what are the ramifications for digital financial transactions?

Hunter: In online transactions, you have to trust the system with your data. Trust is not something you give to others; trust is a relationship. You’re in a trust relationship with the online vendor. That relationship may change depending on their behavior, so they need to make sure you continue to feel safe using their system.

How can governments, the private sector and academia collaborate better to develop AI for the public good and secure the digital economy?

Hunter: These three sectors are doing what they can to solve these problems. Over the past 20 years, governments have enacted strong legal protections to safeguard people’s private data. Governments could go even further by setting general standards and increasing funds for pure and applied science.

Private companies used to be very secretive. In addition to keeping their data secret, they kept the ways they secured their data secret. But that meant everyone was working alone, and nobody was sharing information about security flaws. Academics were always saying, “We need open standards and research. This is a problem that everybody’s trying to solve, and we need to talk about it.”

Industry has come to the same conclusion: Collaboration is key. That’s why Mastercard is investing in academic research that advances security as a field. A more secure digital world will benefit everyone.

At the same time, industry provides us academics with real-life problems. When I go to the Mastercard offices, they say, “This is what we’re facing today.” It’s not a research paper about a hypothetical threat; I’m applying what I’ve developed to a real scenario.

What projects are you planning as the director of the BCIT Centre for Cybersecurity?

Hunter: I’m putting together a team of faculty and students to address some of these hard problems. In my research, I’m concerned with fraud and deception. There’s an obvious connection with trust. Deception typically happens when somebody has convinced me to trust them and then they don’t hold up their end of the bargain. My lab is trying to develop technologies that will flag when that’s about to happen, using traditional AI or machine learning tools to find patterns in previous fraudulent interactions.
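As a rough sketch of that pattern-finding approach, a model can be trained on labeled historical interactions and used to flag new ones that resemble past fraud. The example below uses scikit-learn; the features, data, and threshold are invented for illustration and this is not the actual system Hunter’s lab is building.

```python
# Hedged sketch: learn patterns from previously labeled fraudulent
# interactions, then flag new transactions that resemble them.
# Feature choices, data, and threshold are hypothetical.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy history: [amount, is_new_payee (0/1), hour_of_day]
X_history = np.array([
    [25.0,  0, 14],
    [40.0,  0, 10],
    [980.0, 1,  3],   # large amount, new payee, odd hour
    [15.0,  0, 18],
    [870.0, 1,  2],
])
y_history = np.array([0, 0, 1, 0, 1])  # 1 = confirmed fraud

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_history, y_history)

def flag(transaction, threshold=0.5):
    """Flag a transaction when the estimated fraud probability
    exceeds the threshold."""
    prob_fraud = model.predict_proba([transaction])[0][1]
    return prob_fraud >= threshold

print(flag([900.0, 1, 4]))   # True: resembles past fraud
print(flag([30.0, 0, 12]))   # False: looks routine
```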

All of this is aimed at improving digital trust: people’s confidence that the system actually works.

How can we fill the workforce gap in cybersecurity?

Hunter: That’s obviously a big challenge, and I don’t have a quick fix. But I do think we could improve the talent pool by addressing some long-standing diversity problems in the field. For example, we know that gender diversity is a problem in computing and that men are overrepresented in educational programs. We also know that Indigenous students have been consistently underrepresented in those programs. These diversity problems at the student level naturally carry over into professional practice.

This is a real issue for our workforce, particularly in security, where you need people constantly thinking about novel attacks and defenses. Diverse groups come up with diverse solutions. If we could figure out how to make security an interesting field for a wider group of students and professionals, that would go a long way toward closing the workforce gap.

Christine Gibson, contributor