Trust

Diagnosis: Fraud. How AI can detect scams in healthcare

June 28, 2021 | By Vicki Hyman

Modern genetic testing can not only reveal a patient’s risk of developing cancer but also help determine which medical treatments could be used to fight it. That is why, when Mastercard data scientists began training artificial intelligence models to detect potential fraud in health care, one cancer patient’s genetic and molecular treatments quickly raised a red flag.

The woman was already in hospice.

That kind of testing would typically be ordered during or following her initial diagnosis, not at the end of her life. AI picked up on the time frame, the dollar amounts charged and the lack of recent procedures that would have normally warranted genetic testing. The software issued an alert for an insurance company’s fraud investigators to follow up.

Americans spent $3.8 trillion on health care in 2019, the latest year for which federal data is available, and as much as 10% of health insurance claims are believed to be fraudulent. But only a small portion of that is identified and intercepted, resulting in higher insurance premiums and reduced benefits or coverage, according to the National Health Care Anti-Fraud Association. For Medicare and Medicaid fraud, it also means tax dollars are wasted.

Fraud might include “upcoding,” or billing for more expensive services than were actually provided; billing for services or procedures that were never rendered; and “unbundling,” or billing each step of a procedure as if it were a separate procedure. It could even involve performing medically unnecessary diagnostics to generate insurance payments, according to the NHCAA. A Michigan oncologist who prescribed aggressive chemotherapy treatments for patients who didn’t even have cancer is currently serving 45 years in prison. In May, a Virginia doctor was sentenced to 59 years in prison for performing unnecessary gynecological procedures to collect insurance money, including hysterectomies and regular dilation and curettage procedures he called “annual cleanouts.”

“It’s not just excessive tests or hitting the system for a few extra bucks,” says Tim McBride, who manages Mastercard’s Healthcare Fraud, Waste and Abuse solution within the Cyber & Intelligence team. “Sometimes it’s actually really bad people doing really bad things to patients.”

McBride is now using AI to do what he did manually for years — noticing unusual patterns in claims from doctors, hospitals and other health care providers. He developed his expertise in his earlier work as a claims processor, spending hours poring over thick Medicare and Medicaid manuals to learn the nuances of reimbursement policies. He can glance at a string of medical codes and tell you precisely what happened behind the closed doors of the doctor’s office or surgical suite.

AI can spot these inconsistencies much faster — an analysis of the claim history of someone undergoing chemotherapy would have turned up the absence of the usual progression toward cancer treatment, such as diagnostics like a needle biopsy, he says.

Mastercard has been using AI to detect fraud in financial services for years, and more recently has been applying its artificial intelligence and machine learning expertise to health care. Early trials show that the technology is able to increase detection of fraud in health care claims by two to three times while decreasing false positives by 10 to 20 times, says Beth Griffin, who leads Mastercard’s Healthcare Vertical for the Cyber & Intelligence team.

“Instead of paying out claims and then chasing fraudsters after the fact, it stops the fraud when it happens. It can prevent payers and patients from becoming victims.”
Tim McBride

That can increase efficiencies for insurance companies, reduce fraud and waste, and offer more clarity and peace of mind for the patients, Griffin says. “We want patients to have the confidence that the claims they’re seeing, via the explanation of benefits we all receive, are more accurate and more reliable.”

A recent pilot with a health care payment integrity vendor to flag suspicious data in recently submitted claims quickly identified 2,700 high-risk providers and $240 million in potential savings for the payer. Some of the schemes could have been detected through manual auditing, but AI can quickly teach itself to identify sophisticated schemes that even veteran investigators haven’t seen before.

Using historical data, the team built models to identify suspicious claim and provider patterns. One model uses the data to build a baseline for each provider — the volume of claims submitted, the average number of patients seen and the amounts billed, for example. Another model looks at specific claim details to understand whether claims are appropriate, given the diagnosis, procedure billing codes and medical history. A third looks at whether the providers adhere to coding standards, guidance and principles.

Taken together, these signals establish a risk score. The higher the score, the more likely it is to trigger additional action — from initiating an investigation to suspending future payments.
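The article doesn’t describe how these models work under the hood. As a rough illustration only, here is a minimal Python sketch of how a provider baseline, a claim-consistency check and a coding-standards check might be combined into a single risk score that triggers review above a threshold. Every field name, code table, weight and threshold below is a hypothetical assumption, not Mastercard’s actual implementation.

```python
# Illustrative sketch only: all data, weights and thresholds are assumptions.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Claim:
    provider_id: str
    billed_amount: float
    procedure_code: str
    diagnosis_code: str

# Hypothetical lookup: procedure codes considered consistent with a diagnosis code.
CONSISTENT_PROCEDURES = {
    "C50.9": {"96413", "19303", "10021"},  # breast cancer dx -> chemo, mastectomy, needle biopsy
}

def provider_baseline_score(claims: list[Claim], peer_avg_billed: float, peer_std_billed: float) -> float:
    """Model 1 (provider baseline): deviation of this provider's average billed
    amount from its peer group, expressed as a z-score."""
    if peer_std_billed == 0:
        return 0.0
    avg_billed = mean(c.billed_amount for c in claims)
    return abs(avg_billed - peer_avg_billed) / peer_std_billed

def claim_consistency_score(claims: list[Claim]) -> float:
    """Model 2 (claim detail): share of claims whose procedure code is not
    among those expected for the claim's diagnosis code."""
    flagged = sum(
        1 for c in claims
        if c.procedure_code not in CONSISTENT_PROCEDURES.get(c.diagnosis_code, {c.procedure_code})
    )
    return flagged / len(claims)

def coding_standards_score(claims: list[Claim], unbundle_pairs: set[frozenset[str]]) -> float:
    """Model 3 (coding standards): flag providers billing procedure pairs that
    standard coding rules say should be bundled into a single code."""
    codes = {c.procedure_code for c in claims}
    hits = sum(1 for pair in unbundle_pairs if pair <= codes)
    return min(1.0, hits / max(1, len(unbundle_pairs)))

def risk_score(baseline: float, consistency: float, coding: float) -> float:
    """Combine the three signals into one score; the weights are illustrative."""
    return 0.4 * min(baseline / 3.0, 1.0) + 0.4 * consistency + 0.2 * coding

# Toy example: an expensive molecular test billed for a patient whose diagnosis
# would normally call for other procedures first.
claims = [Claim("prov-17", 9800.0, "81479", "C50.9")]
score = risk_score(
    provider_baseline_score(claims, peer_avg_billed=2400.0, peer_std_billed=600.0),
    claim_consistency_score(claims),
    coding_standards_score(claims, unbundle_pairs={frozenset({"58150", "58720"})}),
)
if score > 0.7:  # assumed threshold for routing to a human investigator
    print(f"Risk score {score:.2f}: route to investigator")
```

In practice the scoring would be driven by trained machine learning models rather than fixed rules and weights, but the overall flow is the same: several independent signals are normalized and combined, and only scores above a threshold are escalated to investigators.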

“Ultimately, it’s a tool that investigators can use that will help them make decisions around whether there is a case to open or not,” McBride says. “It’s never going to replace people, but it will make their jobs easier by providing that level of confidence to investigate claims — and can help them identify fraud in almost real time. Instead of paying out claims and then chasing fraudsters after the fact, it stops the fraud when it happens. It can prevent payers and patients from becoming victims.”

Vicki Hyman, director, communications, Mastercard