Artificial Intelligence Is ‘Shining Star’ in Fight Against Healthcare Fraud 

The healthcare industry’s fraud, waste and abuse (FWA) problem has gone from bad to worse during the pandemic, but new signs suggest the insurers and providers at the center of this $300 billion annual problem are making fresh progress toward reining it in.

Market watchers say that by deploying cutting-edge digital technology already widely used in other industries, healthcare has begun to turn the tide, using artificial intelligence (AI) for FWA detection.

In fact, 100% of health insurers with more than $1 billion in revenue told PYMNTS they plan to invest in AI in the next one to three years, while 89% of those with revenues between $100 million and $1 billion plan to do the same. This is according to findings in AI In Focus, a collaborative study by PYMNTS and Brighterion, a Mastercard company.

“One thing that’s a shining star right now is what’s going on with artificial intelligence,” Beth Griffin, vice president, security innovation – healthcare vertical cyber and intelligence at Mastercard, told PYMNTS in a recent interview. “The use of artificial intelligence has definitely been able to break through some of the historical approaches that have been taken in healthcare to attack fraud, waste and abuse.”

Increasing Trust in AI

There’s increasing trust in AI at all levels, Griffin said. Consumers trust AI more and more because it works behind the scenes in their everyday lives when they use tools like Amazon or Google Maps. Payers, too, have come to trust AI over the last three to five years as they’ve begun to use it for FWA detection and have seen strong results.

“In adjacent industries, like Mastercard’s traditional payments business, we’ve been using artificial intelligence to detect fraud for the last 15, almost 20 years now,” Griffin said. “So, we’re able to take what we know, what our experience has been, and help translate that into the healthcare market as well.”

In a recent pilot with one of the largest payers in the country, Mastercard’s Brighterion looked at Medicare activity in one state. By running its AI against that state’s database, Brighterion identified almost $18 million in incremental savings beyond what the state had been able to achieve historically.

Making AI More Cost Effective

Despite this track record, cost is sometimes an inhibitor for payers that are interested in AI. While large payers tend to have more funding available, small- and mid-tier ones are more challenged.

As a result, AI companies are creating different payment models. The traditional methodology has been to charge on a per-member basis or a per-transaction basis. That’s still an option, and it can be tiered so that volume drives a more cost-effective rate.

But there are also value-based pricing structures that take the risk off the payer altogether: if no incremental savings are identified, the payer owes nothing. Otherwise, the payer shares only a portion of the incremental savings identified by the AI company.
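The two pricing models described above can be sketched in a few lines. This is a minimal illustration, assuming hypothetical tier thresholds, per-member rates, and a savings-share percentage; none of these figures are actual vendor pricing.

```python
def tiered_per_member_fee(members: int, tiers: list) -> float:
    """Traditional per-member pricing: the rate drops as volume grows.
    tiers: list of (max_members, rate_per_member), ascending by threshold."""
    for max_members, rate in tiers:
        if members <= max_members:
            return members * rate
    return members * tiers[-1][1]  # beyond the last threshold, top-tier rate

def value_based_fee(incremental_savings: float, share: float = 0.20) -> float:
    """Value-based pricing: the payer owes nothing unless the AI identifies
    incremental savings, then shares a portion with the vendor."""
    return incremental_savings * share

# Hypothetical tier schedule: (member-count ceiling, rate per member)
tiers = [(100_000, 0.50), (1_000_000, 0.35), (float("inf"), 0.25)]

print(tiered_per_member_fee(250_000, tiers))  # mid-tier payer pays the middle rate
print(value_based_fee(0))                     # no savings identified, no fee
print(value_based_fee(18_000_000))            # vendor's share of identified savings
```

The design difference is who carries the risk: under the tiered model the payer pays regardless of results, while under the value-based model the fee is zero unless savings materialize.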

“We’re also seeing that the small- and mid-tier are able to take advantage of AI because more and more of the payment integrity vendors that serve them are incorporating AI into their solution set,” Griffin said. “They’re able to make it more cost-effective because of the volume they can drive across the portfolio.”

Adding More Layers of Security

AI spots anomalies and trends among both providers and consumers. The majority of FWA activity still comes through providers, but AI creates more opportunity to identify FWA activity among consumers, too.

From a consumer perspective, Griffin suggested that healthcare companies think about two other layers of security.

First, implement more identity verification solutions when someone enrolls in the member application or platform, both at the health plan and at the health system. Then monitor ongoing activity to ensure that the person interacting with the application is the same person who was validated.

Second, healthcare companies should have the right kinds of cybersecurity solutions in place, so that patient data is protected on an ongoing basis. That should include not only the company’s own internal system but also any third-party systems that it is using.

Sometimes FWA activity is a result not of fraud but of inaccuracies. Claims may be inconsistent with the contract between the payer and the provider, or they may simply include errors. Artificial intelligence can identify these inaccuracies in real time and stop the transaction before payment is made.
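A prepayment inaccuracy check of the kind described above can be sketched as a simple contract-consistency rule. This is an illustrative example only; the procedure codes, contracted rates, and tolerance are hypothetical, and a production system would layer model-based scoring on top of rules like this.

```python
# Hypothetical contracted rates per procedure code (not real contract data)
CONTRACT_RATES = {"99213": 92.00, "99214": 131.00}
TOLERANCE = 0.01  # allow for rounding differences

def review_claim(claim: dict) -> tuple:
    """Run before payment: return (approve, reason) for a submitted claim."""
    rate = CONTRACT_RATES.get(claim["code"])
    if rate is None:
        return False, "code not covered by the payer-provider contract"
    if abs(claim["billed"] - rate) > TOLERANCE:
        # Inconsistent with the contracted rate: hold for review, not payment
        return False, f"billed {claim['billed']:.2f} != contracted {rate:.2f}"
    return True, "consistent with contract"

print(review_claim({"code": "99214", "billed": 131.00}))  # clean claim
print(review_claim({"code": "99214", "billed": 310.00}))  # likely keying error
```

Catching the mismatch before the transaction settles is what distinguishes prepayment review from the traditional pay-and-chase approach.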

Staying on Top of Fraud

“Of course, [payers] want to keep that strong relationship with their provider, and they can just educate them, they can train them,” Griffin said. “They’re doing a lot of that already, but leveraging AI rather than just rules-based approaches on the prepayment side is a huge benefit to the company.”

Beyond that, Griffin suggested that healthcare companies continue to collaborate with their AI vendors. Every healthcare company has its own nuances in how it processes claims, so they should share those with their vendors. Companies also can share other data, such as aggregated credit card data that’s not specific to individuals, so the AI vendor can see trends and activities that are happening around payment.

Griffin concluded, “I think those are things that should be considered, thought about and integrated to continue to stay very strategic and on top of that fraud that continues to try to perpetuate the market.”
