Banking technology trends: The use of AI in Canadian financial institutions

Self-described "propeller head" Phill Moran has a lot to say about artificial intelligence in financial services. It's no surprise, given that Phill and Chamin Bellana started Lexington Innovations more than eight years ago to help banks, credit unions and other financial services providers use technology to optimize processes, mitigate risk and, ultimately, save money.

BLG's Cindy Zhang sat down with Phill for an AI tell-all, exploring the pros and cons of one of the hottest banking technology trends. Together they cover the differences between rules-based technology and true AI, show how one bank used AI to track and predict money laundering activity, discuss whether there's a role for large language models like ChatGPT in financial services and more.

Cindy: What is the current state of AI adoption in the financial services industry?

Phill: Right now, AI in financial services is all proofs of concept. All the big banks are running an anti-money laundering (AML) and fraud program on a narrowly defined AI engine. But because AI tools are mostly focused on reducing losses rather than increasing profits, spending on these tools falls lower on the priority list for financial services companies. That said, people are interested in what’s hot and there’s a lot of talk about AI right now. I see great potential for AI in call centres, fraud detection, AML, broker management and social media screening for financial institutions. Social media screening, for example, would suit credit unions, because credit unions monitor their members to make sure they’re worthy of membership. But AI is surprisingly expensive.

Cindy: What adds to the cost of AI tools?

Phill: It’s all about the data model. AI is not like an Excel spreadsheet. The AI model that you would use for fraud, for instance, is different from the model for AML because it’s different data. Getting good results takes good quality data, time and potentially multiple models. You have to break the engine down and rebuild it until the model starts showing some cohesiveness. Once you’ve got that, the feedback loop starts and the model can start learning. This can be time-consuming and expensive.

Cindy: How are financial services firms already using AI?

Phill: Many financial institutions think they’re using AI now but they really aren’t.

There are two technology models: rules-based and machine learning. Let’s take fraud as an example. With a rules-based model, suspicious transactions — those over a certain dollar value, for example — are reported on a giant Excel spreadsheet and given a risk score, and somebody must manually review them and determine whether they’re worth any follow-up. Employees are supposed to look at north of 90 per cent of transactions, but that’s not sustainable and it doesn’t happen.
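
To make the contrast concrete, here is a minimal sketch of that rules-based screening. Everything in it (the threshold, the country list, the sample transactions) is hypothetical, but it shows why the output is only as good as the rules someone wrote.

```python
# A minimal sketch of rules-based transaction screening.
# Thresholds, fields and sample data are hypothetical.
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    country: str

HIGH_RISK_COUNTRIES = {"XX", "YY"}  # placeholder jurisdiction codes

def risk_score(txn: Transaction) -> int:
    # Fixed rules: the same inputs always produce the same score,
    # and nothing here learns from reviewer decisions.
    score = 0
    if txn.amount > 10_000:                 # hypothetical dollar-value rule
        score += 2
    if txn.country in HIGH_RISK_COUNTRIES:  # hypothetical geography rule
        score += 3
    return score

transactions = [
    Transaction(12_500, "CA"),
    Transaction(900, "XX"),
    Transaction(50, "CA"),
]

# Everything over the cut-off lands on the spreadsheet for manual review.
flagged = [t for t in transactions if risk_score(t) >= 2]
print(flagged)
```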

AI isn’t rules-based; it’s machine learning. The AI engine will go over the data and identify transactions that sit outside the normal curve. Instead of 90 per cent false positives, you’ll get down to 60 per cent or less. And unlike rules-based models, an AI engine learns over time from the data we feed it and can tell you why certain activity is suspicious.
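
Here is a hedged sketch of that machine-learning alternative. It uses an unsupervised anomaly detector, scikit-learn’s IsolationForest (one common choice, not necessarily what any given bank uses), which learns what normal looks like from the data itself and flags the rows that sit outside that curve. The features and data are stand-ins.

```python
# A minimal sketch of anomaly detection on transaction data.
# Features and numbers are hypothetical stand-ins.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Columns: transaction amount, number of countries touched
normal = rng.normal(loc=[200, 1], scale=[50, 0.3], size=(1000, 2))
odd = np.array([[9000, 6], [150, 8]])   # rows outside the normal curve
X = np.vstack([normal, odd])

model = IsolationForest(contamination=0.01, random_state=0).fit(X)
scores = model.decision_function(X)     # lower = more anomalous
flags = model.predict(X)                # -1 marks an outlier

print(X[flags == -1])                   # the engine surfaces the odd rows
```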

At this time, financial institutions seem to lean towards using the rules-based model. That’s because users are unsure what the output means if they can't prove via a rule where it came from. Their technology providers have said the rules engines are now artificial intelligence, but that’s not actually the case.

Cindy: Can you give a hypothetical example to show us how AI works for AML compliance?

Phill: AI can be especially beneficial for AML because it highlights anything outside the norm. For example, this person has a higher transactional volume than most people. Or this person’s transactions go through more countries. Or their transactions are all in countries where it’s easier to launder money.

The AI engine flags all of those, the user looks at them and determines whether they’re fraudulent, and over time the engine gets smarter — without developers being involved. Eventually the engine will know, for example, that this person has family overseas, or this person just started a business, which is why their transactions appeared suspicious. Those people won’t show up again because you told the engine they’re not relevant. And the model improves.
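
A minimal sketch of that feedback loop, with hypothetical features and labels: analyst decisions on previously flagged cases train a second-stage classifier, so explained patterns (family overseas, a new business) stop resurfacing.

```python
# A sketch of the analyst feedback loop; data and features are invented.
from sklearn.linear_model import LogisticRegression

# Features of previously flagged cases:
# [volume z-score, distinct countries, account age in years]
reviewed = [
    ([4.1, 5, 2.0], 1),   # analyst confirmed suspicious
    ([3.8, 4, 0.1], 0),   # new business: high volume explained
    ([2.9, 6, 8.0], 0),   # family overseas: many countries explained
    ([5.2, 7, 1.5], 1),   # analyst confirmed suspicious
]
X = [features for features, _ in reviewed]
y = [label for _, label in reviewed]

# Retraining on analyst labels is how the engine "gets smarter"
# without developers being involved.
clf = LogisticRegression().fit(X, y)

# A new flag resembling an explained pattern is ranked down.
print(clf.predict_proba([[3.0, 6, 7.5]])[0, 1])
```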

Cindy: Is there a role for a large language model like ChatGPT in financial services?

Phill: Large language models can have useful applications for customer service, but there’s work to be done to avoid introducing bias. For example, you could use a large language model in a call centre to take a customer all the way from a general inquiry to approval for a second mortgage without having a human involved on the bank’s side — all through chat. Let’s say the AI engine behind the call centre chat knows you make $2 million a year and have $5 million in the bank and you live in an expensive neighbourhood. You’re approved. But if your home address was different, you might not be approved. This is where the issue of bias comes in. A large language model will always come back with a result, but it won’t tell you its level of confidence in the answer or how it got there. The AI models used today in money laundering or fraud detection, which aren’t large language models, are based on numbers, not words. Unlike large language models, they are able to tell you why they give the answers they do.

Cindy: So there’s a risk of bias and errors with large language models. How about the human resources risk — replacing people with AI?

Phill: I don’t see AI replacing workers, just completing tasks more efficiently. Here’s what’s happening with AML today. It’s 5 o’clock in the afternoon and a bank employee receives a 10,000-line Excel spreadsheet to review and flag potentially fraudulent transactions. Nobody wants that in their job. AI won’t replace that employee’s job but will perform tasks the employee didn’t have time for or didn’t want to do. With the call centre example, customers can complete much of a bank’s process on the web already. AI helps with the little bit the teller or loan officer has to do themselves — the stuff that doesn’t make their job more fulfilling.

Cindy: Can you think of any examples where a bank used AI to flag internal misconduct?

Phill: This is a good one. We ran three years’ worth of a bank’s broker data through an AI model to find the outliers. Most of the data was encrypted, but they left in things like branch numbers, regions and broker titles. In the results, we found that most of the suspicious transactions were coming from one branch. The bank knew that specific branch was troublesome, but they could never identify why.

The data showed the employees with the highest risk had all worked for the same person at that branch. That person had moved to another branch, and you could see the next wave of aberrant behaviour from the managers working under that same person. AI was able to take large sets of data and spot trends that wouldn’t have been apparent to an individual reviewer.
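
A rough illustration of that analysis, with made-up data: once the model has scored each broker, aggregating the scores by the unencrypted fields (branch and manager) is enough to make the cluster visible, including the pattern following a manager who moves.

```python
# A sketch of aggregating per-broker anomaly scores; values are invented.
from collections import defaultdict

# (broker_id, branch, manager, anomaly_score) from the model's output
results = [
    ("b01", "101", "mgrA", 0.91),
    ("b02", "101", "mgrA", 0.87),
    ("b03", "204", "mgrB", 0.12),
    ("b04", "204", "mgrB", 0.09),
    ("b05", "307", "mgrA", 0.84),  # manager moved; the pattern follows them
]

by_manager = defaultdict(list)
for _, _, manager, score in results:
    by_manager[manager].append(score)

# Average risk per manager makes the cluster obvious at a glance.
for manager, scores in sorted(by_manager.items()):
    print(manager, sum(scores) / len(scores))
```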

Cindy: Can AI have a role in fraud prevention?

Phill: Absolutely. In the broker management example I just gave, a great revelation for the bank was that the AI model can learn from the data of people we know laundered money. What did their transactions look like? The model can use these patterns to predict who may launder money in the future. Now the bank can proactively address areas, conduct or people of concern, potentially stopping the money laundering before it starts.
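
A hedged sketch of that predictive step, with invented numbers: a supervised classifier trained on the transaction patterns of known launderers can score current customers by how closely they resemble those patterns.

```python
# A sketch of learning from known launderers; features and data are invented.
from sklearn.ensemble import RandomForestClassifier

# Features: [monthly volume (k$), distinct countries, cash ratio]
# Label: 1 = person known to have laundered money
X = [[320, 6, 0.90],
     [280, 5, 0.80],
     [ 40, 1, 0.10],
     [ 55, 2, 0.20],
     [300, 7, 0.85]]
y = [1, 1, 0, 0, 1]

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Score a current customer whose pattern resembles past launderers;
# a higher probability flags them for proactive review.
print(model.predict_proba([[290, 6, 0.75]])[0, 1])
```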

Cindy: How soon do you see AI becoming part of the day-to-day in financial services?

Phill: I think it will happen over the next couple of years. Canadian regulators have said banks need to be moving in this direction. And it’s cool tech. ChatGPT has helped because people in financial services are using it and becoming comfortable with it. It won’t be long before they see the power of AI for fraud. They’ll continue to use the rules-based models but soon they’ll see the AI engines are pinpointing issues much faster and they’ll stop looking at the spreadsheets.

Cindy: You mentioned these tools are expensive. So, where should a financial services organization spend their money on AI?

Phill: They should really start with fraud. It’s a problem they need to solve. It will take the spreadsheets away from their employees. And eventually, they’ll be able to use the predictive nature of AI to work with HR to make sure their people don’t engage in fraud in the first place.


If you have your own questions about banking technology trends or are curious about how to prepare properly for the legal implications of AI, please reach out to Cindy Zhang.

By: Cindy Y. Zhang
