
Utah bank uses gen AI to monitor emerging issues at fintech partners

An AI photo of a fraud detection robot reviewing loan applications, created by DALL·E 2. AI is becoming increasingly adept at both photo creation and fraud checks.

Carter Pape via DALL·E 2

First Electronic Bank uses Spring Labs’ generative AI technology to analyze customer communications from its fintech partners and identify problems before they arise.

The Salt Lake City-based, online-only, $429 million-asset organization has several large, national fintech partners with millions of customers. Like all banks, it is under pressure from regulators to make sure its fintech partners are not running afoul of any laws and are keeping their customers happy. In the past year, several banks, including Green Dot Bank, Cross River Bank and Evolve Bank & Trust, have received consent orders from regulators admonishing them for compliance deficiencies in their fintech partnerships.

“We need to solve problems faster when they happen so we can deal with them,” the bank’s CEO, Derek Higginbotham, said in an interview. “If we don’t, they will come out, they will grow, and then they will explode in a way that’s worse for everyone.”

The bank deployed Spring Labs’ Zanko ComplianceAssist to detect signals that something was amiss in customer communications.

“Understanding customer complaints received by fintech partners is a big issue for sponsor banks and their fintechs today, as regulators look at whether the sponsor bank has implemented effective enough controls over its fintech to make this (banking as a service) model work,” John Sun, CEO and co-founder of Spring Labs, said in an interview. “Oftentimes, customer engagement is the first window into what exactly is happening between the customer and the fintech, and sponsor banks obviously want to get an accurate view of that.”

First Electronic Bank’s fintech partners collect all customer communications—transcripts of phone calls, emails, texts, and other messages—from their customer relationship management and case management software and turn them into data files that they share with the bank through application programming interfaces and file transfers. This data is then fed into Spring Labs’ generative AI model.
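A pipeline like the one described above depends on partners exporting communications in a common shape. The sketch below shows one plausible way to normalize CRM exports into records for file transfer; the field names and the JSON Lines format are illustrative assumptions, not First Electronic Bank's or Spring Labs' actual schema.

```python
# Hypothetical normalization of a fintech partner's customer
# communications into a shared record format for transfer to the bank.
from dataclasses import dataclass, asdict
import json

@dataclass
class CustomerMessage:
    partner: str    # which fintech program the message came from
    channel: str    # "call_transcript", "email", "sms", ...
    timestamp: str  # ISO 8601
    body: str       # raw text of the communication

def to_jsonl(messages):
    """Serialize a batch of messages as JSON Lines for file transfer."""
    return "\n".join(json.dumps(asdict(m)) for m in messages)

batch = [
    CustomerMessage("lender_a", "email", "2024-05-01T12:00:00Z",
                    "I was charged a fee I don't understand."),
]
payload = to_jsonl(batch)
```

One record per line keeps the transfer format append-friendly for both APIs and bulk file drops.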

“It’s hard for a human to know everything that’s out there,” Higginbotham said. “We had to figure out how to synthesize the information so that the human agent could be smarter.”

Spring Labs’ software first sorts complaints into categories for First Electronic Bank’s human reviewers.

It’s the kind of work that sounds simple, but when it’s done by multiple customer service representatives at different companies, “each one of them is going to interpret things a little differently,” Higginbotham said. Having humans do all the complaint tagging has forced the bank to limit the number of categories to a few dozen.

“This is a problem that most people don’t really think about — we all use categorized data without thinking about it,” Higginbotham said. “When you actually have to be the custodian of that data and create the labels, it’s really hard to get depth and consistency.”

He said AI can categorize with a much deeper level of accuracy. The bank gives the system specific labels to use, but also allows it to identify trends and create its own labels.
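The combination described here, a fixed label set plus room for the model to propose new labels, can be sketched as follows. The label names, prompt wording and `parse_label` helper are illustrative assumptions, not Spring Labs' actual interface, and no real model is called.

```python
# Hypothetical LLM-assisted complaint tagging: the model picks from a
# known label set or proposes a new label when none fits.
KNOWN_LABELS = ["fee_dispute", "unauthorized_charge", "account_access"]

def build_prompt(complaint: str) -> str:
    """Assemble the classification prompt sent to the language model."""
    return (
        "Tag the complaint below with one label from this list, or "
        "propose a new snake_case label if none fits: "
        + ", ".join(KNOWN_LABELS)
        + "\n\nComplaint: " + complaint
    )

def parse_label(model_output: str):
    """Return (label, is_emergent). A label outside the known set is
    treated as a newly proposed category for human review."""
    label = model_output.strip().lower()
    return label, label not in KNOWN_LABELS
```

Treating unknown labels as "emergent" rather than errors is what lets the system surface trends the bank never thought to enumerate.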

Large language models are better at labeling complaints than humans, Sun said.

In analyzing data from more than 100 fintechs, Sun said his team found that customer service representatives could detect complaints and associated regulatory risks with about 60% accuracy. “That’s a pretty low rate because it’s hard to train every frontline customer service representative to be a compliance expert,” he said.

He said compliance experts correctly identify compliance issues in customer complaints 70% to 80% of the time, while Spring Labs’ software is 90% to 95% accurate.

Once all complaints at First Electronic Bank are tagged, the AI model looks for patterns and trends.

“You can look at how things are changing temporally,” Higginbotham said. “You can see the relative values of how things behave if certain kinds of things are emerging.”
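The temporal pattern-spotting described above amounts to comparing each category's recent volume against its baseline. Below is a minimal sketch of that idea; the weekly bucketing and the 2x threshold are illustrative assumptions, not the actual model's logic.

```python
# Toy trend monitor over tagged complaints: flag categories whose
# latest weekly volume jumps well above their recent average.
from collections import Counter

def emerging_categories(weekly_counts, threshold=2.0):
    """weekly_counts: list of Counters, oldest week first. Returns
    categories whose latest count is >= threshold * prior average,
    plus any category appearing for the first time."""
    *history, latest = weekly_counts
    flagged = []
    for category, count in latest.items():
        baseline = sum(w.get(category, 0) for w in history) / max(len(history), 1)
        if baseline == 0 or count / baseline >= threshold:
            flagged.append(category)
    return flagged

weeks = [
    Counter({"fee_dispute": 5}),
    Counter({"fee_dispute": 6}),
    Counter({"fee_dispute": 12, "account_access": 1}),  # latest week
]
```

Here `fee_dispute` more than doubles its baseline and `account_access` appears for the first time, so both would be surfaced for review.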

The system also produces alerts and reports on specific insights from customer complaints.

In the future, the system could be used to direct the most important or time-sensitive complaints to specialists.

The system has not displaced any personnel so far, Higginbotham said.

“We have human representatives who look into complaints and know the partner programs really well,” he said. “So this gives them additional insight into what’s going on in the programs.”

When a problem is detected in the system, a human moderator logs it and ensures that the customer’s needs are met by one of First Electronic Bank’s customer service providers.

Higginbotham sees this technology deployment as an effort to protect consumers and address corporate risks.

“The biggest push for me will be to ensure that consumer protection regulations are met, including the principles regarding unfair, deceptive or abusive acts or practices,” Higginbotham said.

He said the bank chose Spring Labs because of its team’s depth of technology skills and consumer finance business knowledge.

Generative AI models are good at tasks that require strong language understanding but don’t require much logical inference, fact-finding or deep pattern recognition, Sun said.

“There’s a lot of language processing, a lot of reading complaints, reading news articles or reading regulations and trying to apply them to various aspects,” Sun said.

Spring Labs’ software, he said, uses small language models for certain processing steps and large language models for other processing and generative capabilities. It can work with any model, he said.

Older, keyword-based compliance systems are more likely to trigger false positives and false negatives, Sun said. Such systems can’t understand context or code words, for example.

“If someone uses the word ‘Asian’ in a completely innocuous context, that could be perceived as a potential fair credit violation,” Sun said, giving an example of a false positive.
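Sun's example can be reproduced with a few lines of naive keyword matching. The watch-list below is made up for demonstration and is not any vendor's actual rule set.

```python
# Minimal illustration of the false-positive problem with
# keyword-based compliance rules: matching ignores context entirely.
FAIR_LENDING_KEYWORDS = {"asian", "race", "nationality"}

def keyword_flag(text: str) -> bool:
    """Flag any message containing a watch-listed word, regardless of context."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & FAIR_LENDING_KEYWORDS)

# An innocuous mention still trips the rule, the kind of false
# positive a context-aware language model can avoid:
innocuous = "I ordered the Asian fusion meal kit with my rewards card."
```

A language model reading the full sentence can tell that no protected-class issue is being discussed, which is the contextual understanding keyword systems lack.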

In addition to categorizing and flagging customer complaints, Spring Labs’ system can also be used to assign workflows to customer complaints, and some customers are already using it that way, Sun said.

Some experts agree that this use case for generative AI makes sense.

“I think the concept and the approach is valid,” Marcia Tal, founder of PositivityTech, a firm that helps companies understand customer complaints, said in an interview. “All banks are trying to solidify their sophistication, accountability and responsibility in this (banking as a service) space.”

But she notes that a generative AI system should never be the sole monitor of customer interactions; people with domain expertise need to be involved. And customer privacy needs to be protected as data is transferred between parties and systems.

“The richness of these conversations that happen (in customer service interactions) is real,” Tal said. “People will sometimes tell you stories about what’s going on with them. Why would you want that to end up anywhere else? Or why wouldn’t an organization care about this data like they care about other data assets they have?”