Could AI in Fintech Turn into a Race between Good and Bad Actors?
Fintech, the intersection of finance and technology, has been dramatically influenced by AI advancements. The technology’s impact will grow even more pronounced in the coming years. From enhancing customer experiences to revolutionizing risk management, AI is poised to reshape the fintech landscape.
However, investors hoping to take advantage of AI-led growth in the industry should consider both sides of the coin. This article looks at two areas of fintech believed to benefit from AI: fraud detection and risk management. While AI can have a positive effect, it also introduces complexities and vulnerabilities to consider.
Fraud Perpetration vs. Detection
One of the most obvious uses of AI in the financial services arena is detecting fraudulent activities. Predictably, fintech companies have used AI algorithms and machine learning techniques to analyze large volumes of data and detect patterns that indicate fraudulent behavior.
This has significantly improved the accuracy and speed of fraud detection, which once relied on manual processes and human intervention. Traditionally, these processes also worked only with historical data, whereas AI can analyze large volumes of real-time data to detect potential risks and anomalies.
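To make the idea concrete, here is a minimal, deliberately simplified sketch of anomaly-based transaction screening. The numbers are invented and the single z-score feature is an illustration only; production systems use learned models over many features.

```python
# A stdlib-only sketch of anomaly-based fraud screening: score each
# transaction by how far its amount deviates from the account's own
# history (a z-score), and flag large deviations for human review.
# The history values below are synthetic, illustrative amounts.
from statistics import mean, stdev

history = [42.0, 58.5, 61.0, 47.25, 55.0, 60.0, 49.99, 53.5]  # past spend
mu, sigma = mean(history), stdev(history)

def anomaly_score(amount: float) -> float:
    """Standard deviations away from this account's typical spend."""
    return abs(amount - mu) / sigma

def is_suspicious(amount: float, threshold: float = 3.0) -> bool:
    """Flag transactions far outside the account's normal range."""
    return anomaly_score(amount) > threshold

print(is_suspicious(54.0))    # a typical purchase
print(is_suspicious(4999.0))  # far outside this account's history
```

Real systems replace the single hand-written score with machine-learned models over hundreds of behavioral features, but the core principle, flagging statistical outliers in real time for review, is the same.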
However, while many hoped AI would decisively thwart fraudsters, the reality is that criminals use AI tools, too. Payments security company Eftsure projects that card payment fraud will reach $49 billion by 2030, and it offers some frightening statistics. For example, 71 percent of organizations have fallen victim to business email compromise (BEC) scams, one of the most common causes of payment fraud. Counterfeit cards account for over a third of all credit card fraud in the U.S.
Despite this, 40 percent of cardholders elect not to enable email or text security alerts from their card providers. Almost 50 percent of American cardholders have been victims of fraudulent card charges, with more than a third experiencing multiple incidents. Just over 25 percent of victims lose more than $1,000, while 7 percent lose over $10,000.
So, while AI algorithms continuously learn from new fraud patterns, it is currently a cat-and-mouse game, and fraud remains an ongoing concern for financial institutions and their customers. As a result, several industry consortia have recently been formed to tackle payments-related fraud. They include an initiative by Mastercard with UK banks Lloyds, Halifax, and Monzo, among others, while the Plaid-led Beacon network counts Credit Genie, Tally, and Veridian Credit Union among its founding members. Whether fintech companies can stay ahead of fraudsters to safeguard their customers' assets and maintain trust in their services remains to be seen.
Risk Management vs. Algorithmic Bias
Risk management is a critical responsibility of every financial institution, and AI is becoming an increasingly important tool here, too. AI algorithms can identify patterns and correlations in data that human analysts may miss. By analyzing historical data and detecting hidden trends, AI-powered risk management systems can provide valuable insights into potential risks and help financial institutions make informed decisions.
Such data could include real-time information about market trends, customer behavior, and financial risks. The technology can be used to better manage risk through improved portfolio management, optimized risk management strategies, and new investment opportunities. Ultimately, AI-assisted decision-making should lead to improved profitability and lower the risk of loss.
However, once again, there is a caveat. Because AI depends on data, if that data is flawed or biased, the results will be too. Experts caution that models trained on historical data could perpetuate existing social biases unless conscious efforts are made to counter them. A well-known example is the IBM and Microsoft facial recognition algorithms, which were found to be better at recognizing white people and men because the data they were trained on contained more examples of both categories.
In the fintech space, a similar bias was uncovered with Apple's credit card launch in 2019. Women were routinely issued lower credit limits than men even though gender was excluded from the algorithm. The results initially confounded even Apple, but research has shown that AI will infer characteristics such as gender and race from proxy data points: where someone shops or lives, what products they purchase, and other information can all be used to infer race, gender, religion, and more.
The impact extends beyond finance to education, justice, and health services. Fintechs must balance the use of AI algorithms against potentially growing customer suspicion of "black box" decision-making, and they will need to incorporate best practices for avoiding algorithmic bias, such as those released by the Brookings Institution.
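The proxy-variable effect described above can be shown with a toy example. Everything here is synthetic and hypothetical: a fictional "store category" feature happens to correlate with gender in the sample, so a rule that never sees gender still produces gender-skewed outcomes.

```python
# A toy illustration (synthetic data, hypothetical features) of proxy
# bias: gender is excluded from the decision rule, yet a correlated
# feature -- a fictional "store category" code -- lets average outcomes
# differ by gender anyway.

# (store_category, gender) pairs; in this synthetic sample, category 0
# is shopped at mostly by men and category 1 mostly by women.
applicants = ([(0, "M")] * 40 + [(1, "M")] * 10 +
              [(0, "F")] * 10 + [(1, "F")] * 40)

def credit_limit(store_category: int) -> int:
    """A 'gender-blind' rule that only looks at the proxy feature."""
    return 10_000 if store_category == 0 else 6_000

def average_limit(gender: str) -> float:
    """Average limit actually received by each gender in the sample."""
    limits = [credit_limit(cat) for cat, g in applicants if g == gender]
    return sum(limits) / len(limits)

print(average_limit("M"))  # higher on average
print(average_limit("F"))  # lower, despite gender never entering the rule
```

Auditing for this kind of disparity requires checking outcomes against protected attributes, which is exactly why excluding those attributes from the model is not, by itself, a defense.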
As AI evolves, its impact on the fintech industry will remain profound. These advancements will enable fintech companies to remain competitive, serve their customers, and drive industry innovation. However, AI solutions must be implemented with full awareness of the potential weaknesses.