Fintech AI Review #11
En route to Fintech Meetup, will AI help fintech get its groove back, and plenty of fraud prevention use cases
Hello from the friendly skies, fellow fintech and AI enthusiasts! I’m writing from my flight to Las Vegas for Fintech Meetup and am excited to connect with and learn from colleagues old and new. It’s my first time attending this particular conference, and based on speaking to many of last year’s attendees, I have high expectations. From the numerous conversations, threads, and messages I’ve exchanged preparing and scheduling meetings, there’s a lot of enthusiasm for the next few days and probably more optimism than when many of us last convened in the same place several months ago for Money2020.
Reasons for enthusiasm include the hope that advances in the application of AI will provide a boost to what has been a tough fintech market over the past 18 months. Not too long ago, there were plenty of fintech companies talking about AI or claiming to use it, but in superficial, pie-in-the-sky ways with few real proof points. Today, while we’re still extremely early, there has been enough practical, real-world usage to facilitate better-informed conversations and more realistic assessment of opportunities and challenges.
There are, of course, many open questions! Among these:
Will AI be dominated by a few giant foundation models that acquire such incredible general intelligence that they can be applied to nearly every problem but require immense computing resources to train and run, or will there be numerous small models, tailored to narrower areas of intelligence and specific use cases? Or both? Recent advances help bolster the small-model case, particularly Mistral’s impressive benchmarks with its 7B and 8x7B models and Predibase’s incredibly cool "LoRA Land" library of fine-tuned small models that outperform GPT-4 on particular tasks and can be run on a single GPU.
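For intuition on why fine-tuned small models like those in LoRA Land can be served so cheaply, consider the parameter arithmetic behind LoRA (low-rank adaptation), the fine-tuning technique Predibase builds on. The sketch below is a back-of-the-envelope calculation, not a description of Predibase's actual stack; the layer dimensions and rank are illustrative assumptions:

```python
def lora_param_counts(d_in: int, d_out: int, rank: int) -> tuple[int, int]:
    """Compare a full weight update to a rank-r LoRA update for one layer.

    Full fine-tuning touches every entry of the d_out x d_in weight matrix;
    LoRA instead trains two thin matrices, A (rank x d_in) and B (d_out x rank),
    whose product approximates the update. Only A and B need to be stored and
    swapped in at serving time.
    """
    full = d_out * d_in
    lora = rank * d_in + d_out * rank
    return full, lora

# Illustrative numbers: a 4096x4096 projection layer with rank-8 adapters.
full, lora = lora_param_counts(4096, 4096, rank=8)
print(full, lora)  # the adapter is 256x smaller for these dimensions
```

Because each adapter is a tiny fraction of the base model, many task-specific fine-tunes can share one base model on a single GPU, which is what makes the "many small specialized models" scenario economically plausible.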
Will proprietary AI models continue to be more prevalent in the ecosystem, or will open source AI eventually take over? How will the open/closed dynamic compare to that of previous tech innovations, including PCs, smartphones, and the internet? This is a heated debate unlikely to be resolved any time soon.
Will the best models be trained and served on expensive hardware from a small number of public cloud providers, or will more tasks (particularly inference) be pushed to the edge/client? Does it make sense for large financial services companies to run their own AI hardware?
Will AI be a boon to incumbents who already have the capital and data to build the best AI-powered products and the distribution to sell them, or will it be rocket fuel for new disruptors who build companies in an AI-first way, from scratch?
If a company uses AI in its product, is it best served putting that AI usage out front in a public way, or is it better to focus on customer value regardless of the underlying technology? In a recent 30 Minutes to President's Club podcast, Nick Casale of Handoffs argued that the term “AI” should never be in your sales pitch.
The preceding list applies to the entire AI field but is particularly interesting to consider in the realm of financial technology, given its intersection with policy, privacy, and so many aspects of our economic lives, personal and professional.
Today’s newsletter covers a variety of developments and perspectives on fraud, which AI can both enable and protect against. It also covers the theme of idea recycling in fintech and asks whether new technologies will allow previously unsuccessful ideas to finally thrive. In addition, we consider the ability of AI solutions to streamline small-business credit application workflows and increase access to capital.
If you’re at Fintech Meetup and would like to chat, please let me know. Looking forward to the next issue soon, which will cover highlights and learnings from the conference and much, much more!
As always, please share your thoughts, ideas, comments, and any interesting content. If you like this newsletter, please consider sharing it with your friends and colleagues. Happy reading!
Recent News & Commentary
Machine learning or Generative AI: What's better for Fraud Prevention? - Sardine
Given how clearly useful generative AI tools are for committing fraud, it follows that there would be great interest in using these technologies to prevent fraud. In this well-thought-out blog post on the Sardine website, CEO Soups Ranjan explains the differences between ‘traditional ML’ and generative AI, why ML solutions are often the superior approach, and where generative solutions do have potential in the fraud prevention world. Given the need for accuracy, the importance of domain-specific feature engineering, and ML's deployment and cost advantages, traditional ML currently beats generative AI at fraud detection. Soups highlights gradient-boosted trees for prediction on structured data and clustering algorithms for anomaly detection and fraud ring identification. Sometimes, XGBoost is all you need. In the post's view, the best use for generative AI in fraud today is as a co-pilot for compliance operations: generative tools can perform case reviews, draft SAR filings, or evaluate the usage and impact of a particular rule set. He also imagines more ambitious uses for genAI in fraud detection, but they would require a plentiful supply of labeled training data that does not yet exist in sufficient quantity.
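To make the tree-ensemble idea concrete, here is a from-scratch sketch of least-squares gradient boosting on decision stumps, the core mechanism behind libraries like XGBoost: each round fits a weak tree to the residuals of the running prediction. The transaction features, thresholds, and labels are invented for illustration; Sardine's actual features and models are not public.

```python
def stump_predict(stump, x):
    """A stump is (feature_index, threshold, left_value, right_value)."""
    feat, thresh, left, right = stump
    return left if x[feat] <= thresh else right

def fit_stump(X, residuals):
    """Pick the (feature, threshold) split minimizing squared error on residuals."""
    best, best_err = None, float("inf")
    for feat in range(len(X[0])):
        for thresh in sorted({x[feat] for x in X}):
            left = [r for x, r in zip(X, residuals) if x[feat] <= thresh]
            right = [r for x, r in zip(X, residuals) if x[feat] > thresh]
            if not left or not right:
                continue
            lmean, rmean = sum(left) / len(left), sum(right) / len(right)
            err = (sum((r - lmean) ** 2 for r in left)
                   + sum((r - rmean) ** 2 for r in right))
            if err < best_err:
                best_err, best = err, (feat, thresh, lmean, rmean)
    return best

def fit_boosted(X, y, n_rounds=20, lr=0.5):
    """Repeatedly fit stumps to the residuals of the ensemble so far."""
    preds, stumps = [0.0] * len(X), []
    for _ in range(n_rounds):
        residuals = [yi - p for yi, p in zip(y, preds)]
        stump = fit_stump(X, residuals)
        if stump is None:
            break
        stumps.append(stump)
        preds = [p + lr * stump_predict(stump, x) for p, x in zip(preds, X)]
    return stumps

def score(stumps, x, lr=0.5):
    """Higher score = more fraud-like, on the toy data below."""
    return sum(lr * stump_predict(s, x) for s in stumps)

# Made-up features: [amount_usd, new_device (0/1)]; label 1 = fraud.
X = [[20, 0], [35, 0], [900, 1], [1200, 1], [40, 1], [1500, 0]]
y = [0, 0, 1, 1, 0, 1]
model = fit_boosted(X, y)
assert score(model, [1100, 1]) > score(model, [25, 0])
```

Production systems differ enormously (regularization, histogram splits, thousands of engineered features), but the residual-fitting loop above is the essence of why boosted trees excel on the structured data that fraud signals tend to be.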
Signals: Why is there so much idea recycling in fintech? - This Week in Fintech
Fintech aficionado Nik Milanović wrote an excellent piece on why we keep seeing the same ideas over and over again. His examples (PFMs, autopilot for your money, etc.) ring true, and the central theme is that while many of these products are quite good and seemingly useful, they haven’t seen success because many builders haven’t studied market history well enough to understand why previous attempts didn’t work. As consumer-facing applications have struggled time after time to find enduring success, many of the real breakout winners have been B2B fintech infrastructure companies (e.g. Stripe, Plaid). However, even the potential of great infrastructure is fundamentally limited by the underlying health of end-user-facing products: selling shovels to gold miners only works as long as some are striking gold! To avoid endlessly recycling ideas that don’t work, it’s critical for founders and builders to understand previous attempts, what went wrong before, and why “now is different” or “we are different”. Nik’s post is excellent, and you should read it. It will be really interesting to see whether AI capabilities help answer the “why now” question by making possible products that solve real problems in ways that weren’t previously viable.
FCC Makes AI-Generated Voices in Robocalls Illegal - FCC
It’s trivial to imagine how useful AI-based voice generation would be to those wishing to perpetrate robocall scams. In this recent ruling, the Federal Communications Commission affirmed that AI-generated voices would be considered “artificial” under the TCPA (Telephone Consumer Protection Act). Of course, running a fraudulent phone scam to manipulate and steal money from vulnerable people is already a crime, but the escalation in the quality of AI-generated voices and the ease of using them for impersonation led the FCC to adopt a more targeted approach. Automated dialing systems are not only used by scammers. In fact, plenty of financial services firms utilize automated calling in their customer service or sales operations, activity that is governed by TCPA rules. These rules specifically prevent the use of an artificial or prerecorded voice in an automated call without a consumer’s consent. With this latest ruling, AI-generated voices fall under this category even if any difference from a human voice is imperceptible. There may be situations, however, in which an AI-generated voice is acceptable to a consumer, even preferable. In such cases, a consumer could opt in. What comes to mind are cases where someone is truly reluctant to speak to a person, for instance if they feel embarrassment around financial hardship. If such an individual could have a conversation with a well-informed and helpful non-human AI agent, would they be willing, or would they anthropomorphize the bot thoroughly enough that the discomfort would remain? My guess is that the technology, thinking, customer experience, and regulation in this area will continue to evolve.
11 Frauds and Scams That Will Be SuperCharged With AI - Frank on Fraud
If you’re in the fraud prevention business and you haven’t read Frank McKenna’s prolific writings at Frank on Fraud, you should really check it out. Side note: if you work in financial services, you are in the business of money, which bad guys would like to steal, and you therefore have a fraud problem, whether or not you have the data to identify and measure it yet. In this post, Frank created an infographic/poster of 11 fraud techniques that are likely to be further enabled by advances in AI. A few standouts: extortion scams where fraudsters generate explicit images of a target and threaten to release them if demands are not met; ransom scams where fraudsters use AI to clone the voice of a purportedly kidnapped family member and demand payment; and AI-constructed fake retail sites that appear real and are used to harvest personal information and credit card details. The full list is thought-provoking and merits preparation and planning if you’re potentially vulnerable. Not a bad thing to print and put on the wall for your fraud team!
Bankwell Bank pilots generative AI in small-business lending - American Banker
So many small businesses rely on financing to bridge the gap between expenses today and revenue tomorrow. Yet applying for credit, particularly at a bank, can be an onerous, time-consuming, and confusing process. In fact, in a recent survey, 67% of SMBs opting for non-bank financing cited banks’ difficult application processes and lengthy decision timelines as reasons. Bankwell, a $3 billion-asset bank in Connecticut, is exploring the potential for generative AI to streamline the operational process around SMB credit applications and deliver a better customer experience. The pilot uses Cascading.ai’s AI-driven loan origination system to automate key components of the approval workflow and to engage with applicants 24/7, reducing communication gaps and apparently increasing completion and funding rates. I’m interested to see the progress here, as there aren’t many great systems available in small business lending. If Cascading and Bankwell can demonstrate the abilities and impact of an AI-first LOS, it could go a long way toward streamlining access to capital for small business owners.
Deepfake face swap attacks on ID verification systems up 704% in 2023 - CyberSecurity Magazine
In unsurprising news, deepfakes make it a lot easier to commit fraud, and a proliferation of tools - many of which were designed for perfectly legitimate purposes - are being used for unsavory ends. Whereas live video was previously an effective way to prove true identity, fraudsters can now use ‘face swap’ apps to create realistic, full-motion emulations of an individual, often bypassing both human and algorithmic detectors. As with any fraud problem exacerbated by AI, the creative use of this technology is critical to its solution. Building identity verification systems that are resilient to the use of generative impersonation capabilities is a very hard problem and a multi-billion dollar opportunity.