Fintech AI Review Volume #2
Perspectives on transformation, AI model fine-tuning, and implications for model governance in financial services.
Welcome to all the new subscribers this week! I’m grateful to be on this fintech AI journey with you and look forward to learning and engaging together!
It’s been another exciting week in the world of AI, with product announcements, compelling content from industry thought leaders, and thought-provoking research papers that inspire visions of AI’s potential in fintech. For me, reading the content linked below prompted two particular lines of thought.
First, with any technology, it’s useful to consider multiple phases of adoption and maturity. In one phase, it may allow us to do the things we do today, just better. In another, it may present opportunities that never could have been possible or even imaginable before. Generative AI, and specifically LLMs like ChatGPT, are so immediately useful even today that people and businesses are already integrating them into the first category at a blistering pace. In his excellent writing, Benedict Evans sometimes describes machine learning as “infinite interns” or “one intern with super-human speed and memory”. What can you do with this super-human intern? A lot, it turns out. This is also why it’s important for policymakers and those who influence them to focus on outcomes and use cases for AI rather than on the technology itself. There’s such a massive set of knowledge-work tasks where this type of technology can be useful that we may actually spend a long time in the first phase, especially in regulated markets like financial services.
It’s even more exciting to consider what happens when we use generative AI to achieve things in fintech that were completely impossible with earlier technology. This is many orders of magnitude more difficult, and even that is probably an understatement. However, I suspect that many of the innovations in this category will come from the realization that much of what we call ‘intelligence’ or ‘knowledge’ can be represented and interacted with as language, in ways that were hard to imagine before we all started spending so much time with ChatGPT.
Second, just as the concept of ‘language’ takes on more meaning, the concept of a ‘model’ used in financial services needs to evolve as well. For financial institutions, model governance is an incredibly important function. Any model used to make decisions or to manage a portfolio of assets needs to fit into a well-documented process and pass the scrutiny of many stakeholders, including regulators. Typically, model governance entails highly traceable data lineage, tightly specified conditions for model training and evaluation, and clear explainability. In the world of generative AI, however, most projects start with a giant pre-trained model from OpenAI, Meta, Google, Anthropic, etc. and then fine-tune it to align to a specific application. In this paradigm, it’s extremely hard for a model governance process to understand all the details of large-model pre-training. Perhaps some financial institutions will only use models that they can train in-house completely from scratch, even at the cost of performance. Alternatively, reviewers and regulators may eventually consider some of the foundation models to just be part of the stack - like an operating system or language runtime - and not really subject to observation (I think this is less likely). I may ask some of my friends in the model governance world about this and share the results in the next edition.
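To make the governance tension concrete, here is a minimal sketch of how a fine-tuning step could still produce an auditable provenance record even when the base model is opaque. The model and provider names, the record fields, and the `lineage_record` helper are all hypothetical illustrations, not a description of any real governance framework:

```python
import hashlib
import json

def lineage_record(base_model: str, base_provider: str,
                   tuning_examples: list) -> dict:
    """Build a minimal provenance record for a fine-tuned model.

    The base model's internals may be opaque to reviewers, but the
    fine-tuning inputs can still be pinned down exactly by hashing a
    canonical serialization of the training examples.
    """
    canonical = json.dumps(tuning_examples, sort_keys=True).encode("utf-8")
    return {
        "base_model": base_model,          # hypothetical checkpoint name
        "base_provider": base_provider,    # who did the pre-training
        "num_tuning_examples": len(tuning_examples),
        "tuning_data_sha256": hashlib.sha256(canonical).hexdigest(),
    }

record = lineage_record(
    "example-7b",   # hypothetical open-source checkpoint
    "ExampleLab",   # hypothetical provider
    [{"prompt": "Classify this transaction", "response": "card-present"}],
)
print(record["num_tuning_examples"])  # 1
```

A record like this documents everything the institution controls (the tuning data and the base-model identity), while leaving the pre-training itself as an explicitly acknowledged black box - which is exactly the gap reviewers would need to decide how to treat.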
As always, please share your thoughts, ideas, comments, and any interesting content. Happy reading!
Latest News and Commentary
Karen Webster of PYMNTS and QED Partner Amias Gerety had a thoughtful and engaging fireside chat about just how transformational generative AI can be. In doing so, they reference Robert Gordon’s thesis that the technology innovation between the Civil War and 1970 had a much greater impact on productivity than anything in the 50 years since. Can AI potentially be as transformative? They discuss how some of the greatest potential from generative AI may come from its ability to lower the cost of exploring new ideas, thereby potentially leading to more breakthroughs in technology, science, and health. Referencing previous platform shifts (desktop→mobile, on-prem→cloud), they suggest generative AI may be more likely to reach that level than other recent technology waves (such as voice or crypto), though it’s only truly possible to judge a real ‘platform shift’ after the fact. Amias also shares his perspective on evaluating the (many) AI-related pitches that VCs now receive, including that if GPT is available to everyone, it still comes down to whether a founder or team has a unique experience or advantage that makes something easy for them that is hard for others. In addition, Karen and Amias had some great thoughts on sensible areas of focus for AI policy and industry self-regulation.
In this fascinating and somewhat surprising paper, researchers from Meta, Carnegie Mellon, USC, and Tel Aviv University describe how they took a pre-trained large language model - LLaMa, open-sourced by Meta AI - and fine-tuned it using only 1,000 hand-curated, very high-quality examples, producing a model they call LIMA. They then compared the quality of its responses on a set of prompts to those of five well-known language models, which were generally tuned with far more intensive techniques (e.g. OpenAI’s DaVinci003 was tuned using a massive amount of RLHF). Amazingly, humans rate LIMA’s responses as just as good or better more than half the time!* The authors use the results to argue that almost all of the knowledge in LLMs is learned during pre-training, and that only a limited amount of high-quality tuning is needed to make a model very effective at particular tasks. The paper is a pre-print and likely needs some clean-up; however, the concept has thought-provoking implications. If nearly all knowledge comes from pre-training, what exactly is knowledge, and does it change how we think about domain-specific intelligence? This is quite different from traditional statistical model-building, where there’s great benefit to having the largest dataset. It’s essentially a computer version of what we already know intuitively about humans: if you find someone really smart who has a ton of foundational knowledge and knows how to learn, it doesn’t take many examples to teach that person a new task, and if the instruction you give is very high quality, it may not take much for this smart person to start doing high-quality work too.**
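The raw material of LIMA-style fine-tuning is just a small set of carefully curated prompt/response pairs. The sketch below shows one common way such a set might be assembled and serialized; the JSONL prompt/response format and the `to_jsonl` helper are illustrative assumptions (toolkits vary in the exact schema they expect), and the example pairs are invented:

```python
import json

# A LIMA-style fine-tuning set is a small, hand-curated list of
# prompt/response pairs; quality matters far more than quantity.
examples = [
    {"prompt": "Summarize this earnings call in one sentence.",
     "response": "Revenue grew modestly while margins compressed."},
    {"prompt": "Explain APR to a first-time borrower.",
     "response": "APR is the yearly cost of a loan, including fees."},
]

def to_jsonl(pairs):
    """Serialize pairs to JSONL, silently dropping malformed entries.

    With only ~1,000 examples, every entry can be reviewed by hand,
    so validation is cheap relative to large-scale data pipelines.
    """
    lines = []
    for pair in pairs:
        if pair.get("prompt") and pair.get("response"):
            lines.append(json.dumps(pair))
    return "\n".join(lines)

jsonl = to_jsonl(examples)
print(len(jsonl.splitlines()))  # 2
```

The notable point is what is absent: no reward model, no preference data, no RLHF loop - just a small file of examples handed to a supervised fine-tuning step.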
Anna Patterson of Gradient explains eight key principles for companies building B2B applications using LLMs. This post is a great read for founders building something new and also for those looking to build new AI applications within an existing business. She explains each principle in detail using specific and thoughtfully curated examples. Enterprise technology, whether using AI or not, needs to solve real business problems and demonstrate quantifiable value in a particular domain. One of the great things about the present moment in AI is that one can easily envision numerous valuable uses for the technology, and the tools to explore them are generally here today.
Ramp announced a set of impressive AI-powered features across its products. The capabilities include contract analysis, automated vendor negotiation, intelligent transaction coding, expense automation, and an AI assistant known as “Copilot”, which lets finance teams ask natural-language questions and receive answers based on their own data, even suggesting potential actions and workflows. The offerings appear to be incredibly useful and quite smoothly baked into the product experience, and I’m sure CFOs are clamoring for early access.
In this paper, researchers evaluate an alphabet soup of deep learning techniques in the context of predicting corporate credit ratings. This is a different domain from most of the visible work in fintech, which often involves consumer and small business lending rather than the credit ratings of large corporations. In the paper, the authors gather a large volume of structured (e.g. bond performance, financial ratios, market data) and unstructured (e.g. earnings call transcripts) data. The overall results are encouraging, though they vary quite a bit by the types of models used and the techniques used to combine them. Of particular interest is that text-based data outperformed numerical data in model prediction, suggesting great potential for the use of unstructured data in corporate credit rating, especially as LLMs evolve.
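The core data-engineering move in work like this is fusing structured financials with features derived from transcript text into a single input vector. Here is a deliberately tiny sketch of that idea using a bag-of-words count over a toy vocabulary; the vocabulary, the `featurize` helper, and the example ratios are invented for illustration (real systems would use learned embeddings and far richer features):

```python
from collections import Counter

# Toy vocabulary standing in for learned text features.
VOCAB = ["growth", "risk", "default", "strong", "weak"]

def featurize(ratios, transcript):
    """Concatenate structured ratios with bag-of-words counts.

    ratios: list of floats (e.g. leverage, coverage, margin)
    transcript: raw earnings-call text
    """
    counts = Counter(transcript.lower().split())
    text_features = [float(counts[w]) for w in VOCAB]
    return list(ratios) + text_features

# Three hypothetical financial ratios plus one line of transcript text.
vec = featurize([2.1, 4.8, 0.12], "Strong growth but rising default risk")
print(len(vec))  # 8: 3 structured features + 5 text features
```

The paper's finding that the text-derived portion of such a vector carries more predictive signal than the numeric portion is what makes the LLM angle so interesting here.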
ComplyAdvantage, the UK-based financial crime detection technology provider, announced a new suite of AI-powered fraud detection capabilities. The solution appears to cover over 50 forms of fraud through a diverse set of machine learning techniques. They list major fintech companies such as Holvi, Novo, and Realpage as users of the new product. Fraud is an unfortunate reality of the real world, and smart financial services companies need to constantly evaluate new technologies and data sources to stay a step ahead.
Leaders at Chime, Tricolor Auto Group, and Envestnet discuss opportunities for AI-driven personalization in fintech. The article provides a good overview, and the video is available for viewing on-demand. For most of history, personalization and scale have been natural opposites. Only with sufficiently advanced technology and abundant computing power can companies personalize experiences in a scalable, efficient way. Perhaps generative AI can take personalization in fintech to new levels, increasing access to financial services and getting the right products to the right customers, on the right terms, at the right time.
*If instead of humans, you ask GPT-4 to be the judge, it gets similar results. Weirdly, GPT-4 even prefers LIMA over itself 19% of the time.
**This is part of why I’m so grateful my wife and I were able to send our son to Montessori preschool, with its emphasis on hands-on, foundational learning.