Fintech AI Review #7
AI engineers, up-skilling, who really wants a chatbot, and many new takes on where value is likely to be created by applying AI to financial services
Happy Monday, everyone! This week’s newsletter is all about people and opportunity.
AI is built by, used by, and used for the benefit of humans. While fundamentally a human creation, many people seem to easily forget this. It’s a bit like how people think of a ‘corporation’ as some alien entity separate from, and perhaps even at odds with, people, when in reality, a corporation is owned by people, run by people, employs people, and produces goods and services for people. It’s just a useful abstraction. Indeed, many of the most important questions surrounding AI involve humans, and a lot of them concern employment. What kinds of jobs will skyrocket in popularity and value when the AI revolution hits financial services? “AI engineer” might be the hot new tech job of the next decade. Some companies - perhaps finding that the availability of AI talent lags behind the size of their AI ambition - are also retraining and up-skilling their workforces to take advantage of this new technology.
As countless new companies in fintech emerge with modern AI at their center, there’s a yet-unsettled question of where AI is best used. Furthermore, there is vigorous debate about where value is likely to be created and what types of companies can build a compelling business and a defensible moat. Below are links to the thoughts and frameworks published by 2 venture capital firms. There’s room here for a vast marketplace of ideas, not only about which types of companies are most likely to succeed but also about the capital structures best suited to funding them.
As always, please share your thoughts, ideas, comments, and any interesting content. If you like this newsletter, please consider sharing it with your friends and colleagues. Happy reading!
Latest News & Commentary
In one of the largest deals so far in the AI space, Databricks, a leading data infrastructure platform provider, purchased MosaicML. Mosaic allows companies to train their own LLMs and other modern generative AI models using their own data and in their own environment. These two companies seem to be a natural fit, as many of the large firms that use Databricks as a data warehouse and analytics platform are also interested in building proprietary models using that data. For instance, the WSJ piece mentions Replit, which already used Databricks for its data pipelines and then used Mosaic to build a proprietary code-generation model that accelerates and amplifies the work of developers. Specific to the adoption of AI in financial services, past issues of this newsletter have addressed many issues that are likely to be relevant, including data ownership, model governance, and security. The Mosaic approach seems quite compelling given these concerns. It will be interesting to see whether financial institutions like TD Bank or Capital One, which already use Databricks, will use the combined company’s platform to train and tune proprietary models on their own infrastructure in ways that comply with their model governance and regulatory approaches.
The team at BCV has been putting out a ton of high quality content lately, and it’s a fantastic gift to the ecosystem. This recent post is a 10-part guide to their ‘field notes’, a thorough exploration of the state of the industry with respect to generative AI along with a variety of well-considered observations and valuable frameworks. There is so much useful and thought-provoking information in this piece. For this post, its length is an asset, and it’s totally worth a read. So sit back with a drink during your nightly AI learning hour (this is totally a thing, right….?) and read the whole thing!
Many past eras in technology have seen the emergence of a ‘new hot role’, a job that didn’t exist before but that suddenly becomes the most highly sought position. These roles require specialized knowledge, are highly competitive, and command high compensation. Some established practitioners shift into these roles through their work, while others train for them specifically. They are often ‘of the moment’ and subject to their own hype cycle. For example, in 2012, Harvard Business Review called “data scientist” “the sexiest job of the 21st century”. This piece makes the case that “AI engineer” is the next hot role, arguing that it requires a unique combination of skills and that demand for it will increase massively. According to the author, an AI engineer sits somewhere between an ML engineer or data scientist and an application developer. This person doesn’t build the models or machine learning architectures but rather ties AI capabilities together to build applications, autonomous agents, or AI-enabled tools. My sense is that experienced software developers and ML engineers already have the talent required to do ‘AI engineering’, as long as they spend some time learning a new set of tools and frameworks and shift their mindset to AI-specific applications. It’s likely not a bad career move for the right people to target this niche, and there’s a good chance that every company will want to make sure it has enough AI engineers to realize its AI-related ambitions.
Rather than (only) hiring AI experts into new roles, Citizens Bank is retraining its own workforce to utilize AI capabilities for the benefit of the bank and its clients. According to the article, this can take the form of training branch employees to be more advice-oriented (perhaps using AI tools, though the article doesn’t specify), and the bank is also developing employees on the technical side. In fact, it may offer people with computer science degrees who work at the bank but not as developers the opportunity to retrain and take on new, technical roles focused on AI. The article also makes the point that while some companies may be citing AI as a reason to cut headcount, the reality is that AI adoption also requires a lot of work from people. It’s good to see this kind of retraining effort being discussed. An explosion of new technologies, tools, and use cases creates compelling career opportunities for individuals, and organizations looking to use AI to their advantage will need a well-considered talent strategy.
Every few years, companies seem to get really excited about…chatbots. Previously, however, these bots - whether in the form of text or ‘interactive voice response’ (‘IVR’) - were extremely poor substitutes for human customer service. In the absence of real data, I’d still wager that the highest-frequency statement uttered to a voice-response customer service bot is: “I WANT TO SPEAK TO A HUMAN!” However, now that LLMs have provided examples of truly useful AI using a chat-based interface, the chatbot concept is getting a major reboot. In this piece, Mark Hurst argues that nobody wants a chatbot and that these bots will exist only to create efficiencies for the companies that deploy them, rather than to improve service for customers. There’s quite a bit to agree with here, though I’d make the case a bit differently. First, rather than deploying chatbots simply to replace the work done by human agents, I’d hope that companies would use AI within their operations to automatically solve the problems that require such assistance in the first place! Next, for a finite (but expanding) set of tasks, AI agents may actually perform better than some proportion (again, expanding) of human agents, which would reserve the human agents for the most critical, most complex problems. Perhaps the role of human customer service agent will become a much lower-volume, higher-skilled, higher-paying job, given the focus solely on the most difficult and important issues. Finally, whether or not service-providing companies use bots for some or all of their customer service interactions, there’s a major use case for customers using bots to handle their side of the interaction, as the author alludes to with DoNotPay.
Everyone in the tech and venture ecosystem is trying to develop a point of view on the best applications for AI and to hone a framework that will help them identify companies with a unique and differentiated ability to win in the market. This piece from Emergence Capital looks at multiple functions or ‘jobs to be done’ in financial services and rates each on 3 dimensions: the extent to which AI can incrementally enhance existing processes or business models, the urgency of customer ‘pain’ or need, and the function’s tolerance for risk. After assigning a high/medium/low rating on each dimension for each function, they arrive at 5 areas of particular promise: fraud, accounting, insurance, payment risk, and financial research. Finally, they consider what infrastructure needs to exist and how trust and regulatory frameworks will continue to evolve. We’re only at the very beginning of the AI fintech era, and it’s hard to say which of these arguments will pan out, but it’s great to see people in the field putting their ideas out there and engaging with builders about what could be most valuable.
Our friends at This Week in Fintech are hosting a Virtual Job Fair on Aug 7-9 to help candidates find and connect with companies hiring in the fintech space. Details & registration are available on the Fintech Job Fair page.