Fintech AI Review #5
Disrupting intellectual property, earlier monetization, and fintech/AI moats.
Happy Friday, and welcome to all our new subscribers. It’s great to be on this journey with you exploring the intersection of fintech and artificial intelligence!
Something fun about studying and experimenting with generative AI is that it can inspire us to consider aspects of human cognition we take for granted. Computers and humans clearly operate on different hardware, but as large, pretrained language and computer vision models become capable of human-level creative output, they can cause us to think about both the mechanisms and meaning of human thought and creativity.
Intellectual property, in concept as well as in law, is likely to be affected by an accelerating AI wave, and a couple of the articles linked below discuss aspects of this issue. While there is widespread concern - and even a few lawsuits - about copyrighted data being used in model training, it’s an interesting thought exercise to approach the question from a completely different angle. If a human learns by reading a lot of copyrighted material, it would seem extremely strange to subject the independent creations of that human’s mind to some form of IP scrutiny. That’s because the human is not memorizing and regurgitating specific material but rather encoding a representation of knowledge into their own unique brain. To go a step further, Jon Stokes has an excellent blog post that conceptualizes every single ‘file’ - i.e., any text, image, video, or audio that has ever existed or could ever exist - as an integer on an infinite number line. When a user enters a prompt, a generative AI model uses that prompt to search its latent space and output something that can be represented as a number - a number that, metaphysically at least, already exists. Taken to its absurd extreme, this idea would invalidate essentially all intellectual property law, an outcome that is both unlikely and undesirable, but it’s good food for thought.
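To make the “every file is an integer” framing concrete, here is a minimal sketch - my own toy illustration, not code from Stokes’ post - showing that any sequence of bytes can be read as a single (very large) integer and recovered from it losslessly.

```python
# Illustrative sketch: any file's bytes can be read as one (very large) integer
# and recovered from it losslessly. A toy example, not code from Stokes' post.

def file_bytes_to_int(data: bytes) -> int:
    """Interpret a byte string (e.g., the contents of any file) as a single integer."""
    return int.from_bytes(data, byteorder="big")

def int_to_file_bytes(n: int, length: int) -> bytes:
    """Recover the original bytes from the integer (original length must be known)."""
    return n.to_bytes(length, byteorder="big")

text = b"Call me Ishmael."                 # stand-in for any text, image, or audio file
as_int = file_bytes_to_int(text)
print(as_int)                              # one point on the (infinite) number line
assert int_to_file_bytes(as_int, len(text)) == text  # lossless round trip
```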
It’s also interesting to contrast the operational and financial aspects of generative AI with those of previous innovation-enabling technologies, such as open source software. Generative AI is comparatively expensive, relying on specialized GPU hardware that is both in the midst of a supply crunch and costly to power. Costs will undoubtedly come down with scale, but for the time being, startups building AI solutions may face pressure to monetize earlier than their peers did in previous years. This is likely a positive development, as it could drive a wave of companies with practical, demonstrable real-world value, catalyzing even more investment and innovation in the sector. One of the links below is to a post from Lightspeed Venture Partners discussing the companies and teams most likely to succeed in Fintech+AI and what they can do to build a meaningful moat.
As always, please share your thoughts, ideas, comments, and any interesting content. If you like this newsletter, please consider sharing it with your friends and colleagues. Happy reading!!
Latest News and Commentary
Consumers are paying for AI - Bhoram Cho
New technologies inevitably spur countless ideas and birth ambitious new companies seeking to build products that best take advantage of their capabilities. For a while, many of these technologies came in the form of open source software, which was free to acquire and cheap to run on ever-more-efficient commodity hardware. In contrast, much of today’s generative AI boom is powered by companies building solutions on top of large language models offered by larger companies at a cost. GPT-4, for example, is priced at $0.03/1K input tokens and $0.06/1K output tokens. There are two ways to look at this. The first is that it’s incredible to be able to access such impressive, world-changing technology for literal pennies with almost no barrier to entry. The second is that if a company is using these technologies in a production use case, the pennies start to add up quickly. Bhoram Cho - an entrepreneur, product leader, and co-founder of a company I really wish still existed (KitchenSurfing) - wrote a post focusing mostly on this second point. Companies building AI-based products on top of commercially available large language models will need either to monetize sooner than startups in previous waves did or to control costs through the use of smaller, local language models.
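To see how quickly those pennies add up in production, here is a rough back-of-the-envelope sketch using the GPT-4 prices quoted above; the tokens-per-request figures and daily request volume are purely hypothetical assumptions for illustration.

```python
# Back-of-the-envelope GPT-4 API cost estimate.
# Prices are the ones quoted above ($0.03 / 1K input tokens, $0.06 / 1K output tokens);
# tokens-per-request and daily volume are made-up assumptions for illustration only.

INPUT_PRICE_PER_1K = 0.03   # USD per 1,000 input tokens
OUTPUT_PRICE_PER_1K = 0.06  # USD per 1,000 output tokens

def cost_per_request(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single API call."""
    return (input_tokens / 1000) * INPUT_PRICE_PER_1K + (output_tokens / 1000) * OUTPUT_PRICE_PER_1K

# Hypothetical production workload: 50,000 requests/day,
# ~1,500 input tokens and ~500 output tokens per request.
per_request = cost_per_request(1_500, 500)   # $0.075 per call
daily = per_request * 50_000                 # $3,750 per day
print(f"~${per_request:.3f}/request, ~${daily:,.0f}/day, ~${daily * 30:,.0f}/month")
```

Even at roughly $0.075 per call, a workload like this runs to about $112,500 per month - exactly the kind of bill that pushes teams toward earlier monetization or smaller, cheaper local models.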
Lenders court Gen Z with ChatGPT-like tech - National Mortgage News
Plenty of financial products are commodity-like in nature, so it’s interesting to think about what makes consumers choose one provider over another. For example, if two lenders offer a 30-year fixed-rate mortgage on exactly the same terms and rates, how can one of them differentiate itself through a unique customer experience? This piece in National Mortgage News considers how lenders might use various communication channels, including AI-chat-assistant-style experiences, to personalize borrower interactions. One technology vendor quoted in the article makes the point that younger consumers are not necessarily opposed to speaking with a live human; they just want to do so only after more automated methods have been exhausted and a human is truly necessary. Of course, the mortgage sector is highly regulated, so lenders deploying chat AI will have to be extremely focused on accuracy and on making sure an AI agent never does anything that can only legally be done by an NMLS-licensed rep, such as quoting rates or issuing preapprovals.
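As a purely illustrative sketch of the kind of guardrail this implies - not any vendor’s actual implementation - a chat assistant might route anything resembling a licensed activity straight to a human loan officer. The keyword matching below is a toy stand-in for a real intent classifier.

```python
# Minimal, hypothetical guardrail sketch: route requests that only an
# NMLS-licensed rep may handle (e.g., rate quotes, preapprovals) to a human.
# Keyword matching is a toy stand-in for a real intent classifier.

LICENSED_ONLY_TOPICS = ("rate quote", "interest rate", "preapproval", "pre-approval", "lock my rate")

def route_message(message: str) -> str:
    """Return which channel should answer a borrower's chat message."""
    lowered = message.lower()
    if any(topic in lowered for topic in LICENSED_ONLY_TOPICS):
        return "escalate_to_licensed_loan_officer"
    return "ai_chat_assistant"

print(route_message("What documents do I need to upload?"))        # ai_chat_assistant
print(route_message("Can you give me a rate quote for 30 years?"))  # escalate_to_licensed_loan_officer
```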
Name, Image, Likeness — But Make It Gen AI
If copyrighted data is used in training a large, pretrained generative AI model, can new outputs of the model be considered derivative works and therefore subject to copyright restriction? Serene Papenfuss explores the implications of generative AI for copyright law in this interesting post, focusing in particular on the concept of “name, image, and likeness” (NIL). While explaining the details of a Supreme Court case involving a Vanity Fair-commissioned Andy Warhol work based on a photograph of Prince, she quotes the U.S. Constitution’s Intellectual Property Clause: “To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.” In other words, part of the reason copyright protection exists is not to restrict new creative work but to encourage it. The use of textual or image data in AI models is an application the framers certainly never considered! On one hand, if protected intellectual property is used to train a model, there’s a decent argument that the owners of that IP are entitled to some consideration. On the other hand, if large, pretrained models mimic human learning and creativity, we may need to think about this differently. A person might learn by reading a lot of copyrighted content but then store mental representations of it in ways that are unique to that person’s own brain and inextricably linked to other experiences. It would be laughable to consider the knowledge in that person’s head a copyright violation, and any output of that mind would need to be a fairly blatant copy to be considered one. Generative AI represents a new frontier in intellectual property law, and we can expect a lot of activity here over the next few years.
How generative AI is enabling the greatest ever theft/opportunity - Diginomica
In this wide-ranging piece, George Lawton discusses many of the intellectual property issues brought to the forefront by the ascendance of generative AI. Many of these issues were possible, if perhaps in different forms, before the current AI wave, but all the attention has brought increased scrutiny. For example, Getty Images is suing Stability AI for allegedly using its content as training data. The post takes a bit of a glass-half-full perspective, and it’s worth reading. He’s certainly right that generative AI has triggered a new Wild West that will require people, businesses, and governments to think differently about IP.
Big banks are talking up generative A.I. — but the risks mean they’re not diving in headfirst - CNBC
There’s practically no company, big or small, that isn’t talking about the potential of generative AI for its business. Executives at large financial institutions have been aggressively touting their organizations’ experimentation in this area, testing the technology’s ability to do things such as provide financial advice, automate existing processes, or aid in fraud investigations. This article recounts conversations and presentations from the recent Money20/20 conference in Amsterdam, demonstrating an incredible amount of enthusiasm but also quite a bit of caution. Banks in particular are sensitive to concerns around data privacy, accuracy of information, and the many sensitivities of customer interaction. This comes as no surprise, of course, and large, heavily regulated institutions are smart to exercise some restraint. However, the winners are likely to be the firms that can conduct many experiments and then actually deploy targeted solutions rapidly and compliantly.
Fintech x AI: The Lightspeed View
This post from Lightspeed Venture Partners discusses the differences between traditional machine learning and generative AI in fintech applications. The authors make a case for combining predictive AI and generative AI, particularly in use cases where accuracy matters. The post also includes a useful market map of the firm’s existing AI investments as well as its view on the trends it is seeing today. As in many areas, the best founders at the intersection of fintech and AI bring deep subject matter knowledge as well as technical expertise, allowing them to truly understand the problem being tackled and the factors that will lead to a valuable and enduring competitive moat.
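As a loose sketch of that predictive-plus-generative pairing - my own illustration, not Lightspeed’s - a conventional model could make the accuracy-critical decision while a language model only drafts the customer-facing explanation. The `predictive_model` and `call_llm` functions below are hypothetical placeholders, not real APIs.

```python
# Hypothetical sketch of combining predictive and generative AI:
# a conventional model makes the accuracy-critical decision,
# and a language model only drafts the explanation of that decision.
# `predictive_model` and `call_llm` are placeholders, not real APIs.

from dataclasses import dataclass

@dataclass
class CreditDecision:
    approved: bool
    score: float

def predictive_model(features: dict) -> CreditDecision:
    """Stand-in for a trained scoring model (e.g., gradient-boosted trees)."""
    score = 0.7 if features.get("debt_to_income", 1.0) < 0.35 else 0.3
    return CreditDecision(approved=score >= 0.5, score=score)

def call_llm(prompt: str) -> str:
    """Placeholder for a generative model call that writes prose, not decisions."""
    return f"[LLM-drafted explanation based on: {prompt}]"

def explain_decision(features: dict) -> str:
    decision = predictive_model(features)   # the number that matters
    prompt = (f"Explain to the applicant, in plain language, that their application "
              f"was {'approved' if decision.approved else 'declined'} "
              f"(model score {decision.score:.2f}). Do not change the decision.")
    return call_llm(prompt)                 # the words around the number

print(explain_decision({"debt_to_income": 0.28}))
```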