Fintech AI

Generative to Agentic

Adam T. Hark

Managing Member
Fintech AI Expert

AI in Commerce

The pace of AI's diffusion through commerce continues to accelerate. Its implications are not entirely known, nor can they be, when the essence of this general purpose technology is a continuous lift in processing capability and autonomy. However, we are seeing the initial impact of generative AI applications and nascent agentic machines throughout the fintech, payments, and banking sectors, and engaging with this technology is now a commercial necessity for all ecosystem participants, meaningfully affecting not just operations but long-term value creation.

The fintech sector is one of the most fertile areas for AI adoption because of the tremendous amount of data it produces, both structured and unstructured. Though most of today's useful fintech-specific applications of AI stem from large language models and generative algorithms, the compounding improvements delivered by advanced architectures (like Retrieval-Augmented Generation) are rapidly decreasing error rates, producing more thorough and accurate outputs, and setting the stage for the epic transition from analysis, prediction, and recommendation engines to autonomous, self-executing machines.
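To make the retrieval-augmented pattern concrete, here is a minimal Python sketch of retrieve-then-generate. Everything in it is a hypothetical stand-in: the embed function is a toy hash rather than a real embedding model, and the document set is invented; a production RAG system would use a trained embedding model, a vector store, and an actual generative model on the receiving end of the prompt.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Toy embedding: hash characters into a fixed-size unit vector so the
    # example runs end to end. A real system would call an embedding model.
    vec = np.zeros(64)
    for i, ch in enumerate(text.lower()):
        vec[(i + ord(ch)) % 64] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def retrieve(query: str, documents: list, k: int = 2) -> list:
    # Rank documents by cosine similarity to the query; keep the top k.
    q = embed(query)
    return sorted(documents, key=lambda d: float(embed(d) @ q), reverse=True)[:k]

def build_prompt(query: str, documents: list) -> str:
    # Retrieved passages are prepended so the generator answers from
    # supplied context rather than from its training data alone.
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Interchange fees are set by the card networks.",
    "Core banking platforms post ledger entries in batch.",
    "Chargebacks must be disputed within network time limits.",
]
print(build_prompt("Who sets interchange fees?", docs))
```

Grounding the generator in retrieved context, rather than letting it answer from its parameters alone, is the mechanism behind the error-rate reductions noted above.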

AI’s Role in Modern Finance

As the volume of worldwide financial data has exploded alongside a proportionate increase in innovation, computational complexity has soared, leaving legacy financial technology systems – core banking, payment processing switches, and trade settlement platforms – on a collision course with obsolescence. Further, global financial markets operate 24/7/365, requiring not just around-the-clock transactional capabilities but also continuous risk management, compliance, and threat protection.

Thus, the demand for highly scalable, low-cost, automated, and semi-autonomous "digital workers" is virtually inexhaustible right now.

Five years ago, the difference between legacy and next-generation financial systems was defined by cloud-based versus server-based computational frameworks. Today, next-generation financial systems are distinguished by whether they have implemented advanced machine learning algorithms that provide meaningful automation and early-stage autonomy. And the implications of this distinction are not restricted solely to the market's (investor) perception of corporate value; they also shape users' perceptions of companies' product and service capabilities, utility, and efficiency.

Trust in Agentic AI

Lastly, one can't discuss the implementation of AI in payments, fintech, and banking without addressing trust. Especially in commerce and payments, the convergence of generative, probabilistic systems with Robotic Process Automation software (deterministic systems) toward agentic AI (with varying degrees of autonomy) raises the question of how reliably these systems will perform. As agentic AI systems become more autonomous, a unique dynamic emerges: humans have less control over decision making and processes, yet are required to provide greater guidance and oversight. Agentic systems will be useless unless both companies and consumers can trust them.

Data rights and privacy also feed into the trust issue, perhaps more so in finance than in any sector other than healthcare. For agentic AI systems to not only function, but function well and true within human-directed guardrails, agents must have access to massive amounts of both consumer and merchant data. As of this writing, it's not clear whether this aspect of agentic AI is fully appreciated by either party. However, given that in commerce and payments the most likely scheme will be agent-to-agent, data access will have to be consented to on both sides of the transaction. This consent requirement leads us to believe that regulation may factor into deployment and adoption rates.

Caution and Limitations

Along with the grand promise of AI, whether in fintech, payments, banking, or in general, we cannot properly advise clients, or bank these assets, without also recognizing its limitations. As referenced above, trust in these systems is the foundation of their efficacy and utility. Thus, their use and implementation must be well considered, and, for all of us, confidence in these systems must be qualified with caution.

Intelligence

At the top of our list of limitations is the notion of intelligence itself. Today's state-of-the-art LLMs operating on transformer architectures are so powerful and effective that the Turing Test has been rendered obsolete. (Proposed by Alan Turing in 1950, the test held that a computer could be deemed "intelligent" if a human could not tell the difference between the program and another human within the context of a natural language dialogue.) Judged by human-to-AI interaction through prompting LLMs, that threshold has already been exceeded.

However, we cannot ignore that these systems' intelligence is in fact simulated (not real in the human sense). They are inherently probabilistic (not deterministic, as general computation is). They lack context, common sense, and persistent coherence – the ability to output the same response to the same inquiry over an extended period of time. They also lack memory and logical consistency. For business owners and consumers, any practical evaluation of the efficacy and trustworthiness of today's AI must take this into account.
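The probabilistic point is easy to demonstrate. Below is a minimal sketch of temperature-based token sampling; the logits and candidate tokens are invented for illustration. The same input yields different outputs across runs, which is the absence of determinism, and of persistent coherence, described above.

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    # Softmax over temperature-scaled logits, then a random draw: the same
    # input distribution can yield a different token on every call.
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

logits = [2.0, 1.5, 0.2]                   # invented model scores
tokens = ["approve", "review", "decline"]  # invented candidate tokens
print([tokens[sample_next_token(logits)] for _ in range(5)])
# Two runs of this line will generally differ: identical input,
# non-identical output.
```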

Data

Data is next on our list of AI limitations, and for multiple reasons. By now, most of us realize the importance of data quantity in AI: for AI to have meaningful commercial impact, machines must have access to enormous amounts of data – to reinforce statistical rigor and to find patterns and connections that humans never would. Be mindful that LLMs are probabilistic (statistically based), and for their outputs to be meaningful, the size of the data space they explore has to be very large, whether structured (like spreadsheets) or unstructured (like text). Note too that agentic AI relies heavily on LLMs, so the data quantity issue doesn't diminish as machines trend toward greater autonomy.

There's also the data quality issue, or data accuracy. In machine learning, the accuracy of the training data is critical: if a system is trained on incorrect, inaccurate, outdated, or incomplete data, its outputs will exhibit high error rates. Thus, any operator must understand how a machine was trained, and what data was used to train it, prior to purchasing and implementing such a solution. Data issues (in combination with prompting issues) are also how those pesky hallucinations arise: bad prompts plus incomplete or incorrect data will precipitate high-confidence outputs that are not only incorrect, but completely fabricated and detached from any logical etiology.
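A small, self-contained illustration of the data quality point, using an invented synthetic dataset and scikit-learn: the same model, trained once on clean labels and once on labels with 30% flipped, produces measurably worse outputs in the second case.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic classification task standing in for, say, fraud labeling.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Corrupt 30% of the training labels to simulate inaccurate training data.
rng = np.random.default_rng(0)
noisy = y_tr.copy()
flip = rng.random(len(noisy)) < 0.30
noisy[flip] = 1 - noisy[flip]

for name, labels in [("clean labels", y_tr), ("30% flipped", noisy)]:
    model = LogisticRegression(max_iter=1000).fit(X_tr, labels)
    print(name, round(accuracy_score(y_te, model.predict(X_te)), 3))
```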

Human Limitations

Yes. Ironically, we need to look at the limitations of AI systems through the lens of human constraints too. One of the more obvious ones, especially when working with generative AI, is the ability, or inability, to properly prompt the model. If the prompt doesn't comport with the system's logical framework, the output will be useless. A more subtle, yet insidious, human constraint imposed on an AI system, albeit unintentionally, is how the model is imbued with its human designers' biases. The most obvious example is in credit decisioning: a model that approves an applicant with a given credit score from a predominantly white zip code while denying an applicant with the same score from a more diverse zip code.
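That credit-decisioning example can be sketched directly. In the hypothetical training history below, approvals were skewed by zip code; a model fit to that history reproduces the disparity for two applicants with identical scores. All numbers are invented for illustration; no real lending data is involved.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented history: approval odds depended on zip code, not just on score.
rng = np.random.default_rng(1)
n = 4000
score = rng.normal(680, 50, n)    # applicant credit scores
zip_a = rng.integers(0, 2, n)     # 1 = zip code A, 0 = zip code B
approved = ((score > 660) & ((zip_a == 1) | (rng.random(n) < 0.4))).astype(int)

# The model faithfully learns the zip-code proxy baked into its labels.
model = LogisticRegression(max_iter=2000).fit(
    np.column_stack([score, zip_a]), approved
)

for z, label in [(1, "zip code A"), (0, "zip code B")]:
    p = model.predict_proba([[700.0, z]])[0, 1]
    print(f"{label}: P(approve | score=700) = {p:.2f}")
```

The model is doing exactly what it was trained to do; the bias lives in the labels its designers handed it, not in the algorithm.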

Lack of Visibility

There is the very real problem of the human inability to "see" why machines come to certain conclusions. Neural networks are "black boxes" whose ends (outputs) justify their means: that's how they are designed to operate. Machines correct for error rates (in AI, this is "learning") but offer no visibility into, or rationale behind, why specific nodes in a neural layer adjust their weightings. Because humans lack this visibility, they can't "fix" the model the way they could fix a lawnmower or conventional computer code.
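A toy version of the visibility problem, in plain NumPy, with all values invented: one backpropagation step on a tiny two-layer network. Every weight moves to reduce error, but the update carries no human-readable rationale for why any individual weight changed.

```python
import numpy as np

# Tiny two-layer network and one backpropagation step.
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4))                  # 8 examples, 4 features
y = rng.integers(0, 2, size=(8, 1)).astype(float)

W1 = rng.normal(size=(4, 5))                 # input -> hidden weights
W2 = rng.normal(size=(5, 1))                 # hidden -> output weights

h = np.tanh(x @ W1)                          # hidden activations
p = 1.0 / (1.0 + np.exp(-(h @ W2)))          # sigmoid output

# Gradients of the cross-entropy loss; each weight is nudged to cut error.
grad_out = p - y
grad_W2 = h.T @ grad_out
grad_W1 = x.T @ ((grad_out @ W2.T) * (1.0 - h**2))
W2 -= 0.1 * grad_W2
W1 -= 0.1 * grad_W1

# The numbers move; no human-readable reason attaches to any single move.
print("mean |change in W1| =", np.abs(0.1 * grad_W1).mean())
```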

Ethics and Judgement

There’s a whole area of AI constraints and limitations that deals with ethics and judgement, but that’s too far afield from what we’re examining here.

However, spoiler alert: AI programs don’t have either. 

Innovation and Creativity

Are today's AI models truly creative? Innovative? They may seem to be, but LLM outputs, whether graphical, musical, textual, or otherwise, are probability-based guesses grounded in specific training data and large, but limited, data spaces. Therefore, they are not truly creative or innovative. Some may argue that pattern recognition in massive data sets is a creative element, but to us that falls more under the type of intelligence responsible for discovery. Einstein and Darwin didn't create gravity and evolution, respectively; they discovered them. Thus, creativity and innovation remain firmly in the realm of human endeavor. What generative AI systems do produce are iterations, mash-ups, remixes, and amalgamations, based on probabilities, but lacking true human ideation and intentionality.