

Revolutionizing Financial Services With Private AI

By Zach Adams, co-founder, XFinLabs

Artificial intelligence is here, and radical change is inevitable. In the financial services sector, AI’s ability to analyze huge amounts of data in milliseconds has enormous potential to sharpen investment strategies, fine-tune risk management, and improve financial decision-making. But radical change can also invite radical risk, especially when commingling private and public data. Nowhere in the business world is that risk higher than in the financial sector. To harness AI systems safely, we need to understand how they evolved and the differences between public and private AI implementations.

The Evolution of AI

While it may seem like AI appeared overnight, its foundations are in machine learning, which started in the 1950s. It’s taken this long to make the transition from teaching a machine to learn to teaching it to think. That’s the quantum leap to today’s AI.

Bigger, faster, and more powerful machines crunching ever-larger amounts of data are what drives AI innovation. Using a sort of brute force, machines can now spot patterns and trends in data that allow them to grasp reasoning and context. That contextual reasoning mirrors the way our brain’s complex neural networks make sense of the world.

In AI, neural networks are trained algorithms, modeled and refined over time. Once trained, these models can understand associations between ideas and concepts. A great example of this is when an AI model reads the word “club” in a sentence. It can infer from context whether the user means a sandwich, someone’s golf game, a health club, or a night out.
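To make that concrete, here is a minimal sketch of how a trained model represents the same word differently depending on its context. It assumes the open-source Hugging Face transformers library and the public bert-base-uncased model; the sentences are illustrative.

```python
# A minimal sketch of contextual embeddings, assuming the Hugging Face
# "transformers" library and the public "bert-base-uncased" checkpoint.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

sentences = [
    "He ordered a club with extra bacon.",    # sandwich sense
    "She swung the club and sank the putt.",  # golf sense
]

def club_vector(sentence: str) -> torch.Tensor:
    """Return the contextual embedding for the token 'club'."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]
    idx = inputs.input_ids[0].tolist().index(
        tokenizer.convert_tokens_to_ids("club")
    )
    return hidden[idx]

a, b = (club_vector(s) for s in sentences)
similarity = torch.cosine_similarity(a, b, dim=0)
print(f"Cosine similarity of the two 'club' vectors: {similarity:.2f}")
```

The two “club” vectors diverge because the model encodes each occurrence relative to its surrounding words, which is exactly the contextual understanding described above.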

To illustrate an AI model at work, I asked OpenAI’s DALL-E to create an image of a cat doing a bench press. DALL-E was able to create the image because it has been trained on what a cat looks like and understands that the word “bench press” is typically associated with a gym.

The Move to AI Conversations

Generating pictures of cats lifting weights is, without a doubt, a fun use of AI. But most financial professionals are more interested in how to use the tech to boost their productivity. For example, in financial services, AI-enhanced search capabilities can drastically improve due diligence processes and investment research. Search is a great place to begin to understand the extraordinary power of this new tool. If we go back to search in Web 1.0, a typical query might have been “barbells.” In Web 2.0, we got more descriptive: “where can I find barbells?” Now, with AI, we can have a full conversation with a machine:

Prompt: What is the best type of barbell for a person just getting started in weightlifting?

Response: For someone just getting into weightlifting, we recommend that you look at the ACME Home barbell series. These come in a variety of colors and weights and are perfect for home gyms. They are also more cost effective than some of the higher-end barbells that you see in gyms. Of the people that have purchased ACME Home barbells, 92% have rated them a 5 out of 5.
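For readers curious about what sits behind such a conversation, below is a minimal sketch of issuing the same prompt programmatically, assuming OpenAI’s Python client. The model name is illustrative, and a real response will differ from the example above.

```python
# A minimal sketch of a conversational query, assuming OpenAI's Python client.
# Requires the OPENAI_API_KEY environment variable to be set.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "user",
         "content": "What is the best type of barbell for a person "
                    "just getting started in weightlifting?"},
    ],
)
print(response.choices[0].message.content)
```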

This ability to converse with a machine has opened up an entirely new industry, focused on leveraging the power of AI for everyday tasks. We see it with leading tech companies using new AI-powered agents, taking over from chatbots, to enhance search. They’re using a technique called natural language processing, or NLP, to teach a machine to “talk.” This is no simple task, as language is one of the most complex and unique aspects of human intelligence. To effectively use language, AI systems need to understand intricate concepts like grammar, syntax, and semantics. The power of NLP lies in its ability to transform diverse data into coherent grammatical structures, allowing AI not only to read and comprehend but also to generate text in a remarkably human-like manner.

Under the hood, AI models split a piece of writing into parts called tokens, which is how the model “understands” the written word. Tokens are an important concept, not just because they are used for language but also because they determine cost. Most AI companies charge based on token usage, so the more data asked and answered, the higher your costs.
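As a rough illustration of how tokens drive cost, the sketch below counts tokens with OpenAI’s open-source tiktoken tokenizer; the per-token price is a made-up placeholder, not a published rate.

```python
# A minimal sketch of token counting with OpenAI's open-source "tiktoken"
# tokenizer. The price below is a hypothetical placeholder, not a real rate.
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")
text = "What is the best type of barbell for a beginner?"

tokens = encoding.encode(text)
print(f"{len(tokens)} tokens: {tokens}")

HYPOTHETICAL_PRICE_PER_1K_TOKENS = 0.01  # placeholder rate in USD
cost = len(tokens) / 1000 * HYPOTHETICAL_PRICE_PER_1K_TOKENS
print(f"Estimated prompt cost: ${cost:.5f}")
```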

Risk Factor: When Public AI Recommendations Go Too Far

The financial services sector faces extremely strict compliance and regulatory requirements. Now that we can have a conversation with a machine, how do we make sure that the user and the machine agree on honoring those requirements? Using public AI models in this context can carry significant risks due to data privacy concerns, potential regulatory violations, and the sensitivity of financial information.

Some might even consider these risks to be an existential threat to civilization, when the machines inevitably take over the world. We’ll leave that sort of speculation to the futurists, but it does highlight some of the challenges facing AI and, in particular, the technology’s use in financial services.

AI is fallible and can generate false responses, commonly known as “hallucinations.” For financial professionals, that’s an unacceptable risk and a barrier to widespread adoption. It’s also a characteristic of publicly available AI models. What is happening here? The answer lies in how an AI model is trained and fine-tuned. The web has countless examples of fine-tuning failures, where a model has been specifically trained to return certain types of responses. Perhaps one of the more infamous examples comes our way courtesy of Google Gemini:


Prompt: Determine the best credit card offer for a customer with a low credit score.

Response: Based on the information provided, we recommend applying for a secured credit card from ACME Bank. These cards are designed for individuals with poor credit histories and will help rebuild your credit over time.

The response to that prompt might seem helpful, but closer examination reveals a subtle form of bias that often appears in financial AI systems. While recommending a secured credit card may be a reasonable option for someone with a low credit score, an AI system could exhibit bias by disproportionately directing certain demographics, such as younger individuals or those from specific socioeconomic backgrounds, toward higher-interest, higher-fee, or otherwise less favorable financial products.

It’s common for an AI system’s potential bias to come from the publicly available data used to train it. Public datasets can contain historical biases and reflect societal inequalities, which the AI may inadvertently learn and replicate. For instance, if the training data includes patterns of discrimination or unequal treatment of certain demographics, the AI could perpetuate these biases when making recommendations. If the data is incomplete or not representative of the target population, it may result in biased outcomes, leading the system to favor or disadvantage certain groups unfairly.

This underscores the importance of careful fine-tuning and training, particularly as AI systems make their way into financial services organizations. For the successful and ethical use of AI in financial services, there can be no unintended biases or consequences when analyzing risks, managing portfolios, or monitoring compliance. To help avoid these situations, financial services companies need greater control over how AI is configured to respond to their questions. That’s why we are now seeing the growth of private AI solutions that leverage today’s AI models but also give control over how those models respond.

The Benefits of Private AI Systems

In a private AI system, a company controls access to its own data, which never leaves its environment, helping ensure security and privacy. An approach called retrieval augmented generation, or RAG, pairs an AI model with a database built solely from the data uploaded by the company. Because the universe of data is constrained to known sources, the responses from a RAG system tend to be more accurate and precise, greatly reducing the hallucinations found in public AI models.
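The following is a highly simplified sketch of the RAG pattern: embed the company’s private documents, retrieve the ones most similar to a query, and hand only those to the model as context. The hash-based embedding and the sample documents are stand-ins for illustration; a production system would use a real embedding model and a vector database.

```python
# A minimal, self-contained sketch of retrieval augmented generation (RAG).
# The embedding function and documents below are illustrative stand-ins.
import numpy as np

private_docs = [
    "Fund ABC charges a 1.25% management fee, per the 2024 prospectus.",
    "Fund ABC's minimum initial investment is $25,000.",
    "Compliance policy: all client recommendations require a suitability review.",
]

def embed(text: str) -> np.ndarray:
    """Stand-in embedding: hash words into a fixed-size unit vector."""
    vec = np.zeros(64)
    for word in text.lower().split():
        vec[hash(word) % 64] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

doc_vectors = np.stack([embed(d) for d in private_docs])

def retrieve(query: str, top_k: int = 2) -> list[str]:
    """Return the top_k private documents most similar to the query."""
    scores = doc_vectors @ embed(query)
    best = np.argsort(scores)[::-1][:top_k]
    return [private_docs[i] for i in best]

query = "What is the management fee for Fund ABC?"
context = "\n".join(retrieve(query))

# The augmented prompt constrains the model to the retrieved private data.
prompt = f"Answer using ONLY this context:\n{context}\n\nQuestion: {query}"
print(prompt)
```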

To exercise greater control over the output of a RAG system, users employ a technique called prompt engineering to refine how the AI thinks and responds. This is often referred to as giving your private AI a personality, and it is very important in financial services, where the accuracy of information can impact investment decision-making and regulatory compliance.

For example, companies can tell their private AI that it is a “domain expert and should respond to queries in a professional tone, as opposed to a more casual tone.” You can even narrow its core competency to a specific topic, such as informing the AI model that the data contained in the private AI database is “only related to the industrial revolution circa 1800 to 1840.” Once these prompts are in place, the private AI system will ignore anything outside those criteria or flag it as immaterial.
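In practice, this “personality” is typically delivered as a system prompt that accompanies every query. Below is a minimal sketch using OpenAI’s Python client as an example; the system message mirrors the wording above, and the model name is illustrative.

```python
# A minimal sketch of a "personality" system prompt, assuming OpenAI's
# Python client. The system message text mirrors the article's example.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a domain expert. Respond to queries in a professional tone, "
    "as opposed to a more casual tone. Your knowledge base is only related "
    "to the industrial revolution circa 1800 to 1840; treat anything "
    "outside that scope as immaterial."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Describe textile factory conditions in 1820."},
    ],
)
print(response.choices[0].message.content)
```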

To refine the information returned by a private AI even further, some RAG systems allow users to hyper-tune, or configure, the underlying algorithms. This tuning is exposed through parameters with names like “temperature,” “chunk size,” and “top K.” The nomenclature might be confusing, but the basic principle is this: You dial in how much information your AI can access, how creative it should be in its responses, and how to balance performance against cost. In combination with thoughtful prompt engineering, organizations can ensure that any enterprise implementation of AI is focused on the analysis of private data over private networks and does not commingle ideas and information from public sources. This does not mean that an enterprise won’t have access to important public information. Quite the contrary. In financial services, any AI solution should be able to look at publicly available documents, such as U.S. Securities and Exchange Commission filings, market reports, or even company press releases.
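To make those knobs concrete, here is a minimal sketch that gathers them into a single configuration object. The parameter names follow common RAG conventions rather than any specific product’s API.

```python
# A minimal sketch of common RAG tuning parameters gathered into one config.
# These names are generic conventions, not a specific vendor's API.
from dataclasses import dataclass

@dataclass
class RagConfig:
    temperature: float = 0.1  # lower = more conservative, repeatable answers
    chunk_size: int = 512     # tokens per document chunk in the index
    top_k: int = 4            # how many chunks to retrieve per query

# A compliance-sensitive deployment might favor precision over creativity:
conservative = RagConfig(temperature=0.0, chunk_size=256, top_k=3)
print(conservative)
```

Lower temperature trades creativity for consistency, smaller chunks make retrieval more targeted, and a smaller top K keeps both context and per-query token costs down.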

The key message is simple: financial services companies can create their own private AI systems by using RAG, best-practice prompt engineering, and customized algorithms. This approach ensures compliance with strict regulations, safeguards data privacy, and protects sensitive financial information, all while leveraging the power of AI to make life easier and boost productivity for financial services professionals.

For more information, visit www.xfinlabs.com.

XFinLabs is an AI solutions company helping enterprises build RAG private AI networks.

Zach Adams

Zach Adams is the co-founder of XFinLabs, where he advances AI platforms based on retrieval augmented generation, or RAG, tailored to financial services, setting new benchmarks in due diligence, client portfolio management, and regulatory compliance. With over a decade in financial services, Adams brings expertise in structuring and syndicating a diverse range of investment products, including private real estate investment trusts, business development companies, preferred stock, interval funds, and private placements. Today, he leverages this industry knowledge to build AI-driven solutions that streamline decision-making and enhance operational efficiency in financial services.

Throughout his career, Adams has led capital markets teams, driving the distribution and adoption of innovative financial products across independent broker-dealers, registered investment advisers, family offices, and high-net-worth channels. He provides practical solutions to complex industry needs, bridging the gap between technology and financial expertise. Adams currently holds FINRA Series 7, 24, and 63 licenses.

The views and opinions expressed in the preceding article are those of the author and do not necessarily reflect the views of The DI Wire.
