REPORT

Five traits organizations at the forefront of AI in finance have in common

What are the organizations leading the way on AI in finance doing that you might not be? That’s what we examine in this Pigment report.

AI is increasingly a differentiating factor for finance departments, with those that have aligned their strategy correctly reaping significant, well-documented benefits. But there are also plenty of examples of AI deployments that cost a great deal of money for little to no gain.

With that in mind, this report will analyze five traits that organizations succeeding with AI have in common.

01. They have a common organizational understanding of AI

AI is, for good reason, a hot topic right now - but if you ask five people to give you a definition, the chances are you’ll get very different answers.

There are reasons for this. Fever-pitch hype is one; another is the well-documented ‘AI effect’, whereby the definition of ‘artificial intelligence’ shifts constantly as capabilities progress. There was a time when chess computers like Deep Blue were the most recognizable representation of ‘AI’, but that crown would now probably go to ChatGPT.

It’s for this reason that it’s important to be specific when talking about AI. To prevent misunderstandings and to ensure you have productive conversations, everyone in your organization needs a common understanding of terms like:

Machine learning (ML)

Uses data and algorithms to let systems imitate the way humans learn, gradually improving their accuracy. As new data is fed to these algorithms, they optimize their behavior to improve performance, developing 'intelligence' over time.

Supervised learning

A type of machine learning where an algorithm is trained on labeled data, meaning that each training example is paired with the correct output. The goal is for the model to learn a mapping between inputs and outputs so that it can predict the label for new, unseen data accurately. When you complete a CAPTCHA and are asked to identify all the bridges in an image, you’re labeling data to be used for supervised learning.
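To make this concrete, here is a minimal sketch in Python using scikit-learn. It is purely illustrative: the transactions file and column names are hypothetical. It shows a model learning a mapping from labeled historical data and then being checked on examples it has not seen.

# Minimal supervised learning sketch (illustrative only).
# Assumes a CSV of historical transactions, each with a human-assigned label.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("transactions.csv")            # hypothetical file
X = df[["amount", "days_to_payment"]]           # hypothetical input features
y = df["was_flagged"]                           # hypothetical label (the 'correct output')

# Hold back some labeled examples to check how well the model generalizes
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(random_state=42)
model.fit(X_train, y_train)                     # learn the input-to-label mapping

print("Accuracy on unseen data:", accuracy_score(y_test, model.predict(X_test)))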

Unsupervised learning

A type of machine learning where the model is trained on data that does not have labeled outputs. The algorithms involved try to find hidden patterns and structures within the data - useful for fraud detection, for example, since it can be used to detect anomalies in transaction patterns.
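As a rough illustration of that fraud-detection example, here is a short Python sketch using scikit-learn's IsolationForest, which flags transactions that look unlike the rest without needing any labels; the file and column names are hypothetical.

# Minimal unsupervised anomaly detection sketch (illustrative only).
# No labels are provided; the model simply looks for transactions that
# deviate from the overall pattern in the data.
import pandas as pd
from sklearn.ensemble import IsolationForest

df = pd.read_csv("transactions.csv")                 # hypothetical file
X = df[["amount", "hour_of_day", "merchant_id"]]     # hypothetical features

model = IsolationForest(contamination=0.01, random_state=42)  # assume ~1% anomalies
df["anomaly"] = model.fit_predict(X)                 # -1 = anomaly, 1 = normal

print(df[df["anomaly"] == -1].head())                # transactions worth a closer look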

Neural networks

A machine learning technique that uses artificial neurons, or nodes, to process data in a way that mimics the human brain.

Deep learning

A subset of machine learning that uses neural networks with many layers (hence "deep"). It’s particularly effective in tasks like image and speech recognition and natural language processing.

Generative AI

Enables users to create new content from various inputs, including text, images, sounds, animation, and 3D models. It differs from traditional AI in that it does not merely analyze existing data, but produces new content in multiple forms of media, including text, code, images, audio, and video.

Large Language Models (LLMs)

Use deep learning to understand and generate language. LLMs are trained on vast amounts of text and can have hundreds of billions of parameters, learning statistical relationships between words through a combination of self-supervised and semi-supervised training. Generative Pre-trained Transformer (GPT) is a family of LLMs created by OpenAI; other examples include Gemini by Google and Claude by Anthropic.
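For a hands-on feel, here is a tiny, hedged sketch in Python using the open-source Hugging Face transformers library with a small public model (not one of the commercial LLMs named above); the prompt is invented for illustration.

# Minimal text-generation sketch with a small open-source model (illustrative only).
# Large commercial LLMs work on the same principle at a far greater scale.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")   # small public model
prompt = "The main drivers of next quarter's revenue forecast are"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)

print(result[0]["generated_text"])   # the model continues the prompt token by token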

Robotic process automation

Used to automate repetitive tasks traditionally performed by humans. RPA ‘bots’ interact with digital systems and applications in a way that mimics how a human would operate them. RPA helps organizations improve efficiency, reduce human errors, and free up employees to focus on more strategic work.

Predictive analytics

Uses historical data, statistical algorithms, and machine learning techniques to predict future outcomes or trends. It involves analyzing past data to identify patterns and relationships, which are then used to make forecasts.
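As a simple, hedged illustration, the Python sketch below fits a linear trend to a hypothetical monthly revenue history and projects it three months ahead; real FP&A forecasts would use richer models and more drivers.

# Minimal predictive analytics sketch: learn a trend from history, forecast ahead.
# Illustrative only; the revenue figures are invented.
import numpy as np
from sklearn.linear_model import LinearRegression

revenue = np.array([820, 835, 860, 870, 905, 920, 950, 965, 990, 1010, 1040, 1065])  # monthly revenue, $k
months = np.arange(len(revenue)).reshape(-1, 1)     # month index 0..11

model = LinearRegression().fit(months, revenue)     # fit the historical trend

future = np.arange(len(revenue), len(revenue) + 3).reshape(-1, 1)   # next 3 months
print("Forecast ($k):", model.predict(future).round(1))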

There are plenty more terms than we can list here, which is why we recommend organizing company-wide training sessions: organizations that take AI seriously need to understand it properly. They know the relative strengths and weaknesses of LLMs, for example, so they know how best to deploy them and which organizational behaviors are required to work effectively alongside them.

To make sure that’s the case, someone at the organization needs to be responsible for driving that understanding, as well as for the overall AI strategy of the business - which brings us to the next trait.

02. They’re trusting their Head of FP&A to drive AI strategy

According to Gartner, entrusting AI efforts to the Head of FP&A or Finance Analytics increases the likelihood of success by 50%. When efforts are led by accounting or controlling, organizations are twice as likely to fail. A few things make these roles a good fit.

First, they have a strong analytical background. FP&A requires a deep understanding of financial data, forecasting, and strategic planning. The same traits that make someone good at FP&A make them good at spotting where and how AI can be put to the best use.

Second, they’re connected to the business at a deep, strategic level. FP&A sits at the intersection of various business units (sales, marketing, operations), so it has a clear view of what’s going on in the organization as a whole.

And finally, they have access to, and authority over, the cross-functional data required to train and run AI models. Data management is already a critical part of the FP&A function, and it will be a valuable skill to possess in the age of AI.

03. They understand the AI maturity framework

Understanding where you are and where you want to get to is essential. Everyone has lofty goals, but the organizations getting the most value today know exactly where they sit on the AI maturity framework:

Exploratory stage

AI initiatives are scattered and not strategically aligned with business goals. There may be small-scale AI experiments, often driven by individual teams or departments.
Experimentation with AI solutions happens in isolated use cases, but there’s little cross-functional collaboration and no coherent AI strategy.
Challenges here may include a lack of leadership support, an insufficient understanding of AI’s potential value, and limited data capabilities.

Intermediate stage

At this stage, organizations begin formalizing AI goals, aligning them with business objectives, and engaging in more significant AI projects.
There is an effort to build foundational AI skills and implement technology within the organization.
Challenges at this stage often revolve around change management, as well as governance - managing legal and IT concerns.

Advanced stage

AI is increasingly embedded into core business processes and decision-making frameworks. The organization has established a cross-functional AI team or center of excellence (CoE) to lead and manage bespoke AI projects.
Advanced organizations are scaling AI solutions across different business functions, leveraging advanced analytics, machine learning, and automation to improve efficiencies and outcomes.
Challenges at this stage include ensuring integration across systems, managing AI governance, and addressing risks such as data privacy and algorithmic bias.

Extremely advanced stage

AI is fully integrated across the enterprise, with AI-driven decision-making embedded into day-to-day operations. AI technologies are continually improved, and there’s a culture of data-driven innovation.
Organizations at this stage have established frameworks for ethical AI governance, transparency, and accountability.
Challenges at this stage may include maintaining balance between automation and human oversight and ensuring ongoing ethical considerations.

Taking a realistic view, there are very, very few organizations at the advanced level or above. Establishing a cross-functional team and hiring data scientists and AI talent is an incredibly expensive investment, and one that should only come after organizations feel comfortable at the intermediate stage.

For most organizations, buying AI is by far the better choice right now.

04. They know how to identify AI technology that’s worth their time

As we’ve just explored, most organizations will find that the most efficient way to derive value from AI right now is to buy it, rather than build it themselves. And although there are ‘pure AI’ solutions like ChatGPT that represent an entirely new product category, the biggest gains are to be made with AI implementations that work with the software businesses already use every day: your ERP, your CRM, your planning platform…

But the market is awash with solutions, and not all are created equal. Retrofitting AI onto an existing solution takes a lot of work to produce something worthwhile. Buyers should keep a few points in mind when evaluating the potential impact of an AI-enabled solution:

Properly integrated

A bolted-on experience quickly gives itself away. Many existing pieces of technology have hastily developed AI tacked onto an existing product. But if said product doesn’t have a particularly modern architecture, the development process can be incredibly difficult.

What that translates to is an ‘AI experience’ that’s only able to operate in a walled garden. An AI assistant needs to understand how to navigate the platform itself if it’s to help users - it must understand the correct levers to pull within the product.

Trained on industry/use-case-specific data

Large language models are powerful tools, but they are not magic. They are good at understanding user intent, interacting with users, categorizing text, and writing summaries. Robust data analysis is a skill they aren’t as good at (for now) - that’s why, when developing Pigment AI, we focused our efforts on creating proprietary models that bring value to this specific task.

To properly leverage LLMs on the tasks detailed above, we had to provide the context they needed to interpret the nuance of user instructions and requests, and to match them with the correct levers to pull within the platform. We also spent a great deal of time providing Pigment AI with business planning knowledge and best practices, and worked closely with customers to ensure it behaves as it should for the tasks they complete every day.

Security is taken seriously

AI can present serious and unique challenges from a security standpoint. It’s essential that a vendor takes security seriously, doesn’t train on your data without permission, and ensures that any AI assistant follows the access controls already in place. This keeps information from getting into the wrong hands.

05. They’re employing "capability diffusion" as a guiding principle

It’s no secret that FP&A teams are always stretched. The scope of the role is ever-expanding, with growing expectations for hands-on business partnering being the leading reason.

FP&A teams’ response so far has been to automate routine tasks to make more room for internal consulting. But this model can prove hard to scale: it often demands hard tradeoffs on which parts of the business will be supported, and to what extent.

Research has found a method that’s 3-4x more effective at making the FP&A delivery model sustainable, which Gartner refers to as ‘capability diffusion’. It hinges on making decision makers more autonomous by leveraging tools like machine learning and generative AI, so that technology becomes their first port of call. In other words: ask AI first.

“New developments in the finance technology vendor market, and the prevalence of tools with embedded capabilities such as graph analytics, machine learning and generative AI make it easier than ever before for FP&A to transfer expertise to decision makers for complex decisions.”

Randeep Rathindran, Distinguished Vice President, Research, Gartner Finance practice

With capability diffusion, finance business partners take on more of a tutorship role, focusing on helping decision makers learn to use the tools themselves. This encourages self-sufficiency and is a far more efficient way of working.

06. Pigment AI

It’s this ‘ask AI first’ principle that we designed Pigment AI to address. It will enable the FP&A team to work faster, but it will also democratize the planning process and access to data.

This is an easy way of achieving capability diffusion, helping:

Analysts/modelers: Conduct the work you do every day faster

Find and analyze data faster than before, onboard and ramp up new users more quickly, and lean on Pigment AI’s planning best practices to get answers sooner.

C-Suite and the wider team: Self-serve insights and learn to navigate the platform

Quickly find and interrogate data, and focus on the details that matter - whether you’re a CRO looking to quickly understand performance vs quota across territories, or an HR lead looking to understand headcount trends. Ask questions to learn how to navigate the platform and access the data and insights within it.

“Pigment AI, even in the early days, shows great potential to foster Uberall's ‘self-FP&A’ vision by providing an easy to use interface and powerful query options, especially for non-finance users. Pigment AI will minimize the time FP&A teams spend answering simple questions from stakeholders and basic modeling, which enable them to focus on value-added analysis.”

Eda Karael, FP&A Director, Uberall

07. Next steps

It’s an exciting time to work in finance, with a number of opportunities opening up thanks to developments in the field of AI - your degree of success will depend on the approach you adopt today. Learn more about Pigment AI here.
