This piece was originally published at Founders Circle Capital.
July 25, 2023
As generative AI continues to have its moment, finance leaders are paying close attention to the implications within the finance function. A poll of executives in The Circle revealed that 57% anticipate their finance team’s use of generative AI to increase in the next 12 months. Yet many of those leaders are still unclear on the practical use cases for generative AI or have lingering concerns about data accuracy, privacy, and the need for internal controls and policies.
To help demystify this hot topic, we recently invited over 80 finance leaders in The Circle to discuss how they’re using generative AI today and reveal their biggest questions about the benefits and risks in conversation with: Boris Segalis (Partner in Goodwin’s Data, Privacy & Cybersecurity practice), Kat Orekhova (Co-Founder & CEO of Vareto), and Alexander Hagerup (Co-Founder & CEO of Vic.ai).
Here are five critical questions about generative AI on finance leaders’ minds:
#1 What can I do with Generative AI today?
Most leaders were curious to hear practical use cases for generative AI within finance. Kat offered a high-level description of how leaders can think about using generative AI today:
“In the short-term, think of generative AI as an order-taker and a co-pilot. For example, you can ask it to build reports and interpret data analyses. As a co-pilot, AI can also proactively give you data insights and provide helpful suggestions as you build financial models. Everything that your finance team could use software to build, you can now use natural language to ask AI to do it.” – Kat Orekhova, Co-founder & CEO of Vareto
Vareto offers software that helps finance teams automate and streamline much of the FP&A workflow including reporting, planning, and forecasting. While humans are still required to make the final decisions, software like Vareto helps eliminate manual work such as pulling data from different sources, calculating key metrics, and rolling up model inputs into forecast summaries. Similarly, Vic.ai automates accounting tasks such as invoice processing and approval.
Alexander and Kat agreed that the possibilities for automating finance workflows using generative AI seem endless; however, it will be a few years before companies start to experience meaningful revenue impact from generative AI.
#2 Will it replace human decision-making?
Echoing what our CHRO community heard about generative AI, the consensus was that it will not replace the need for human decisions. “People ultimately decide how many people to hire, how much budget to spend – AI can give suggestions that get you 80% of the way there, but humans will still need to make the decisions,” said Kat. Boris added that large language models (“LLMs”) like ChatGPT sometimes fabricate information – a phenomenon known as “AI hallucination” – and from a risk and legal standpoint, there will always be a need for human fact-checking when it comes to impactful decisions.
#3 Should I worry about data privacy concerns?
Data privacy is an evolving issue with generative AI. Italy’s data protection authority famously banned ChatGPT over data privacy concerns before allowing it to return, but otherwise, there have been few widely reported cases of sensitive information being leaked publicly from an LLM.
Still, Boris says that leaders considering using third-party AI tools should remain cautious when sharing proprietary data.
“Not every startup will be buttoned up on confidentiality and security. Consider using a synthetic data set for the pilot. If you choose to proceed with the vendor, you can start to perform due diligence on the types of controls and security measures they have in place to ensure it aligns with what your other vendors have.” – Boris Segalis (Partner in Goodwin’s Data, Privacy & Cybersecurity practice)
Boris pointed out that many companies train LLMs on public datasets, which may be subject to privacy protections in some regions, like Europe. Kat added that companies should consider how information is accessed and shared internally. For example, if you’re a publicly traded company, employees accessing privileged, confidential information could expose the business to insider-trading risk.
#4 How should I think about the ROI of building vs. buying?
Building an LLM internally will likely take more time and effort than buying a tool off the shelf. The potential benefit of building in-house is the ability to train the LLM on the company’s existing dataset. However, the ROI of an internal tool may be uncertain.
If you’re considering an outside vendor, it’s perfectly reasonable to ask them to help you quantify the ROI, said Alexander:
“I would always challenge the vendor to help you understand the ROI based on the data you give them. If they cannot fully calculate the ROI, they should at least provide an ROI based on other companies’ performance data within their system.” – Alexander Hagerup (Co-Founder & CEO of Vic.ai)
#5 What policies or controls does my organization need?
Even if your finance team hasn’t implemented a generative AI tool yet, chances are your employees are already using one at home or at work. Providing clear guardrails and recommendations for use in a generative AI policy may help mitigate liability and business risk. According to a poll, fewer than half of the companies represented during the discussion have an AI policy or are in the process of creating one.
“It’s critical to understand that there’s a lot of blind spots in implementing AI,” said Boris. He likened generative AI’s ability to scan and parse emails and documents to being able to find a needle in a haystack. “Companies may need to change their approach to data permissions and who has access to what data when implementing an LLM,” he added.
Kat suggested applying that same level of scrutiny to third-party vendors. “When selecting a vendor with AI features, ask them about their approach to internal controls and permissions,” she said. “If they haven’t thought about it or don’t have a good answer, that’s a red flag.”
The takeaway
The future of generative AI in finance is truly anyone’s guess, though Alexander did offer a bold prediction for the accounting world: “I think over the next three to five years, if there’s 30 to 50 different accounting tasks and processes that have to be done, almost all of them will probably be performed by AI.”