Incorporating AI into Revenue Forecasting
- By AFP Staff
- Published: 11/1/2023
Set a model for human-machine interactions.
The AFP FP&A Case Study series is designed to help you build up key FP&A capabilities and skills by sharing examples of how leading practitioners have tackled challenges in their work and the lessons learned.
Presented at an AFP webinar, this case study contains elements that are anonymized to maintain privacy and encourage open discussion.
Insight: The forecast can be built up in layers like a cake; use AI as the base that aggregates history and data, then layer human adjustments and data overrides on top as the company matures with the technology.
Company Size: Large
Industry: Automotive
Geography: North America
FP&A Maturity Model: Analytics, Plan and Forecast Development
Analytics: Predictive and prescriptive analysis facilitates the exploration and explanation of data through a data science approach, computation and visualization.
Plan and forecast development: Strategic alignment. Connect long-term strategy to current and anticipated operational activities, financial performance, and risk framework.
Background: General Information About the Company
The company in this case study is an auto parts retailer with annual revenue of $5 billion. Finance owns revenue forecasting, but the team wanted to supplement the existing process in order to address human bias. They aimed to achieve this by incorporating external predictors such as GDP, gas prices, miles driven and auto sales into the revenue forecast.
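The case study does not specify the modeling technique used. Purely as a hedged illustration, the sketch below shows one common way external predictors such as GDP growth, gas prices and miles driven can be folded into a time-series revenue model as exogenous regressors; the synthetic data, variable names and model settings are assumptions for demonstration, not the company's actual setup.

```python
# Illustrative sketch only: external predictors (GDP growth, gas prices,
# miles driven) enter a time-series revenue model as exogenous regressors.
# Synthetic data, variable names and model order are assumptions.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(42)
idx = pd.period_range("2018-01", periods=60, freq="M").to_timestamp()

# Assumed external predictors (in practice sourced from public economic data).
exog = pd.DataFrame({
    "gdp_growth":   rng.normal(0.5, 0.2, 60),
    "gas_price":    3.0 + 0.05 * rng.normal(0, 0.3, 60).cumsum(),
    "miles_driven": 260 + rng.normal(0, 5, 60),
}, index=idx)

# Synthetic monthly revenue ($M) loosely driven by the predictors plus noise.
revenue = pd.Series(
    400
    + 30 * exog["gdp_growth"]
    - 10 * (exog["gas_price"] - 3.0)
    + 0.5 * exog["miles_driven"]
    + rng.normal(0, 5, 60),
    index=idx, name="revenue",
)

# Fit on the first 48 months, then forecast the held-out final year.
model = SARIMAX(revenue.iloc[:48], exog=exog.iloc[:48],
                order=(1, 1, 1), seasonal_order=(0, 1, 1, 12))
fit = model.fit(disp=False)
forecast = np.asarray(fit.forecast(steps=12, exog=exog.iloc[48:]))

actual = revenue.iloc[48:].values
mape = np.mean(np.abs((actual - forecast) / actual))
print(f"Out-of-sample accuracy (1 - MAPE): {1 - mape:.1%}")
```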
Challenge: The Work or Difficulty FP&A Had to Address
The challenges they faced included a skills gap, existing bias and low accuracy. By addressing these challenges, the team aimed to produce a consistent, predictable and explainable forecast.
Approach and Outcome: How FP&A Addressed the Challenge and What Came of the Efforts
They started by automating elements of the existing process and then incorporated external factors and predictors. Finally, they used outside consultants to teach the internal team, thus addressing the skills gap. Market conditions, gas prices and more were incorporated into the revenue forecast, resulting in a forecast that surpassed 97% accuracy at the category level, a 30% increase over the previous model.
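The article does not say how accuracy was defined. A common convention in FP&A is 1 minus the weighted absolute percentage error, rolled up by product category; the sketch below uses that assumed definition with made-up figures, purely to show how a category-level accuracy figure in the 97% range might be computed.

```python
# Hedged illustration: one common definition of "forecast accuracy at the
# category level" is 1 minus the weighted absolute percentage error (WAPE).
# Category names and figures are made up for demonstration only.
import pandas as pd

results = pd.DataFrame({
    "category": ["brakes", "batteries", "filters", "accessories"],
    "actual":   [120.0, 95.0, 60.0, 40.0],   # $M
    "forecast": [117.5, 97.0, 61.5, 38.5],   # $M
})

abs_error = (results["forecast"] - results["actual"]).abs()
accuracy = 1 - abs_error.sum() / results["actual"].sum()
print(f"Category-level forecast accuracy: {accuracy:.1%}")  # ~97.6%
```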
Discussion: Q&A with Webinar Speakers
Do you set buffers or recommend that companies set buffers within their AI that allow that human override? Or do you lock it down?
“Our approach is to have a baseline forecast that's pure AI,” said Justin Croft, Vice President of Data Science & Solution Architecture for QueBIT Consulting. “We are transparent, making it clear that this is the AI first draft, and then give people the opportunity to overlay adjustments.
“One of the shortcomings of AI is that it's based on historical data, so it doesn't know about recent business decisions. Things that have just happened or are about to happen, such as the introduction of new products, geographic expansions, pricing or promotional changes, all have to be added in and layered on manually. If we think about it in terms of layers, the base layer of the cake is AI, and then everything gets stacked on top of that by humans.”
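Croft's layer-cake framing can be made concrete in data terms: keep the AI baseline and each human overlay as separate, named layers that sum to the published number, so every override stays visible. The sketch below is only an assumed illustration of that structure, not QueBIT's or the company's actual implementation.

```python
# Sketch of the "layer cake": the AI baseline stays untouched, and every human
# adjustment (new products, pricing, promotions) is a separate, named overlay.
# The published forecast is the sum of the layers, keeping each override
# transparent and auditable. Column names and figures are illustrative.
import pandas as pd

months = pd.period_range("2024-01", periods=4, freq="M")
layers = pd.DataFrame({
    "ai_baseline":      [410.0, 415.0, 420.0, 425.0],  # $M, model output
    "new_product_lift": [  0.0,   5.0,   8.0,   8.0],  # $M, sales team input
    "promo_adjustment": [  3.0,   0.0,  -2.0,   0.0],  # $M, marketing input
}, index=months)

layers["published_forecast"] = layers.sum(axis=1)
print(layers)
```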
Did you find that new skills gaps evolved, or is that something you can plan for so that you're training your team while implementing AI?
“There are lots of different kinds of AI, and what works for finance may not work for marketing, and may not work for operations,” said Croft. “What we have found is that within the office of finance, the skills and technology are very accessible, and it doesn't take long to get people up to speed.”
Are different models of human interaction appropriate for different opportunities? Are there different models that you like, or that you've seen for different levels of automation?
“It depends on the organizational maturity and the extent to which they're ready to adopt AI,” said Croft. “Some companies are going to be more comfortable having decisions fully automated by AI, whereas having a human's job augmented by AI is a smaller step. There’s a continuum of trust and adoption, and it depends on how advanced an organization is.”
“I always ask the question, why are you overriding, and what is your data set or your data purpose for overriding?” said Amy Johnson, FPAC, former Senior Finance Manager at Amazon. “After that, we look at it, we agree, but we also have a multi-departmental review of the final product to make sure everyone agrees, and we all sign off and say this final product makes sense and we can execute it.”
How do you update models and keep them constantly tuned to changes in the world?
“We use that human buffer to bring our data to the table,” said Johnson. “For example, we had to adjust our attendance model for the Taylor Swift concert here in southern California, because it did not take that concert into account. Similarly, I have a warehouse very close to Coachella, so whenever they have a large concert over the full weekend, or their three- or four-day concerts, we have to adjust for staff being out.
“We keep fine-tuning it, working with our PhDs. We have feedback loops, processes and tickets we can put in that say: this is happening, can you go look at it? We provide the data so that they don't have to recreate it or try to find something they may not be able to find. But it's a constant iteration. You'll get one thing done and think you've got it, and then you'll build the next piece on, and you may have to go back and revisit. It is slowly becoming more and more robust.”
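The mechanics of that feedback loop are not spelled out in the discussion. As an assumed sketch, known local events can be kept in a small calendar table and applied to the model's output as explicit, logged adjustments, so the reason for each override travels with the number; the dates, rates and event names below are illustrative only.

```python
# Assumed sketch of an event-override feedback loop: known events live in a
# small calendar table, and each adjustment to the model's attendance forecast
# is applied and logged with its reason rather than edited in silently.
# All names and figures are illustrative.
import pandas as pd

attendance_forecast = pd.Series(
    {"2024-04-12": 0.94, "2024-04-13": 0.94, "2024-04-14": 0.93},  # model output
    name="expected_attendance_rate",
)

event_overrides = pd.DataFrame([
    {"date": "2024-04-13", "adjustment": -0.06, "reason": "Coachella weekend 1"},
    {"date": "2024-04-14", "adjustment": -0.05, "reason": "Coachella weekend 1"},
])

adjusted = attendance_forecast.copy()
for _, row in event_overrides.iterrows():
    adjusted[row["date"]] += row["adjustment"]   # apply and keep the audit trail

print(pd.concat([attendance_forecast, adjusted.rename("adjusted_rate")], axis=1))
```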
Copyright © 2024 Association for Financial Professionals, Inc.
All rights reserved.