The Accuracy Trap: Accuracy is how we score forecasts, but decisions drive the business

March 1, 2026 · By AshPoint Solutions · 7 min read

In most “AI in FP&A” conversations, everything still rests on a single question: can it increase plan and forecast accuracy? It’s the wrong place to land. Not because accuracy doesn’t matter, but because optimising for it tends to crowd out the thing finance teams actually need — help making better decisions in a changing market.

A forecast only gets scored once the window to act has closed. Yes, historical accuracy tells you your model is capturing real patterns. But it’s a calibration check, not a guarantee that those patterns will hold when conditions shift, which is exactly when the forecast matters most.

Before going further: sophisticated forecasting tools don’t help much if your data foundation is broken. This is a more common root cause than most teams want to acknowledge. Sort out definitions, lineage, and reconciliation first. Assuming you have a workable foundation, here’s what I think we’re getting wrong.

Where this started for me

A few years ago, I co-founded a company that built ML-driven forecasting tools. We’d integrated macroeconomic indicators, built sensitivity analyses, and could generate many scenarios quickly. The technology held up well in practice.

But I couldn’t find product-market fit. After six months pitching to mid-market CFOs, I kept running into the same conversation. Someone would ask what our accuracy rate was. When I said 89%, the follow-up was immediate: a competitor had claimed 94%. That was usually the end of the discussion.

The frustrating part was that our system was faster to train on new data, and it used prescriptive modelling to show you the drivers behind a scenario, not just the prediction itself. None of that registered. I don’t blame those CFOs. If you’re comparing two unfamiliar products and one gives you a higher number on the one metric you know how to evaluate, that’s a rational choice. The problem wasn’t their reasoning. It was that the evaluation framework never got past accuracy to where the real differences lived.

When I started looking at how major EPM vendors were marketing AI, I saw the same logic at work. Better accuracy, lower error rates, faster turnaround — usually demonstrated in controlled conditions where the definition of “accurate” doesn’t shift and the underlying business behaves predictably.

The problem with treating accuracy as the goal

You can only confirm a forecast was accurate after the period closes. By that point, the opportunities to act on it have mostly passed. Accuracy is useful for calibrating models, less useful for running a business. There's also the question of what "accurate" even means. Researchers have documented at length how common accuracy measures mislead on real business data: percentage measures like MAPE, for instance, punish the same dollar error very differently depending on the size of the base, and break down entirely when actuals hit zero.[1]
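
To make that concrete, here's a minimal sketch of one of those failure modes. The two product lines and their numbers are invented for illustration:

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return np.mean(np.abs(actual - forecast) / np.abs(actual)) * 100

# The same $10K absolute miss on two product lines with different bases:
print(mape(actual=[100_000], forecast=[110_000]))  # 10.0  -- looks fine
print(mape(actual=[5_000],   forecast=[15_000]))   # 200.0 -- looks alarming
# And if any actual is zero, the measure divides by zero and is undefined.
```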

Finance adds a wrinkle that most accuracy metrics don’t account for: the cost of being wrong in one direction is often very different from being wrong in the other. A $200K beat gets celebrated. A $200K miss gets escalated. Optimising a model to minimise absolute error treats those two outcomes as equivalent. There are technical fixes for this — asymmetric loss functions exist — but most finance teams aren’t having that conversation at all. They’re reporting MAPE on a scorecard and moving on, which means the forecasts look precise without reflecting how executives actually weigh risk.
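
For illustration, an asymmetric loss might look like the sketch below. The 3:1 penalty ratio is an assumption standing in for whatever ratio actually reflects how a given leadership team weighs a miss against a beat:

```python
import numpy as np

def asymmetric_error(actual, forecast, miss_penalty=3.0):
    """Average absolute error where a shortfall (actual below forecast)
    costs miss_penalty times as much as a beat of the same size."""
    error = np.asarray(actual, dtype=float) - np.asarray(forecast, dtype=float)
    weights = np.where(error < 0, miss_penalty, 1.0)
    return np.mean(weights * np.abs(error))

# A $200K beat and a $200K miss stop being interchangeable:
print(asymmetric_error(actual=[2_200_000], forecast=[2_000_000]))  # 200000.0
print(asymmetric_error(actual=[1_800_000], forecast=[2_000_000]))  # 600000.0
```

Quantile (pinball) loss is the standard formalisation of this idea in the forecasting literature, if you want something a model can be trained against directly.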

Underlying all of this is the fact that the business doesn’t pause while you’re executing against a forecast. Competitors shift their pricing. A customer you were counting on slows their buying. A deal that looked uncertain closes early. The question finance should be asking isn’t whether the original number was right. It’s what to do now, given what’s become clearer.

What useful planning output looks like

There’s a meaningful difference between a system telling you “you’ll hit $2M next quarter” and one telling you there’s a 75% probability of landing between $1.8M and $2.3M, the biggest factors are pricing and deal velocity, and that pushing toward the high end would require specific trade-offs in headcount and discretionary spend.

The first output gives leadership something to accept or reject. The second gives them something to work with. They can make a call, set a watch-point, and understand what they’re committing to.
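
A minimal sketch of where that second kind of statement comes from, assuming a simulation-based model. The distribution and its parameters below are placeholders, not anyone's real forecast:

```python
import numpy as np

rng = np.random.default_rng(7)
# Placeholder: 10,000 simulated next-quarter revenue outcomes in $M,
# standing in for draws from a real model's predictive distribution.
revenue = rng.normal(loc=2.05, scale=0.17, size=10_000)

low, high = np.percentile(revenue, [12.5, 87.5])  # central 75% interval
print(f"75% probability of landing between ${low:.1f}M and ${high:.1f}M")
print(f"Chance of clearing $2M: {np.mean(revenue >= 2.0):.0%}")
```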

A simple illustration: two retailers each receive a demand forecast for winter stock produced by the same model with the same accuracy. The first retailer uses it to validate last year’s approach and executes accordingly. The second receives the forecast with three scenarios showing how demand shifts if winter arrives early, arrives late, or tracks to the historical average — along with procurement decision points. They commit early on the categories where all three scenarios align, hold off on the volatile ones, and set a review point to decide the rest. Both forecasts had the same accuracy. The decision architecture is what converted it into something useful.
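
One way the second retailer's rule could be written down. The categories, demand figures, and the 15% agreement threshold are all invented for illustration:

```python
# Projected winter demand (units) per category under three scenarios.
scenarios = {
    "coats":    {"early": 900, "late": 820, "average": 860},
    "boots":    {"early": 700, "late": 310, "average": 520},
    "knitwear": {"early": 560, "late": 510, "average": 540},
}

for category, demand in scenarios.items():
    lo, hi = min(demand.values()), max(demand.values())
    spread = (hi - lo) / hi
    if spread <= 0.15:
        # Scenarios broadly agree: commit procurement early.
        print(f"{category}: commit ~{lo} units now (spread {spread:.0%})")
    else:
        # Scenarios diverge: hold and decide at the review point.
        print(f"{category}: hold for early sell-through data (spread {spread:.0%})")
```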

The executive override problem

Something I see repeatedly in EPM implementations: a team spends months building a detailed bottom-up plan, getting the driver logic right, building in sensible assumptions. Then the executive team overrides the output to align with a strategic target.

This tends to frustrate the finance team, but I think it’s diagnostic. It reveals something about how organisations make decisions. Leadership isn’t extrapolating from historical patterns. They’re steering toward a destination, and the plan needs to be the bridge between that destination and the operational reality of how to get there.

The missed opportunity is that most systems treat the override as the end of the conversation. What a well-designed planning process should do instead is work backwards from the override: given this target, which assumptions must hold, which levers are controllable, and which indicators would tell us early if we’re off track. That’s the part that makes the number useful rather than just authoritative.
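
As a sketch of what working backwards can look like in the simplest case; every figure below is hypothetical:

```python
# Back-solving an executive revenue target into testable assumptions.
target_revenue     = 2_300_000  # the overridden number ($)
avg_deal_size      = 46_000     # assumption drawn from pipeline history ($)
qualified_pipeline = 120        # deals expected to reach a decision

required_deals   = target_revenue / avg_deal_size       # 50 deals
implied_win_rate = required_deals / qualified_pipeline  # ~42%
historical_win_rate = 0.33

print(f"Target implies {required_deals:.0f} wins, a {implied_win_rate:.0%} "
      f"win rate against {historical_win_rate:.0%} historically.")
# The gap is the real conversation: which lever closes it (pipeline volume,
# deal size, win rate), and which weekly indicator would flag trouble early?
```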

Accounting bodies have described this as the connective tissue between forecasts and operational action[2] — the idea that a forecast should trace back to specific decisions that can be made, not just a number that gets reported.

Where the implementation work lives

Most mainstream EPM platforms won’t deliver this kind of decision support straight out of the box. They’re strong systems with real capabilities, but they provide the materials rather than the finished structure: driver models, scenario infrastructure, workflow, reporting.

That’s where I spend most of my time in implementations. Not configuring the platform, but working out which models are worth building, which assumptions are genuinely controllable, what the leading indicators are for the things that matter, and how to present options in a way that executive teams can act on. The platform is infrastructure. The decision layer is something you have to build deliberately.

What this means as AI gets more capable

McKinsey has reported AI accuracy improvements in demand forecasting in the range of 10–20%.[3] Those are real gains. But as that capability becomes more widely available, it stops being a differentiator on its own.

The organisations that get the most out of better forecasting tools won’t necessarily be the ones with the best models. They’ll be the ones who’ve built a planning process that connects what the model surfaces to how decisions get made. That’s a harder problem than improving the model. It requires someone who understands the finance function well enough to know where the real uncertainty sits, what leadership needs to hear, and how to design a system that serves those needs rather than just reports against them.

In the next piece, I’ll look at why so many organisations have the right tools for this already and still end up with scenario plans that don’t change anyone’s decisions.


Robyn Halbot, MBA, BSc, PMI-ACP is Principal at AshPoint Solutions, with fifteen years of EPM implementation experience. She previously co-founded an ML-based forecasting startup and is currently building AI applications that connect financial planning to strategic objectives.

Whether you’re evaluating EPM platforms, rethinking how your current build supports decision-making, or curious about where AI fits in your planning process, I’m always happy to talk through it. Let’s connect.


References

[1] Hyndman, R. J., & Koehler, A. B. (2006). "Another look at measures of forecast accuracy." International Journal of Forecasting, 22(4), 679–688. https://www.sciencedirect.com/science/article/abs/pii/S0169207006000239

[2] ICAEW. “Are you using the right tool for the job? Financial forecasting and scenario planning.” https://www.icaew.com/technical/business/financial-management/financial-modelling-and-forecasting/are-you-using-the-right-tool-for-the-job

[3] McKinsey & Company. “Most of AI’s business uses will be in two areas.” https://www.mckinsey.com/capabilities/quantumblack/our-insights/most-of-ais-business-uses-will-be-in-two-areas