
Inventory Forecasting for Non-Data-Scientists

Forecasting does not need a data team. This guide shows how to build a practical inventory forecast with simple methods, cleaner inputs, and accuracy checks that make sense on the floor.


Inventory forecasting for non-data-scientists sounds harder than it is. Most teams do not need a black-box model. They need a clean sales history, a repeatable method, and a way to spot when the data is lying. If an item stocked out last Friday, the spreadsheet is not missing math - it is missing context.

That is good news, because simple methods are often stronger than people expect. In Green and Armstrong's review in the Journal of Business Research, the authors examined 97 comparisons across 32 papers and found that the balance of evidence does not show added complexity improving forecast accuracy. Forecasting is still hard, but the first win usually comes from discipline, not from fancy software.

Field note

A useful forecast is not the one with the most tabs. It is the one a buyer can explain, challenge, and use before the next order deadline.

What a useful inventory forecast actually does

A forecast is an estimate of future demand over a defined window. For inventory, that window should match your buying rhythm: supplier lead time plus the time until the next review. If you order every Monday and the supplier takes 21 days, you care about the next 28 days, not a theoretical annual average.

Protect availability

A forecast gives you enough warning to reorder before your A items hit preventable stockouts.

Right-size cash

It keeps slow movers from absorbing working capital just because someone ordered a little extra to feel safe.

Create one baseline

Sales, purchasing, and operations can argue over one number instead of defending three different gut feelings.

Forecasts are not promises. They are starting points. You still layer in supplier problems, commercial events, and business judgment. The goal is not perfection. The goal is fewer surprises.

Start with simple methods, then add complexity only when it earns it

The NIST handbook on smoothing describes averaging as the simplest way to smooth data and reduce random variation. That is exactly where most inventory teams should begin. If your history is reasonably stable, simple moving averages and smoothing methods will give you an operational baseline quickly.

Method 1: moving average for stable demand

A moving average takes the last few comparable periods and averages them. If you forecast weekly, a 4-week moving average is often enough to start. Example: if the last 4 weeks sold 92, 104, 96, and 108 units, next week's baseline forecast is (92 + 104 + 96 + 108) / 4 = 100 units.

4-week moving average

Next week's forecast = (week-1 + week-2 + week-3 + week-4) / 4. Use comparable periods: weeks with weeks, months with months.
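The worked example above can be sketched in a few lines. This is a minimal illustration, not a full tool; the function name and the window default are my own choices.

```python
# Minimal sketch of the 4-week moving average from the example above.
# The sales figures match the worked example; the window size is adjustable.

def moving_average_forecast(history, window=4):
    """Forecast the next period as the mean of the last `window` periods."""
    recent = history[-window:]
    return sum(recent) / len(recent)

weekly_sales = [92, 104, 96, 108]  # last 4 weeks, oldest first
print(moving_average_forecast(weekly_sales))  # 100.0
```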

Method 2: exponential smoothing when recent history matters more

If recent sales matter more than older history, step up to simple exponential smoothing. In Forecasting: Principles and Practice, Hyndman and Athanasopoulos show it as a weighted average of the most recent actual and the previous forecast. In plain English: yesterday matters more than last quarter, but last quarter is not ignored. That makes smoothing useful when demand is drifting but not strongly seasonal.
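The weighted-average idea can be written out directly. A sketch, assuming a smoothing weight (alpha) of 0.3 and illustrative demand numbers; in practice you would tune alpha against your own history.

```python
# Simple exponential smoothing: each new forecast is a weighted average of
# the latest actual and the previous forecast. Alpha near 1 reacts fast;
# alpha near 0 changes slowly. Demand values here are illustrative.

def exponential_smoothing(history, alpha=0.3):
    """Return the next-period forecast, seeding with the first actual."""
    forecast = history[0]
    for actual in history[1:]:
        forecast = alpha * actual + (1 - alpha) * forecast
    return forecast

demand = [100, 96, 110, 104, 120]
print(round(exponential_smoothing(demand, alpha=0.3), 1))  # 107.9
```

Note how the final forecast sits between the long-run level and the most recent week: recent history pulls it up, but older periods are not ignored.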

Method 3: add a seasonal factor when the calendar really matters

If demand rises and falls on a repeating calendar pattern - December gifting, summer weekends, back-to-school, month-end ordering - separate the seasonal effect from the base level. In Forecasting: Principles and Practice, the practical move is to forecast the seasonally adjusted series and then add the seasonal pattern back. That is a technical description of a simple idea: this December should look more like last December than like last May.

Seasonality is worth the effort. In the same Green and Armstrong review, seasonal adjustment reduced MAPE from 23.0 to 17.7 percent on 68 monthly series in the original M-Competition. That is a useful reminder that simple calendar structure can beat a lot of extra math.
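One common way to capture a repeating calendar lift is a seasonal index: average the same calendar period across years, divide by the overall average, and multiply the base forecast by that factor. A sketch with made-up December numbers:

```python
# Base-times-seasonal-index sketch. The December figures and the overall
# monthly average are illustrative, not from the article's data.

dec_sales = [300, 330]   # December units, last two years
all_months_avg = 210     # average monthly units over the same years

# December typically sells 1.5x an average month
seasonal_index = (sum(dec_sales) / len(dec_sales)) / all_months_avg  # 1.5

base_forecast = 220      # deseasonalized forecast for next December
print(base_forecast * seasonal_index)  # 330.0
```

This is the "this December should look like last December" idea made explicit: the base method handles the level, and the index handles the calendar.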

Stable demand

Use a 4 to 8 period moving average when the item sells regularly and the level is not drifting much.

Slow drift

Use simple exponential smoothing when demand is moving gradually and recent periods deserve more weight.

Clear seasonality

Use a base forecast plus seasonal factors when the same calendar lift repeats often enough to trust.

[Image: Simple forecasting starts with a visible pattern and a method the team can explain.]

Clean the history before you trust the math

Forecasting method matters, but input quality matters more. Returns, one-off project orders, supplier short-ships, and promotional spikes can all distort the baseline. If you feed noise into a model, you just automate bad judgment.

Stockouts are the biggest trap. In research on demand forecasting under lost-sales stock policies, the authors note that if enough stock is available, sales are an unbiased demand estimate, but in the presence of stockouts, sales understate demand and push forecasts downward. That creates the exact spiral operators hate: under-forecast, under-order, stock out, repeat.

Do not average stockouts into the baseline

When the shelf is empty, sales stop measuring demand and start measuring availability.

A simple stockout adjustment example

Say an item sold 210 units in a 30-day month, but it was only in stock for 21 days. The naive daily rate is 7 units. The stockout-adjusted rate is 10 units because 210 / 21 = 10. For replenishment planning, the second number is much closer to reality. The first number bakes the stockout into next month's forecast.
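The same adjustment in code, using the figures from the example above:

```python
# Stockout adjustment: divide by days the item was actually available,
# not calendar days. Numbers match the worked example in the text.

units_sold = 210
calendar_days = 30
in_stock_days = 21

naive_rate = units_sold / calendar_days     # 7.0 - bakes the stockout in
adjusted_rate = units_sold / in_stock_days  # 10.0 - closer to true demand
print(naive_rate, adjusted_rate)  # 7.0 10.0
```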

Clean-history rules

  • Flag stockout periods: Track days or weeks with zero availability so they are excluded or adjusted, not averaged in.
  • Separate promos from baseline: A clearance week or marketing spike should sit in an event column, not permanently inflate the base forecast.
  • Remove one-off orders: Large project buys, launch fills, and internal transfers are planning events, not ordinary demand.
  • Use inventory truth, not sales alone: If record accuracy is weak, fix counts first. Dirty stock records distort both history and buying. See the cost of inaccurate stock levels.
  • Forecast families before variants when needed: Thin history on size-color or pack variants often forecasts better at group level first, then allocates down.
[Image: When a shelf goes empty, sales history stops telling the full demand story.]

A spreadsheet workflow you can run every Monday

You can run a respectable forecast in one sheet with rows by SKU and columns for the last 12 to 24 periods, in-stock flags, event notes, forecast, actual, and error. The point is not to create a beautiful model. The point is to create a repeatable routine.

Monday forecast routine

  • Export history by week or month: Weekly is better for fast movers. Monthly is enough for slower catalog items.
  • Add two helper columns: one for in-stock status, one for event notes. Those two fields prevent a surprising number of bad forecasts.
  • Choose one base method per item class: Start with moving averages for stable items and smoothing for slowly drifting ones.
  • Apply seasonality only when it repeats: If you can point to the same calendar lift more than once, add a seasonal factor. If not, keep it simple.
  • Forecast the replenishment window: Forecast demand across supplier lead time plus the gap to the next order review.
  • Write down every override: If sales says a customer win adds 300 units next month, enter the override and the reason. Hidden overrides destroy learning.
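The routine above can be sketched for a single SKU: drop stockout weeks and event weeks, then forecast from the clean baseline. The field names and sample rows are assumptions for illustration, not a prescribed sheet layout.

```python
# One-SKU sketch of the Monday routine: the in-stock flag and event note
# filter the history before the moving average runs. All rows are made up.

weeks = [
    {"units": 95,  "in_stock": True,  "event": ""},
    {"units": 180, "in_stock": True,  "event": "promo"},  # excluded: event
    {"units": 40,  "in_stock": False, "event": ""},       # excluded: stockout
    {"units": 102, "in_stock": True,  "event": ""},
    {"units": 98,  "in_stock": True,  "event": ""},
    {"units": 105, "in_stock": True,  "event": ""},
]

clean = [w["units"] for w in weeks if w["in_stock"] and not w["event"]]
baseline = sum(clean[-4:]) / len(clean[-4:])
print(baseline)  # mean of 95, 102, 98, 105 = 100.0
```

Without the two helper columns, the promo week and the stockout week would push the baseline in opposite directions and the average would mean very little.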

Three accuracy checks normal humans can compute

You do not need a dashboard full of statistics. You need a few measures that tell you whether the forecast is systematically wrong and by how much.

Bias

Average signed error. Positive bias means you keep over-forecasting. Negative bias means you chronically under-forecast and invite stockouts.

MAE

Mean absolute error, the average miss in units. As Green and Armstrong note, MAE is a simple and useful measure for production and inventory control decisions.

WAPE

Weighted absolute percentage error. AWS Supply Chain's demand planning docs use WAPE as an aggregate accuracy metric because it shows total forecast miss relative to total actual demand.

Use MAPE carefully. In Hyndman's accuracy guide, MAPE becomes undefined when actual demand is zero and can explode when actuals are close to zero. That makes it a poor choice for slow movers, launch items, or any series with frequent zero-demand periods.

Simple scorecard

Start with bias, MAE, and WAPE. Add fancier metrics only after those three are stable and understood.
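All three scorecard measures are one-line formulas. A sketch with illustrative forecast-versus-actual numbers:

```python
# Bias, MAE, and WAPE over a short forecast history. Sample numbers are
# illustrative: a flat 100-unit forecast against four weeks of actuals.

def bias(forecasts, actuals):
    """Average signed error; positive means chronic over-forecasting."""
    return sum(f - a for f, a in zip(forecasts, actuals)) / len(actuals)

def mae(forecasts, actuals):
    """Mean absolute error: the average miss in units."""
    return sum(abs(f - a) for f, a in zip(forecasts, actuals)) / len(actuals)

def wape(forecasts, actuals):
    """Total absolute miss relative to total actual demand."""
    return sum(abs(f - a) for f, a in zip(forecasts, actuals)) / sum(actuals)

f = [100, 100, 100, 100]
a = [92, 104, 96, 108]
print(bias(f, a), mae(f, a), wape(f, a))  # 0.0 6.0 0.06
```

Note how bias can be zero while MAE is not: the misses cancel in sign but still cost you units. That is why you track both.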

Backtest before you roll it into purchasing

A forecast is not ready because it looks reasonable. It is ready after you test it on past periods it did not see. Hyndman's time series cross-validation guide describes rolling forecast origin: move through history, forecast forward, and average the errors. That is the grown-up version of asking, 'Would this have worked last quarter?'

Quick backtest

  • Hold out the last 8 to 12 periods: Do not use them to build the first model.
  • Run each candidate method: moving average, smoothing, and any seasonal version you want to compare.
  • Measure bias, MAE, and WAPE: Judge the methods on periods they did not see.
  • Pick the method people can explain: If two methods are close, choose the one the team will actually maintain.
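The rolling-origin idea fits in a short loop: step through the held-out periods, forecast each one using only the history before it, and average the misses. A sketch under illustrative data; the candidate methods and holdout length are assumptions you would swap for your own.

```python
# Rolling-origin backtest sketch: each held-out period is forecast using
# only the periods before it, then the absolute errors are averaged (MAE).

def moving_avg(hist):
    return sum(hist[-4:]) / 4

def smoothing(hist, alpha=0.3):
    f = hist[0]
    for x in hist[1:]:
        f = alpha * x + (1 - alpha) * f
    return f

def backtest(history, method, holdout=8):
    errors = []
    for i in range(len(history) - holdout, len(history)):
        forecast = method(history[:i])   # only past periods are visible
        errors.append(abs(forecast - history[i]))
    return sum(errors) / holdout         # MAE over the held-out periods

history = [92, 104, 96, 108, 101, 97, 110, 95,
           103, 99, 106, 94, 102, 100, 98, 107]
for name, method in [("moving average", moving_avg), ("smoothing", smoothing)]:
    print(name, round(backtest(history, method), 2))
```

Whichever method scores better here earned it on periods it never saw, which is a much stronger claim than "the chart looks right."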
[Image: A short weekly review is often enough to compare forecast, actual demand, and next actions.]

Know where the spreadsheet struggles

  • New products: Borrow history from a similar item, category, or launch plan because the new SKU has no stable pattern yet.
  • Lumpy or intermittent demand: Forecast at family or category level first, then plan individual replenishment with more manual review.
  • Promotions and project business: Add event overrides separately instead of asking the baseline model to guess special events.
  • Poor inventory accuracy: If receiving, adjustments, and location control are weak, fix the process first. A forecast layered on bad records still buys the wrong quantity.

This is where prioritization matters. Use ABC analysis to decide which items deserve the most forecasting attention, and pair the forecast with a disciplined safety stock review so uncertainty does not turn into blanket overbuying.

Final takeaway

Inventory forecasting for non-data-scientists is less about advanced math and more about operational honesty. Clean the history. Start with moving averages or smoothing. Add seasonality only when it repeats. Measure bias and absolute error. Backtest before you trust the number.

Next step: pick 20 important SKUs, build one weekly sheet, and compare forecast versus actual for the next 8 weeks. After that, forecasting will stop feeling theoretical and start becoming part of how you buy.
