Forecasting large collections of time series

With the recent launch of Amazon Forecast, I can no longer procrastinate writing about forecasting “at scale”! Quantitative forecasting of time series has been used (and taught) for decades, with applications in many areas of business such as demand forecasting, sales forecasting, and financial forecasting. The types of methods taught in forecasting courses tend to be discipline-specific: Statisticians love ARIMA (autoregressive integrated moving average) models, with multivariate versions such as Vector ARIMA, as well as state space models and non-parametric methods such as STL decompositions. Econometricians and finance academics go one step further into ARIMA variations such as ARFIMA (f=fractional), … Continue reading Forecasting large collections of time series
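Not from the post, but to make “forecasting at scale” concrete, here is a minimal sketch of the brute-force approach: loop over a collection of series and fit a separate ARIMA model to each. The toy data, series names, and the (1,1,1) order are my own illustrative assumptions, not a recommendation.

```python
# Minimal sketch: fit one ARIMA model per series in a collection and
# produce h-step-ahead forecasts. Data and model order are placeholders.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
# Toy collection: 3 monthly series, 48 observations each
collection = {
    f"series_{i}": pd.Series(
        np.cumsum(rng.normal(size=48)),
        index=pd.date_range("2020-01-01", periods=48, freq="MS"),
    )
    for i in range(3)
}

h = 6  # forecast horizon
forecasts = {}
for name, y in collection.items():
    model = ARIMA(y, order=(1, 1, 1)).fit()   # one model per series
    forecasts[name] = model.forecast(steps=h)

print(pd.DataFrame(forecasts))
```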

Forecasting + Analytics = ?

Quantitative forecasting is an age-old discipline, highly useful across different functions of an organization: from forecasting sales and workforce demand to economic forecasting and inventory planning. Business schools have offered courses with titles such as “Time Series Forecasting”, “Forecasting Time Series Data”, “Business Forecasting”, more specialized courses such as “Demand Planning and Sales Forecasting”, or even graduate programs with the title “Business and Economic Forecasting”. The simple title “Forecasting” is also popular. Such courses are offered at the undergraduate, graduate, and even executive education levels. All these might convey the importance and usefulness of forecasting, but they are far from conveying the coolness of forecasting. … Continue reading Forecasting + Analytics = ?

Visualizing time series: suppressing one pattern to enhance another pattern

Visualizing a time series is an essential step in exploring its behavior. Statisticians think of a time series as a combination of four components: trend, seasonality, level, and noise. All real-world series contain a level and noise, but not necessarily a trend and/or seasonality. It is important to determine whether trend and/or seasonality exist in a series in order to choose appropriate models and methods for descriptive or forecasting purposes. Hence, when looking at a time plot, typical questions include: Is there a trend? If so, what type of function can approximate it (linear, exponential, etc.)? Is the trend fixed throughout the period … Continue reading Visualizing time series: suppressing one pattern to enhance another pattern
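As a concrete illustration (my own sketch, not from the post), an STL decomposition can be used to suppress one component in order to inspect another: plot the trend with seasonality removed, and the seasonal component with trend removed. The toy monthly series and period=12 are assumptions.

```python
# Minimal sketch: suppress seasonality to inspect trend, and vice versa,
# via an STL decomposition. The simulated monthly series is a placeholder.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.tsa.seasonal import STL

idx = pd.date_range("2015-01-01", periods=120, freq="MS")
rng = np.random.default_rng(1)
y = pd.Series(
    0.05 * np.arange(120)                          # trend
    + 2 * np.sin(2 * np.pi * np.arange(120) / 12)  # monthly seasonality
    + rng.normal(scale=0.5, size=120),             # noise
    index=idx,
)

res = STL(y, period=12).fit()

fig, axes = plt.subplots(3, 1, sharex=True, figsize=(8, 6))
axes[0].plot(y)
axes[0].set_title("Raw series")
axes[1].plot(res.trend)
axes[1].set_title("Seasonality suppressed (trend component)")
axes[2].plot(res.seasonal)
axes[2].set_title("Trend suppressed (seasonal component)")
plt.tight_layout()
plt.show()
```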

Forecasting stock prices? The new INFORMS competition

The 2010 INFORMS Data Mining Contest is underway. This time the goal is to predict 5-minute stock prices. That’s right: forecasting stock prices! In my view, the meta-contest is going to be the most interesting part. By meta-contest I mean looking beyond the winning result (what method, what prediction accuracy) and examining the distribution of prediction accuracies across all the contestants, how the winner is chosen, and, most importantly, how the winning result will be interpreted as a statement about the predictability of stock prices. Why is a stock prediction competition interesting? Because according to … Continue reading Forecasting stock prices? The new INFORMS competition

Forecasting with econometric models

Here’s another interesting example where explanatory and predictive tasks create different models: econometric models. These are essentially regression models of the form: Y(t) = beta0 + beta1*Y(t-1) + beta2*X(t) + beta3*X(t-1) + beta4*Z(t-1) + noise. An example would be forecasting Y(t) = consumer spending at time t, where the input variables can be consumer spending in previous time periods and/or other information that is available at time t or earlier. In economics, when Y(t) is the state of the economy at time t, there is a distinction between three types of variables (aka “indicators”): leading, coincident, and … Continue reading Forecasting with econometric models
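To make the model form concrete, here is a minimal sketch (mine, not from the post) that builds the lagged predictors and fits the regression above by ordinary least squares. The simulated data and coefficient values are placeholders; only the lag structure follows the equation.

```python
# Minimal sketch of Y(t) = b0 + b1*Y(t-1) + b2*X(t) + b3*X(t-1) + b4*Z(t-1) + noise,
# estimated by OLS. All data below are simulated placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 200
x = rng.normal(size=n)
z = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = (1.0 + 0.5 * y[t - 1] + 0.8 * x[t] + 0.3 * x[t - 1]
            + 0.2 * z[t - 1] + rng.normal(scale=0.5))

df = pd.DataFrame({"Y": y, "X": x, "Z": z})

# Build the lagged predictors; drop the first row, where the lags are undefined
design = pd.DataFrame({
    "Y_lag1": df["Y"].shift(1),
    "X":      df["X"],
    "X_lag1": df["X"].shift(1),
    "Z_lag1": df["Z"].shift(1),
}).dropna()
target = df["Y"].loc[design.index]

fit = sm.OLS(target, sm.add_constant(design)).fit()
print(fit.params)  # estimates of beta0 through beta4
```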

Data mining competition season

Those who’ve been following my postings probably recall “competition season”, when all of a sudden there are multiple new interesting datasets out there, each framing a business problem that requires the combination of data mining and creativity. Two such competitions are the SAS Data Mining Shootout and the 2008 Neural Forecasting Competition. The SAS problem concerns revenue management for an airline that wants to improve its customer satisfaction. The NN5 competition is about forecasting cash withdrawals from ATMs. Here are the similarities between the two competitions: they both provide real data and reasonably real business problems. Now to a more … Continue reading Data mining competition season

Cycle plots for time series

In his most recent newsletter, Stephen Few from PerceptualEdge presents a short and interesting article on Cycle Plots (by Naomi Robbins). These are plots for visualizing time series, which enhance both the cyclical and trend components of the series. Cycle plots were invented by Cleveland, Dunn, and Terpenning in 1978. Although they are useful and easy to interpret, I have not seen them integrated into any visualization tool. The closest implementation that I’ve seen (aside from creating them yourself or using one of the macros suggested in the article) is Spotfire DXP‘s hierarchies. A hierarchy enables … Continue reading Cycle plots for time series
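For readers who want to try one, here is a hand-rolled sketch of a cycle plot: one panel per month, that month’s values across years, and a horizontal line at the month’s mean. It is my own illustration on toy data, not one of the macros from the article.

```python
# Minimal sketch of a cycle plot (seasonal subseries plot) on simulated data:
# trend shows up within each monthly panel, seasonality across the panels.
import calendar
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

idx = pd.date_range("2010-01-01", periods=96, freq="MS")
rng = np.random.default_rng(3)
y = pd.Series(
    10 + 0.05 * np.arange(96)                    # mild trend
    + 3 * np.sin(2 * np.pi * idx.month / 12)     # monthly cycle
    + rng.normal(scale=0.5, size=96),
    index=idx,
)

fig, axes = plt.subplots(1, 12, sharey=True, figsize=(14, 3))
for m, ax in zip(range(1, 13), axes):
    sub = y[y.index.month == m]              # this month's values, year by year
    ax.plot(sub.index.year, sub.values, marker=".")
    ax.axhline(sub.mean(), color="red")      # month-level mean
    ax.set_title(calendar.month_abbr[m])
    ax.set_xticks([])
plt.tight_layout()
plt.show()
```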

The Riverplot: Visualizing distributions over time

The boxplot is one of the neatest visualizations for examining the distribution of values, or for comparing distributions. It is more compact than a histogram in that it only presents the median, the two quartiles, the range of the data, and outliers. It also requires less user input than a histogram (where the user usually has to determine the number of bins). I view the boxplot and histogram as complements, and examining both is good practice. But how can you visualize a distribution of values over time? Well, a series of boxplots often does the trick. But if the frequency … Continue reading The Riverplot: Visualizing distributions over time
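As a concrete starting point (my own sketch, not the Riverplot itself), here is the series-of-boxplots idea mentioned above: one boxplot per month of daily toy data, so the change in the distribution over time is visible at a glance.

```python
# Minimal sketch: a series of monthly boxplots of daily values, showing how
# the distribution drifts over time. The daily toy data are placeholders.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

rng = np.random.default_rng(4)
idx = pd.date_range("2023-01-01", periods=365, freq="D")
# Daily values whose mean drifts upward over the year
values = pd.Series(rng.normal(loc=np.linspace(0, 3, 365), scale=1.0), index=idx)

# One group of values per calendar month
months = values.index.to_period("M")
groups = [g.values for _, g in values.groupby(months)]
labels = [str(p) for p in months.unique()]

plt.boxplot(groups)
plt.xticks(range(1, len(labels) + 1), labels, rotation=45)
plt.ylabel("value")
plt.tight_layout()
plt.show()
```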

Accuracy measures

There is a host of metrics for evaluating predictive performance. They are all based on aggregating the forecast errors in some form. The two most famous metrics are RMSE (root mean squared error) and MAPE (mean absolute percentage error). In an earlier posting (Feb-23-2006) I disclosed a secret deciphering method for computing these metrics. Although these two have been the most popular in software, competitions, and published papers, they have their shortcomings. One serious flaw of the MAPE is that zero counts contribute a value of infinity to the MAPE (because of the division by zero). One solution is to leave the zero counts out of … Continue reading Accuracy measures
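For reference, here is a minimal sketch of the two measures written from their standard definitions (not from the earlier post), showing how a single zero actual value sends the MAPE to infinity.

```python
# RMSE and MAPE from their standard definitions; the toy actual/forecast
# vectors are placeholders chosen to include one zero actual value.
import numpy as np

def rmse(actual, forecast):
    e = np.asarray(actual, dtype=float) - np.asarray(forecast, dtype=float)
    return np.sqrt(np.mean(e ** 2))

def mape(actual, forecast):
    a = np.asarray(actual, dtype=float)
    e = a - np.asarray(forecast, dtype=float)
    return np.mean(np.abs(e / a)) * 100  # division by zero when an actual is 0

actual   = np.array([10.0, 12.0, 0.0, 9.0])   # note the zero count
forecast = np.array([11.0, 11.0, 1.0, 10.0])

print("RMSE:", rmse(actual, forecast))
print("MAPE:", mape(actual, forecast))  # inf (with a runtime warning) because of the zero
```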

Lots of real time series data!

I love data-mining or statistics competitions – they always provide great real data! However, the big difference between a gold mine and “just some data” is whether the data description and context are complete. This reflects, in my opinion, the difference between “data mining for the purpose of data mining” and “data mining for business analytics” (or any other field of interest, such as engineering or biology). Last year, the BICUP2006 posted an interesting dataset on bus ridership in Santiago de Chile. Although there was a reasonable description of the data (number of passengers at a bus station at … Continue reading Lots of real time series data!