There is a host of metrics for evaluating predictive performance. They are all based on aggregating the forecast errors in some form. The two most famous metrics are RMSE (Root Mean Squared Error) and MAPE (Mean Absolute Percentage Error). In an earlier posting (Feb-23-2006) I disclosed a secret deciphering method for computing these metrics. Although these two have been the most popular in software, competitions, and published papers, they have their shortcomings. One serious flaw of the MAPE is that zero counts contribute a value of infinity to the MAPE (because of the division by zero). One solution is to leave the zero counts out of … Continue reading Accuracy measures
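To make the flaw concrete, here is a minimal sketch of both metrics in Python. The zero-skipping workaround in `mape` is the "leave the zero counts out" fix mentioned above; the function names and data are my own illustration, not code from the post.

```python
import math

def rmse(actual, predicted):
    """Root Mean Squared Error: the square root of the mean squared error."""
    errors = [a - p for a, p in zip(actual, predicted)]
    return math.sqrt(sum(e * e for e in errors) / len(errors))

def mape(actual, predicted):
    """Mean Absolute Percentage Error, skipping records where the
    actual value is zero -- otherwise the division by zero would
    make the metric infinite."""
    pairs = [(a, p) for a, p in zip(actual, predicted) if a != 0]
    return 100 * sum(abs((a - p) / a) for a, p in pairs) / len(pairs)
```

Note that silently dropping the zero records changes which forecasts the metric sees, so the workaround is a pragmatic compromise rather than a free fix.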
There is a multitude of performance measures in statistics and data mining. These tend to have acronyms such as MAPE and RMSE. It turns out that even after spelling them out, it is not always obvious to users how they are computed. Inspired by Dan Brown’s The Da Vinci Code, I devised a deciphering method that allows simple computation of these measures. The trick is to read from right to left (like Hebrew or Arabic). Here are two examples: RMSE = Root Mean Squared Error. 1. Error: compute the errors (actual value – predicted value); 2. Squared: square each error; 3. Mean: … Continue reading Acronyms – in Hebrew???
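The right-to-left deciphering reads naturally as a sequence of steps in code. A sketch in Python, with made-up data (not from the post), walking through "Root Mean Squared Error" word by word from the right:

```python
import math

# Illustrative data only
actual = [3.0, 5.0, 2.5]
predicted = [2.5, 5.0, 4.0]

# Read "Root Mean Squared Error" right-to-left:
# 1. Error: actual value minus predicted value
errors = [a - p for a, p in zip(actual, predicted)]
# 2. Squared: square each error
squared = [e * e for e in errors]
# 3. Mean: average the squared errors
mean_sq = sum(squared) / len(squared)
# 4. Root: take the square root of the mean
rmse = math.sqrt(mean_sq)
```

The same right-to-left reading applies to MAPE: Error, then Percentage (divide by the actual value), then Absolute, then Mean.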