Abstract: A fairly common approach to evaluating whether a given time series Y_(t+1) is predictable compares the Mean Squared Prediction Error (MSPE) of a plausible predictor for Y_(t+1) with the MSPE of a naïve benchmark, such as a constant forecast or the historical average of the predictand, which displays zero or near-zero covariance with the target variable. If the MSPE of the plausible predictor is lower than that of the benchmark, Y_(t+1) is considered predictable; otherwise it is considered unpredictable. This intuitive and standard approach may not truly capture the essence of predictability, which, in the words of some authors, refers to a notion of dependence between the target variable and variables or events observed in the past. In particular, when the forecast under evaluation is inefficient, a paradoxical situation may arise: on the one hand, the forecast may display a strong, positive correlation with the target variable, much greater than the correlation of the benchmark with the same target variable; on the other hand, it may nevertheless be outperformed in terms of MSPE by the same naïve benchmark. We propose to evaluate predictability directly, with a simple test based on the covariance between the forecast and the target variable. Using Monte Carlo simulations, we study the size and power of three variants of this test; in general terms, they all behave reasonably well. We also compare their behavior with that of a traditional test of equality in MSPE and show that our covariance tests can detect predictability even when MSPE comparisons do not. Finally, we illustrate the relevance of our observation when forecasting monthly oil returns with a forecast based on the Chilean peso.
Keywords: Mean Squared Prediction Error, Correlation, Forecasting, Time Series, Random Walk
JEL codes: C52, C53, G17, E270, E370, F370, L740, O180, R310
DOI: ...
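To make the paradox described in the abstract concrete, the following minimal Python sketch illustrates it under a hypothetical data-generating process of our own choosing (it is not the paper's actual simulation design). The forecast F is strongly correlated with the target Y, so a covariance-based test would flag predictability, yet because F is badly scaled (inefficient), its MSPE exceeds that of the unconditional-mean benchmark.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 100_000

# Hypothetical DGP: Y_{t+1} = x_t + u_{t+1}, with x_t observable at t
# and x_t, u_{t+1} independent standard normals, so Var(Y) = 2.
x = rng.standard_normal(T)
u = rng.standard_normal(T)
y = x + u

# An inefficient forecast: proportional to the predictable component,
# hence highly correlated with Y, but badly scaled.
f = 5.0 * x
# Naive benchmark: the unconditional mean of Y (zero in this DGP).
benchmark = np.zeros(T)

mspe_f = np.mean((y - f) ** 2)          # population value: 4^2 + 1 = 17
mspe_b = np.mean((y - benchmark) ** 2)  # population value: Var(Y) = 2
corr_fy = np.corrcoef(f, y)[0, 1]       # population value: 1/sqrt(2) ~ 0.71

print(f"corr(F, Y) = {corr_fy:.2f}")    # strong positive dependence
print(f"MSPE(F) = {mspe_f:.2f}  vs  MSPE(benchmark) = {mspe_b:.2f}")
```

Here an MSPE comparison declares Y unpredictable (17 > 2), while cov(F, Y) = 5 > 0 correctly reveals the dependence; rescaling the forecast (e.g., F/5) would recover the MSPE advantage.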