Working Paper
Can out-of-sample forecast comparisons help prevent overfitting?
Abstract: This paper shows that out-of-sample forecast comparisons can help prevent data mining-induced overfitting. The basic results are drawn from simulations of a simple Monte Carlo design and a design based on real data, similar to those in Lovell (1983) and Hoover and Perez (1999). In each simulation, a general-to-specific procedure is used to arrive at a model. If the selected specification includes any of the candidate explanatory variables, forecasts from the model are compared to forecasts from a benchmark model that is nested within the selected model. In particular, the competing forecasts are tested for equal MSE and for forecast encompassing. The simulations indicate that most of the post-sample tests are roughly correctly sized, as long as just the in-sample portion of the data is used in model selection. Moreover, the tests have relatively good power, although some are consistently more powerful than others. The paper concludes with an application to modeling quarterly U.S. inflation.
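For concreteness, the Python sketch below illustrates the kind of nested out-of-sample comparison the abstract describes: one-step-ahead forecasts from a benchmark model nested within a larger model are tested for equal MSE (a Diebold-Mariano-style statistic) and for forecast encompassing. The data-generating process, sample split, and variable names are illustrative assumptions, not the paper's design, and the normal p-values are only a convenience for the sketch; for nested models these statistics need not have standard normal limits, which is part of what the paper's simulations examine.

# Minimal sketch (not the paper's code) of a nested out-of-sample comparison.
# Assumed DGP: y depends on a constant only, so the candidate regressor x is
# irrelevant and the nested benchmark model is the true model.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

T, R = 200, 120                 # total sample; in-sample (estimation) portion
x = rng.normal(size=T)          # candidate explanatory variable
y = 1.0 + rng.normal(size=T)    # true model: constant plus noise

e_bench, e_big = [], []
for t in range(R, T):
    # Estimate on observations 0..t-1, forecast observation t (recursive scheme).
    Xb = np.ones((t, 1))                        # benchmark: constant only
    Xl = np.column_stack([np.ones(t), x[:t]])   # larger model adds x
    bb, *_ = np.linalg.lstsq(Xb, y[:t], rcond=None)
    bl, *_ = np.linalg.lstsq(Xl, y[:t], rcond=None)
    e_bench.append(y[t] - bb[0])
    e_big.append(y[t] - (bl[0] + bl[1] * x[t]))

e1, e2 = np.array(e_bench), np.array(e_big)
P = len(e1)

# Equal-MSE test (Diebold-Mariano style): loss differential d_t = e1_t^2 - e2_t^2.
d = e1**2 - e2**2
dm = np.sqrt(P) * d.mean() / d.std(ddof=1)

# Encompassing test: c_t = e1_t * (e1_t - e2_t); a significantly positive mean
# suggests the larger model contains information missing from the benchmark.
c = e1 * (e1 - e2)
enc = np.sqrt(P) * c.mean() / c.std(ddof=1)

print(f"equal-MSE statistic:    {dm: .3f}  (normal p = {2 * stats.norm.sf(abs(dm)):.3f})")
print(f"encompassing statistic: {enc: .3f}  (normal p = {stats.norm.sf(enc):.3f})")

Because the benchmark is true in this assumed DGP, both statistics should be insignificant in a correctly sized test; repeating the exercise with y actually depending on x would exhibit the power properties the abstract discusses.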
Access Documents
File(s): PDF, https://www.kansascityfed.org/documents/5423/pdf-RWP00-05.pdf
Authors: Todd E. Clark
Bibliographic Information
Provider: Federal Reserve Bank of Kansas City
Part of Series: Research Working Paper
Publication Date: 2000
Number: RWP 00-05