Search Results
Showing results 1 to 10 of approximately 105.
Working Paper
Tests of equal forecast accuracy for overlapping models
This paper examines the asymptotic and finite-sample properties of tests of equal forecast accuracy when the models being compared are overlapping in the sense of Vuong (1989). Two models are overlapping when the true model contains just a subset of variables common to the larger sets of variables included in the competing forecasting models. We consider an out-of-sample version of the two-step testing procedure recommended by Vuong but also show that an exact one-step procedure is sometimes applicable. When the models are overlapping, we provide a simple-to-use fixed regressor wild bootstrap ...
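The fixed regressor wild bootstrap mentioned above keeps the regressors fixed across artificial samples and perturbs estimated residuals with random weights. A minimal sketch of that resampling scheme, assuming a generic test statistic stat_fn and standard normal weights (both placeholders; the paper's exact algorithm differs in its details):

```python
# Sketch of a fixed-regressor wild bootstrap (stat_fn and the normal weights
# are placeholder assumptions, not the paper's exact algorithm).
import numpy as np

def wild_bootstrap_pvalue(y, X, stat_fn, n_boot=999, seed=0):
    """Bootstrap p-value for stat_fn(y, X), holding the regressors X fixed."""
    rng = np.random.default_rng(seed)
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # OLS fit on actual data
    resid = y - X @ beta                          # residuals to be perturbed
    stat = stat_fn(y, X)                          # statistic on actual data
    exceed = 0
    for _ in range(n_boot):
        eta = rng.standard_normal(len(y))         # wild-bootstrap weights
        y_star = X @ beta + eta * resid           # artificial sample, X fixed
        exceed += stat_fn(y_star, X) >= stat
    return (1 + exceed) / (1 + n_boot)
```

Critical values come from the empirical distribution of the bootstrapped statistics, so no tabulated non-standard distribution is needed.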
Working Paper
Evaluating Conditional Forecasts from Vector Autoregressions
Many forecasts are conditional in nature. For example, a number of central banks routinely report forecasts conditional on particular paths of policy instruments. Even though conditional forecasting is common, there has been little work on methods for evaluating conditional forecasts. This paper provides analytical, Monte Carlo, and empirical evidence on tests of predictive ability for conditional forecasts from estimated models. In the empirical analysis, we consider forecasts of growth, unemployment, and inflation from a VAR, conditioned on paths for the short-term interest rate. Throughout ...
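One crude way to build a conditional point forecast from a VAR is to iterate the system forward while pinning the conditioned variable to its assumed path. The sketch below does that for a VAR(1); it is intuition only, not the evaluation methodology the paper develops, which must also account for how conditioning affects the test statistics:

```python
# Naive conditional forecast from a VAR(1): iterate forward, pinning the
# conditioned variable (e.g., a short-term interest rate) to an assumed path.
import numpy as np

def conditional_forecast(A, c, y_last, rate_path, rate_idx):
    """Point forecasts with variable rate_idx fixed at rate_path each step."""
    out, y = [], y_last.copy()
    for r in rate_path:
        y = c + A @ y        # one-step-ahead forecast from the VAR(1)
        y[rate_idx] = r      # impose the conditioning assumption
        out.append(y.copy())
    return np.array(out)
```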
Working Paper
Approximately normal tests for equal predictive accuracy in nested models
Forecast evaluation often compares a parsimonious null model to a larger model that nests the null model. Under the null that the parsimonious model generates the data, the larger model introduces noise into its forecasts by estimating parameters whose population values are zero. We observe that the mean squared prediction error (MSPE) from the parsimonious model is therefore expected to be smaller than that of the larger model. We describe how to adjust MSPEs to account for this noise. We propose applying standard methods (West (1996)) to test whether the adjusted mean squared error ...
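The adjustment the abstract describes can be written directly: subtract from the larger model's squared error the squared difference between the two forecasts, which estimates the noise from fitting parameters that are zero under the null. A minimal sketch in our own notation (a HAC variance in the denominator would be the natural extension for multi-step forecasts):

```python
# Sketch of the MSPE adjustment: remove from the large model's squared errors
# the term (f_small - f_large)^2, the noise from estimating zero parameters.
import numpy as np

def mspe_adjusted_tstat(y, f_small, f_large):
    """One-sided t-statistic on the adjusted MSPE difference."""
    d = (y - f_small)**2 - ((y - f_large)**2 - (f_small - f_large)**2)
    return np.sqrt(len(d)) * d.mean() / d.std(ddof=1)
```

As the title suggests, the statistic is compared with approximately standard normal critical values; rejections favor the larger model.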
Working Paper
Testing for unconditional predictive ability
This chapter provides an overview of pseudo-out-of-sample tests of unconditional predictive ability. We begin by providing an overview of the literature, including both empirical applications and theoretical contributions. We then delineate two distinct methodologies for conducting inference: one based on the analytics in West (1996) and the other based on those in Giacomini and White (2006). These two approaches are then carefully described in the context of pairwise tests of equal forecast accuracy between two models. We consider both non-nested and nested comparisons. Monte Carlo evidence ...
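For a non-nested pairwise comparison, a common form of such a test is a t-statistic on the loss differential scaled by a long-run (HAC) variance. A sketch with a Bartlett kernel, glossing over the treatment of parameter estimation error that distinguishes the West (1996) and Giacomini and White (2006) frameworks:

```python
# t-test on a loss differential with a Bartlett-kernel long-run variance.
import numpy as np

def equal_accuracy_tstat(loss1, loss2, n_lags=4):
    """Positive values indicate model 2 forecasts more accurately."""
    d = loss1 - loss2                 # loss differential series
    T, dbar = len(d), d.mean()
    e = d - dbar
    lrv = e @ e / T                   # autocovariance at lag 0
    for j in range(1, n_lags + 1):    # Bartlett-weighted autocovariances
        lrv += 2 * (1 - j / (n_lags + 1)) * (e[j:] @ e[:-j]) / T
    return np.sqrt(T) * dbar / np.sqrt(lrv)
```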
Journal Article
Has the behavior of inflation and long-term inflation expectations changed?
From 1975 to 1980, inflation in core (nonfood and nonenergy) consumer prices rose sharply as crude oil prices more than tripled. Yet, as crude oil prices quadrupled from late 2001 to 2007, core consumer price inflation remained essentially flat. Some observers have attributed the stability of consumer price inflation in the more recent episode to the influence of long-term inflation expectations. While inflation expectations rose significantly in the second half of the 1970s, they remained largely unchanged from 2001 through 2007. The increased stability of inflation and long-term ...
Working Paper
Nested forecast model comparisons: a new approach to testing equal accuracy
This paper develops bootstrap methods for testing whether, in a finite sample, competing out-of-sample forecasts from nested models are equally accurate. Most prior work on forecast tests for nested models has focused on a null hypothesis of equal accuracy in population; basically, whether coefficients on the extra variables in the larger, nesting model are zero. We instead use an asymptotic approximation that treats the coefficients as non-zero but small, such that, in a finite sample, forecasts from the small model are expected to be as accurate as forecasts from the large model. Under that ...
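The local-to-zero idea can be illustrated with a toy simulation: scale the extra predictor's coefficient by 1/sqrt(T), so the large model's information advantage is of the same order as its estimation noise, and the two models' finite-sample MSPEs come out close. All numbers below are illustrative assumptions, not the paper's design:

```python
# Toy illustration of the local-to-zero approximation: with slope b/sqrt(T),
# estimation noise in the large model roughly offsets its information
# advantage, so the nested and nesting models' MSPEs are comparable.
import numpy as np

rng = np.random.default_rng(1)
T, R, b = 200, 100, 1.0            # sample size, estimation window, local slope
x = rng.standard_normal(T)
y = (b / np.sqrt(T)) * x + rng.standard_normal(T)

e_small, e_large = [], []
for t in range(R, T - 1):
    # small model: recursive mean; large model: OLS on x, both fit through t
    e_small.append(y[t + 1] - y[:t + 1].mean())
    beta = np.polyfit(x[:t + 1], y[:t + 1], 1)
    e_large.append(y[t + 1] - np.polyval(beta, x[t + 1]))

print(np.mean(np.square(e_small)), np.mean(np.square(e_large)))
```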
Journal Article
The Importance of Trend Inflation in the Search for Missing Disinflation
Some inflation-forecasting models based on the Phillips curve suggest that there should have been more disinflation since the Great Recession than has shown up in core PCE or core CPI data. One way researchers have found to make the disinflation disappear is to remove the long-term unemployed from the overall unemployment measure that is typically used in the models. This analysis shows that the disinflation arises in such models because of the way they account for the long-term trend in inflation. Under a different measurement of trend inflation, which historical forecast accuracy suggests ...
Working Paper
Modeling Time-Varying Uncertainty of Multiple-Horizon Forecast Errors
We develop uncertainty measures for point forecasts from surveys such as the Survey of Professional Forecasters, Blue Chip, or the Federal Open Market Committee's Summary of Economic Projections. At a given point in time, these surveys provide forecasts for macroeconomic variables at multiple horizons. To track time-varying uncertainty in the associated forecast errors, we derive a multiple-horizon specification of stochastic volatility. Compared to constant-variance approaches, our stochastic-volatility model improves the accuracy of uncertainty measures for survey forecasts.
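As a rough illustration of the ingredients, the sketch below simulates a random-walk log variance for the underlying innovation and cumulates it across horizons; the linear-in-horizon accumulation is our simplifying assumption, not the paper's specification:

```python
# Toy sketch of stochastic volatility for forecast errors: the innovation's
# log variance follows a random walk, and (our simplifying assumption) the
# h-step error variance cumulates that variance across horizons.
import numpy as np

rng = np.random.default_rng(0)
T, H, phi = 300, 4, 0.1            # periods, max horizon, log-vol shock scale
log_var = np.zeros(T)
for t in range(1, T):
    log_var[t] = log_var[t - 1] + phi * rng.standard_normal()  # random walk

sigma2 = np.exp(log_var)
# time-varying uncertainty for 1- to H-step-ahead errors at each date
err_var = np.array([[sigma2[t] * h for h in range(1, H + 1)] for t in range(T)])
```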
Working Paper
Evaluating long-horizon forecasts
This paper examines the asymptotic and finite-sample properties of tests of equal forecast accuracy and encompassing applied to predictions from nested long-horizon regression models. We first derive the asymptotic distributions of a set of tests of equal forecast accuracy and encompassing, showing that the tests have non-standard distributions that depend on the parameters of the data-generating process. Using a simple parametric bootstrap for inference, we then conduct Monte Carlo simulations of a range of data-generating processes to examine the finite-sample size and power of the tests. ...
Working Paper
Can out-of-sample forecast comparisons help prevent overfitting?
This paper shows that out-of-sample forecast comparisons can help prevent data mining-induced overfitting. The basic results are drawn from simulations of a simple Monte Carlo design and a real data-based design similar to those in Lovell (1983) and Hoover and Perez (1999). In each simulation, a general-to-specific procedure is used to arrive at a model. If the selected specification includes any of the candidate explanatory variables, forecasts from the model are compared to forecasts from a benchmark model that is nested within the selected model. In particular, the competing forecasts are ...
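The flavor of the experiment can be reproduced in a few lines: candidate predictors that are pure noise, a crude general-to-specific step that keeps regressors with in-sample |t| > 2, and an out-of-sample comparison against the nested benchmark. This toy version (fixed estimation window, mean benchmark, our assumptions rather than the paper's design) typically shows the selected model failing to beat the benchmark out of sample:

```python
# Toy data-mining experiment: y is unrelated to all K candidate predictors,
# yet in-sample selection keeps some; the out-of-sample comparison against
# the nested benchmark (here, the sample mean) exposes the overfitting.
import numpy as np

rng = np.random.default_rng(7)
T, R, K = 200, 100, 20
X = rng.standard_normal((T, K))
y = rng.standard_normal(T)         # DGP: y is unrelated to every candidate

# in-sample selection on the first R observations
Z = np.column_stack([np.ones(R), X[:R]])
beta, *_ = np.linalg.lstsq(Z, y[:R], rcond=None)
resid = y[:R] - Z @ beta
se = np.sqrt(np.diag(np.linalg.inv(Z.T @ Z)) * (resid @ resid) / (R - K - 1))
keep = np.where(np.abs(beta / se)[1:] > 2)[0]   # surviving regressors

# out-of-sample comparison against the nested benchmark
Zs = np.column_stack([np.ones(T), X[:, keep]])
b, *_ = np.linalg.lstsq(Zs[:R], y[:R], rcond=None)
mspe_big = np.mean((y[R:] - Zs[R:] @ b) ** 2)
mspe_bench = np.mean((y[R:] - y[:R].mean()) ** 2)
print(len(keep), mspe_big, mspe_bench)
```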