Search Results
Working Paper
Latent Variables Analysis in Structural Models: A New Decomposition of the Kalman Smoother
This paper advocates chaining the decomposition of shocks into contributions from forecast errors to the shock decomposition of the latent vector to better understand model inference about latent variables. Such a double decomposition allows us to gauge the influence of the data on latent variables, as the data decomposition does. However, by taking into account the transmission mechanisms of each type of shock, we can highlight the economic structure underlying the relationship between the data and the latent variables. We demonstrate the usefulness of this approach by detailing the role of ...
Working Paper
Explaining Machine Learning by Bootstrapping Partial Dependence Functions and Shapley Values
Machine learning and artificial intelligence methods are often referred to as “black boxes” when compared with traditional regression-based approaches. However, both traditional and machine learning methods are concerned with modeling the joint distribution between endogenous (target) and exogenous (input) variables. Where linear models describe the fitted relationship between the target and input variables via the slope of that relationship (coefficient estimates), the same fitted relationship can be described rigorously for any machine learning model by first-differencing the partial ...
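The first-differencing idea in this abstract can be illustrated with a minimal sketch: compute the partial dependence of one input by averaging model predictions over the data with that input held fixed at each grid value, then first-difference the resulting function to obtain a slope-like summary. The model and data below are illustrative stand-ins, not the paper's method or any specific library's API.

```python
def model(x1, x2):
    # Stand-in "fitted model": linear in x1, so the PD slope is known (2.0)
    return 2.0 * x1 + 0.5 * x2 ** 2

data = [(0.1, -1.0), (0.4, 0.2), (0.7, 1.5), (0.9, -0.3)]  # observed (x1, x2)

def partial_dependence(grid):
    # PD(v): average model output with x1 forced to v, x2 at observed values
    return [sum(model(v, x2) for _, x2 in data) / len(data) for v in grid]

grid = [0.0, 0.25, 0.5, 0.75, 1.0]
pd_vals = partial_dependence(grid)

# First-difference the PD function: change in average prediction per unit x1
slopes = [(pd_vals[i + 1] - pd_vals[i]) / (grid[i + 1] - grid[i])
          for i in range(len(grid) - 1)]
print(slopes)  # for this linear stand-in, each slope is (approximately) 2.0
```

Because the stand-in model is linear in x1, every first difference of the partial dependence recovers the coefficient, which is what makes this summary comparable to a regression slope for arbitrary fitted models.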
Working Paper
Finding Needles in Haystacks: Multiple-Imputation Record Linkage Using Machine Learning
This paper considers the problem of record linkage between a household-level survey and an establishment-level frame in the absence of unique identifiers. Linkage between frames in this setting is challenging because the distribution of employment across establishments is highly skewed. To address these difficulties, this paper develops a probabilistic record linkage methodology that combines machine learning (ML) with multiple imputation (MI). This ML-MI methodology is applied to link survey respondents in the Health and Retirement Study to their workplaces in the Census Business Register. ...
Working Paper
Simpler Bootstrap Estimation of the Asymptotic Variance of U-statistic Based Estimators
The bootstrap is a popular and useful tool for estimating the asymptotic variance of complicated estimators. Ironically, the fact that the estimators are complicated can make the standard bootstrap computationally burdensome because it requires repeated re-calculation of the estimator. In Honoré and Hu (2015), we propose a computationally simpler bootstrap procedure based on repeated re-calculation of one-dimensional estimators. The applicability of that approach is quite general. In this paper, we propose an alternative method which is specific to extremum estimators based on U-statistics. ...
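The computational burden described here comes from re-running the full estimation on every resample. A minimal sketch of that standard nonparametric bootstrap, with a sample median standing in for the "complicated" estimator (an illustrative choice, not the paper's U-statistic setting):

```python
import random

random.seed(1)
data = [random.gauss(0.0, 1.0) for _ in range(200)]

def estimator(sample):
    # Stand-in for a complicated estimator; here just the sample median
    s = sorted(sample)
    n = len(s)
    return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

# Standard nonparametric bootstrap: re-estimate on B resamples drawn with
# replacement, then use the spread of the replicates as the variance estimate
B = 500
reps = [estimator(random.choices(data, k=len(data))) for _ in range(B)]
mean_rep = sum(reps) / B
boot_var = sum((r - mean_rep) ** 2 for r in reps) / (B - 1)
boot_se = boot_var ** 0.5
print(boot_se)
```

Each of the B replicates repeats the entire estimation; that per-replicate cost is exactly what the computationally simpler procedures in this line of work aim to avoid.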
Working Paper
How Much Should We Trust Regional-Exposure Designs?
Many prominent studies in macroeconomics, labor, and trade use panel data on regions to identify the local effects of aggregate shocks. These studies construct regional-exposure instruments as an observed aggregate shock times an observed regional exposure to that shock. We argue that the most economically plausible source of identification in these settings is uncorrelatedness of observed and unobserved aggregate shocks. Even when the regression estimator is consistent, we show that inference is complicated by cross-regional residual correlations induced by unobserved aggregate shocks. We ...
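The instrument construction the abstract refers to (an observed aggregate shock times an observed regional exposure, summed across sectors) can be sketched as follows; all names and numbers are hypothetical:

```python
# Hedged sketch of a regional-exposure (shift-share) instrument:
# each region's instrument is its exposure share in a sector times that
# sector's aggregate shock, summed over sectors. Purely illustrative data.

shocks = {"manufacturing": -2.0, "services": 0.5}  # aggregate shocks by sector

exposure = {  # each region's employment shares across sectors
    "region_A": {"manufacturing": 0.7, "services": 0.3},
    "region_B": {"manufacturing": 0.2, "services": 0.8},
}

instrument = {
    region: sum(shares[sector] * shocks[sector] for sector in shocks)
    for region, shares in exposure.items()
}
print(instrument)
```

Regions concentrated in the shocked sector (region_A here) receive a large instrument value, while regions with offsetting exposures net out near zero, which is the cross-regional variation these designs exploit.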
Discussion Paper
Estimating the output gap in real time
I propose a novel method of estimating the potential level of U.S. GDP in real time. The proposed wage-based measure of economic potential remains virtually unchanged when new data are released. The distance between current and potential output, the output gap, satisfies Okun's law and outperforms many other measures of slack in forecasting inflation. Thus, I provide a robust statistical tool useful for understanding current economic conditions and guiding policymaking.
Working Paper
In Search of Lost Time Aggregation
In 1960, Working noted that time aggregation of a random walk induces serial correlation in the first difference that is not present in the original series. This important contribution has been overlooked in a recent literature analyzing income and consumption in panel data. I examine Blundell, Pistaferri and Preston (2008) as an important example for which time aggregation has quantitatively large effects. Using new techniques to correct for the problem, I find the estimate for the partial insurance to transitory shocks, originally estimated to be 0.05, increases to 0.24. This larger ...
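Working's observation can be checked by simulation: averaging a random walk over blocks of m periods makes the first differences of the aggregated series serially correlated, with lag-1 autocorrelation (m^2 - 1) / (2(2m^2 + 1)), which approaches 0.25 as m grows. A self-contained sketch with illustrative parameters:

```python
import random

random.seed(0)
m = 12          # aggregation window (e.g. months per year)
n_low = 20_000  # number of aggregated (low-frequency) observations

# High-frequency random walk
x, level = [], 0.0
for _ in range(m * n_low):
    level += random.gauss(0.0, 1.0)
    x.append(level)

# Time-aggregate by averaging within each block of m periods
agg = [sum(x[i * m:(i + 1) * m]) / m for i in range(n_low)]

# First differences of the aggregated series
d = [agg[i] - agg[i - 1] for i in range(1, n_low)]

# Lag-1 autocorrelation of the differences; Working's formula gives
# (m**2 - 1) / (2 * (2 * m**2 + 1)) ~ 0.247 for m = 12
mean = sum(d) / len(d)
num = sum((d[i] - mean) * (d[i - 1] - mean) for i in range(1, len(d)))
den = sum((v - mean) ** 2 for v in d)
rho1 = num / den
print(round(rho1, 3))
```

The original high-frequency increments are serially uncorrelated by construction, so any autocorrelation in `d` is induced purely by the time aggregation, which is the distortion the paper corrects for.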
Working Paper
Easy Bootstrap-Like Estimation of Asymptotic Variances
The bootstrap is a convenient tool for calculating standard errors of the parameter estimates of complicated econometric models. Unfortunately, the bootstrap can be very time-consuming. In a recent paper, Honoré and Hu (2017), we propose a "Poor (Wo)man's Bootstrap" based on one-dimensional estimators. In this paper, we propose a modified, simpler method and illustrate its potential for estimating asymptotic variances.
Working Paper
Robust Bayesian Analysis for Econometrics
We review the literature on robust Bayesian analysis as a tool for global sensitivity analysis and for statistical decision-making under ambiguity. We discuss the methods proposed in the literature, including the different ways of constructing the set of priors that are the key input of the robust Bayesian analysis. We consider both a general set-up for Bayesian statistical decisions and inference and the special case of set-identified structural models. We provide new results that can be used to derive and compute the set of posterior moments for sensitivity analysis and to compute the ...