Search Results

JEL Classification: C18

Working Paper
Easy Bootstrap-Like Estimation of Asymptotic Variances

The bootstrap is a convenient tool for calculating standard errors of the parameter estimates of complicated econometric models. Unfortunately, the bootstrap can be very time-consuming. In a recent paper, Honoré and Hu (2017), we propose a “Poor (Wo)man’s Bootstrap” based on one-dimensional estimators. In this paper, we propose a modified, simpler method and illustrate its potential for estimating asymptotic variances.
Working Paper Series, Paper WP-2018-11
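As background for why the bootstrap can be time-consuming, a minimal sketch of the plain nonparametric bootstrap it seeks to replace — the estimator is re-run on every resample, which is what becomes costly for complicated models (this is an illustrative sketch, not the authors' one-dimensional method):

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_se(estimator, data, n_boot=500, rng=rng):
    """Nonparametric bootstrap standard error of a scalar estimator."""
    n = len(data)
    reps = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)   # resample observations with replacement
        reps[b] = estimator(data[idx])     # re-estimate on each resample (the costly step)
    return reps.std(ddof=1)

# Example: standard error of the sample mean, n = 200, sd = 1
x = rng.normal(loc=2.0, scale=1.0, size=200)
se = bootstrap_se(np.mean, x)
```

For the sample mean the bootstrap SE should be close to the analytic value 1/√200 ≈ 0.071; for models where each estimation takes minutes, the `n_boot` re-estimations are the bottleneck the paper targets.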

Working Paper
Latent Variables Analysis in Structural Models: A New Decomposition of the Kalman Smoother

This paper advocates chaining the decomposition of shocks into contributions from forecast errors to the shock decomposition of the latent vector to better understand model inference about latent variables. Such a double decomposition allows us to gauge the influence of data on latent variables, like the data decomposition. However, by taking into account the transmission mechanisms of each type of shock, we can highlight the economic structure underlying the relationship between the data and the latent variables. We demonstrate the usefulness of this approach by detailing the role of ...
Finance and Economics Discussion Series, Paper 2020-100

Report
On binscatter

Binscatter is a popular method for visualizing bivariate relationships and conducting informal specification testing. We study the properties of this method formally and develop enhanced visualization and econometric binscatter tools. These include estimating conditional means with optimal binning and quantifying uncertainty. We also highlight a methodological problem related to covariate adjustment that can yield incorrect conclusions. We revisit two applications using our methodology and find substantially different results relative to those obtained using prior informal binscatter methods. ...
Staff Reports, Paper 881
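The basic binscatter the paper studies is simple to state: sort on x, form equal-count (quantile) bins, and plot the within-bin means. A minimal sketch under those assumptions (the paper's optimal binning and covariate adjustment are not shown):

```python
import numpy as np

def binscatter(x, y, n_bins=20):
    """Equal-count binscatter: quantile-bin x, return within-bin means of x and y."""
    order = np.argsort(x)
    x_sorted, y_sorted = x[order], y[order]
    bins = np.array_split(np.arange(len(x)), n_bins)  # roughly equal-count bins
    x_means = np.array([x_sorted[b].mean() for b in bins])
    y_means = np.array([y_sorted[b].mean() for b in bins])
    return x_means, y_means

# Example: a linear relationship recovered from noisy data
rng = np.random.default_rng(1)
x = rng.uniform(0, 1, size=1000)
y = 2.0 * x + rng.normal(scale=0.1, size=1000)
x_means, y_means = binscatter(x, y)
```

The 20 points (x_means, y_means) trace out the conditional mean of y given x; the paper's point is that the informal version of this device needs formal binning choices and uncertainty quantification to support valid inference.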

Working Paper
How Much Should We Trust Regional-Exposure Designs?

Many prominent studies in macroeconomics, labor, and trade use panel data on regions to identify the local effects of aggregate shocks. These studies construct regional-exposure instruments as an observed aggregate shock times an observed regional exposure to that shock. We argue that the most economically plausible source of identification in these settings is uncorrelatedness of observed and unobserved aggregate shocks. Even when the regression estimator is consistent, we show that inference is complicated by cross-regional residual correlations induced by unobserved aggregate shocks. We ...
Working Papers, Paper 2023-018
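The regional-exposure instrument the abstract describes — an observed aggregate shock times an observed regional exposure to that shock — can be sketched in two lines (all variable names here are illustrative, not from the paper):

```python
import numpy as np

# Hypothetical inputs: exposure_r for 3 regions, shock_t for 2 periods
exposure = np.array([0.2, 0.5, 0.8])   # observed regional exposure shares
shock = np.array([1.0, -0.5])          # observed aggregate shocks by period

# Instrument z[r, t] = exposure[r] * shock[t], one value per region-period
z = np.outer(exposure, shock)
```

The paper's concern is not this construction but what identifies the resulting regression — uncorrelatedness of observed and unobserved aggregate shocks — and how unobserved aggregate shocks induce cross-regional residual correlation that standard inference ignores.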

Working Paper
Poor (Wo)man’s Bootstrap

The bootstrap is a convenient tool for calculating standard errors of the parameters of complicated econometric models. Unfortunately, the fact that these models are complicated often makes the bootstrap extremely slow or even practically infeasible. This paper proposes an alternative to the bootstrap that relies only on the estimation of one-dimensional parameters. The paper contains no new difficult math. But we believe that it can be useful.
Working Paper Series, Paper WP-2015-1

Working Paper
Revisiting the Great Ratios Hypothesis

Kaldor called the constancy of certain ratios stylized facts, whereas Klein and Kosobud called them great ratios. While they often appear in theoretical models, the empirical literature finds little evidence for them, perhaps because the procedures used cannot deal with lack of cointegration, two-way causality and cross-country error dependence. We propose a new system pooled mean group estimator that can deal with these features. Monte Carlo results show it performs well compared with other estimators, and using it on a dataset over 150 years and 17 countries, we find support for five of the ...
Globalization Institute Working Papers, Paper 415

Working Paper
The Income-Achievement Gap and Adult Outcome Inequality

This paper discusses various methods for assessing group differences in academic achievement using only the ordinal content of achievement test scores. Researchers and policymakers frequently draw conclusions about achievement differences between various populations using methods that rely on the cardinal comparability of test scores. This paper shows that such methods can lead to erroneous conclusions in an important application: measuring changes over time in the achievement gap between youth from high- and low-income households. Commonly employed cardinal methods suggest that this ...
Finance and Economics Discussion Series, Paper 2015-41

Working Paper
Finding Needles in Haystacks: Multiple-Imputation Record Linkage Using Machine Learning

This paper considers the problem of record linkage between a household-level survey and an establishment-level frame in the absence of unique identifiers. Linkage between frames in this setting is challenging because the distribution of employment across establishments is highly skewed. To address these difficulties, this paper develops a probabilistic record linkage methodology that combines machine learning (ML) with multiple imputation (MI). This ML-MI methodology is applied to link survey respondents in the Health and Retirement Study to their workplaces in the Census Business Register. ...
Working Papers, Paper 22-11

Working Paper
Estimating (Markov-Switching) VAR Models without Gibbs Sampling: A Sequential Monte Carlo Approach

Vector autoregressions with Markov-switching parameters (MS-VARs) fit the data better than do their constant-parameter predecessors. However, Bayesian inference for MS-VARs with existing algorithms remains challenging. For our first contribution, we show that Sequential Monte Carlo (SMC) estimators accurately estimate Bayesian MS-VAR posteriors. Relative to multi-step, model-specific MCMC routines, SMC has the advantages of generality, parallelizability, and freedom from reliance on particular analytical relationships between prior and likelihood. For our second contribution, we use SMC's ...
Finance and Economics Discussion Series, Paper 2015-116

Working Paper
Learning About Consumer Uncertainty from Qualitative Surveys: As Uncertain As Ever

We study diffusion indices constructed from qualitative surveys to provide real-time assessments of various aspects of economic activity. In particular, we highlight the role of diffusion indices as estimates of change in a quasi extensive margin, and characterize their distribution, focusing on the uncertainty implied by both sampling and the polarization of participants' responses. Because qualitative tendency surveys generally cover multiple questions around a topic, a key aspect of this uncertainty concerns the coincidence of responses, or the degree to which polarization comoves, across ...
Working Paper, Paper 15-9
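A diffusion index of the kind studied here summarizes a qualitative survey question as the share of respondents reporting an increase minus the share reporting a decrease. A minimal sketch of that balance statistic (the paper's sampling and polarization uncertainty measures are not shown):

```python
def diffusion_index(n_up, n_same, n_down):
    """Balance-style diffusion index: share reporting 'up' minus share reporting 'down'."""
    total = n_up + n_same + n_down
    return (n_up - n_down) / total

# Example: 50 respondents report 'up', 30 'same', 20 'down'
di = diffusion_index(50, 30, 20)
```

The index lies in [-1, 1]; the same up-minus-down balance can arise from very different degrees of polarization across respondents, which is exactly the uncertainty the paper characterizes.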
