Search Results

Showing results 1 to 10 of approximately 12.

JEL Classification: C18

Working Paper
Simpler Bootstrap Estimation of the Asymptotic Variance of U-statistic Based Estimators

The bootstrap is a popular and useful tool for estimating the asymptotic variance of complicated estimators. Ironically, the fact that the estimators are complicated can make the standard bootstrap computationally burdensome because it requires repeated re-calculation of the estimator. In Honoré and Hu (2015), we propose a computationally simpler bootstrap procedure based on repeated re-calculation of one-dimensional estimators. The applicability of that approach is quite general. In this paper, we propose an alternative method which is specific to extremum estimators based on U-statistics. ...
Working Paper Series, Paper WP-2015-7
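
To make the computational burden concrete, here is a minimal sketch of the standard nonparametric bootstrap applied to a U-statistic estimator. Kendall's tau is used purely as an illustration (it is not taken from the paper, and the data are hypothetical); note that the full pairwise estimator is recomputed in every replication, which is exactly the cost the paper's method is designed to avoid:

```python
import numpy as np

rng = np.random.default_rng(0)

def kendall_tau(x, y):
    """Kendall's tau: a U-statistic of order two, averaging the sign
    agreement of (x_i - x_j) and (y_i - y_j) over all pairs."""
    n = len(x)
    s = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            s += np.sign(x[i] - x[j]) * np.sign(y[i] - y[j])
    return s / (n * (n - 1) / 2)

# Simulated data with dependence (for illustration only).
n = 100
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(size=n)

# Standard nonparametric bootstrap: the full O(n^2) estimator is
# recomputed on every resample -- the burden the paper aims to avoid.
B = 200
draws = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, size=n)
    draws[b] = kendall_tau(x[idx], y[idx])

print("estimate:", kendall_tau(x, y))
print("bootstrap standard error:", draws.std(ddof=1))
```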

Working Paper
Easy Bootstrap-Like Estimation of Asymptotic Variances

The bootstrap is a convenient tool for calculating standard errors of the parameter estimates of complicated econometric models. Unfortunately, the bootstrap can be very time-consuming. In a recent paper, Honoré and Hu (2017), we propose a "Poor (Wo)man's Bootstrap" based on one-dimensional estimators. In this paper, we propose a modified, simpler method and illustrate its potential for estimating asymptotic variances.
Working Paper Series, Paper WP-2018-11

Working Paper
Learning About Consumer Uncertainty from Qualitative Surveys: As Uncertain As Ever

We study diffusion indices constructed from qualitative surveys to provide real-time assessments of various aspects of economic activity. In particular, we highlight the role of diffusion indices as estimates of change in a quasi-extensive margin, and characterize their distribution, focusing on the uncertainty implied by both sampling and the polarization of participants' responses. Because qualitative tendency surveys generally cover multiple questions around a topic, a key aspect of this uncertainty concerns the coincidence of responses, or the degree to which polarization comoves, across ...
Working Paper, Paper 15-9
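
As background for the abstract's discussion of sampling uncertainty and polarization, here is a sketch of one common diffusion-index convention (share of "up" responses minus share of "down" responses) and its multinomial sampling variance. The tallies are hypothetical, and the construction is not necessarily the paper's:

```python
import numpy as np

# Hypothetical tallies from one survey question: respondents reporting
# "up", "same", and "down".
n_up, n_same, n_down = 180, 260, 60
n = n_up + n_same + n_down

p_up, p_down = n_up / n, n_down / n

# Diffusion index (one common convention): share up minus share down.
d = p_up - p_down

# Sampling variance under multinomial sampling:
# Var(p_up_hat - p_down_hat) = (p_up + p_down - (p_up - p_down)**2) / n.
# Note the p_up + p_down term: more polarized responses (large shares on
# both ends) mean a noisier index even for the same point estimate.
var_d = (p_up + p_down - d**2) / n
print(f"diffusion index: {d:.3f}, std. error: {np.sqrt(var_d):.3f}")
```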

Working Paper
Latent Variables Analysis in Structural Models: A New Decomposition of the Kalman Smoother

This paper advocates chaining the decomposition of shocks into contributions from forecast errors to the shock decomposition of the latent vector to better understand model inference about latent variables. Such a double decomposition allows us to gauge the influence of the data on latent variables, much like the data decomposition. However, by taking into account the transmission mechanisms of each type of shock, we can highlight the economic structure underlying the relationship between the data and the latent variables. We demonstrate the usefulness of this approach by detailing the role of ...
Finance and Economics Discussion Series, Paper 2020-100
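
The "data decomposition" the abstract builds on exploits the linearity of the Kalman smoother: with a zero prior mean, smoothed states are a fixed linear combination of the observations, so per-observation contributions can be read off by smoothing unit data vectors. A minimal sketch for a scalar local-level model follows; it illustrates only the data-decomposition ingredient, not the paper's double decomposition:

```python
import numpy as np

def smooth(y, q=0.5, r=1.0):
    """RTS smoother for the local-level model
    s_t = s_{t-1} + w_t,  y_t = s_t + e_t,  Var(w_t) = q, Var(e_t) = r,
    with a zero-mean, high-variance prior on s_0 (zero prior mean keeps
    the smoothed states exactly linear in the data)."""
    T = len(y)
    a = np.empty(T); p = np.empty(T)          # filtered means / variances
    a_pred = np.empty(T); p_pred = np.empty(T)
    m, v = 0.0, 1e6
    for t in range(T):
        a_pred[t], p_pred[t] = m, v + q       # one-step prediction
        k = p_pred[t] / (p_pred[t] + r)       # Kalman gain
        m = a_pred[t] + k * (y[t] - a_pred[t])
        v = (1.0 - k) * p_pred[t]
        a[t], p[t] = m, v
    s = np.empty(T)                           # smoothed means
    s[-1] = a[-1]
    for t in range(T - 2, -1, -1):
        g = p[t] / p_pred[t + 1]              # smoother gain (transition = 1)
        s[t] = a[t] + g * (s[t + 1] - a_pred[t + 1])
    return s

rng = np.random.default_rng(1)
T = 20
y = np.cumsum(rng.normal(size=T)) + rng.normal(size=T)

# Data decomposition: column j of the weight matrix W is the smoother
# applied to the j-th unit data vector, so W * y splits each smoothed
# state into per-observation contributions.
W = np.column_stack([smooth(np.eye(T)[:, j]) for j in range(T)])
contrib = W * y                               # contrib[t, j]: share of y_j in s_hat_t
assert np.allclose(contrib.sum(axis=1), smooth(y))
```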

Working Paper
The Income-Achievement Gap and Adult Outcome Inequality

This paper discusses various methods for assessing group differences in academic achievement using only the ordinal content of achievement test scores. Researchers and policymakers frequently draw conclusions about achievement differences between various populations using methods that rely on the cardinal comparability of test scores. This paper shows that such methods can lead to erroneous conclusions in an important application: measuring changes over time in the achievement gap between youth from high- and low-income households. Commonly employed cardinal methods suggest that this ...
Finance and Economics Discussion Series, Paper 2015-41
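
A small simulated example of the point at issue: a strictly increasing rescaling of the score scale can reverse the sign of the change in a mean (cardinal) gap, while an ordinal statistic, such as the probability that a randomly drawn high-income student outscores a randomly drawn low-income student, is unaffected. The numbers are hypothetical, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20_000

# Hypothetical scores: across cohorts the raw-scale mean gap narrows
# slightly while overall score dispersion grows.
low1, high1 = rng.normal(0.0, 1.0, n), rng.normal(1.0, 1.0, n)  # earlier cohort
low2, high2 = rng.normal(0.0, 2.0, n), rng.normal(0.9, 2.0, n)  # later cohort

def mean_gap(low, high, f=lambda s: s):
    """Cardinal gap: difference in mean (possibly rescaled) scores."""
    return f(high).mean() - f(low).mean()

def prob_outscore(low, high):
    """Ordinal gap: P(a random 'high' draw exceeds a random 'low' draw).
    Invariant to any strictly increasing rescaling of the score scale."""
    return np.searchsorted(np.sort(low), high).sum() / (low.size * high.size)

# The cardinal conclusion flips with the (arbitrary) scale...
print("raw-scale change:", mean_gap(low2, high2) - mean_gap(low1, high1))
print("exp-scale change:", mean_gap(low2, high2, np.exp) - mean_gap(low1, high1, np.exp))
# ...while the ordinal measure gives one answer on every scale.
print("ordinal change:  ", prob_outscore(low2, high2) - prob_outscore(low1, high1))
```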

Working Paper
In Search of Lost Time Aggregation

In 1960, Working noted that time aggregation of a random walk induces serial correlation in the first difference that is not present in the original series. This important contribution has been overlooked in a recent literature analyzing income and consumption in panel data. I examine Blundell, Pistaferri and Preston (2008) as an important example for which time aggregation has quantitatively large effects. Using new techniques to correct for the problem, I find that the estimate of partial insurance to transitory shocks, originally 0.05, increases to 0.24. This larger ...
Finance and Economics Discussion Series, Paper 2019-075
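
Working's result is easy to reproduce by simulation: average a random walk within non-overlapping periods of m sub-periods and first-difference the averages, and the lag-1 autocorrelation is (m^2 - 1) / (2(2m^2 + 1)), approaching 0.25 as m grows, even though the sub-period differences are white noise. A sketch:

```python
import numpy as np

rng = np.random.default_rng(3)

m, T = 12, 4000                 # sub-periods per period, number of periods
walk = np.cumsum(rng.normal(size=m * T))

# Time-average the walk within non-overlapping periods (e.g., monthly
# observations averaged to years), then first-difference.
averages = walk.reshape(T, m).mean(axis=1)
d = np.diff(averages)

def acf1(x):
    """Sample lag-1 autocorrelation."""
    x = x - x.mean()
    return (x[1:] * x[:-1]).sum() / (x * x).sum()

# The sub-period increments are white noise, but the differences of the
# time-averaged series are serially correlated, as Working (1960) showed.
print("sample ACF(1): ", acf1(d))
print("Working (1960):", (m**2 - 1) / (2 * (2 * m**2 + 1)))
```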

Report
On binscatter

Binscatter is very popular in applied microeconomics. It provides a flexible, yet parsimonious way of visualizing and summarizing "big data" in regression settings, and it is often used for informal testing of substantive hypotheses such as linearity or monotonicity of the regression function. This paper presents a foundational, thorough analysis of binscatter: we give an array of theoretical and practical results that aid both in understanding current practices (that is, their validity or lack thereof) and in offering theory-based guidance for future applications. Our main results include ...
Staff Reports, Paper 881
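
For readers unfamiliar with the tool, here is the canonical binscatter construction in its simplest form: partition the x variable into quantile bins and report within-bin means of y against within-bin means of x. This sketch (on simulated data) omits the covariate adjustment and inference procedures the paper studies:

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated "big data" with a mildly nonlinear conditional mean.
n = 100_000
x = rng.normal(size=n)
y = np.log1p(np.exp(x)) + rng.normal(scale=0.5, size=n)

# Canonical binscatter: J quantile bins of x; within-bin means of y
# plotted against within-bin means of x.
J = 20
edges = np.quantile(x, np.linspace(0, 1, J + 1))
bins = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, J - 1)

x_means = np.array([x[bins == j].mean() for j in range(J)])
y_means = np.array([y[bins == j].mean() for j in range(J)])

for xm, ym in zip(x_means, y_means):
    print(f"{xm:8.3f}  {ym:8.3f}")
```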

Working Paper
Estimating (Markov-Switching) VAR Models without Gibbs Sampling: A Sequential Monte Carlo Approach

Vector autoregressions with Markov-switching parameters (MS-VARs) fit the data better than do their constant-parameter predecessors. However, Bayesian inference for MS-VARs with existing algorithms remains challenging. For our first contribution, we show that Sequential Monte Carlo (SMC) estimators accurately estimate Bayesian MS-VAR posteriors. Relative to multi-step, model-specific MCMC routines, SMC has the advantages of generality, parallelizability, and freedom from reliance on particular analytical relationships between prior and likelihood. For our second contribution, we use SMC's ...
Finance and Economics Discussion Series, Paper 2015-116
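
A minimal likelihood-tempering SMC sampler on a toy conjugate model conveys the correction/selection/mutation structure the abstract refers to. This is a generic sketch, not the authors' MS-VAR implementation, and the model is chosen so that the exact posterior is available as a check:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy model: theta ~ N(0, 4) prior, y_i ~ N(theta, 1). The posterior is
# Gaussian, so the SMC answer can be verified analytically.
y = rng.normal(1.5, 1.0, size=50)

def loglik(theta):
    return -0.5 * ((y[None, :] - theta[:, None]) ** 2).sum(axis=1)

N = 2000
theta = rng.normal(0.0, 2.0, size=N)     # initialize particles from the prior
logw = np.zeros(N)
phis = np.linspace(0.0, 1.0, 21)         # tempering schedule: phi_0 = 0, ..., phi_K = 1

for phi_prev, phi in zip(phis[:-1], phis[1:]):
    # Correction: reweight by the incremental likelihood power.
    logw += (phi - phi_prev) * loglik(theta)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # Selection: multinomial resampling back to equal weights.
    idx = rng.choice(N, size=N, p=w)
    theta, logw = theta[idx], np.zeros(N)
    # Mutation: random-walk Metropolis steps targeting the tempered
    # posterior prior(theta) * lik(theta)**phi.
    def logpost(t):
        return -0.5 * t**2 / 4.0 + phi * loglik(t)
    for _ in range(2):
        prop = theta + 0.3 * rng.normal(size=N)
        accept = np.log(rng.uniform(size=N)) < logpost(prop) - logpost(theta)
        theta = np.where(accept, prop, theta)

# Exact conjugate posterior for comparison.
var_post = 1.0 / (1.0 / 4.0 + len(y))
mean_post = var_post * y.sum()
print("SMC:  ", theta.mean(), theta.std())
print("exact:", mean_post, np.sqrt(var_post))
```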

Working Paper
Poor (Wo)man’s Bootstrap

The bootstrap is a convenient tool for calculating standard errors of the parameters of complicated econometric models. Unfortunately, the fact that these models are complicated often makes the bootstrap extremely slow or even practically infeasible. This paper proposes an alternative to the bootstrap that relies only on the estimation of one-dimensional parameters. The paper contains no new difficult mathematics, but we believe that the approach can be useful.
Working Paper Series, Paper WP-2015-1
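
A heavily hedged sketch of the one-dimensional ingredient described in the abstract: in each bootstrap replication, re-optimize a single parameter while pinning the others at their full-sample estimates, so every bootstrap step is a scalar search. Combining such draws into the full asymptotic variance matrix requires the paper's results and is not reproduced here; the model and starting values below are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize, minimize_scalar

rng = np.random.default_rng(6)

# Hypothetical M-estimation problem: nonlinear least squares, two parameters.
n = 500
x = rng.uniform(0, 1, n)
y = np.exp(0.5 + 1.0 * x) + rng.normal(scale=0.3, size=n)

def q(theta, idx):
    """Least-squares criterion evaluated on the observations in idx."""
    a, b = theta
    r = y[idx] - np.exp(a + b * x[idx])
    return (r ** 2).mean()

full = np.arange(n)
theta_hat = minimize(q, x0=[0.5, 0.5], args=(full,)).x

# One-dimensional re-estimation: each bootstrap replication solves only
# scalar problems, holding the other coordinate at the full-sample estimate.
B = 200
draws = np.empty((B, 2))
for rep in range(B):
    idx = rng.integers(0, n, size=n)
    draws[rep, 0] = minimize_scalar(
        lambda a: q([a, theta_hat[1]], idx),
        bounds=(theta_hat[0] - 2, theta_hat[0] + 2), method="bounded").x
    draws[rep, 1] = minimize_scalar(
        lambda b: q([theta_hat[0], b], idx),
        bounds=(theta_hat[1] - 2, theta_hat[1] + 2), method="bounded").x

# The spread of each column reflects one-dimensional re-estimation noise;
# turning such draws into the full variance matrix is the paper's
# contribution and is not reproduced here.
print(draws.std(axis=0, ddof=1))
```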

Working Paper
Aggregation level in stress testing models

We explore the question of the optimal aggregation level for stress testing models when the stress test is specified in terms of aggregate macroeconomic variables but the underlying performance data are available at the loan level. Using standard model performance measures, we ask whether it is better to formulate models at a disaggregated level ("bottom up") and then aggregate the predictions to obtain portfolio loss values, or to work directly with aggregated models ("top down") for portfolio loss forecasts. We study this question for a large portfolio of home equity lines ...
Working Paper Series, Paper 2015-14
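
A simulation sketch of the bottom-up versus top-down comparison (the data-generating process and stress scenario are hypothetical, and the paper's home-equity application is far richer): fit a loan-level default model and aggregate its predictions, or regress the aggregate loss rate directly on the macro variable:

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated history: loan-level defaults driven by a macro factor z_t and
# a loan attribute u_i, observed over T quarters for n loans.
T, n = 40, 2000
z = rng.normal(size=T)                       # macro scenario variable
u = rng.uniform(size=n)                      # loan-level attribute
logits = -3.0 + 1.5 * u[None, :] - 0.8 * z[:, None]
d = rng.uniform(size=(T, n)) < 1 / (1 + np.exp(-logits))   # default indicators

# --- Bottom up: fit a loan-level logit, aggregate predicted PDs. ---
X = np.column_stack([np.ones(T * n), np.repeat(z, n), np.tile(u, T)])
yv = d.ravel().astype(float)
beta = np.zeros(3)
for _ in range(25):                          # Newton-Raphson for the logit MLE
    p = 1 / (1 + np.exp(-X @ beta))
    W = p * (1 - p)
    beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (yv - p))

def bottom_up(z_s):
    """Portfolio default rate: average of loan-level predicted PDs."""
    p = 1 / (1 + np.exp(-(beta[0] + beta[1] * z_s + beta[2] * u)))
    return p.mean()

# --- Top down: regress the aggregate default rate directly on z. ---
rate = d.mean(axis=1)
a_td, b_td = np.polyfit(z, rate, 1)[::-1]    # intercept, slope

z_stress = -2.5                              # a hypothetical stress scenario
print("bottom up:", bottom_up(z_stress))
print("top down :", a_td + b_td * z_stress)
```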
