Search Results

Showing results 1 to 10 of approximately 11.

JEL Classification: C18

Working Paper
Estimating Dynamic Macroeconomic Models: How Informative Are the Data?

Central banks have long used dynamic stochastic general equilibrium (DSGE) models, which are typically estimated using Bayesian techniques, to inform key policy decisions. This paper offers an empirical strategy that quantifies the information content of the data relative to that of the prior distribution. Using an off-the-shelf DSGE model applied to quarterly Euro Area data from 1970:3 to 2009:4, we show how Monte Carlo simulations can reveal parameters for which the model's structure obscures identification. By integrating out components of the likelihood function and conducting a Bayesian ...
International Finance Discussion Papers, Paper 1175
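
To make the prior-versus-data comparison concrete, here is a minimal sketch of the general idea, not the paper's DSGE exercise: in a toy model, two parameters enter the likelihood only through their sum, so the data update the posterior along the identified direction (a + b) while leaving the non-identified direction (a - b) essentially at its prior. The model and all settings are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
a_true, b_true = 0.5, -0.2
y = a_true + b_true + rng.standard_normal(n)    # y ~ N(a + b, 1)

# Prior: a, b ~ N(0, 1), independent. Draw from the prior and reweight by
# the likelihood (simple importance sampling) to approximate the posterior.
draws = 20_000
a = rng.standard_normal(draws)
b = rng.standard_normal(draws)
loglik = -0.5 * ((y[:, None] - (a + b)) ** 2).sum(axis=0)
w = np.exp(loglik - loglik.max())
w /= w.sum()

def weighted_sd(x, w):
    m = (w * x).sum()
    return np.sqrt((w * (x - m) ** 2).sum())

prior_sd = np.sqrt(2.0)   # sd of a + b and of a - b under the prior
print("direction  prior sd  posterior sd")
print(f"a + b      {prior_sd:.3f}     {weighted_sd(a + b, w):.3f}")  # shrinks a lot
print(f"a - b      {prior_sd:.3f}     {weighted_sd(a - b, w):.3f}")  # barely moves
```

Comparing prior and posterior spreads direction by direction is the kind of diagnostic the abstract describes: a posterior that barely shrinks relative to the prior flags a parameter the data are uninformative about.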

Working Paper
Estimating (Markov-Switching) VAR Models without Gibbs Sampling: A Sequential Monte Carlo Approach

Vector autoregressions with Markov-switching parameters (MS-VARs) fit the data better than do their constant-parameter predecessors. However, Bayesian inference for MS-VARs with existing algorithms remains challenging. For our first contribution, we show that Sequential Monte Carlo (SMC) estimators accurately estimate Bayesian MS-VAR posteriors. Relative to multi-step, model-specific MCMC routines, SMC has the advantages of generality, parallelizability, and freedom from reliance on particular analytical relationships between prior and likelihood. For our second contribution, we use SMC's ...
Finance and Economics Discussion Series, Paper 2015-116
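
As a hedged illustration of the algorithm family, the following is a minimal likelihood-tempering SMC sampler on a toy scalar model, not the paper's MS-VAR implementation; the tempering schedule, particle count, and mutation step size are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
y = 1.5 + rng.standard_normal(100)            # toy data: y ~ N(theta, 1)

def loglik(th):                               # vectorized over particles
    th = np.atleast_1d(th)
    return -0.5 * ((y[:, None] - th[None, :]) ** 2).sum(axis=0)

def logprior(th):                             # prior: theta ~ N(0, 10)
    return -0.5 * np.atleast_1d(th) ** 2 / 10

N, K = 2000, 20
phis = np.linspace(0, 1, K + 1)               # tempering schedule: 0 -> 1
theta = rng.normal(0, np.sqrt(10), N)         # initialize from the prior
logw = np.zeros(N)

for k in range(1, K + 1):
    # Correction: reweight by the likelihood increment.
    logw += (phis[k] - phis[k - 1]) * loglik(theta)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # Selection: multinomial resampling.
    idx = rng.choice(N, size=N, p=w)
    theta, logw = theta[idx], np.zeros(N)
    # Mutation: one random-walk Metropolis step on the tempered posterior.
    prop = theta + 0.2 * rng.standard_normal(N)
    logacc = (phis[k] * loglik(prop) + logprior(prop)
              - phis[k] * loglik(theta) - logprior(theta))
    accept = np.log(rng.random(N)) < logacc
    theta = np.where(accept, prop, theta)

print(f"posterior mean ~ {theta.mean():.3f}, sd ~ {theta.std():.3f}")
```

The three stages per iteration, correction (reweighting), selection (resampling), and mutation (a Metropolis move), are what make the method generic and parallelizable across particles, as the abstract emphasizes: no model-specific conjugacy between prior and likelihood is needed.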

Working Paper
In Search of Lost Time Aggregation

In 1960, Working noted that time aggregation of a random walk induces serial correlation in the first differences that is not present in the original series. This important contribution has been overlooked in a recent literature analyzing income and consumption in panel data. I examine Blundell, Pistaferri and Preston (2008) as an important example for which time aggregation has quantitatively large effects. Using new techniques to correct for the problem, I find that the estimate of partial insurance against transitory shocks, originally estimated to be 0.05, increases to 0.24. This larger ...
Finance and Economics Discussion Series, Paper 2019-075
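
Working's observation is easy to verify by simulation. The sketch below (illustrative, not the paper's correction procedure) averages a random walk within periods and shows that the first differences of the averages are serially correlated, with lag-1 autocorrelation near 0.25 for fine subsampling, even though the walk's own increments are independent.

```python
import numpy as np

rng = np.random.default_rng(2)
m, T = 12, 20_000                        # m subperiods per period, T periods
walk = rng.standard_normal(m * T).cumsum()
avg = walk.reshape(T, m).mean(axis=1)    # time-aggregated (period-averaged) series
d = np.diff(avg)                         # first differences of the averages

rho = np.corrcoef(d[:-1], d[1:])[0, 1]
print(f"lag-1 autocorrelation of differenced averages: {rho:.3f}")  # ~ 0.25
```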

Working Paper
Easy Bootstrap-Like Estimation of Asymptotic Variances

The bootstrap is a convenient tool for calculating standard errors of the parameter estimates of complicated econometric models. Unfortunately, the bootstrap can be very time-consuming. In a recent paper, Honoré and Hu (2017), we proposed a “Poor (Wo)man’s Bootstrap” based on one-dimensional estimators. In this paper, we propose a modified, simpler method and illustrate its potential for estimating asymptotic variances.
Working Paper Series, Paper WP-2018-11

Working Paper
Simpler Bootstrap Estimation of the Asymptotic Variance of U-statistic Based Estimators

The bootstrap is a popular and useful tool for estimating the asymptotic variance of complicated estimators. Ironically, the fact that the estimators are complicated can make the standard bootstrap computationally burdensome because it requires repeated re-calculation of the estimator. In Honoré and Hu (2015), we propose a computationally simpler bootstrap procedure based on repeated re-calculation of one-dimensional estimators. The applicability of that approach is quite general. In this paper, we propose an alternative method which is specific to extremum estimators based on U-statistics. ...
Working Paper Series, Paper WP-2015-7

Working Paper
Poor (Wo)man’s Bootstrap

The bootstrap is a convenient tool for calculating standard errors of the parameters of complicated econometric models. Unfortunately, the fact that these models are complicated often makes the bootstrap extremely slow or even practically infeasible. This paper proposes an alternative to the bootstrap that relies only on the estimation of one-dimensional parameters. The paper contains no new difficult mathematics, but we believe the approach can be useful.
Working Paper Series, Paper WP-2015-1
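
For contrast, here is the standard bootstrap the paper aims to avoid, applied to a toy probit model: every replication re-solves the full multi-dimensional optimization, which is exactly the cost the proposed one-dimensional approach sidesteps. The data-generating process and all settings are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(3)
n, k = 500, 3
X = np.column_stack([np.ones(n), rng.standard_normal((n, k - 1))])
beta_true = np.array([0.2, 0.8, -0.5])
y = (X @ beta_true + rng.standard_normal(n) > 0).astype(float)

def negloglik(b, X, y):
    p = norm.cdf(X @ b).clip(1e-10, 1 - 1e-10)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p)).sum()

b_hat = minimize(negloglik, np.zeros(k), args=(X, y)).x

# Standard nonparametric bootstrap: B full k-dimensional re-estimations.
B = 200
draws = np.empty((B, k))
for r in range(B):
    idx = rng.integers(0, n, n)                  # resample observations
    draws[r] = minimize(negloglik, b_hat, args=(X[idx], y[idx])).x

print("bootstrap standard errors:", draws.std(axis=0).round(3))
```

The probit case is fast, but the cost scales with the difficulty of each re-optimization; for genuinely complicated estimators, repeating the full optimization B times is the bottleneck the paper targets.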

Working Paper
Aggregation level in stress testing models

We explore the question of the optimal aggregation level for stress testing models when the stress test is specified in terms of aggregate macroeconomic variables but the underlying performance data are available at a loan level. Using standard model performance measures, we ask whether it is better to formulate models at a disaggregated level (“bottom up”) and then aggregate the predictions to obtain portfolio loss values, or to work directly with aggregated models (“top down”) for portfolio loss forecasts. We study this question for a large portfolio of home equity lines ...
Working Paper Series, Paper 2015-14
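
A toy version of the bottom-up versus top-down contrast (illustrative simulated data, not the paper's home equity portfolio): loan-level defaults depend on a macro factor and a loan attribute; the bottom-up model is fit at the loan level and its predicted default probabilities are averaged, while the top-down model regresses the aggregate default rate on the macro factor directly.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
T, n = 60, 400                                   # quarters, loans per quarter
macro = rng.standard_normal(T)                    # aggregate stress variable
score = rng.standard_normal((T, n))               # loan-level risk attribute

logit = lambda z: 1 / (1 + np.exp(-z))
pd_true = logit(-2.0 + 0.8 * macro[:, None] + 0.6 * score)
default = (rng.random((T, n)) < pd_true).astype(float)

# Bottom-up: loan-level logit MLE, then aggregate predicted PDs by quarter.
def nll(b):
    p = logit(b[0] + b[1] * macro[:, None] + b[2] * score).clip(1e-9, 1 - 1e-9)
    return -(default * np.log(p) + (1 - default) * np.log(1 - p)).sum()

b = minimize(nll, np.zeros(3)).x
bottom_up = logit(b[0] + b[1] * macro[:, None] + b[2] * score).mean(axis=1)

# Top-down: OLS of the aggregate default rate on the macro factor.
rate = default.mean(axis=1)
A = np.column_stack([np.ones(T), macro])
top_down = A @ np.linalg.lstsq(A, rate, rcond=None)[0]

for name, pred in [("bottom-up", bottom_up), ("top-down", top_down)]:
    print(f"{name:9s} RMSE: {np.sqrt(((pred - rate) ** 2).mean()):.4f}")
```

Which level forecasts better is the empirical question the paper studies; in this toy setup both models are well specified, so the comparison is purely illustrative of the two workflows.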

Discussion Paper
Estimating the output gap in real time

I propose a novel method of estimating the potential level of U.S. GDP in real time. The proposed wage-based measure of economic potential remains virtually unchanged when new data are released. The distance between current and potential output, the output gap, satisfies Okun’s law and outperforms many other measures of slack in forecasting inflation. Thus, I provide a robust statistical tool useful for understanding current economic conditions and guiding policymaking.
Staff Papers, Issue Dec

Working Paper
The Income-Achievement Gap and Adult Outcome Inequality

This paper discusses various methods for assessing group differences in academic achievement using only the ordinal content of achievement test scores. Researchers and policymakers frequently draw conclusions about achievement differences between various populations using methods that rely on the cardinal comparability of test scores. This paper shows that such methods can lead to erroneous conclusions in an important application: measuring changes over time in the achievement gap between youth from high- and low-income households. Commonly employed cardinal methods suggest that this ...
Finance and Economics Discussion Series, Paper 2015-41
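
The ordinal point can be demonstrated in a few lines (toy data, not the paper's test-score samples): a monotone rescaling of scores changes the mean gap, a cardinal statistic, but leaves an ordinal statistic, here the probability that a random high-income score exceeds a random low-income score, unchanged.

```python
import numpy as np

rng = np.random.default_rng(5)
low = rng.normal(0.0, 1.0, 5000)      # scores, low-income group
high = rng.normal(0.5, 1.0, 5000)     # scores, high-income group

def p_exceed(a, b):                   # ordinal: P(a > b) across random pairs
    return (a[:, None] > b[None, :]).mean()

for name, f in [("raw scores", lambda s: s), ("rescaled  ", np.exp)]:
    lo, hi = f(low), f(high)
    print(f"{name}  mean gap: {hi.mean() - lo.mean():6.3f}   "
          f"P(high > low): {p_exceed(hi, lo):.3f}")
```

Because np.exp is strictly increasing it preserves every pairwise ranking, so the ordinal measure is identical across the two scalings while the cardinal gap roughly doubles.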

Report
On binscatter

Binscatter is very popular in applied microeconomics. It provides a flexible, yet parsimonious way of visualizing and summarizing “big data” in regression settings, and it is often used for informal testing of substantive hypotheses such as linearity or monotonicity of the regression function. This paper presents a foundational, thorough analysis of binscatter: We give an array of theoretical and practical results that aid both in understanding current practices (that is, their validity or lack thereof) and in offering theory-based guidance for future applications. Our main results include ...
Staff Reports, Paper 881
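
For readers unfamiliar with the tool, here is the generic binscatter practice the paper analyzes (a minimal sketch, not the paper's proposed estimators): partition x into quantile bins and report the within-bin means of y against the within-bin means of x.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 10_000
x = rng.standard_normal(n)
y = np.sin(x) + 0.5 * rng.standard_normal(n)      # noisy nonlinear relation

J = 20                                            # number of quantile bins
edges = np.quantile(x, np.linspace(0, 1, J + 1))
bins = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, J - 1)

x_bar = np.array([x[bins == j].mean() for j in range(J)])
y_bar = np.array([y[bins == j].mean() for j in range(J)])
for xb, yb in zip(x_bar, y_bar):                  # the binscatter points
    print(f"{xb:7.3f}  {yb:7.3f}")
```

Plotting these J points in place of the n raw observations is what makes the device parsimonious; the paper's contribution is a formal analysis of when such plots support (or mislead) inferences about the underlying regression function.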
