Search Results

JEL Classification: C18

Report
On binscatter

Binscatter is a popular method for visualizing bivariate relationships and conducting informal specification testing. We study the properties of this method formally and develop enhanced visualization and econometric binscatter tools. These include estimating conditional means with optimal binning and quantifying uncertainty. We also highlight a methodological problem related to covariate adjustment that can yield incorrect conclusions. We revisit two applications using our methodology and find substantially different results relative to those obtained using prior informal binscatter methods. ...
Staff Reports, Paper 881
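The core of a basic binscatter, before the paper's refinements (optimal binning, uncertainty quantification), can be sketched in a few lines of NumPy. This is an illustrative sketch, not the authors' estimator: partition x into equal-count quantile bins and plot the mean of y within each bin, which traces out the conditional mean function.

```python
import numpy as np

def binscatter(x, y, n_bins=20):
    """Basic binscatter: split x into equal-count (quantile) bins and
    return the within-bin means of x and y."""
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1))
    # Assign each observation to a bin 0..n_bins-1 using the interior edges.
    idx = np.digitize(x, edges[1:-1])
    x_means = np.array([x[idx == b].mean() for b in range(n_bins)])
    y_means = np.array([y[idx == b].mean() for b in range(n_bins)])
    return x_means, y_means

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = 2.0 * x + rng.normal(size=5000)
xb, yb = binscatter(x, y)
# The binned means approximate the conditional mean E[y|x] = 2x.
```

The paper's point is that this raw version is only a starting point: bin choice, covariate adjustment, and inference all require care beyond this sketch.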

Working Paper
Explaining Machine Learning by Bootstrapping Partial Marginal Effects and Shapley Values

Machine learning and artificial intelligence are often described as “black boxes.” Traditional linear regression is interpreted through its marginal relationships as captured by regression coefficients. We show that the same marginal relationship can be described rigorously for any machine learning model by calculating the slope of the partial dependence functions, which we call the partial marginal effect (PME). We prove that the PME of OLS is analytically equivalent to the OLS regression coefficient. Bootstrapping provides standard errors and confidence intervals around the point ...
Research Working Paper, Paper RWP 21-12
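The claimed equivalence — that the slope of the partial dependence function of an OLS model recovers the regression coefficient — can be verified numerically. The sketch below is a plain-NumPy illustration under my own assumptions (the `partial_dependence` helper and the evaluation grid are illustrative, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
X = rng.normal(size=(n, 2))
y = 1.5 * X[:, 0] - 0.7 * X[:, 1] + rng.normal(size=n)

# Fit OLS with an intercept; beta = [intercept, b1, b2].
A = np.column_stack([np.ones(n), X])
beta = np.linalg.lstsq(A, y, rcond=None)[0]

def partial_dependence(predict, X, j, grid):
    """Average prediction when feature j is forced to each grid value."""
    out = []
    for v in grid:
        Xv = X.copy()
        Xv[:, j] = v
        out.append(predict(Xv).mean())
    return np.array(out)

predict = lambda Z: np.column_stack([np.ones(len(Z)), Z]) @ beta
grid = np.linspace(-2, 2, 50)
pd0 = partial_dependence(predict, X, 0, grid)
# PME: the slope of the partial dependence function for feature 0.
pme0 = np.polyfit(grid, pd0, 1)[0]
# For a linear model this equals the OLS coefficient beta[1]
# up to floating-point error; for a nonlinear ML model the same
# construction still yields a well-defined marginal effect.
```

For a black-box model, `predict` would simply be replaced by the fitted model's prediction function, and bootstrapping the whole pipeline would give the confidence intervals the abstract describes.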

Working Paper
Estimating Dynamic Macroeconomic Models: How Informative Are the Data?

Central banks have long used dynamic stochastic general equilibrium (DSGE) models, which are typically estimated using Bayesian techniques, to inform key policy decisions. This paper offers an empirical strategy that quantifies the information content of the data relative to that of the prior distribution. Using an off-the-shelf DSGE model applied to quarterly Euro Area data from 1970:3 to 2009:4, we show how Monte Carlo simulations can reveal parameters for which the model's structure obscures identification. By integrating out components of the likelihood function and conducting a Bayesian ...
International Finance Discussion Papers, Paper 1175

Working Paper
The Impact of Market Factors on Racial Identity: Evidence from Multiracial Survey Respondents

This paper examines the reported race of multiracial persons in the US Current Population Survey (CPS) before 2003, when limited response options exogenously constrained respondents to identify as a single race. Using this survey attribute and the 16-month longitudinal design of the basic monthly CPS, I explore whether market factors help causally determine racial identity. I find that pre-2003 race responds to state-level (1) racial composition, due largely to household composition, and (2) unemployment rates and wages by race. Although these findings suggest potential endogeneity of race, ...
Working Papers, Paper 24-13

Working Paper
The Income-Achievement Gap and Adult Outcome Inequality

This paper discusses various methods for assessing group differences in academic achievement using only the ordinal content of achievement test scores. Researchers and policymakers frequently draw conclusions about achievement differences between various populations using methods that rely on the cardinal comparability of test scores. This paper shows that such methods can lead to erroneous conclusions in an important application: measuring changes over time in the achievement gap between youth from high- and low-income households. Commonly-employed, cardinal methods suggest that this ...
Finance and Economics Discussion Series, Paper 2015-41
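The ordinal-versus-cardinal distinction the abstract draws can be demonstrated with a small simulation (my own illustration, not the paper's method): a strictly increasing rescaling of the test-score scale changes the cardinal gap (difference in group means) while leaving any purely ordinal gap measure, such as the probability that a randomly drawn high-income score exceeds a low-income one, exactly unchanged.

```python
import numpy as np

rng = np.random.default_rng(2)
high = rng.normal(1.0, 1.0, 20000)  # scores, higher-income group
low = rng.normal(0.0, 1.0, 20000)   # scores, lower-income group

def p_exceed(a, b):
    """Ordinal gap: P(score from a > score from b), computed via the
    Mann-Whitney U statistic. Invariant to any strictly increasing
    transformation of the score scale."""
    combined = np.concatenate([a, b])
    ranks = combined.argsort().argsort() + 1
    u = ranks[: len(a)].sum() - len(a) * (len(a) + 1) / 2
    return u / (len(a) * len(b))

# Cardinal gap: changes under a monotone rescaling of the scale.
mean_gap_raw = high.mean() - low.mean()
mean_gap_exp = np.exp(high).mean() - np.exp(low).mean()

# Ordinal gap: identical on both scales, because ranks are preserved.
p_raw = p_exceed(high, low)
p_exp = p_exceed(np.exp(high), np.exp(low))
```

Since achievement test scales are at best ordinal, conclusions that survive such rescalings are the ones the paper argues are defensible.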

Working Paper
Aggregation level in stress testing models

We explore the question of the optimal aggregation level for stress testing models when the stress test is specified in terms of aggregate macroeconomic variables but the underlying performance data are available at the loan level. Using standard model performance measures, we ask whether it is better to formulate models at a disaggregated level (“bottom up”) and then aggregate the predictions to obtain portfolio loss values, or to work directly with aggregated models (“top down”) for portfolio loss forecasts. We study this question for a large portfolio of home equity lines ...
Working Paper Series, Paper 2015-14
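The bottom-up/top-down distinction can be made concrete with a stylized simulation (an illustration of the framing, not the paper's models). With linear models on a common macro factor, the two approaches coincide exactly, because OLS slopes are linear in the dependent variable; the paper's question has bite precisely when models are nonlinear (e.g., default indicators) or portfolios churn.

```python
import numpy as np

rng = np.random.default_rng(3)
T, N = 80, 500                      # quarters, loans
z = rng.normal(size=T)              # aggregate macro stress variable
beta_i = rng.uniform(0.5, 1.5, N)   # heterogeneous loan sensitivities
loss = beta_i[None, :] * z[:, None] + rng.normal(scale=0.5, size=(T, N))

# Bottom-up: fit one model per loan, then aggregate the predictions.
slopes = np.array([np.polyfit(z, loss[:, i], 1)[0] for i in range(N)])
bottom_up = slopes.sum() * z        # predicted portfolio loss path

# Top-down: fit one model directly on the aggregated portfolio loss.
agg = loss.sum(axis=1)
top_down = np.polyfit(z, agg, 1)[0] * z

# Here the two forecasts agree to floating-point precision.
```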

Working Paper
Finding Needles in Haystacks: Multiple-Imputation Record Linkage Using Machine Learning

This paper considers the problem of record linkage between a household-level survey and an establishment-level frame in the absence of unique identifiers. Linkage between frames in this setting is challenging because the distribution of employment across establishments is highly skewed. To address these difficulties, this paper develops a probabilistic record linkage methodology that combines machine learning (ML) with multiple imputation (MI). This ML-MI methodology is applied to link survey respondents in the Health and Retirement Study to their workplaces in the Census Business Register. ...
Working Papers, Paper 22-11

Discussion Paper
Estimating the output gap in real time

I propose a novel method of estimating the potential level of U.S. GDP in real time. The proposed wage-based measure of economic potential remains virtually unchanged when new data are released. The distance between current and potential output (the output gap) satisfies Okun's law and outperforms many other measures of slack in forecasting inflation. Thus, I provide a robust statistical tool useful for understanding current economic conditions and guiding policymaking.
Staff Papers, Issue Dec

Working Paper
In Search of Lost Time Aggregation

In 1960, Working noted that time aggregation of a random walk induces serial correlation in the first difference that is not present in the original series. This important contribution has been overlooked in a recent literature analyzing income and consumption in panel data. I examine Blundell, Pistaferri and Preston (2008) as an important example for which time aggregation has quantitatively large effects. Using new techniques to correct for the problem, I find that the partial insurance to transitory shocks, originally estimated to be 0.05, increases to 0.24. This larger ...
Finance and Economics Discussion Series, Paper 2019-075
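Working's observation is easy to reproduce by simulation (a minimal sketch of the phenomenon, not the paper's correction techniques): increments of a random walk are white noise, but if the walk is time-aggregated by averaging within blocks, the first differences of the averaged series become serially correlated, with lag-1 autocorrelation approaching Working's limiting value of 0.25 as the number of sub-periods grows.

```python
import numpy as np

rng = np.random.default_rng(4)
m, T = 30, 4000                          # sub-periods per block, blocks
walk = rng.normal(size=m * T).cumsum()   # underlying random walk
agg = walk.reshape(T, m).mean(axis=1)    # time-aggregated (averaged) series
d = np.diff(agg)                         # first differences of the aggregate

def autocorr1(x):
    """Sample lag-1 autocorrelation."""
    x = x - x.mean()
    return (x[:-1] * x[1:]).sum() / (x * x).sum()

# Increments of the raw walk: autocorrelation near zero.
ac_raw = autocorr1(np.diff(walk))
# Differences of the averaged series: autocorrelation near 0.25.
ac_agg = autocorr1(d)
```

This spurious serial correlation is exactly what contaminates moment conditions when annual (aggregated) income and consumption data are treated as if generated at the model's decision frequency.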

Working Paper
Estimating (Markov-Switching) VAR Models without Gibbs Sampling: A Sequential Monte Carlo Approach

Vector autoregressions with Markov-switching parameters (MS-VARs) fit the data better than do their constant-parameter predecessors. However, Bayesian inference for MS-VARs with existing algorithms remains challenging. For our first contribution, we show that Sequential Monte Carlo (SMC) estimators accurately estimate Bayesian MS-VAR posteriors. Relative to multi-step, model-specific MCMC routines, SMC has the advantages of generality, parallelizability, and freedom from reliance on particular analytical relationships between prior and likelihood. For our second contribution, we use SMC's ...
Finance and Economics Discussion Series, Paper 2015-116
