Identification Through Heterogeneity
We analyze set identification in Bayesian vector autoregressions (VARs). Because set identification can be challenging, we propose including micro data on heterogeneous entities to sharpen inference. First, we provide conditions under which imposing a simple ranking of impulse responses sharpens inference in bivariate and trivariate VARs. Importantly, we show that this set reduction also applies to variables not subject to ranking restrictions. Second, we develop two types of inference to address recent criticism: (1) an efficient, fully Bayesian algorithm based on an agnostic prior that samples directly from the admissible set, and (2) a prior-robust Bayesian algorithm that samples the posterior bounds of the identified set. Third, we apply our methodology to U.S. data to identify productivity news and defense spending shocks. Under both algorithms, the bounds of the identified sets shrink substantially under heterogeneity restrictions relative to standard sign restrictions.
AUTHORS: Drautzburg, Thorsten; Amir-Ahmadi, Pooyan
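To fix ideas, the standard accept/reject approach to sign and ranking restrictions on impulse responses can be sketched as follows. This is not the authors' algorithm (which samples directly from the admissible set); it is the conventional baseline, and the reduced-form matrices A and Sigma below are hypothetical illustrative values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical reduced-form estimates for a bivariate VAR(1):
# y_t = A y_{t-1} + u_t, with Cov(u_t) = Sigma.
A = np.array([[0.5, 0.1],
              [0.2, 0.4]])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 0.8]])
P = np.linalg.cholesky(Sigma)  # any factor with P @ P.T == Sigma

def haar_rotation(n, rng):
    """Draw an orthogonal matrix uniformly (Haar measure) via QR of a Gaussian matrix."""
    X = rng.standard_normal((n, n))
    Q, R = np.linalg.qr(X)
    return Q @ np.diag(np.sign(np.diag(R)))  # sign fix makes the draw uniform

accepted = []
for _ in range(2000):
    Q = haar_rotation(2, rng)
    B0 = P @ Q          # candidate impact matrix
    irf0 = B0[:, 0]     # impact responses to the first structural shock
    # Sign restriction: the shock raises both variables on impact.
    # Ranking (heterogeneity) restriction: variable 1 responds more than variable 2.
    if irf0[0] > 0 and irf0[1] > 0 and irf0[0] > irf0[1]:
        accepted.append(irf0)

accepted = np.array(accepted)
```

The set of accepted impact vectors approximates the identified set under the restrictions; adding the ranking restriction on top of the sign restrictions discards further rotations and thus narrows that set.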
Drifts, Volatilities, and Impulse Responses Over the Last Century
How much have the dynamics of U.S. time series, and in particular the transmission of innovations to monetary policy instruments, changed over the last century? The answers this paper gives are "a lot" and "probably less than you think," respectively. We use vector autoregressions with time-varying parameters and stochastic volatility to tackle these questions. Our analysis uses variables that both influenced monetary policy and were in turn influenced by it, including bond market data (the difference between long-term and short-term nominal interest rates) and the growth rate of money.
AUTHORS: Matthes, Christian; Amir-Ahmadi, Pooyan; Wang, Mu-Chun
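The law of motion such models assume can be illustrated with a minimal univariate simulation (a stand-in for the paper's multivariate model; the innovation scales 0.01 and 0.05 are hypothetical): the lag coefficient drifts as a random walk, and the log of the shock variance drifts as a separate random walk.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 200

# Univariate analogue of a TVP model with stochastic volatility:
#   y_t = b_t * y_{t-1} + exp(h_t / 2) * eps_t
#   b_t = b_{t-1} + w_t   (random-walk drift in the coefficient)
#   h_t = h_{t-1} + v_t   (random-walk drift in log volatility)
b = np.empty(T)
h = np.empty(T)
y = np.empty(T)
b[0], h[0], y[0] = 0.5, 0.0, 0.0
for t in range(1, T):
    b[t] = b[t - 1] + 0.01 * rng.standard_normal()
    h[t] = h[t - 1] + 0.05 * rng.standard_normal()
    y[t] = b[t] * y[t - 1] + np.exp(h[t] / 2) * rng.standard_normal()
```

Because both the coefficient path b and the volatility path h evolve over the sample, impulse responses computed at different dates can differ, which is what lets the model speak to how transmission has changed over time.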
Choosing Prior Hyperparameters
Bayesian inference is common in models with many parameters, such as large VAR models, models with time-varying parameters, or large DSGE models. A common practice is to focus on prior distributions that themselves depend on relatively few hyperparameters. The choice of these hyperparameters is crucial because their influence is often sizeable for standard sample sizes. In this paper we treat the hyperparameters as part of a hierarchical model and propose a fast, tractable, easy-to-implement, and fully Bayesian approach to estimate those hyperparameters jointly with all other parameters in the model. In terms of applications, we show via Monte Carlo simulations that in time series models with time-varying parameters and stochastic volatility, our approach can drastically improve on using fixed hyperparameters previously proposed in the literature.
AUTHORS: Matthes, Christian; Wang, Mu-Chun; Amir-Ahmadi, Pooyan
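The hierarchical idea, treating a prior hyperparameter as a parameter with its own conditional posterior inside the sampler rather than fixing it, can be sketched in a toy conjugate setting. This is not the paper's algorithm, and all numerical values below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy hierarchy: theta_j ~ N(0, lam) for j = 1..p, with lam ~ InvGamma(a0, b0).
# Conditional on theta (standing in for the model's coefficient block), lam has
# posterior InvGamma(a0 + p/2, b0 + sum(theta^2)/2), so it can be updated as one
# extra step inside a Gibbs sweep instead of being fixed in advance.
a0, b0 = 2.0, 2.0
theta = 0.5 * rng.standard_normal(50)  # pretend these came from the model block

p = theta.size
a_post = a0 + p / 2
b_post = b0 + 0.5 * np.sum(theta ** 2)

# If X ~ Gamma(a, scale=1/b), then 1/X ~ InvGamma(a, b).
draws = 1.0 / rng.gamma(shape=a_post, scale=1.0 / b_post, size=5000)
```

In a full sampler this update would alternate with the draws of the model's other parameters, letting the data pull the hyperparameter toward values that fit, rather than relying on a fixed choice.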
Measurement Errors and Monetary Policy: Then and Now
Should policymakers and applied macroeconomists worry about the difference between real-time and final data? We tackle this question by using a VAR with time-varying parameters and stochastic volatility to show that the distinction between real-time data and final data matters for the impact of monetary policy shocks: The impact on final data is substantially and systematically different (in particular, larger in magnitude for different measures of real activity) from the impact on real-time data. These differences have persisted over the last 40 years and should be taken into account when conducting or studying monetary policy.
AUTHORS: Amir-Ahmadi, Pooyan; Wang, Mu-Chun; Matthes, Christian