Search Results

Showing results 1 to 10 of approximately 45.

JEL Classification: C55

Working Paper
Variable Selection and Forecasting in High Dimensional Linear Regressions with Structural Breaks

This paper is concerned with the problem of variable selection and forecasting in the presence of parameter instability. There are a number of approaches proposed for forecasting in the presence of breaks, including the use of rolling windows and exponential down-weighting. However, these studies start with a given model specification and do not consider the problem of variable selection, which is complicated by time variations in the effects of signal variables. In this study we investigate whether or not we should use weighted observations at the variable selection stage in the presence of ...
Globalization Institute Working Papers, Paper 394

Working Paper
Technological innovation in mortgage underwriting and the growth in credit, 1985–2015

The application of information technology to finance, or “fintech,” is expected to revolutionize many aspects of borrowing and lending in the future, but technology has been reshaping consumer and mortgage lending for many years. During the 1990s, computerization allowed mortgage lenders to reduce loan-processing times and largely replace human-based assessments of credit risk with default predictions generated by sophisticated empirical models. Debt-to-income ratios at origination add little to the predictive power of these models, so the new automated underwriting systems allowed higher ...
Working Papers, Paper 19-11

Working Paper
Improving the Accuracy of Economic Measurement with Multiple Data Sources: The Case of Payroll Employment Data

This paper combines information from two sources of U.S. private payroll employment to increase the accuracy of real-time measurement of the labor market. The sources are the Current Employment Statistics (CES) from BLS and microdata from the payroll processing firm ADP. We briefly describe the ADP-derived data series, compare it to the BLS data, and describe an exercise that benchmarks the data series to an employment census. The CES and the ADP employment data are each derived from roughly equal-sized samples. We argue that combining CES and ADP data series reduces the measurement error ...
Finance and Economics Discussion Series, Paper 2019-065
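The measurement-error logic behind combining two independently sampled employment series can be illustrated with textbook inverse-variance weighting. This is a minimal sketch on simulated data, not the paper's actual estimator; the noise levels and the true payroll value are made-up assumptions.

```python
import numpy as np

def combine_measurements(x1, x2, var1, var2):
    """Combine two noisy measurements of the same quantity by
    inverse-variance weighting, which minimizes the variance of
    the combined estimate."""
    w1 = (1.0 / var1) / (1.0 / var1 + 1.0 / var2)
    return w1 * x1 + (1.0 - w1) * x2, w1

# Two noisy readings of a true payroll change of 150 (thousands);
# equal-sized samples are modeled as equal noise variances.
rng = np.random.default_rng(0)
truth = 150.0
ces = truth + rng.normal(0, 50, 10_000)  # survey noise, sd 50
adp = truth + rng.normal(0, 50, 10_000)  # equal noise, sd 50
combined, w = combine_measurements(ces, adp, 50.0**2, 50.0**2)

# With equal variances the optimal weight is 1/2, and the combined
# series has roughly half the error variance of either source alone.
print(w)
print(ces.std(), combined.std())
```

With equal variances the combined error standard deviation falls by a factor of about √2, which is the sense in which pooling two equally informative sources "reduces the measurement error."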

Working Paper
Big data analytics: a new perspective

Model specification and selection are recurring themes in econometric analysis. Both topics become considerably more complicated in the case of large-dimensional data sets where the set of specification possibilities can become quite large. In the context of linear regression models, penalised regression has become the de facto benchmark technique used to trade off parsimony and fit when the number of possible covariates is large, often much larger than the number of available observations. However, issues such as the choice of a penalty function and tuning parameters associated with the use ...
Globalization Institute Working Papers, Paper 268
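The penalised-regression benchmark the abstract refers to can be sketched with a LASSO fit on simulated data where the number of candidate covariates exceeds the number of observations. The data-generating process and dimensions below are illustrative assumptions; cross-validation here stands in for the tuning-parameter choice the abstract flags as an open issue.

```python
import numpy as np
from sklearn.linear_model import LassoCV

# Sparse linear model with more candidate covariates than
# observations (p = 200 > n = 100); only the first 5 are signals.
rng = np.random.default_rng(1)
n, p = 100, 200
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = [3.0, -2.0, 1.5, 1.0, -1.0]
y = X @ beta + rng.normal(size=n)

# LassoCV picks the penalty parameter by cross-validation and
# zeroes out most coefficients, trading off parsimony and fit.
lasso = LassoCV(cv=5, random_state=0).fit(X, y)
selected = np.flatnonzero(lasso.coef_)
print(len(selected), selected[:5])
```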

Working Paper
A one-covariate at a time, multiple testing approach to variable selection in high-dimensional linear regression models

Model specification and selection are recurring themes in econometric analysis. Both topics become considerably more complicated in the case of large-dimensional data sets where the set of specification possibilities can become quite large. In the context of linear regression models, penalised regression has become the de facto benchmark technique used to trade off parsimony and fit when the number of possible covariates is large, often much larger than the number of available observations. However, issues such as the choice of a penalty function and tuning parameters associated with the use ...
Globalization Institute Working Papers, Paper 290
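The one-covariate-at-a-time idea can be sketched as a sequence of simple regressions with a multiplicity-adjusted critical value. This is a stylized single-stage version on simulated data; the threshold form and the parameter `delta` are assumptions for illustration, and the published procedure's refinements (e.g. iterating over multiple stages) are omitted.

```python
import numpy as np
from scipy import stats

def ocmt_select(y, X, p_target=0.05, delta=1.0):
    """Stylized one-covariate-at-a-time selection: regress y on each
    candidate separately and keep those whose t-statistic exceeds a
    critical value that grows with the number of candidates n_x."""
    n, n_x = X.shape
    cp = stats.norm.ppf(1 - p_target / (2 * n_x**delta))
    yd = y - y.mean()
    keep = []
    for j in range(n_x):
        xj = X[:, j] - X[:, j].mean()
        b = xj @ yd / (xj @ xj)            # simple-regression slope
        resid = yd - b * xj
        se = np.sqrt(resid @ resid / (n - 2) / (xj @ xj))
        if abs(b / se) > cp:
            keep.append(j)
    return keep

# 50 candidates, 2 true signals: the multiplicity-adjusted
# threshold should retain the signals and screen out the noise.
rng = np.random.default_rng(2)
n, p = 200, 50
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(size=n)
sel = ocmt_select(y, X)
print(sel)
```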

Working Paper
Identification Through Sparsity in Factor Models

Factor models are generally subject to a rotational indeterminacy, meaning that individual factors are only identified up to a rotation. In the presence of local factors, which only affect a subset of the outcomes, we show that the implied sparsity of the loading matrix can be used to solve this rotational indeterminacy. We further prove that a rotation criterion based on the 1-norm of the loading matrix can be used to achieve identification even under approximate sparsity in the loading matrix. This enables us to consistently estimate individual factors, and to interpret them as structural ...
Working Papers, Paper 20-25

Working Paper
How Magic a Bullet Is Machine Learning for Credit Analysis? An Exploration with FinTech Lending Data

FinTech online lending to consumers has grown rapidly in the post-crisis era. As argued by its advocates, one key advantage of FinTech lending is that lenders can predict loan outcomes more accurately by employing complex analytical tools, such as machine learning (ML) methods. This study applies ML methods, in particular random forests and stochastic gradient boosting, to loan-level data from the largest FinTech lender of personal loans to assess the extent to which those methods can produce more accurate out-of-sample predictions of default on future loans relative to standard regression ...
Working Papers, Paper 19-16
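The horse race the abstract describes, tree ensembles versus a standard regression benchmark on out-of-sample default prediction, can be sketched on synthetic data. The loan-level sample here is fabricated (the paper's data are proprietary), and the model settings are illustrative defaults, not the study's specifications.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for loan-level data: features -> rare default.
X, y = make_classification(n_samples=5000, n_features=20,
                           n_informative=8, weights=[0.9, 0.1],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0)

models = {
    "logit benchmark": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gradient boosting": GradientBoostingClassifier(random_state=0),
}
for name, m in models.items():
    m.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, m.predict_proba(X_te)[:, 1])
    print(f"{name}: out-of-sample AUC = {auc:.3f}")
```

Out-of-sample AUC on a held-out split is one common way to compare default-prediction accuracy across model classes; whether the ensembles beat the benchmark depends on how nonlinear the true relationship is.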

Working Paper
Assessing International Commonality in Macroeconomic Uncertainty and Its Effects

This paper uses a large vector autoregression (VAR) to measure international macroeconomic uncertainty and its effects on major economies, using two datasets, one with GDP growth rates for 19 industrialized countries and the other with a larger set of macroeconomic indicators for the U.S., euro area, and U.K. Using basic factor model diagnostics, we first provide evidence of significant commonality in international macroeconomic volatility, with one common factor accounting for strong comovement across economies and variables. We then turn to measuring uncertainty and its effects with a large ...
Working Papers (Old Series), Paper 1803

Working Paper
Assessing Macroeconomic Tail Risks in a Data-Rich Environment

We use a large set of economic and financial indicators to assess tail risks of three macroeconomic variables: real GDP, unemployment, and inflation. When applied to U.S. data, we find evidence that a dense model using principal components (PC) as predictors might be misspecified by imposing the “common slope” assumption on the set of predictors across multiple quantiles. The common slope assumption ignores the heterogeneous informativeness of individual predictors on different quantiles. However, the parsimony of the PC-based approach improves the accuracy of out-of-sample forecasts ...
Research Working Paper, Paper RWP 19-12
