Search Results

Showing results 1 to 6 of 6.

Keywords: Machine Learning

Working Paper
PEAD.txt: Post-Earnings-Announcement Drift Using Text

We construct a new numerical measure of earnings announcement surprises, standardized unexpected earnings call text (SUE.txt), that does not explicitly incorporate the reported earnings value. SUE.txt generates a text-based post-earnings-announcement drift (PEAD.txt) larger than the classic PEAD and can be used to create a profitable trading strategy. Leveraging the prediction model underlying SUE.txt, we propose new tools to study the news content of text: paragraph-level SUE.txt and a paragraph classification scheme based on the business curriculum. With these tools, we document many ...
Working Papers , Paper 21-07

Working Paper
Understanding the Exposure at Default Risk of Commercial Real Estate Construction and Land Development Loans

We study and model the determinants of exposure at default (EAD) for large U.S. construction and land development loans from 2010 to 2017. EAD is an important component of credit risk, and commercial real estate (CRE) construction loans are riskier than income-producing loans. This is the first study to model the EAD of construction loans. The underlying EAD data come from a large, confidential supervisory dataset used in the U.S. Federal Reserve’s annual Comprehensive Capital Analysis and Review (CCAR) stress tests. EAD reflects the relative bargaining ability and information sets of ...
Working Papers , Paper 2007

Working Paper
One Threshold Doesn’t Fit All: Tailoring Machine Learning Predictions of Consumer Default for Lower-Income Areas

Modeling advances create credit scores that predict default better overall but raise concerns about their effect on protected groups. Focusing on low- and moderate-income (LMI) areas, we use an approach from the Fairness in Machine Learning literature — fairness constraints via group-specific prediction thresholds — and show that gaps in true positive rates (the percentage of non-defaulters the model identifies as such) can be significantly reduced if separate thresholds are chosen for non-LMI and LMI tracts. However, the reduction isn’t free, as more defaulters are classified as good risks, ...
Working Papers , Paper 22-39
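A minimal sketch of the thresholding idea the abstract describes — choosing a separate classification threshold per group so that true positive rates (the share of non-defaulters classified as good risks) are equalized. The data, model, and all variable names here are illustrative assumptions, not the paper's actual specification:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic data (illustrative only): one feature x, a group flag g
# (1 = LMI tract), and a default label y whose distribution differs by group.
n = 4000
g = rng.integers(0, 2, size=n)
x = rng.normal(loc=0.5 * g, scale=1.0, size=n)
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(x - 1.0))))

X = x.reshape(-1, 1)
X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, g, test_size=0.5, random_state=0)

scores = LogisticRegression().fit(X_tr, y_tr).predict_proba(X_te)[:, 1]

def true_positive_rate(s, labels, thr):
    """Share of non-defaulters (labels == 0) classified as good risks."""
    nondef = labels == 0
    return float(np.mean(s[nondef] < thr))

# Group-specific thresholds: for each group, set the threshold at the
# target quantile of non-defaulters' scores. That pins each group's TPR
# near the target, closing the gap a single common threshold leaves open.
target = 0.8
thresholds = {
    grp: np.quantile(scores[(g_te == grp) & (y_te == 0)], target)
    for grp in (0, 1)
}
tprs = {
    grp: true_positive_rate(scores[g_te == grp], y_te[g_te == grp],
                            thresholds[grp])
    for grp in (0, 1)
}
```

The trade-off the abstract notes shows up here too: a group's lower threshold also admits more of that group's eventual defaulters as "good risks."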

Working Paper
Bottom-up Leading Macroeconomic Indicators: An Application to Non-Financial Corporate Defaults using Machine Learning

This paper constructs a leading macroeconomic indicator from microeconomic data using recent machine learning techniques. Using tree-based methods, we estimate probabilities of default for publicly traded non-financial firms in the United States. We then use the cross-section of out-of-sample predicted default probabilities to construct a leading indicator of non-financial corporate health. The index predicts real economic outcomes such as GDP growth and employment up to eight quarters ahead. Impulse responses validate the interpretation of the index as a measure of financial stress.
Finance and Economics Discussion Series , Paper 2019-070
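The pipeline sketched in the abstract — fit a tree-based classifier to firm-level data, score firms out of sample, then aggregate the cross-section of predicted default probabilities into a scalar indicator — can be illustrated as follows. The synthetic data, features, and the choice of a simple cross-sectional mean are assumptions for illustration, not the paper's actual construction:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)

# Synthetic firm-level data (illustrative only): leverage and
# profitability drive a latent default probability.
n = 5000
leverage = rng.uniform(0.0, 1.0, size=n)
profitability = rng.normal(0.0, 1.0, size=n)
true_pd = 1.0 / (1.0 + np.exp(-(3.0 * leverage - profitability - 2.0)))
default = rng.binomial(1, true_pd)
X = np.column_stack([leverage, profitability])

# Fit a tree-based classifier on an "early" subsample, then score the
# remaining firms out of sample, mimicking a rolling estimation.
train, test = slice(0, 4000), slice(4000, None)
clf = GradientBoostingClassifier(random_state=0).fit(X[train], default[train])
pd_hat = clf.predict_proba(X[test])[:, 1]

# Collapse the cross-section of predicted default probabilities into a
# single bottom-up indicator; here, simply the cross-sectional mean PD.
corporate_health_index = float(pd_hat.mean())
```

Repeating this each period yields a time series of the index, which can then be related to macroeconomic outcomes such as GDP growth or employment.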

Working Paper
Bayesian Modeling of Time-Varying Parameters Using Regression Trees

In light of widespread evidence of parameter instability in macroeconomic models, many time-varying parameter (TVP) models have been proposed. This paper proposes a nonparametric TVP-VAR model using Bayesian additive regression trees (BART). The novelty of this model stems from the fact that the law of motion driving the parameters is treated nonparametrically. This leads to great flexibility in the nature and extent of parameter change, both in the conditional mean and in the conditional variance. In contrast to other nonparametric and machine learning methods, which are black boxes, inference ...
Working Papers , Paper 23-05

Working Paper
Corporate Disclosure: Facts or Opinions?

A large body of literature documents the link between textual communication (e.g., news articles, earnings calls) and firm fundamentals, either through pre-defined “sentiment” dictionaries or through machine learning approaches. Surprisingly, little is known about why textual communication matters. In this paper, we take a step in that direction by developing a new methodology to automatically classify statements as objective (“facts”) or subjective (“opinions”) and applying it to transcripts of earnings calls. The large-scale estimation suggests several novel results: (1) Facts ...
Working Papers , Paper 21-40

