Working Papers

An Anatomy of Subjective Expectation

[Draft Available Upon Request]

This version: August 2024. 

Abstract:

I study the formation of analysts' subjective earnings expectations using textual information from over 1.8 million equity research reports. Employing a Large Language Model in a Retrieval-Augmented Generation (RAG) framework, I differentiate between factual and subjective content in analyst reports and distill this information into interpretable components of a firm's fundamentals. I find that analysts exhibit pro-cyclical attention to earnings growth information and counter-cyclical attention to financial distress, consistent with a model of optimal attention allocation. Exploiting the heterogeneity in attention allocation across analysts, I extract earnings signals from various fundamental components and find that analysts tend to overreact to high earnings growth while underreacting to high debt and expenses. Additionally, I find that analysts extrapolate their subjective outlooks from information about different fundamental components. Finally, I find that both asymmetric information and differences in beliefs are significant drivers of disagreement. These results offer new insights into how subjective beliefs are formed in practice.
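
A minimal sketch of the retrieval step in a RAG-style pipeline of this kind, assuming toy report sentences and TF-IDF retrieval in place of the paper's actual retriever and embedding model; the LLM classification call is stubbed out as an assembled prompt, since the abstract does not specify an API.

```python
# Hedged sketch: retrieval step of a RAG pipeline for separating factual from
# subjective sentences in analyst reports. All inputs are hypothetical
# placeholders, not the paper's data or models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical report sentences (stand-ins for real analyst-report text).
sentences = [
    "Revenue grew 12% year over year, driven by cloud services.",
    "We believe margins will expand materially next year.",
    "Net debt stands at $4.2bn as of Q2.",
    "Management's guidance strikes us as overly conservative.",
]

# Hypothetical reference passages of verifiable filing facts, used as the
# retrieval corpus that grounds the factual/subjective classification.
facts = [
    "10-Q: total revenue increased 12% year over year.",
    "10-Q: net debt of $4.2bn at quarter end.",
]

vectorizer = TfidfVectorizer().fit(sentences + facts)
S = vectorizer.transform(sentences)
F = vectorizer.transform(facts)
sims = cosine_similarity(S, F)  # sentence-by-fact similarity matrix

for sent, row in zip(sentences, sims):
    # Assemble the prompt an LLM classifier would receive; the call itself
    # is stubbed, as no specific model or API is implied by the abstract.
    prompt = (
        f"Sentence: {sent}\n"
        f"Closest filing fact (similarity {row.max():.2f}): {facts[row.argmax()]}\n"
        "Label this sentence as FACTUAL or SUBJECTIVE."
    )
    print(prompt, end="\n\n")
```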

APT or “AIPT”? The Surprising Dominance of Large Factor Models  

[Paper]

Coauthors: Antoine Didisheim, Bryan T. Kelly, and Semyon Malamud

This version: September 2024. Previously titled "Complexity in Factor Pricing Models".

Abstract:

We introduce artificial intelligence pricing theory (AIPT). In contrast with the APT's foundational assumption of a low-dimensional factor structure in returns, the AIPT conjectures that returns are driven by a large number of factors. We first verify this conjecture empirically and show that nonlinear models with an exorbitant number of factors (many more than the number of training observations or base assets) are far more successful in describing the out-of-sample behavior of asset returns than simpler standard models. We then theoretically characterize the behavior of large factor pricing models, from which we show that the AIPT's "many factors" conjecture faithfully explains our empirical findings, while the APT's "few factors" conjecture is contradicted by the data.
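
A hedged simulation of the "many factors" idea under a toy data-generating process of my own choosing (not the paper's data or estimator): a ridge model built on far more random nonlinear features than training observations can beat a small linear model out of sample.

```python
# Toy illustration: many random nonlinear factors (P >> T) vs. a few linear ones.
import numpy as np

rng = np.random.default_rng(0)
T_train, T_test, K = 200, 2000, 10          # toy sample sizes and raw signals
X = rng.standard_normal((T_train + T_test, K))
# Nonlinear "true" return-generating process plus noise (assumed, for illustration).
y = np.sin(X[:, 0]) * X[:, 1] + 0.5 * np.tanh(X[:, 2]) \
    + rng.standard_normal(T_train + T_test)

def random_features(X, P, rng):
    """Random Fourier features: a nonlinear expansion with P >> T_train columns."""
    W = rng.standard_normal((X.shape[1], P)) / np.sqrt(X.shape[1])
    return np.cos(X @ W + rng.uniform(0, 2 * np.pi, P))

def oos_r2(Z, y, lam):
    Ztr, Zte = Z[:T_train], Z[T_train:]
    ytr, yte = y[:T_train], y[T_train:]
    # Dual (kernel) form of ridge: beta = Z'(ZZ' + lam I)^{-1} y, cheap when P >> T.
    alpha = np.linalg.solve(Ztr @ Ztr.T + lam * np.eye(T_train), ytr)
    e = yte - Zte @ (Ztr.T @ alpha)
    return 1 - e @ e / (yte @ yte)

print("few factors (linear):  ", round(oos_r2(X, y, 1e-3), 3))
print("many factors (P=5000): ", round(oos_r2(random_features(X, 5000, rng), y, 10.0), 3))
```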

The Double-edged Sword of Data Mining: Implications for Asset Pricing and Information Efficiency

[Paper]

This version: November 2023.

Abstract:

Does data mining always increase price efficiency? Not necessarily. I incorporate data mining into a standard asset pricing model and identify a novel cost of complexity that arises endogenously from data mining. When a data miner explores alternative data, she faces a training history that is scarce relative to the set of potential predictors (increasing complexity) and an increasing difficulty in extracting useful signals (decreasing returns in data efficacy). Together, the cost of complexity and the decreasing returns in data efficacy imply a finite optimal level of data mining, such that excess data mining lowers price informativeness. Empirically, I provide evidence of decreasing returns in data efficacy in the context of the "factor zoo", and I show that the release of satellite data reduces price informativeness in a difference-in-differences setting.
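
A hedged simulation of the abstract's trade-off, with toy numbers rather than the paper's calibration: when the training history is fixed and the usable signal decays in the predictor index (decreasing returns in data efficacy), adding predictors first helps and then hurts out-of-sample accuracy, implying a finite optimal amount of data mining.

```python
# Toy illustration of a finite optimal data mining level (assumed DGP).
import numpy as np

rng = np.random.default_rng(1)
T_train, T_test, P_max = 120, 5000, 120
X = rng.standard_normal((T_train + T_test, P_max))
# True coefficients shrink in the predictor index: later "alternative data"
# predictors carry progressively less signal (the decreasing-returns assumption).
beta_true = 0.5 / (1 + np.arange(P_max)) ** 0.8
y = X @ beta_true + rng.standard_normal(T_train + T_test)

for P in (2, 8, 30, 120):                    # increasing amounts of "data mining"
    Xtr, Xte = X[:T_train, :P], X[T_train:, :P]
    ytr, yte = y[:T_train], y[T_train:]
    b = np.linalg.lstsq(Xtr, ytr, rcond=None)[0]   # OLS on the first P predictors
    e = yte - Xte @ b
    print(f"P={P:3d}  OOS R^2 = {1 - e @ e / (yte @ yte):.3f}")
```

With these toy parameters the out-of-sample fit peaks at an intermediate P and collapses as P approaches the length of the training history, mirroring the cost of complexity in the abstract.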

What Drives Trading in Financial Markets? A Big Data Perspective

[Paper]

Coauthor: Anton Lines

This version: September 2022.

Abstract:

We use deep Bayesian neural networks to investigate the determinants of trading activity in a large sample of institutional equity portfolios. Our methodology allows us to evaluate hundreds of potentially relevant explanatory variables, estimate arbitrary nonlinear interactions among them, and aggregate them into interpretable categories. The deep learning models predict trading decisions with up to 86% out-of-sample accuracy, with market liquidity and macroeconomic conditions together accounting for most (66-91%) of the explained variance. Stock fundamentals, firm-specific corporate news, and analyst forecasts have comparatively low explanatory power. Our results suggest that market microstructure considerations and macroeconomic risk are the most important factors for understanding trading patterns in financial markets.
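
A minimal sketch of one common way to make a deep network "Bayesian", Monte Carlo dropout; the paper's exact architecture, features, and approximate-inference scheme are not specified in the abstract, so everything below (network size, dropout rates, toy labels) is an assumption for illustration.

```python
# Hedged sketch: MC-dropout classifier for a binary trade/no-trade decision.
# Toy data stands in for the institutional trading panel.
import torch
import torch.nn as nn

torch.manual_seed(0)
N, D = 2000, 12                              # toy observations and features
X = torch.randn(N, D)
# Toy labels driven mostly by two liquidity-like features (assumed DGP).
y = ((X[:, 0] + 0.5 * X[:, 1] + 0.3 * torch.randn(N)) > 0).float()

model = nn.Sequential(
    nn.Linear(D, 64), nn.ReLU(), nn.Dropout(0.2),
    nn.Linear(64, 64), nn.ReLU(), nn.Dropout(0.2),
    nn.Linear(64, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for _ in range(300):                         # short training loop for the sketch
    opt.zero_grad()
    loss = loss_fn(model(X).squeeze(-1), y)
    loss.backward()
    opt.step()

# MC dropout: keep dropout active at prediction time and average many
# stochastic forward passes to get a predictive mean and uncertainty band.
model.train()
with torch.no_grad():
    draws = torch.stack([torch.sigmoid(model(X[:5]).squeeze(-1)) for _ in range(200)])
print("predictive mean:", draws.mean(0).numpy().round(2))
print("predictive std: ", draws.std(0).numpy().round(2))
```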


The Social Welfare of Stock Market Mispricing

[Paper]

This version: April 2024.

Abstract:

This paper studies the social value of eliminating mispricing in the US stock market. Characterizing a model in which active managers extract abnormal value from trading against mispricing, I show that the mispricing of a stock, relative to a benchmark asset pricing model, exactly equals the marginal social value of trading against that mispricing. Combining conditional mispricing estimates from a novel instrumented factor model with a calibrated price impact function, I find that mispricing relative to the CAPM translates into a welfare cost of about 3.1% of annual nominal GDP in the US, rising to more than 8% during the Tech Bubble, the Global Financial Crisis, and the recent Covid-19 pandemic. These results suggest a large potential welfare gain from active management that eliminates stock mispricing.
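
One way to see the marginal-value claim, as a hedged sketch under a quadratic price-impact assumption; the notation below is mine, not necessarily the paper's.

```latex
% Hedged sketch, not the paper's derivation: let \alpha be a stock's
% conditional mispricing (abnormal return) relative to the benchmark model
% and \Lambda a calibrated price-impact matrix. The value an active manager
% extracts by trading q against mispricing is
W(q) = q^{\top}\alpha \;-\; \tfrac{1}{2}\, q^{\top}\Lambda\, q ,
% so the marginal social value of mispricing trading at q = 0 is the
% mispricing itself,
\nabla W(0) = \alpha ,
% and the maximal attainable gain from trading the mispricing away is
\max_{q} W(q) \;=\; \tfrac{1}{2}\,\alpha^{\top}\Lambda^{-1}\alpha .
```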

On the Testability of the Anchor Words Assumption in Topic Models

[Paper] [Online Appendix]

Coauthors: Simon Freyaldenhoven, Dingyi Li, and José Luis Montiel Olea

This version: August 2024.

Abstract:

Topic models are a simple and popular tool for the statistical analysis of textual data. Their identification and estimation are typically enabled by assuming the existence of anchor words; that is, words that are exclusive to specific topics. In this paper we show that the existence of anchor words is statistically testable: there exists a hypothesis test with correct size that has nontrivial power. This means that the anchor-word assumption cannot be viewed simply as a convenient normalization. Central to our results is a simple characterization of when a column-stochastic matrix with known nonnegative rank admits a separable factorization. We test for the existence of anchor words in two different datasets derived from the transcripts of the meetings of the Federal Open Market Committee (FOMC), the body of the Federal Reserve System that sets monetary policy in the United States, and reject the null hypothesis that anchor words exist in one of them.
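
A minimal sketch of the population-level condition the test targets: a words-by-topics probability matrix is separable when every topic has at least one (anchor) word exclusive to it. This check on a known matrix is only an illustration; the paper's hypothesis test must handle sampling noise in estimated frequencies.

```python
# Hedged sketch: check the anchor-word (separability) condition for a known
# word-by-topic probability matrix A (columns sum to 1). Not the paper's test.
import numpy as np

def has_anchor_words(A, tol=1e-10):
    """Return True if every topic has a word with positive probability in
    that topic and (near-)zero probability in all other topics."""
    anchored = []
    for k in range(A.shape[1]):
        others = np.delete(A, k, axis=1)
        anchored.append(bool(np.any((A[:, k] > tol) & (others.max(axis=1) <= tol))))
    return all(anchored)

# Toy matrices: in A_sep the first two words anchor topics 0 and 1; A_nonsep
# has no word exclusive to a single topic.
A_sep = np.array([[0.4, 0.0],
                  [0.0, 0.5],
                  [0.6, 0.5]])
A_nonsep = np.array([[0.4, 0.2],
                     [0.2, 0.5],
                     [0.4, 0.3]])
print(has_anchor_words(A_sep), has_anchor_words(A_nonsep))  # True False
```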

Publications

Robust Machine Learning Algorithms for Text Analysis 

[Paper] [Online Appendix] [Code]

Coauthors: José Luis Montiel Olea and James Nesbit

Forthcoming in Quantitative Economics

Abstract:

We study the Latent Dirichlet Allocation model, a popular Bayesian algorithm for text analysis. Our starting point is the generic lack of identification of the model’s parameters, which suggests that the choice of prior matters. We then characterize by how much the posterior mean of a given functional of the model’s parameters varies in response to a change in the prior, and we suggest two algorithms to approximate this range. Both of our algorithms rely on obtaining multiple Nonnegative Matrix Factorizations of either the posterior draws of the corpus’ population term-document frequency matrix or of its sample analogue. The key idea is to maximize/minimize the functional of interest over all these nonnegative matrix factorizations. To illustrate the applicability of our results, we revisit recent work on the effects of increased transparency on discussions regarding monetary policy decisions in the United States.
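
A hedged sketch of the key idea, maximizing and minimizing a functional of interest over multiple nonnegative matrix factorizations; it uses sklearn's NMF with different random seeds on a toy term-document matrix as a simplified stand-in for the paper's algorithms, which also operate on posterior draws.

```python
# Hedged sketch: range of a functional across several NMFs of a
# term-document frequency matrix (toy data, simplified algorithm).
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
# Toy term-document frequency matrix (terms x documents), column-normalized.
F = rng.random((50, 30))
F /= F.sum(axis=0, keepdims=True)

def functional(W):
    """Example functional of the factorization: the largest single-word
    weight in topic 0 (any functional of interest could be used here)."""
    col = W[:, 0] / W[:, 0].sum()
    return col.max()

values = []
for seed in range(20):                       # multiple factorizations F ~ W H
    nmf = NMF(n_components=3, init="random", random_state=seed, max_iter=500)
    W = nmf.fit_transform(F)
    values.append(functional(W))

# The spread across factorizations approximates the functional's range.
print(f"range over factorizations: [{min(values):.3f}, {max(values):.3f}]")
```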