Working Paper

What Do LLMs Want?


Abstract: Large language models (LLMs) are now used for economic reasoning, but their implicit "preferences" are poorly understood. We study these preferences by analyzing revealed choices in canonical allocation games and a sequential job-search environment. In dictator-style allocation games, most models favor equal splits, consistent with inequality aversion. Structural estimation of Fehr-Schmidt parameters suggests this aversion exceeds levels typically observed in human experiments. However, LLM preferences prove malleable. Interventions such as prompt framing (e.g., masking social context) and control vectors reliably shift models toward more payoff-maximizing behavior, while persona-based prompting has more limited impact. We then extend our analysis to a sequential decision-making environment based on the McCall job search model. Here, we recover implied discount factors from accept/reject behavior, but find that responses are less consistently rationalizable and preferences more fragile. Our findings highlight two core insights: (i) LLMs exhibit structured, latent preferences that often align with human behavioral norms, and (ii) these preferences can be steered, albeit more effectively in simple settings than in complex, dynamic ones.
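For reference, the two workhorse models named in the abstract have standard textbook forms; the sketch below uses canonical notation (Fehr and Schmidt, 1999; McCall, 1970) rather than the paper's own specification, which may differ. In the two-player Fehr-Schmidt model, player i with own payoff x_i and counterpart payoff x_j has utility

\[
U_i(x_i, x_j) = x_i - \alpha_i \max\{x_j - x_i, 0\} - \beta_i \max\{x_i - x_j, 0\},
\]

where \(\alpha_i\) measures aversion to disadvantageous inequality and \(\beta_i\) aversion to advantageous inequality; structural estimation recovers \((\alpha_i, \beta_i)\) from observed allocation choices. In the McCall search model, an unemployed agent with discount factor \(\beta\), per-period payoff b while searching, and i.i.d. wage offers drawn from a distribution F accepts the first offer at or above the reservation wage \(w^*\), which satisfies

\[
w^* = b + \frac{\beta}{1 - \beta} \int_{w^*}^{\infty} (w - w^*)\, dF(w).
\]

Given b and F, observed accept/reject decisions bound \(w^*\), and inverting this condition is one way the "implied discount factors" mentioned above can be recovered.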

JEL Classification: C63; C68; C61; D14; D83; D91; E20; E21

https://doi.org/10.17016/FEDS.2026.006

Access Documents

File(s): Full text (PDF): https://www.federalreserve.gov/econres/feds/files/2026006pap.pdf

Bibliographic Information

Provider: Board of Governors of the Federal Reserve System (U.S.)

Part of Series: Finance and Economics Discussion Series

Publication Date: 2026-01-30

Number: 2026-006