Daniel N Hauser
“Learning with Model Misspecification: Characterization and Robustness” joint with Aislinn Bohren (Conditional Accept at Econometrica)
This paper develops a general framework to study how misinterpreting information impacts learning. We consider sequential social learning and passive individual learning settings in which individuals observe signals and the actions of predecessors. Individuals have incorrect, or misspecified, models of how to interpret these sources of information — such as overreaction to signals or misperception of others’ preferences. Our main result is a simple criterion that characterizes long-run beliefs and behavior based on the underlying form of misspecification. This provides a unified way to compare previously studied forms of misspecification and generates new insights about forms that have not yet been theoretically explored. It also allows for a deeper understanding of how misspecification impacts learning: whether a given form of misspecification is conceptually robust, in that it is not sensitive to the parametric specification; whether misspecification has a similar impact in individual and social learning settings; and how model heterogeneity affects learning. Lastly, it establishes that the correctly specified model is analytically robust, in that nearby misspecified models generate similar long-run beliefs.
“Censorship and Reputation” (R&R at AEJ Micro)
I study how a firm manages its reputation by investing in the quality of its product and censoring bad news. Without censorship, the threat of bad news provides strong incentives for investment. I highlight two discontinuities that the introduction of censorship creates in the firm’s maximum equilibrium payoff. When the cost of investment exceeds the cost of censorship, a patient firm never invests in quality and receives the lowest possible payoff. In contrast, when censorship is more expensive than investment, a patient firm’s payoffs approach the first best, which can exceed the maximum equilibrium payoff it could attain if it were unable to censor.
“Promoting a Reputation For Quality” (R&R at RAND)
A firm manages its reputation not only by investing in the quality of its products, but also through promotional campaigns and other forms of advertisement. I model a firm that invests both in the quality of a product and in the information about quality it provides to the market. The market learns about quality through information that the firm cannot influence and through promotion controlled by the firm. When the market learns about quality primarily through promotion, the ability to promote creates and enhances incentives to invest in quality. This leads to reputation cycles, including periods where the firm promotes, even though it is not investing in quality, in order to increase the reputational dividend from past investment. Promotion enhances the incentives for investment at low reputations and eliminates equilibria with reputation traps, situations in which low-reputation firms can never reestablish a high reputation. In equilibrium, however, the ability to promote also reduces incentives for investment at high reputations, leading to longer and larger reputation cycles than in environments with only exogenous news.
Work in Progress
“Representing Biases and Heuristics as Misspecified Models” joint with Aislinn Bohren
A growing literature in economics considers how to model heuristics that agents use to process information and the resulting biases that emerge in belief updating. In this paper, we link two common approaches used to model inaccurate belief updating: (i) defining an “updating rule” that specifies a mapping from the true Bayesian posterior to the agent’s subjective posterior, and (ii) modeling an agent as a Bayesian with a misspecified model of the signal process. We establish conditions under which an updating rule can be represented as a misspecified model and conditions under which a misspecified model can be represented as an updating rule. This result connects the two approaches and clarifies the implicit assumptions about an agent’s learning rule required for each approach.
“Misinterpreting Social Outcomes and Information Campaigns,” joint with Aislinn Bohren (Extended Abstract)
This paper explores how information campaigns can counteract inefficient choices in a learning setting with social perception bias. Individuals learn from private information and the outcomes of others, and a social planner can release costly information about the state. We model social perception biases as a misspecified model of others’ preferences. When individuals systematically overestimate the similarity between their own preferences and the preferences of others — exhibiting the false consensus effect — this can lead to incorrect learning; when individuals systematically underestimate this similarity — exhibiting pluralistic ignorance — this can prevent beliefs from converging. We characterize how the type and level of social perception bias affect the optimal information policy, and show that the duration of the optimal information campaign (temporary or permanent) and its target (intervening to correct inefficient action choices or to reinforce efficient ones) depend crucially on the form of misspecification. We close with an application in which individuals misunderstand other individuals’ risk preferences.
“Social Learning with Endogenous Order of Moves” joint with Pauli Murto and Juuso Välimäki
We extend the canonical social learning model to allow for free timing of actions. A group of agents, each endowed with some private information, tries to learn an unknown state of the world by observing the actions taken by other agents. Agents make a single irreversible decision but, unlike in the canonical model, they can choose to wait in order to observe the decisions of others. Previous literature has understated the role of this endogenous timing in facilitating information aggregation; we demonstrate that in the most informative symmetric equilibrium, information fully aggregates as the number of players becomes large. In this limit, we also obtain closed-form characterizations of the rates of learning, the rates of exit, and equilibrium welfare.
“Posteriors as Signals in Misspecified Learning Models” joint with Aislinn Bohren
The Bayesian learning literature often normalizes a signal about an unknown state to be the set of posterior distributions over the state space induced by each signal realization, as formalized in Smith and Sørensen (2000). In this note, we provide a foundation for such a normalization when agents have a misspecified model of the state-signal distributions. Given such a posterior normalization for the correctly specified state-signal distributions, we establish necessary and sufficient conditions to represent an agent’s misspecified model with respect to this normalized set of posterior distributions.