Daniel N Hauser
“Learning with Heterogeneous Misspecified Models: Characterization and Robustness” joint with Aislinn Bohren (Econometrica, November 2021)
This paper develops a general framework to study how misinterpreting information impacts learning. Our main result is a simple criterion to characterize long-run beliefs based on the underlying form of misspecification. We present this characterization in the context of social learning, then highlight how it applies to other learning environments, including individual learning. A key contribution is that our characterization applies to settings with model heterogeneity and provides conditions for entrenched disagreement. Our characterization can be used to determine whether a representative agent approach is valid in the face of heterogeneity, study how differing levels of bias or unawareness of others’ biases impact learning, and explore whether the impact of a bias is sensitive to parametric specification or the source of information. This unified framework synthesizes insights gleaned from previously studied forms of misspecification and provides novel insights in specific applications, as we demonstrate in settings with partisan bias, overreaction, naive learning, and level-k reasoning.
“Censorship and Reputation” (Accepted at AEJ Micro)
I study how a firm manages its reputation by investing in the quality of its product and censoring bad news. Without censorship, the threat of bad news provides strong incentives for investment. I highlight two discontinuities that the introduction of censorship creates in the firm’s maximum equilibrium payoff. When the cost of investment exceeds the cost of censorship, a patient firm never invests in quality and receives the lowest possible payoff. In contrast, when censorship is more expensive than investment, a patient firm’s payoffs approach the first best, which can exceed the firm’s maximum equilibrium payoff when it is unable to censor.
“Promoting a Reputation For Quality” (Accepted at RAND)
A firm manages its reputation not only by investing in the quality of its products, but also through promotional campaigns and other forms of advertisement. I model a firm that invests both in the quality of a product and in the information about quality it provides to the market. The market learns about quality through information that the firm cannot influence and through promotion controlled by the firm. When the market learns about quality primarily through promotion, the ability to promote creates and enhances incentives to invest in quality. This leads to reputation cycles and to periods in which the firm promotes, even though it is not investing in quality, in order to increase the reputational dividend from past investment. Promotion shapes incentives for investment in quality: it enhances incentives for investment at low reputations and eliminates equilibria with reputation traps, situations in which low-reputation firms can never reestablish a high reputation. In equilibrium, the ability to promote also reduces incentives for investment at high reputations, leading to longer and larger reputation cycles than in environments with only exogenous news.
“The Behavioral Foundations of Model Misspecification” joint with Aislinn Bohren
A growing literature in economics seeks to model how agents process information and update beliefs. In this paper, we link two common approaches: (i) defining an updating rule that specifies a mapping from prior beliefs and the signal to the agent’s subjective posterior, and (ii) modeling an agent as a Bayesian learner with a misspecified model. The updating rule approach has a more transparent conceptual link to the underlying bias being modeled, while the misspecified model approach is ‘complete,’ in that no further assumptions on belief-updating are necessary to analyze the model, and has well-developed solution concepts and convergence results. We show that any misspecified model can be decomposed into two objects that summarize the biases it introduces: the updating rule captures how the agent interprets realized information, while the forecast captures how the agent anticipates future information. Any misspecified model induces a forecast and updating rule pair. We derive necessary and sufficient conditions for a forecast and updating rule pair to be represented by a misspecified model. This provides conceptual guidance for which model to select to represent a given bias. Finally, we consider two natural ways to select forecasts: introspection-proofness and naive consistency. We demonstrate how introspection-proofness places a natural bound on the magnitude of bias in an application with motivated reasoning, and how naive consistency impacts a firm’s ability to screen consumers in a credit market application.
Work in Progress
“Misinterpreting Social Outcomes and Information Campaigns,” joint with Aislinn Bohren (Extended Abstract)
This paper explores how information campaigns can counteract inefficient choices in a learning setting with social perception bias. Individuals learn from private information and the outcomes of others, and a social planner can release costly information about the state. We model social perception biases as a misspecified model of others’ preferences. When individuals systematically overestimate the similarity between their own preferences and those of others (the false consensus effect), learning can be incorrect; when individuals systematically underestimate this similarity (pluralistic ignorance), beliefs may fail to converge. We characterize how the type and level of social perception bias affect the optimal information policy, and show that both the duration of the optimal information campaign (temporary or permanent) and its target (correcting inefficient action choices or reinforcing efficient ones) depend crucially on the form of misspecification. We close with an application in which individuals misunderstand other individuals’ risk preferences.
“Social Learning with Endogenous Order of Moves” joint with Pauli Murto and Juuso Välimäki
We extend the canonical social learning model to allow for free timing of actions. A group of agents, each endowed with private information, tries to learn an unknown state of the world by observing the actions other agents take. Each agent makes a single irreversible decision but, unlike in the canonical model, can choose to wait in order to observe the decisions of others. Previous literature has understated the role of this endogenous timing in facilitating information aggregation; we demonstrate that in the most informative symmetric equilibrium, information fully aggregates as the number of players becomes large. In this limit, we also obtain closed-form characterizations of the rates of learning, rates of exit, and equilibrium welfare.
“Posteriors as Signals in Misspecified Learning Models” joint with Aislinn Bohren
The Bayesian learning literature often normalizes a signal about an unknown state to be the set of posterior distributions over the state space induced by each signal realization, as formalized in Smith and Sørensen (2000). In this note, we provide a foundation for such a normalization when agents have a misspecified model of the state-signal distributions. Given such a posterior normalization for the correctly specified state-signal distributions, we establish necessary and sufficient conditions to represent an agent’s misspecified model with respect to this normalized set of posterior distributions.
Other Ongoing Projects
“Misperception of Fines” joint with Martti Kaila
“Information for Attention” joint with Jan Knoepfle