We introduce a model of learning in which the decision maker does not know how recommendations are generated, and we present behavioral postulates that characterize an updating rule called the contraction rule. The contraction rule can be uniquely identified and reveals both how the decision maker interprets the recommendation and how much she trusts it. In a dynamic stationary setting, we show that the contraction rule is not dominated by completely following recommendations but is incompatible with a property called compliance with balanced recommendations. Following this negative result, we demonstrate that the contraction rule may generate and reinforce recency bias and disagreement.
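The abstract names the contraction rule but does not state its functional form; it does say the rule is pinned down by how the decision maker interprets the recommendation and how much she trusts it. Purely as a hypothetical sketch of those two ingredients (the names `interpret` and `trust`, and the mixture form itself, are our assumptions, not the paper's definition), a trust-weighted shrinkage of the prior toward an interpreted recommendation might look like:

```python
def contraction_style_update(prior, recommendation, interpret, trust):
    """Shrink the prior toward the belief the DM reads into the recommendation.

    Hypothetical illustration only; the paper's contraction rule may differ.
    prior:          dict mapping states to probabilities
    recommendation: message whose generating process the DM does not know
    interpret:      the DM's map from recommendations to beliefs over states
    trust:          weight in [0, 1]; trust = 1 means following completely
    """
    q = interpret(recommendation)
    return {s: (1 - trust) * prior[s] + trust * q[s] for s in prior}

# Example: the DM reads "buy" as the belief (0.8, 0.2) and trusts it halfway.
prior = {"up": 0.5, "down": 0.5}
interpret = lambda m: {"up": 0.8, "down": 0.2} if m == "buy" else {"up": 0.2, "down": 0.8}
print(contraction_style_update(prior, "buy", interpret, 0.5))  # {'up': 0.65, 'down': 0.35}
```

Setting trust = 1 corresponds to completely following recommendations, the benchmark in the dynamic comparison above.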
Tel: 2857 8564
Office: KK 924
- Ph.D., M.A., Princeton University
- B.Econ., The University of Hong Kong
Chen ZHAO received his Ph.D. and M.A. in economics from Princeton University. He also holds a B.Econ. from HKU. Chen joined HKU, his alma mater, in 2017 as an assistant professor. His main research field is axiomatic decision theory.
- Microeconomic Theory
- Decision Theory
- Behavioral Economics
- “Logic-based updating,” Journal of Economic Theory, 2024, 221, 105901.
- “Learning from a Black Box,” Journal of Economic Theory, 2024, 221, 105886.
- “Cheap talk with prior-biased inferences,” Games and Economic Behavior, 2023, 138, 254-280.
- “Pseudo-Bayesian Updating,” Theoretical Economics, 2022, 17(1), 253-289.
We build on AGM belief revision (Alchourrón et al., 1985) and propose a class of updating rules called pragmatic rules. Pragmatic updating applies to multiple priors and requires that the agent's posteriors be exactly the subset of her priors under which the realized event occurs with probability 1, whenever such priors exist. We construct a propositional language based on qualitative probability and demonstrate a tight connection between belief updating rules and belief revision rules in this language. We show that an updating rule is consistent with AGM belief revision if and only if it is pragmatic. While maximum likelihood updating is pragmatic in general, full-Bayesian updating is not. We characterize maximum likelihood updating within the AGM framework and show that full-Bayesian updating can be obtained by dropping one of the AGM postulates.
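Since the abstract states the defining condition of pragmatic updating explicitly, a minimal sketch may help fix ideas. The following assumes a finite state space and finitely many priors, and uses the standard multiple-priors definitions of maximum likelihood and full-Bayesian updating; all function names are ours:

```python
from fractions import Fraction

# A belief is a dict mapping states to probabilities; an event is a set of states.
def prob(p, event):
    return sum(p[s] for s in event)

def bayes(p, event):
    """Condition a single prior on the event (requires p(event) > 0)."""
    total = prob(p, event)
    return {s: p[s] / total for s in event}

def pragmatic_update(priors, event):
    """Keep exactly the priors that already assign probability 1 to the
    realized event, whenever such priors exist (the defining condition above)."""
    certain = [p for p in priors if prob(p, event) == 1]
    return certain if certain else None  # the condition is silent otherwise

def maximum_likelihood_update(priors, event):
    """Condition only the priors assigning maximal probability to the event."""
    best = max(prob(p, event) for p in priors)
    return [bayes(p, event) for p in priors if prob(p, event) == best]

def full_bayesian_update(priors, event):
    """Condition every prior assigning positive probability to the event."""
    return [bayes(p, event) for p in priors if prob(p, event) > 0]

# Example: two priors over states {a, b, c}; the realized event is E = {a, b}.
priors = [
    {"a": Fraction(1, 2), "b": Fraction(1, 2), "c": Fraction(0)},    # p(E) = 1
    {"a": Fraction(1, 2), "b": Fraction(1, 4), "c": Fraction(1, 4)}, # p(E) = 3/4
]
E = {"a", "b"}
print(pragmatic_update(priors, E))           # only the first prior survives
print(maximum_likelihood_update(priors, E))  # same: the first prior maximizes p(E)
print(full_bayesian_update(priors, E))       # both priors are conditioned on E
```

In this example a probability-one prior exists, so maximum likelihood updating retains exactly that prior, and conditioning leaves it unchanged, which is consistent with the pragmatic condition. Full-Bayesian updating also keeps the conditioned second prior, whose posterior is not among the original priors, illustrating why it fails to be pragmatic.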