Working Paper
Optimal Contracts with Reflection
Abstract: In this paper, we show that whenever the agent's outside option is nonzero, the optimal contract in the continuous-time principal-agent model of Sannikov (2008) is reflective at the lower bound. This means the agent is never terminated or retired after poor performance. Instead, the agent is asked to exert zero effort temporarily, which raises his continuation value. The agent is then asked to resume effort, and the contract continues. We show that a nonzero outside option arises endogenously if the agent is allowed to quit and search for a new firm (after a random search time of finite expected duration). In addition, we characterize new dynamics of the reflection at the lower bound. In the baseline model, the reflection is slow, as in Zhu (2013), i.e., the zero-effort action is used frequently. However, if the agent's disutility from the first unit of effort is zero, which is a standard Inada condition, or if his utility of consumption is unbounded below, the reflection becomes fast, i.e., the zero-effort action is used only rarely.
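The distinction between slow (sticky) and fast reflection can be illustrated numerically. The sketch below is not the paper's model: it is a minimal discrete-time proxy, assuming a driftless Brownian stand-in for the agent's continuation value with a hard lower bound; the function name, the parameters `sigma` and `sticky_rate`, and the exponential holding-time mechanism are illustrative assumptions, not constructs from the paper. Under sticky reflection the path accumulates a positive fraction of time at the bound (the zero-effort action is used frequently); under fast reflection that fraction vanishes as the step size shrinks.

```python
import numpy as np

def simulate_reflected_path(T=1.0, n=100_000, sigma=1.0, lower=0.0,
                            sticky_rate=None, seed=0):
    """Discrete-time proxy for a diffusion reflected at `lower`.

    sticky_rate=None  -> fast (instantaneous) reflection: the path is
                         pushed off the bound the moment it hits it.
    sticky_rate=r > 0 -> slow ("sticky") reflection proxy: once at the
                         bound, the path is held there and leaves each
                         step only with probability r*dt, so it spends
                         a positive fraction of time at the bound.
    """
    rng = np.random.default_rng(seed)
    dt = T / n
    w = np.empty(n + 1)
    w[0] = lower                      # start on the bound
    stuck = False
    for i in range(n):
        if stuck and rng.random() >= sticky_rate * dt:
            w[i + 1] = lower          # still held at the bound
            continue
        stuck = False                 # free to diffuse this step
        w[i + 1] = w[i] + sigma * np.sqrt(dt) * rng.standard_normal()
        if w[i + 1] <= lower:         # hit the bound: clip, maybe stick
            w[i + 1] = lower
            stuck = sticky_rate is not None
    return w

fast = simulate_reflected_path()                 # instantaneous reflection
slow = simulate_reflected_path(sticky_rate=5.0)  # sticky reflection
print("time at bound, fast  :", np.mean(fast == 0.0))  # small, shrinks with dt
print("time at bound, sticky:", np.mean(slow == 0.0))  # clearly positive
```

The time-at-bound fractions printed at the end mirror the abstract's contrast: the sticky path spends a non-vanishing share of time at the lower bound, while the instantaneously reflected path does not.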
Keywords: dynamic moral hazard; quitting; random search; reflective dynamics; ODE splicing; sticky Brownian motion; fast reflection; instantaneous control
JEL Classification: C61; D82; D86; M52
Access Documents
Full text (PDF): https://www.richmondfed.org/-/media/richmondfedorg/publications/research/working_papers/2016/pdf/wp16-14.pdf
Authors
Borys Grochulski and Yuzhe Zhang
Bibliographic Information
Provider: Federal Reserve Bank of Richmond
Part of Series: Working Paper
Publication Date: 2017-01-27
Number: 16-14
Pages: 47