%Paper: ewp-game/9902003
%From: vkrishna@psu.edu
%Date: Thu, 11 Feb 1999 18:09:26 -0600 (CST)


\documentclass[12pt]{article}
\usepackage{amssymb}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\usepackage{sw20elba}


\newtheorem{theorem}{Theorem}
\newtheorem{acknowledgement}{Acknowledgement}
\newtheorem{algorithm}{Algorithm}
\newtheorem{axiom}{Axiom}
\newtheorem{case}{Case}
\newtheorem{claim}{Claim}
\newtheorem{conclusion}{Conclusion}
\newtheorem{condition}{Condition}
\newtheorem{conjecture}{Conjecture}
\newtheorem{corollary}{Corollary}
\newtheorem{criterion}{Criterion}
\newtheorem{definition}{Definition}
\newtheorem{example}{Example}
\newtheorem{exercise}{Exercise}
\newtheorem{lemma}{Lemma}
\newtheorem{notation}{Notation}
\newtheorem{problem}{Problem}
\newtheorem{proposition}{Proposition}
\newtheorem{remark}{Remark}
\newtheorem{solution}{Solution}
\newtheorem{summary}{Summary}
\newenvironment{proof}[1][Proof]{\textbf{#1.} }{\ \rule{0.5em}{0.5em}}
\input{tcilatex}

\begin{document}

\author{Vijay Krishna \\
%EndAName
Penn State University \and John Morgan \\
%EndAName
Princeton University}
\title{A Model of Expertise\thanks{%
This research was supported by a grant from the National Science Foundation
(SBR 9618648). We are grateful to Gene Grossman, George Mailath, Tomas
Sj\"{o}str\"{o}m and Joel Sobel for sharing their expertise with us.}}
\date{January 4, 1999}
\maketitle

\begin{abstract}
We study a model in which two perfectly informed experts offer advice to a
decision maker whose actions affect the welfare of all. Experts are biased
and thus may wish to pull the decision maker in different directions and to
different degrees. When the decision maker consults only a single expert,
the expert withholds substantial information from the decision maker. We ask
whether this situation is improved by having the decision maker consult a
cabinet of (two) experts. We first show that there is no perfect Bayesian
equilibrium in which full revelation occurs. When both experts are biased in
the same direction, it is never beneficial to consult both. In contrast,
when experts are biased in opposite directions, it is always beneficial to
consult both. Finally, a cabinet of extremists is of no value.

\newpage
\end{abstract}

\section{Introduction}

The power to make decisions rarely resides in the hands of those possessing
the necessary specialized knowledge. Instead experts are often solicited for
advice by decision makers. Thus, a division of labor has arisen between
those who have the relevant expertise and those who make use of it. The
diverse range of problems confronted by decision makers, such as corporate
CEOs or political leaders, almost precludes the possibility that they
themselves are experts in all relevant fields and hence, the need for
outside experts naturally arises. CEOs routinely seek the advice of
marketing specialists, investment bankers and management consultants.
Political leaders rely on a bevy of economic and military advisors.
Investors seek tips from stockbrokers and financial advisors.

These and numerous other situations share some common features.

First, the experts dispensing advice are by no means disinterested.
Differing objectives among the parties may lead the experts to attempt to
influence the decision maker in ways that are not necessarily in the
latter's best interests. Investment banks stand to gain from new issues and
corporate mergers, decisions about which they regularly offer advice. The
political future of economic and military advisors may be affected by the
decisions on which they give counsel. Stockbrokers are obviously interested
in the investment decisions of their clients.

Second, in nearly all of these cases decision makers are bombarded with
advice from numerous experts, with possibly different agendas. Moreover,
experts may strategically tailor their advice to counter that offered by
other, rival, experts. For instance, hawks may choose more extreme positions
on an issue if they know that doves are also being consulted, and
vice-versa. Thus the decision maker faces the daunting task of sifting
through the mass of sometimes conflicting opinions that are offered and
coming to a conclusion as to the best course of action. Indeed, this ability
is routinely touted as the mark of a good leader.

Thus, the decision maker, in determining the size and composition of her
``cabinet'' of advisors, must carefully consider the following questions: Is
it possible to extract all information relevant to the decision from a
cabinet? Is it better to actively consult a number of advisors or only a
single, well chosen, advisor? Is it better to form a cabinet with diverse
opinions about what is the ``correct'' decision, or does a cohesive cabinet
lead to better advice? Is an advocacy system, where the decision maker
appoints experts with opposing viewpoints, helpful in deciding on the
correct action? How do experts with extreme views affect the advice offered
to the decision maker? These questions form the central focus of our paper.

To get at these questions, we use a simple model in order to analyze the
interplay among a single decision maker and \emph{two} interested experts
who have superior information. The experts offer advice to the decision
maker in order to influence the decision in a way that serves their own,
possibly differing, objectives. We ask how a decision maker should integrate
the opinions of experts when faced with this situation.

In our model, an expert's preferences are parametrized by a measure of his
inherent bias relative to the decision maker. The experts may differ both in
terms of how biased they are and in which direction. They may be biased in
opposite directions (opposing bias): one expert may wish to pull the
decision maker to the left and the other to the right. Alternatively, both
may wish to pull in the same direction but to differing degrees (like bias).
The absolute value of the bias parameter indicates how ``loyal'' an expert
is to the decision maker. Of course, a more loyal expert's objectives are
more closely aligned with those of the decision maker.

If the decision maker had the option of consulting only one of the two
experts for advice, it seems natural that she would consult only the more
loyal expert, and indeed that is the case in our model. Nevertheless, \emph{%
a priori} it may be beneficial to \emph{combine }the advice from the two
experts. It is easy to see that if the advice were solicited in a way that
each expert was ignorant of the fact that the other was also offering
advice, the decision maker would surely benefit relative to consulting only
one expert. We study how this conclusion is affected if each expert were
aware that the other was also offering advice.

We first establish that, even though the information possessed by the
experts is perfectly correlated, the lack of congruence in incentives
between the decision maker and the experts \emph{always }leads to a
withholding of information on the part of the experts. That is, it is
impossible to form a cabinet such that the experts always reveal their
information. To assess whether it might still be beneficial to combine the
advice of experts, we examine separately the case where experts have like
biases and the case where biases are opposing.

\textbf{Like biases. }When the two experts have like biases, we find that
the decision maker would derive \emph{no }benefit relative to consulting
only one expert. Thus, despite the fact that the two experts have identical
information and the more loyal expert does not fully divulge what he knows,
the advice offered by the less loyal expert is of no additional value.
Moreover, \emph{ex ante }all parties, including the less loyal expert, would
agree that the best course of action is for the decision maker to consult
only the more loyal expert.

\textbf{Opposing biases. }When the two experts have opposing biases, the
decision maker always derives some benefit from consulting both
experts relative to consulting only one. Indeed, we show that even when
experts would reveal no information if consulted alone, combining the
information of the experts leads them to completely reveal their information
over a portion of the state space, and this is beneficial relative to
consulting only a single expert. However, this conclusion holds only if at
least one of the experts is not an ``extremist.'' If both of the experts are
extremists, no information is revealed -- either when they are consulted
separately or when information is combined.

\subparagraph{Related Work}

The advice that experts offer does not have any direct economic effect; at
best it only influences economically relevant decisions. Thus experts'
advice has the nature of ``cheap talk.'' Indeed our basic model is closely
related to the model of Crawford and Sobel (1982) of strategic information
transmission between two parties, one of whom has information useful for the
other (see also Green and Stokey (1980)).

Our model differs from that of Crawford and Sobel (1982), hereafter referred to
as CS, in that we allow for \emph{multiple} sources of information. This
context leads to important strategic considerations absent in the single
expert analysis. Now an expert must consider not only how his advice will
directly influence the decision maker but also what information is coming
from other experts and how his mere presence will affect that. Likewise, the
decision maker now has the option of listening to only a subset of the panel
of experts. As will become apparent, the additional strategic considerations
that arise with multiple experts lead to technical complications not present
with a single expert. Differences between the single expert model of CS and
our model are highlighted in later sections.

As is well known, models with cheap talk suffer from a plethora of
equilibria, and efforts to identify some as salient have led to the
development of a substantial literature on refinements in this context
(Matthews, Okuno-Fujiwara and Postlewaite (1991) and Farrell (1993)).
Farrell and Rabin (1996) present a concise survey. The models we consider
also have multiple equilibria; however, for the most part, our focus is on
the ``most informative'' equilibrium.

Closely related are papers by Gilligan and Krehbiel (1989), who examine the
case of opposing biases and by Austen-Smith (1990), who examines the case of
like biases. Gilligan and Krehbiel (1989) are concerned with the effect on
legislative outcomes of having two ``expert'' committees with opposing
biases restrict the set of legislative alternatives that may be implemented.
They show that the restrictive ``closed rule'' system of determining the set
of alternatives does not lead to different legislative outcomes compared to
the ``open rule'' where the set of alternatives is unrestricted.
Austen-Smith (1990) examines the effect of debate, modeled as cheap-talk, on
legislative outcomes when there are three legislators. His model is
substantially different from ours in that the ``expert'' legislators vote on
what legislation is to be passed. Thus, the separation between the experts
and the decision maker is absent in his model.

The problem of multiple experts has also been considered by Ottaviani and
Sorensen (1997). Both their model and concerns, however, are different from
ours. In their model the experts are not directly affected by the decisions
but care only about making recommendations that are validated \emph{ex post}%
. Thus experts care only about their reputation for ``being on the mark.''
Ottaviani and Sorensen (1997) then show that a kind of ``herd'' effect
results when experts are consulted sequentially: experts may well neglect
their own information in order to appear correct. Also related are papers by
Banerjee and Somanathan (1997) and Friedman (1998), which examine
information transmission in a setting in which there is a continuum of
potential experts with differing prior beliefs, at most one of whom receives
an informative signal about the state. Finally, the effects of combining
information provided by experts with opposing incentives have also been
examined by Shin (1994) in the context of persuasion games (see Milgrom and
Roberts (1986)), and by Dewatripont and Tirole (1998) in the context of a
moral hazard model.

Our work is also somewhat related to the problem of information transmission
between a decision maker and a single expert when the decision maker is
uncertain about the bias of the expert. Sobel (1985) and Morris (1997) focus
on reputational considerations in the expert's advice in analyzing this
problem. Morgan and Stocken (1998) consider this problem in a static CS-like
setting and focus on information transmission by equity analysts.\bigskip

The remainder of the paper proceeds as follows. Section 2 outlines the basic
model. In Section 3, we establish the impossibility of complete information
transmission as well as a structural property of monotonic equilibria of
the two expert game. Section 4 examines the like bias case and shows that
the addition of a less loyal expert conveys no additional information
relative to simply consulting the more loyal expert alone. In Section 5, we
examine the opposing biases case and show that combining information can be
beneficial even when neither expert will reveal any information when
consulted alone. There are, however, limits to information gains from
combining: when both of the experts are extremists, or when an extremist
expert is consulted last, combining experts' advice is not beneficial.
Section 6 examines possible extensions of the simple model. Finally, Section
7 concludes. All proofs are collected in Appendix A. A second appendix,
Appendix B, takes up issues related to the existence of non-monotonic
equilibria.

All of our results are illustrated by means of a series of examples. The
examples offer the essential intuition for the general results without some
of the technical details of the formal proofs. Indeed, all of the main
propositions of the paper can be understood by means of the examples.

\section{Preliminaries}

In this section we sketch a simple model of decision making when there are
multiple experts. We do not model any of the examples mentioned in the
introduction explicitly. Rather our model is a stylized representation of
the interaction among decision makers and experts across a broad range of
institutional settings. The overall structure extends the model of CS to a
setting with multiple experts.

Consider a \emph{decision maker} who takes an action $y\in \mathbf{R},$ the
payoff from which depends on some underlying state of nature $\theta \in %
\left[ 0,1\right] .$ The state of nature $\theta $ is distributed according
to the density function $f\left( \cdot \right) .$ The decision maker has no
information about $\theta ,$ but there are two \emph{experts} each of whom
observes $\theta .$

The two experts then offer ``advice'' to the decision maker by sending
messages $m_{1}\in \left[ 0,1\right] $ and $m_{2}\in \left[ 0,1\right] ,$
respectively. After observing the state, messages are sent \emph{sequentially} and \emph{publicly}. First, expert 1 offers his advice,
which is heard by both the decision maker and expert 2. Expert 2 then offers
his advice, and the decision maker takes an action. The decision maker is
not in any way bound by the advice of the experts. Instead, she is free to
interpret the messages however she likes as well as to choose any action.%
\footnote{%
In the political science literature, this is referred to as the ``open
rule'' (Gilligan and Krehbiel (1989)).}

The payoff functions of all three agents are of the form $U\left( y,\theta
,b_{i}\right) $ where $b_{i}$ is a parameter which differs across agents.
For the decision maker, agent $0,$ $b_{0}$ is normalized to be $0.$ For the
experts, agents $1$ and $2$, $b_{i}\neq 0$ and $b_{1}\neq b_{2}.$ We write $%
U\left( y,\theta \right) \equiv U\left( y,\theta ,0\right) $ as the decision
maker's payoff function. We suppose that $U$ is twice continuously
differentiable and satisfies $U_{11}<0,$ $U_{12}>0,$ $U_{13}>0.$ Since
$U_{13}>0,$ the parameter $b_{i}$ measures how closely expert $i$'s
interests are aligned with those of the decision maker, and it is useful to
think of $b_{i}$ as a measure of how \emph{biased} expert $i$ is, relative
to the decision maker. We also assume that for each $i,$ $U\left( y,\theta
,b_{i}\right) $ attains a maximum at some $y.$ Since $U_{11}<0,$ the
maximizing action is unique. The biases of the two experts and the decision
maker are commonly known.

These assumptions are satisfied by ``quadratic loss functions.'' In this
case, the decision maker's payoff function is 
\begin{equation}
U\left( y,\theta \right) =-\left( y-\theta \right) ^{2}  \label{quad1}
\end{equation}
and expert $i$'s payoff function is 
\begin{equation}
U\left( y,\theta ,b_{i}\right) =-\left( y-\left( \theta +b_{i}\right)
\right) ^{2}  \label{quad2}
\end{equation}
where $b_{i}\neq 0.$ An important case, first introduced by CS, combines
quadratic loss functions with the assumption that the state $\theta $ is
uniformly distributed on $\left[ 0,1\right] .$ We will refer to this as the
``uniform-quadratic'' case.

In studying the multiple experts problem, we divide the analysis into two
cases. If both experts are biased in the same direction, that is, both $%
b_{1},b_{2}>0,$\ then the experts are said to have \emph{like biases}. If
the experts are biased in opposite directions, that is, $b_{i}>0>b_{j},$\
then the experts are said to have \emph{opposing biases}.\footnote{%
The case where both $b_{1},b_{2}<0$ is qualitatively no different from the
case where both $b_{1},b_{2}>0.$}

Define $y^{\ast }\left( \theta \right) =\arg \max_{y}U\left( y,\theta
\right) $ to be the \emph{ideal} action for the decision maker when the
state is $\theta .$ Similarly, define $y^{\ast }\left( \theta ,b_{i}\right)
=\arg \max_{y}U\left( y,\theta ,b_{i}\right) $ to be the ideal action for
expert $i.$ Since $U_{13}>0,$ $b_{i}>0$ implies that $y^{\ast }\left( \theta
,b_{i}\right) >y^{\ast }\left( \theta \right) $; and since such an expert
always prefers a higher action than is ideal for the decision maker, we will
refer to him as being \emph{right-biased}. Similarly, if $b_{i}<0$ then $%
y^{\ast }\left( \theta ,b_{i}\right) <y^{\ast }\left( \theta \right) $ and
we refer to such an expert as being \emph{left-biased}.

Notice that with quadratic loss functions, the ideal action for the decision
maker is to choose an action that matches the true state exactly: for all $%
\theta ,$ $y^{\ast }\left( \theta \right) =\theta .$ The ideal action for an
expert with bias $b_{i}$ is $y^{\ast }\left( \theta ,b_{i}\right) =\theta
+b_{i}.$
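For the quadratic loss specification, these ideal actions are easily checked numerically. The sketch below is a minimal illustration only: the grid search, its range and resolution, and the test values are our own choices, not part of the model. It maximizes the quadratic payoff directly and recovers $y^{\ast }\left( \theta \right) =\theta $ and $y^{\ast }\left( \theta ,b_{i}\right) =\theta +b_{i}$.

```python
def payoff(y, theta, b=0.0):
    # Quadratic loss payoff: U(y, theta, b) = -(y - (theta + b))**2.
    # b = 0 gives the decision maker's payoff U(y, theta).
    return -(y - (theta + b)) ** 2

def ideal_action(theta, b=0.0, steps=200000):
    # Grid-search maximization over candidate actions in [-0.5, 1.5];
    # the range and resolution are arbitrary numerical choices.
    grid = [-0.5 + 2.0 * k / steps for k in range(steps + 1)]
    return max(grid, key=lambda y: payoff(y, theta, b))

# The decision maker's ideal action matches the state exactly;
# a biased expert's ideal action is shifted by b.
assert abs(ideal_action(0.3) - 0.3) < 1e-4
assert abs(ideal_action(0.3, b=0.1) - 0.4) < 1e-4
```

Since $U_{11}<0,$ the maximizer is unique, so the grid search converges to the unique ideal action as the grid is refined.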

A word of caution is in order. Our results fall into two categories. Some
concern the structure of equilibria of the multiple experts game and are
derived under the assumptions given above. Others concern welfare
comparisons among equilibria and require an additional assumption. This is
no different from the single expert model considered by CS and like them we
need their Assumption M (p. 1444 of CS) in order to derive unambiguous
welfare results (specifically, Propositions \ref{one} and \ref{superior}).
This assumption, while not so transparent, is satisfied by the
uniform-quadratic case. Thus in the interests of exposition, we have chosen
to derive the welfare results only for the uniform-quadratic specification.
The reader should be aware that these results can be derived more generally
under Assumption M of CS.

\section{Equilibrium with Experts}

\subparagraph{Single Expert}

Before studying equilibria of the model with two experts it is instructive
to recall the structure of equilibria of the model with a single expert as
derived by Crawford and Sobel (1982).

In the single expert game a strategy $\mu $ for the expert specifies the
message $m=\mu \left( \theta \right) $ that he sends in any state $\theta .$
A strategy $y$ for the decision maker specifies the action $y\left( m\right) 
$ that she takes following any message $m$ by the expert. Let $P\left( \cdot
|m\right) $ denote the cumulative distribution function that specifies
posterior beliefs about the state held by the decision maker after the
message $m.$

In a \emph{perfect Bayesian equilibrium} (1) for all messages $m,$ the
decision maker's action $y\left( m\right) $ maximizes her expected payoff
given her posterior beliefs $P\left( \cdot |m\right) $; (2) the beliefs $%
P\left( \cdot |m\right) $ are formed using the expert's strategy $\mu $ by
applying Bayes' rule wherever possible; (3) given the decision maker's
strategy $y,$ for all states $\theta $, $\mu \left( \theta \right) $
maximizes the expert's payoff.

CS show that every equilibrium of the single expert game has the following
structure.\footnote{%
CS actually characterize the set of \emph{Bayesian} equilibrium outcomes. In
the single expert game this is the same as the set of \emph{perfect Bayesian}
equilibrium outcomes. As we show in Section 3 this equivalence does not hold
in the multiple expert game.} There are a finite number of equilibrium
actions $y_{1},y_{2},...,y_{N}.$ The equilibrium breaks the state space into 
$N$ disjoint intervals $[0,a_{1}),$ $[a_{1},a_{2}),\ldots ,[a_{n-1},a_{n}),\ldots ,[a_{N-1},1]$ with action $y_{n}$ resulting in
any state $\theta \in \lbrack a_{n-1},a_{n}).$ The equilibrium actions are
monotonically increasing in the state, that is, $y_{n-1}<y_{n}.$ Finally, at
every ``break point'' $a_{n}$ the following ``no arbitrage'' condition 
\begin{equation}
U\left( y_{n},a_{n},b\right) =U\left( y_{n+1},a_{n},b\right)  \label{noarb}
\end{equation}
is satisfied. In other words, in state $a_{n}$ the expert is indifferent
between the actions $y_{n}$ and $y_{n+1}.$ Since $U_{12}>0$, for all $\theta
<a_{n}$, the expert strictly prefers $y_{n}$ to $y_{n+1}$ and for all $%
\theta >a_{n}$, the reverse is true. Thus (\ref{noarb}) serves as an
incentive (or self-selection) constraint.
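In the uniform-quadratic case these conditions can be made fully explicit: the optimal action on each interval is its midpoint, and the indifference condition at each breakpoint yields the difference equation $a_{n+1}=2a_{n}-a_{n-1}+4b,$ whose solution with $a_{0}=0,$ $a_{N}=1$ is $a_{n}=n/N+2bn\left( n-N\right) .$ The sketch below uses this closed form (the particular bias and partition size are our own illustrative choices) and verifies the no-arbitrage condition (\ref{noarb}) numerically.

```python
def cs_breakpoints(b, N):
    # Closed-form breakpoints for the uniform-quadratic case:
    #   a_n = n/N + 2*b*n*(n - N),
    # which solves a_{n+1} = 2*a_n - a_{n-1} + 4*b with a_0 = 0, a_N = 1.
    # An N-step equilibrium requires the breakpoints to be increasing,
    # i.e. 2*N*(N-1)*|b| < 1.
    return [n / N + 2 * b * n * (n - N) for n in range(N + 1)]

b, N = 1 / 20, 2                 # illustrative values: 2*2*1*b = 0.2 < 1
a = cs_breakpoints(b, N)         # [0.0, 0.4, 1.0]
# Under a uniform prior, the optimal action on [a_{n-1}, a_n] is its midpoint.
y = [(lo + hi) / 2 for lo, hi in zip(a, a[1:])]

# No arbitrage: at each interior breakpoint the expert's ideal point a_n + b
# lies exactly halfway between the adjacent equilibrium actions, so he is
# indifferent between them under quadratic loss.
for n in range(1, N):
    assert abs((a[n] + b) - (y[n - 1] + y[n]) / 2) < 1e-12
```

Since $U_{12}>0,$ indifference at the breakpoint implies a strict preference for the lower action below it and the higher action above it, which is exactly the self-selection property the assertion checks at the knife-edge state.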

\subparagraph{Multiple Experts}

In the multiple experts game a pure strategy for expert $1$ is a function $%
\mu _{1}\left( \theta \right) $ mapping states into messages and a pure
strategy for expert $2$ is a function $\mu _{2}\left( \theta ,m_{1}\right) $
mapping states and messages $m_{1}$ from expert $1$ into messages. A (pure)
strategy for the decision maker is a function $y\left( m_{1},m_{2}\right) $
mapping messages into actions. Let $P\left( \cdot |m_{1},m_{2}\right) $
denote the cumulative distribution function that specifies posterior beliefs
about the state held by the decision maker after messages $m_{1}$ and $%
m_{2}. $

In the multiple expert game a (pure strategy) \emph{perfect Bayesian
equilibrium} (PBE) entails: (1) for all pairs of messages $m_{1}$ and $%
m_{2}, $ the decision maker's action $y\left( m_{1},m_{2}\right) $ maximizes
her expected payoff given her beliefs $P\left( \cdot |m_{1},m_{2}\right) $;
(2) the beliefs $P\left( \cdot |m_{1},m_{2}\right) $ are formed using the
experts' strategies $\mu _{1}$ and $\mu _{2}$ by applying Bayes' rule
wherever possible; (3) given the decision maker's strategy $y,$ for all
states $\theta $ and messages $m_{1}$ sent by expert $1$ $\mu _{2}\left(
\theta ,m_{1}\right) $ maximizes expert $2$'s payoff; and (4) given the
decision maker's strategy $y$ and expert $2$'s strategy $\mu _{2},$ for all
states $\theta $, $\mu _{1}\left( \theta \right) $ maximizes expert $1$'s
expected payoff.\footnote{%
The formal definition of a PBE requires only that the various optimality
conditions hold for \emph{almost every} state and pair of messages. This
would not affect any of our results.}

Given a PBE we will denote by $Y$ the \emph{outcome function }that
associates with every state the resulting equilibrium action. Formally, for
each $\theta $%
\[
Y\left( \theta \right) =y\left( \mu _{1}\left( \theta \right) ,\mu
_{2}\left( \theta ,\mu _{1}\left( \theta \right) \right) \right) . 
\]

Define $Y^{-1}\left( y\right) =\left\{ \theta :Y\left( \theta \right)
=y\right\} .$ Given an outcome function $Y$ we can determine the resulting 
\emph{equilibrium partition} 
\[
\mathcal{P}=\left\{ Y^{-1}\left( y\right) :y\text{ is an equilibrium action}%
\right\} 
\]
of the state space. The partition $\mathcal{P}$ is then a measure of the
informational content of the equilibrium. If the equilibrium partition $%
\mathcal{P}$ is finer than $\mathcal{P}^{\prime },$ then the informational
content of $\mathcal{P}$ is greater than that of $\mathcal{P}^{\prime },$
since the former allows the decision maker to discern among the states more
effectively.
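The construction of the partition $\mathcal{P}$ from the outcome function $Y$ can be sketched computationally, approximating each cell $Y^{-1}\left( y\right) $ on a finite grid of states. This is an illustration only; the two-action outcome function used below is hypothetical, not taken from the model.

```python
from itertools import groupby

def equilibrium_partition(Y, grid):
    # Evaluate the outcome function on a fine grid of states and group
    # consecutive states that map to the same equilibrium action.
    # Each group approximates one cell of the partition P; returns
    # (cell_start, cell_end, action) triples.
    cells = []
    for action, states in groupby(grid, key=Y):
        states = list(states)
        cells.append((states[0], states[-1], action))
    return cells

# Hypothetical step outcome function: two equilibrium actions, so the
# partition has two cells.
Y = lambda theta: 0.25 if theta < 0.5 else 0.75
grid = [k / 1000 for k in range(1001)]
cells = equilibrium_partition(Y, grid)
assert len(cells) == 2
assert cells[0][2] == 0.25 and cells[1][2] == 0.75
```

A finer partition corresponds to more cells recovered here, matching the informational-content comparison in the text.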

A PBE always exists. In particular, there are always equilibria in which all
messages from both of the experts are completely ignored by the decision
maker, or in other words, both experts ``babble.'' To see that this is a
PBE, notice that since the messages of the experts contain no information,
the decision maker correctly disregards them in making her decision.
Likewise, from the perspective of each of the experts, since messages will
always be ignored by the decision maker, there is no advice giving strategy
that improves payoffs relative to babbling. Obviously, information loss is
most severe in a babbling equilibrium. Typically, there are also other, more
informative, equilibria.

\subparagraph{Example 1}

Let the state $\theta $ be distributed uniformly on $\left[ 0,1\right] $,
and let the payoff functions be of the quadratic loss kind specified in (\ref
{quad1}) and (\ref{quad2}). This is the uniform-quadratic case introduced
earlier.

Suppose that $b_{1}=\frac{1}{40}$ and $b_{2}=\frac{1}{9}$ so that the
experts have \emph{like }biases and expert 1 is less biased than is expert
2. A PBE for this game is depicted in Figure 1, where the states $a_{1}=%
\frac{1}{180},$ $a_{2}=\frac{22}{180},$ $a_{3}=\frac{61}{180}\ $and the
actions $y_{1}=\frac{1}{360},$ $y_{2}=\frac{23}{360},$ $y_{3}=\frac{83}{360}%
, $ $y_{4}=\frac{241}{360}.$

\begin{figure}[t]
\unitlength 1mm
\linethickness{0.4pt}
\begin{picture}(120,105)
\put(20,20){\line(0,1){85}}
\put(20,20){\line(1,0){100}}
\linethickness{1.6pt}
\put(20,22){\line(1,0){2}}
\put(22,29){\line(1,0){12}}
\put(34,44){\line(1,0){21}}
\put(55,89){\line(1,0){65}}
\linethickness{0.4pt}
\put(34,36.5){\circle*{1}}
\put(34,36.5){\vector(0,1){6.5}}
\put(34,36.5){\vector(0,-1){6.5}}
\put(55,66.5){\circle*{1}}
\put(55,66.5){\vector(0,1){21.5}}
\put(55,66.5){\vector(0,-1){21.5}}
\multiput(20,10)(10,10){5}{\line(1,1){2}}
\multiput(20,22.5)(1,1){74}{\makebox(0,0){.}}
\put(80,99){\makebox(0,0)[cc]{$y^*(\cdot,b_2)$}}
\multiput(20,31.5)(1,1){65}{\makebox(0,0){.}}
\put(100,99){\makebox(0,0)[cc]{$y^*(\cdot,b_1)$}}
\put(22,20){\line(0,-1){2}}
\put(22,15){\makebox(0,0)[cc]{$a_1$}}
\put(34,20){\line(0,-1){2}}
\put(34,15){\makebox(0,0)[cc]{$a_2$}}
\put(55,20){\line(0,-1){2}}
\put(55,15){\makebox(0,0)[cc]{$a_3$}}
\put(70,15){\makebox(0,0)[cc]{$\theta$}}
\put(120,20){\line(0,-1){2}}
\put(120,15){\makebox(0,0)[cc]{1}}
\put(15,22){\makebox(0,0)[cc]{$y_1$}}
\put(20,22){\line(-1,0){2}}
\put(15,29){\makebox(0,0)[cc]{$y_2$}}
\put(20,29){\line(-1,0){2}}
\put(15,44){\makebox(0,0)[cc]{$y_3$}}
\put(20,44){\line(-1,0){2}}
\put(15,89){\makebox(0,0)[cc]{$y_4$}}
\put(20,89){\line(-1,0){2}}
\end{picture}
\caption{A PBE with Like Biases}
\end{figure}

In the figure, the outcome function $Y$ is the step function depicted by the
dark lines. The lower dotted line depicts expert $1$'s ideal actions $%
y^{\ast }\left( \theta ,b_{1}\right) =\theta +b_{1}$ and similarly, the
upper dotted line depicts expert $2$'s ideal actions $y^{\ast }\left( \theta
,b_{2}\right) =\theta +b_{2}$. In equilibrium, the information available to
the decision maker is that the state lies in one of four intervals $%
[0,a_{1}),$ $[a_{1},a_{2}),$ $[a_{2},a_{3})$ or $[a_{3},1].$ The action $%
y_{1}$ is then optimal for the decision maker given that she knows that $%
\theta \in \lbrack 0,a_{1}),$ $y_{2}$ is optimal given $\theta \in \lbrack
a_{1},a_{2}),$ etc.

To see that this is an equilibrium configuration, notice that in state $%
a_{2} $ expert $1$ is exactly indifferent between actions $y_{2}$ and $y_{3}$
since $\left( a_{2}+b_{1}\right) -y_{2}=y_{3}-\left( a_{2}+b_{1}\right) .$
(In the figure this indifference is indicated by the vertical double pointed
arrow centered on $a_{2}+b_{1}.$) Expert $1$ strictly prefers $y_{2}$ to $%
y_{3}$ in all states $\theta <a_{2}$ and prefers $y_{3}$ to $y_{2}$ in all
states $\theta >a_{2}.$ Thus given the decision maker's strategy he is
willing to distinguish between states $\theta <a_{2}$ and states $\theta
>a_{2}.$

Similarly, in state $a_{3}$ expert $2$ is indifferent between $y_{3}$ and $%
y_{4}$ and is willing to distinguish between states $\theta <a_{3}$ and
states $\theta >a_{3}.$

Thus in states $a_{2}$ and $a_{3}$ the ``no arbitrage'' condition (\ref
{noarb}) from CS holds for either expert $1$ or expert $2.$

In state $a_{1},$ however, neither expert is indifferent between $y_{1}$ and 
$y_{2}.$ Indeed, expert $1$ strictly prefers $y_{1}$ to $y_{2}$ in state $%
a_{1}.$ (Notice that expert 1's ideal action $y^{\ast }\left(
a_{1},b_{1}\right) ,$ is closer to $y_{1}$ than $y_{2}$.) Expert $2,$ on the
other hand, strictly prefers $y_{2}$ to $y_{1}.$ The equilibrium calls for
expert $1$ to ``suggest'' action $y_{1}$ by sending a message $m_{1}=y_{1}$
and for expert $2$ to ``agree'' and also send the message $m_{2}=y_{1}.$
Expert 2 has the option of ``disagreeing'' with expert $1$ and inducing
action $y_{3}$. The equilibrium is constructed so that expert $2$ is
indifferent between $y_{1}$ and $y_{3}$ in state $a_{1}$ and so strictly
prefers $y_{3}$ to $y_{1}$ if $\theta >a_{1}.$ Thus, even though in states
just above $a_{1},$ expert $1$ would strictly prefer to switch from the
equilibrium action $y_{2}$ to $y_{1},$ were he to actually suggest action $%
y_{1},$ expert $2$ would disagree, resulting in action $y_{3}.$ Since $y_{2}$
is preferred to $y_{3}$ by expert $1$ in these states, expert $1$ will
choose not to deviate. Here we see how the strategic interaction of the
two experts creates the possibility of ``disciplining'' the experts in a
manner not possible in the single expert case.\footnote{%
A detailed specification of the equilibrium strategies and beliefs for this
and all other examples in the paper may be obtained from the authors.}
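These indifference conditions can be checked exactly; the following sketch (ours, assuming the uniform-quadratic specification of Example 1, so that the decision maker's optimal action on an interval is its midpoint) verifies them with exact rational arithmetic:

```python
from fractions import Fraction as F

# Checking the indifference conditions of Example 1 exactly
# (uniform-quadratic, b1 = 1/40, b2 = 1/9; the decision maker's
# optimal action on an interval is its midpoint).
b1, b2 = F(1, 40), F(1, 9)
a1, a2, a3 = F(1, 180), F(22, 180), F(61, 180)
edges = [F(0), a1, a2, a3, F(1)]
y = [(lo + hi) / 2 for lo, hi in zip(edges, edges[1:])]  # y1, ..., y4

# Expert 1 is exactly indifferent between y2 and y3 at a2.
assert (a2 + b1) - y[1] == y[2] - (a2 + b1)
# Expert 2 is exactly indifferent between y3 and y4 at a3.
assert (a3 + b2) - y[2] == y[3] - (a3 + b2)
# At a1, expert 2 is indifferent between y1 and the disagreement action y3.
assert (a1 + b2) - y[0] == y[2] - (a1 + b2)
```

All three equalities hold exactly, confirming the configuration described above.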

\paragraph{A Mechanism Design Interpretation}

Since our focus is on finding the most informative equilibrium in the
multiple experts game, the following ``mechanism design'' interpretation of
the decision maker's problem will sometimes prove helpful. Viewed in this
light, finding the most informative equilibrium may be viewed as a type of
implementation problem where the ``planner is a player'' (see Baliga,
Corchon and Sj\"{o}str\"{o}m, (1997)) but where the set of feasible game
forms that the planner can propose is substantially constrained.

Specifically, suppose that the decision maker were free to assign a
``meaning'' to each of the messages that the experts might issue provided
that the assigned meaning was consistent in equilibrium. In effect, the
decision maker is choosing a \emph{language}. This is equivalent to the
decision maker announcing her beliefs $P\left( \cdot |m_{1},m_{2}\right) $
for all message pairs $\left( m_{1},m_{2}\right) $. Such an announcement
immediately implies an action $y\left( m_{1},m_{2}\right) $ associated with
each message pair. Given an announcement of beliefs $P\left( \cdot
|m_{1},m_{2}\right) $ and a message $m_{1},$ expert 2 chooses a strategy $%
\mu _{2}\left( \theta ,m_{1}\right) $ to maximize his payoff. Similarly for
expert 1. Finally, consistency requires that the announced beliefs
correspond to the posterior beliefs obtained by applying Bayes' rule to
the message pairs $\left( \mu _{1}\left( \theta \right) ,\mu _{2}\left(
\theta ,\mu _{1}\left( \theta \right) \right) \right) .$ Thus the decision
maker's problem is to choose a language that is incentive compatible.

The problem of choosing the most informative equilibrium is formally
equivalent to choosing beliefs $P\left( \cdot |m_{1},m_{2}\right) $ to
maximize her \emph{ex ante} expected payoff subject to the constraint that
$\left( \mu _{1},\mu _{2},y\right) $ form a PBE.

With this reformulation in mind, we turn to the question of whether there
exists an announced set of beliefs satisfying the above constraints such
that the state is always completely revealed. We shall refer to this as full
revelation. Obviously, such a fully revealing equilibrium would maximize the
decision maker's expected payoff.

\subsection{Full Revelation}

Full revelation means that in every state, the equilibrium action is the
same as the decision maker's ideal action, that is, for all $\theta ,$ $%
Y\left( \theta \right) =y^{\ast }\left( \theta \right) .$ (Equivalently, the
associated equilibrium partition $\mathcal{P}$ consists of singleton sets.)
With only a single expert, CS show that full revelation cannot occur in a 
\emph{Bayesian equilibrium} (BE).

With multiple experts, however, full revelation can occur in a BE. We
demonstrate this by looking at the case where the experts are biased in the
same direction, that is, $b_{1}>0,$ $b_{2}>0.$ Both experts then prefer a
higher action than is optimal for the decision maker: $y^{\ast }\left(
\theta ,b_{i}\right) >y^{\ast }\left( \theta \right) .$ Suppose that the
decision maker announces the beliefs $P\left( \theta =\min \left\{
m_{1},m_{2}\right\} |m_{1},m_{2}\right) =1.$ The associated strategy of the
decision maker is then $y\left( m_{1},m_{2}\right) =y^{\ast }\left( \min
\left\{ m_{1},m_{2}\right\} \right) .$ Let expert $1$ follow the strategy $%
\mu _{1}\left( \theta \right) =\theta $ of revealing the state and expert $2$
follow the strategy of also revealing the state regardless of what expert $1$%
's message is: $\mu _{2}\left( \theta ,m_{1}\right) =\theta .$ In state
$\theta $, both experts send messages $m_{1}=m_{2}=\theta ,$ and the
action taken is $y^{\ast }\left( \theta \right) $ which, from the
perspective of either expert, is better than any action $y<y^{\ast }\left(
\theta \right) $. Reporting an $m_{i}<\theta $ will only decrease $i$'s
payoff whereas reporting an $m_{i}>\theta $ will have no effect given that
the other expert follows $\mu _{j}.$

Thus with perfectly informed experts, there exists a BE\ in which the
decision maker can extract all the information and achieve a first-best
outcome.
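The logic of this construction can be illustrated numerically; a minimal sketch (ours), specializing to the uniform-quadratic case so that $y^{\ast }\left( m\right) =m,$ with illustrative values for the state and the deviation size:

```python
from fractions import Fraction as F

# The min-rule construction in the uniform-quadratic case, where
# y*(m) = m. The state theta and the deviation size 1/10 are illustrative.
b1, b2 = F(1, 40), F(1, 9)
theta = F(1, 2)

def action(m1, m2):          # decision maker acts on the smaller report
    return min(m1, m2)

def payoff(y, b):            # quadratic loss around the ideal point theta + b
    return -(y - (theta + b)) ** 2

# On path both experts report the state, yielding the first-best action.
assert action(theta, theta) == theta
# Under-reporting strictly lowers the deviator's own payoff...
assert payoff(action(theta - F(1, 10), theta), b1) < payoff(theta, b1)
# ...while over-reporting is simply ignored, given the other's report.
assert action(theta + F(1, 10), theta) == theta
```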

The equilibrium constructed above, however, involves non-optimizing behavior
on the part of expert $2$ off the equilibrium path. Specifically, in state $%
\theta \in \lbrack 0,1)$ if expert $1$ were to choose a message $%
m_{1}=\theta +\varepsilon ,$ for $\varepsilon >0$ small enough, it is no
longer optimal for expert $2$ to play $\mu _{2}\left( \theta ,m_{1}\right)
=\theta .$ Indeed, he is better off also deviating to $m_{2}=m_{1}.$ Thus
the full revelation BE constructed above is not a PBE.

The reason the Bayesian equilibrium constructed above does not survive the
stronger PBE\ notion should be familiar. Expert $2$ cannot credibly commit
to reveal the state regardless of expert $1$'s message. Expert $1$ can
exploit this by exaggerating the true state slightly, confident in the
knowledge that expert $2$ will follow his lead.
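The profitable joint deviation can be verified with illustrative numbers (a sketch under the uniform-quadratic specification; the particular state and $\varepsilon $ are ours):

```python
from fractions import Fraction as F

# Why the min-rule BE fails perfection: after an upward lie m1 = theta + eps,
# expert 2 does better matching it than telling the truth, since
# theta + eps is closer to his ideal point theta + b2 (for eps < 2*b2).
# The numbers are illustrative.
theta, b2, eps = F(1, 2), F(1, 9), F(1, 100)
m1 = theta + eps

def payoff(y, b):
    return -(y - (theta + b)) ** 2

truthful = payoff(min(m1, theta), b2)  # expert 2 sticks to m2 = theta
follow = payoff(min(m1, m1), b2)       # expert 2 matches m2 = m1
assert follow > truthful
```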

More generally:

\begin{proposition}
\label{fullrev}There does not exist a fully revealing PBE.
\end{proposition}

\subsection{Monotonic Equilibria}

The outcome functions associated with full revelation, with a babbling
equilibrium, and with all equilibria in which only a single expert is
consulted share a common property: the equilibrium action induced in a
higher state is at least as large as that induced in a lower state.
Formally, we will say that a PBE $\left( \mu _{1},\mu _{2},y\right) $ is 
\emph{monotonic} if the corresponding outcome function $Y\left( \cdot
\right) $ is non-decreasing. Notice that the PBE constructed in Example 1
also shares this property.

For the remainder of the paper, we restrict attention to monotonic
equilibria. Recall that in the case of a single expert we know from CS that
all equilibria are monotonic. This is not true with multiple experts, as an
example in Appendix B shows. For the case of like biases, we also provide
sufficient conditions ensuring that all equilibria are monotonic
(Proposition 5 in Appendix B). In the case of opposing biases, our
conclusions remain unaltered if we also admit the possibility of
non-monotonic equilibria.

The following result identifies some simple necessary conditions satisfied
by monotonic equilibria.

\begin{lemma}
\label{nonindiff}Suppose $Y$ is monotonic. If $Y$ has a discontinuity at $%
\theta $ and 
\[
\lim_{\varepsilon \downarrow 0}Y\left( \theta -\varepsilon \right)
=y^{-}<y^{+}=\lim_{\varepsilon \downarrow 0}Y\left( \theta +\varepsilon
\right) 
\]
then 
\begin{eqnarray}
U\left( y^{-},\theta ,\min \left\{ b_{1},b_{2}\right\} \right) \geq U\left(
y^{+},\theta ,\min \left\{ b_{1},b_{2}\right\} \right) ,\text{ and }
\label{min} \\
U\left( y^{-},\theta ,\max \left\{ b_{1},b_{2}\right\} \right) \leq U\left(
y^{+},\theta ,\max \left\{ b_{1},b_{2}\right\} \right) .  \label{max}
\end{eqnarray}
\end{lemma}

Viewed from a mechanism design perspective, the inequalities (\ref{min}) and
(\ref{max}) are in the nature of incentive constraints: at any
discontinuity, the expert with bias $\min \left\{ b_{1},b_{2}\right\} $
weakly prefers the lower action $y^{-}$ whereas the expert with bias $\max
\left\{ b_{1},b_{2}\right\} $ weakly prefers the higher action $y^{+}.$ As
we pointed out earlier, Example 1 shows that these inequalities may be
strict for both players (for instance, at the first point of discontinuity, $%
a_{1}$) and thus the incentive compatibility constraint of each of the
experts holds with slack. This is to be contrasted with the single expert
case where the incentive constraints (\ref{noarb}) must hold with equality.
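The strictness of both inequalities at $a_{1}$ can be confirmed directly; a quick check (ours, assuming the uniform-quadratic specification of Example 1, with $y^{-}=y_{1}$ and $y^{+}=y_{2}$ the adjacent midpoint actions):

```python
from fractions import Fraction as F

# At the first discontinuity a1 = 1/180 of Example 1 both inequalities are
# strict (uniform-quadratic; y_lo = y1 and y_hi = y2 are the adjacent
# midpoint actions, with min{b1,b2} = b1 and max{b1,b2} = b2).
b1, b2, a1 = F(1, 40), F(1, 9), F(1, 180)
y_lo, y_hi = F(1, 360), F(23, 360)

def U(y, theta, b):
    return -(y - (theta + b)) ** 2

# The less biased expert strictly prefers the lower action...
assert U(y_lo, a1, b1) > U(y_hi, a1, b1)
# ...while the more biased expert strictly prefers the higher action.
assert U(y_hi, a1, b2) > U(y_lo, a1, b2)
```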

\section{Experts with Like Biases}

In this section, we examine a situation in which a decision maker can
consult with a group of like biased experts. We focus on two questions:
first, what is the information content of advice offered by a given panel of
such experts; and second, how should a decision maker determine the
composition of such a panel. We begin by establishing limits to information
transmission by a group of like biased experts.

Our first result shows that when the experts have like biases there can be
at most a finite number of equilibrium actions played in any monotonic PBE.%
\footnote{%
In Appendix B, we show (Lemma \ref{uqfinite}) that \emph{all} PBE involve
only finitely many equilibrium actions in the uniform-quadratic case.} In
particular, it rules out the possibility of full revelation in the case of
like biases (Case 2 in the proof of Proposition \ref{fullrev}) since a fully
revealing equilibrium must be monotonic and involves an infinite number of
equilibrium actions.

\begin{lemma}
\label{finite}Suppose experts have like biases and $Y$ is monotonic. Then
there are only finitely many equilibrium actions.
\end{lemma}

The intuition for Lemma \ref{finite} is that if two equilibrium actions are
sufficiently close to one another, then there will be some state where the
lower action is called for, but both experts prefer the higher action. As a
consequence, the first expert can deviate and send a message inducing the
higher action confident that expert 2 will follow his lead. Put differently,
it is impossible to satisfy the incentive constraints of Lemma \ref
{nonindiff} if equilibrium actions are too close together.
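In the uniform-quadratic case this intuition can be made precise by a short calculation (a sketch of ours, writing $b=\min \left\{ b_{1},b_{2}\right\} >0$ for the smaller bias). With quadratic loss, condition (\ref{min}) becomes
\[
-\left( y^{-}-\theta -b\right) ^{2}\geq -\left( y^{+}-\theta -b\right)
^{2}\iff \theta +b\leq \frac{y^{-}+y^{+}}{2}.
\]
Since $y^{-}$ is optimal for an interval of states ending at $\theta ,$ we have $y^{-}\leq \theta $ and hence
\[
y^{+}-y^{-}\geq 2\left( \theta -y^{-}\right) +2b\geq 2b,
\]
so adjacent equilibrium actions are at least $2\min \left\{ b_{1},b_{2}\right\} $ apart and only finitely many can fit in $\left[ 0,1\right] .$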

A consequence of Lemma \ref{finite} is that the information transmitted with
like biased experts is severely limited: only a finite number of actions
occur in equilibrium. The multiple expert setting shares this qualitative
feature with the single expert model of CS despite the fact that the
information possessed by the two experts is perfectly correlated.

We next turn to a more precise examination of the potential informational
benefits of adding an expert.

\subsection{Choosing a Cabinet}

Suppose that there is a decision maker who must choose a panel of like
biased experts, a cabinet, to advise her. She is aware of the biases of each
of the experts that she might select. What is the optimal composition of the
panel?

Proposition \ref{one} shows that the solution to the problem of determining
the optimal panel is strikingly simple: it is optimal to have a one-member
panel consisting of the least biased expert. We show that in the most
informative partition equilibrium the more biased expert's advice never has
any value. In other words, the more biased expert is \emph{redundant}.

Answering this question requires making welfare comparisons among the set
of monotonic PBE in a multiple experts setting. However, we remind the
reader that to obtain unambiguous welfare comparisons even among single
expert equilibria as in CS, we require an additional assumption (their
Assumption M, p. 1444 of CS) to guarantee that a rightward shift in one
break point leads to a rightward shift in all break points. The most
frequently employed specification where this assumption is satisfied is the
uniform-quadratic case given in Example 1. The transparency of the argument
is also much improved by considering this case; thus, for the remainder of
the section, our arguments will reflect the uniform-quadratic specification.
However, one can show that our main result for this section (Proposition \ref
{one}) holds generally when Assumption M is satisfied.

\subparagraph{\textbf{Example 2}}

It is useful to illustrate the information transmission properties of the
multiple expert game by continuing to study the uniform-quadratic case from
Example 1 where $b_{1}=\frac{1}{40}$ and $b_{2}=\frac{1}{9}$.

If the decision maker solicited only expert 1 for advice, the Pareto
superior (and also most informative) equilibrium results in the partition $%
\mathcal{P}_{1}$ of $\left[ 0,1\right] $ being communicated to the decision
maker. 
\[
\begin{array}{cccccccccccccccc}
\vdash & - & \stackrel{1}{+} & - & - & \stackrel{1}{+} & - & - & - & 
\stackrel{1}{+} & - & - & - & - & - & \dashv \\ 
0 &  & \frac{1}{10} &  &  & \frac{3}{10} &  &  &  & \frac{6}{10} &  &  &  & 
&  & 1
\end{array}
\]
This means that if the true state $\theta $ lies in the interval $\left[ 0,%
\frac{1}{10}\right] ,$ the expert sends a message $m_{1}=\frac{1}{20}$
advising the decision maker to choose action $y_{1}=\frac{1}{20}.$
Similarly, if $\theta \in \left[ \frac{1}{10},\frac{3}{10}\right] $ he
suggests $y_{2}=\frac{4}{20}$, if $\theta \in \left[ \frac{3}{10},\frac{6}{10%
}\right] ,$ he suggests $y_{3}=\frac{9}{20}$ and if $\theta \in \left[ \frac{%
6}{10},1\right] $ he suggests $y_{4}=\frac{16}{20}.$

We will refer to points such as $\frac{1}{10},$ $\frac{3}{10}$ and $\frac{6}{%
10}$ as ``break points'' in that they determine how the set of states is
broken up in such an equilibrium. In the figure given above the superscript
above each break point labels the expert whose message distinguishes states
below the break point from states above the break point. Thus, the decision
maker only knows which of these four intervals contains the true state. As a
result, her expected utility is $-0.0083.$ Notice that in this case, there
is no slack in the incentive constraints of expert 1 at any point of
discontinuity.

Similarly, if the decision maker solicited only expert 2 for advice, the
most informative equilibrium partition is $\mathcal{P}_{2}$: 
\[
\begin{array}{cccccccccccccccc}
\vdash & - & - & - & - & \stackrel{2}{+} & - & - & - & - & - & - & - & - & -
& \dashv \\ 
0 &  &  &  &  & \frac{5}{18} &  &  &  &  &  &  &  &  &  & 1
\end{array}
\]
resulting in a payoff of $-0.0332$ to the decision maker. Notice that expert
2 withholds more information than does expert 1, in the sense that the
variance of the true state of the world, given the equilibrium partition, is
higher with expert 2 than expert 1. Intuitively, since expert 2 wishes the
decision maker to choose a larger value of $y$ than does expert 1, he
withholds more information than does expert 1. Put differently, in
announcing beliefs, the decision maker is faced with a much more daunting
problem in satisfying incentive compatibility for the more biased expert 2.
As a result, the most informative equilibrium consulting only expert 2 leads
to considerably more information withholding.

If there were no further strategic considerations, that is neither expert
knew of the other's existence, the decision maker could \emph{combine} the
reports of the two experts to obtain the partition $\mathcal{P}_{1}\wedge 
\mathcal{P}_{2}$: 
\[
\begin{array}{cccccccccccccccc}
\vdash & - & \stackrel{1}{+} & - & - & \stackrel{2}{+} & \stackrel{1}{+} & -
& - & - & \stackrel{1}{+} & - & - & - & - & \dashv \\ 
0 &  & \frac{1}{10} &  &  & \frac{5}{18} & \frac{3}{10} &  &  &  & \frac{6}{%
10} &  &  &  &  & 1
\end{array}
\]
which is the coarsest common refinement (\emph{join}) of $\mathcal{P}_{1}$
and $\mathcal{P}_{2}.$ The decision maker's expected utility is now
$-0.0081.$ Thus, it seems plausible that the addition of another expert, even
an expert more biased than expert 1, might be helpful in overcoming the
problem of strategic information withholding.
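These numbers can be verified directly; a minimal sketch (the helper names are ours), using the CS break-point formula $a_{n}=\frac{n}{N}-2n\left( N-n\right) b$ and the fact that, under a uniform prior with quadratic loss, a partition with cell widths $w$ yields expected utility $-\sum w^{3}/12$:

```python
from fractions import Fraction as F

# Verifying the partitions and payoffs in Example 2 (uniform-quadratic,
# b1 = 1/40, b2 = 1/9). Break points follow the CS formula
# a_n = n/N - 2*n*(N-n)*b; the decision maker's expected utility from a
# partition with cell widths w is -sum(w**3)/12 under a uniform prior.
def cs_breaks(b, N):
    return [F(n, N) - 2 * n * (N - n) * b for n in range(1, N)]

def eu(breaks):
    edges = [F(0)] + list(breaks) + [F(1)]
    return -sum((hi - lo) ** 3 for lo, hi in zip(edges, edges[1:])) / 12

P1 = cs_breaks(F(1, 40), 4)      # expert 1 alone
P2 = cs_breaks(F(1, 9), 2)       # expert 2 alone
assert P1 == [F(1, 10), F(3, 10), F(6, 10)]
assert P2 == [F(5, 18)]

join = sorted(P1 + P2)           # non-strategic combination of reports
assert round(float(eu(P1)), 4) == -0.0083
assert round(float(eu(P2)), 4) == -0.0332
assert round(float(eu(join)), 4) == -0.0081
```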

Of course, this ignores strategic interaction among the experts. That is,
each expert acts as though he or she were the only source of information
available to the decision maker. Indeed, the specification above is not a
PBE in the multiple experts game. One profitable deviation is for expert 1
to induce a higher equilibrium action for states near $\frac{1}{10}.$

Now suppose that the decision maker solicits both experts for advice
regarding the true state, and the experts are aware of each other's
presence. What is the most informative monotonic PBE?

One such equilibrium was described in Example 1 with the following
information partition. 
\[
\begin{array}{cccccccccccccccc}
\vdash  & + & \stackrel{1}{+} & - & - & \stackrel{2}{+} & - & - & - & - & -
& - & - & - & - & \dashv  \\ 
0 & \frac{1}{180} & \frac{22}{180} &  &  & \frac{61}{180} &  &  &  &  &  & 
&  &  &  & 1
\end{array}
\]
Recall that at the point $\theta =\frac{1}{180}$ neither expert is
indifferent between $y_{1}$ and $y_{2}$ and so there is slack in both
incentive constraints. In particular, expert 1 strictly prefers the \emph{%
lower }equilibrium action for states near $\frac{1}{180}$ and for states
near $\frac{61}{180}.$ The decision maker's expected utility in this
equilibrium is $-0.0250.$

An analogous PBE when we eliminate the slack in expert 1's incentive
constraint at the first point of discontinuity results in the equilibrium
partition $\mathcal{Q}:$%
\[
\begin{array}{cccccccccccccccc}
\vdash & \stackrel{1}{+} & \stackrel{1}{+} & - & - & \stackrel{2}{+} & - & -
& - & - & - & - & - & - & - & \dashv \\ 
0 & \frac{1}{72} & \frac{23}{180} &  &  & \frac{41}{120} &  &  &  &  &  &  & 
&  &  & 1
\end{array}
\]
where expert 1 is exactly indifferent at the first two break points and
expert 2 is indifferent at the third break point. This results in expected
utility of $-0.0247$ for the decision maker, which is better than the
equilibrium of Example 1. The intuition for this result is that, by
shifting the first break point to the right, informativeness is improved
since all of the other break points shift to the right as well. Moreover, as
there is slack in expert 1's incentive constraint, such a rightward shift is
possible. Thus, one can show that $\mathcal{Q}$ is the most informative PBE
in which there are three break points and both experts' messages are
relevant. It can also be shown that it is not possible to create a fourth
(or more) interior break point.
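The welfare ranking of these equilibria can be checked with exact arithmetic; a sketch (ours, under the uniform-quadratic specification):

```python
from fractions import Fraction as F

# Comparing the decision maker's expected utility across the equilibria of
# Examples 1 and 2 (uniform prior, quadratic loss: EU is minus the sum of
# cell widths cubed, divided by 12).
def eu(breaks):
    edges = [F(0)] + list(breaks) + [F(1)]
    return -sum((hi - lo) ** 3 for lo, hi in zip(edges, edges[1:])) / 12

ex1 = [F(1, 180), F(22, 180), F(61, 180)]   # Example 1 partition
Q   = [F(1, 72), F(23, 180), F(41, 120)]    # partition Q
P1  = [F(1, 10), F(3, 10), F(6, 10)]        # expert 1 consulted alone

assert eu(Q) > eu(ex1)   # Q improves on the Example 1 equilibrium...
assert eu(P1) > eu(Q)    # ...but consulting expert 1 alone does better still
```

The last inequality anticipates the redundancy result below: both two-expert equilibria are dominated by consulting the more loyal expert alone.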

Even though the partition size is limited to that of the most loyal expert,
it might still be possible that combining messages from the experts
represents an informational improvement for the decision maker.

Comparing $\mathcal{Q}$\ to $\mathcal{P}_{1}$\ we see that the more biased
expert $2$ distorts the third break point to the left, from $\frac{6}{10}$\
to $\frac{41}{120}$. This reduces the information content in the right-most
interval by introducing slack into expert 1's incentive constraint.
Moreover, the leftward shift by expert 2 shifts all of the other break
points to the left; thus it also distorts expert 1's break points to the
left, from $\frac{3}{10}$\ down to $\frac{23}{180}$ and from $\frac{1}{10}$
to $\frac{1}{72}.$ The aggregate effect of these distortions is to reduce
the expected utility of the decision maker and that of both experts.

Thus, we observe that \emph{in the case of like biases the most informative
monotonic equilibrium is generated by consulting the most loyal expert alone}.

To see why this argument generalizes, notice that for any PBE, if there is a
break point where expert 1's incentive constraint holds with slack, it is
possible to shift this break point to the right while still preserving
incentive compatibility. By Assumption M, all of the other break points
shift to the right as well and rightward shifts improve informativeness.
Repeated application of this argument implies that the most informative
equilibrium is one where there is no slack in any of expert 1's incentive
constraints at points of discontinuity, but this is exactly the equilibrium
condition for the single expert case given in CS. Thus, the addition of one
(or more) less loyal experts in the case of like biases can never help
information transmission.

Formally,

\begin{proposition}
\label{one}Suppose that expert $i$ has bias $b_{i}>0$. Then the addition of
another expert with bias $b_{j}\geq b_{i}$ is \textbf{never} informationally
superior.
\end{proposition}

Despite the fact that the messages of one expert can be used to discipline
the incentives of the other expert to deviate, in the case of monotonic
equilibria with like biased experts, this disciplining only has the effect
of causing the incentive constraints to hold with slack. As we saw in the
example, slack in the incentive constraints effectively shifts all of the
break points to the left in any monotonic PBE and hence \emph{reduces }%
information transmission.

Notice that this redundancy effect holds regardless of whether the committee
is cohesive, in the sense that the biases of the two experts are close to
one another, or extremely diverse, in the sense that the less loyal expert
is much more biased than the more loyal expert.

\section{Experts with Opposing Biases}

Previously, we observed that a cabinet composed of two experts with like
biases is no more effective than simply consulting the more loyal expert
alone. In this section, we examine whether it is helpful for the decision
maker to choose a cabinet where the experts biases oppose one another.
Specifically, we study the case when the experts are biased in ``opposite
directions,'' that is, $b_{i}>0>b_{j}.$ Now, while expert $i$ still prefers
a higher action than is ideal for the decision maker, expert $j$ prefers a 
\emph{lower }action: for all $\theta ,$ $y^{\ast }\left( \theta
,b_{j}\right) <y^{\ast }\left( \theta \right) <y^{\ast }\left( \theta
,b_{i}\right) .$ In effect, the experts want to tug the decision maker in
opposite directions.

Recall from Proposition \ref{fullrev} that fully revealing PBEs do not
exist. We argue below, however, that when experts have opposing biases it is
possible to construct monotonic equilibria which are ``semi-revealing'' in
the sense that the decision maker gets to know the true state over a \emph{%
portion} of the state space. We then show that semi-revealing equilibria are
informationally superior to the most informative single expert equilibrium.
This construction requires, however, that the single expert not be an
``extremist.''

\subparagraph{Extremists}

We will say that a right biased expert ($b_{i}>0$) is an \emph{extremist} if
for all $\theta ,$ $U\left( y^{\ast }\left( \theta \right) ,\theta
,b_{i}\right) \leq U\left( y^{\ast }\left( 1\right) ,\theta ,b_{i}\right) .$
Similarly, a left biased expert ($b_{j}<0$) is an extremist if for all $%
\theta ,$ $U\left( y^{\ast }\left( 0\right) ,\theta ,b_{j}\right) \geq
U\left( y^{\ast }\left( \theta \right) ,\theta ,b_{j}\right) .$

A right biased extremist is an expert whose bias is so high that no matter
what the state he prefers the highest ideal action $y^{\ast }\left( 1\right) 
$ to the ideal action $y^{\ast }\left( \theta \right) .$ Similarly, a left
biased extremist prefers the lowest ideal action $y^{\ast }\left( 0\right) $
to $y^{\ast }\left( \theta \right) .$

In the uniform-quadratic case an expert is an extremist if $\left|
b_{i}\right| \geq \frac{1}{2}$. Notice that if an extremist were to be
consulted alone he would reveal no information whatsoever: in the single
expert game the unique equilibrium involves only babbling.
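The threshold follows directly from the definition; a sketch for a right biased expert in the uniform-quadratic case, where $y^{\ast }\left( \theta \right) =\theta $:
\[
U\left( y^{\ast }\left( \theta \right) ,\theta ,b_{i}\right) \leq U\left(
y^{\ast }\left( 1\right) ,\theta ,b_{i}\right) \iff b_{i}^{2}\geq \left(
1-\theta -b_{i}\right) ^{2}\iff b_{i}\geq \frac{1-\theta }{2},
\]
where the last equivalence uses $1-\theta -b_{i}\geq -b_{i}.$ Requiring this for every $\theta \in \left[ 0,1\right] $ is most demanding at $\theta =0,$ giving $b_{i}\geq \frac{1}{2};$ the left biased case is symmetric.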

\subsection{Semi-Revealing PBE}

With opposing biases there exist monotonic equilibria that are
semi-revealing: a \emph{continuum} of equilibrium actions are induced.
Consider the uniform-quadratic case and suppose $b_{1}<0<b_{2}<\frac{1}{2}.$

\begin{figure}[t]
\unitlength 1mm
\linethickness{0.4pt}
\begin{picture}(125,125)
\put(20,20){\line(0,1){105}}
\put(20,20){\line(1,0){100}}
\thicklines
\put(20,20.2){\line(1,1){75}}
\put(20.2,20){\line(1,1){75}}
\thinlines
\linethickness{1.6pt}
\put(95,107.5){\line(1,0){25}}
\multiput(20,32.5)(1,1){88}{\makebox(0,0){.}}
\put(107.5,123){\makebox(0,0)[cc]{$y^*(\cdot,b_2)$}}
\multiput(53.33,20)(1,1){67}{\makebox(0,0){.}}
\put(120,90){\makebox(0,0)[cc]{$y^*(\cdot,b_1)$}}
\linethickness{0.4pt}
\put(53.33,20){\line(0,-1){2}}
\put(53.33,15){\makebox(0,0)[cc]{$-b_1$}}
\put(70,15){\makebox(0,0)[cc]{$\theta$}}
\put(95,20){\line(0,-1){2}}
\put(95,15){\makebox(0,0)[cc]{$1-2b_2$}}
\put(120,20){\line(0,-1){2}}
\put(120,15){\makebox(0,0)[cc]{1}}
\put(15,32.5){\makebox(0,0)[cc]{$b_2$}}
\put(20,32.5){\line(-1,0){2}}
\put(17,107.5){\makebox(0,0)[cr]{$1-b_2$}}
\put(20,107.5){\line(-1,0){2}}
\put(20,120){\line(-1,0){2}}
\put(15,120){\makebox(0,0)[cc]{1}}
\end{picture}
\caption{A PBE with Opposing Biases}
\end{figure}%

Figure 2 depicts the outcome function $Y$ associated with a PBE. As
illustrated, in this equilibrium the state is completely revealed when it is
below $1-2b_{2}$ and completely concealed otherwise. Thus for all states $%
\theta <1-2b_{2},$ $Y\left( \theta \right) =\theta =y^{\ast }\left( \theta
\right) ,$ the ideal action for the decision maker.

For all states $\theta \leq 1-2b_{2},$ the equilibrium strategies call for
expert $1$ to send the ``true'' message $m_{1}=\theta $ and for expert $2$
to ``concur'' by sending message $m_{2}=m_{1}$. As long as expert $2$ sends
a message $m_{2}<m_{1}+2b_{2}$ the decision maker follows expert $1$'s
advice and chooses $y=m_{1}.$ If expert $2$ ``disagrees'' with expert $1$
and sends a message $m_{2}\geq m_{1}+2b_{2},$ the decision maker follows $2$%
's advice and chooses $y=m_{2}.$

Notice, however, that if expert $1$ were to ``suggest'' a lower action
$y<\theta $ in state $\theta \leq 1-2b_{2}$ by sending the message
$m_{1}=y,$ expert $2$ would disagree. If $m_{1}\leq \theta -b_{2},$ expert
$2$ can induce his ideal action $y^{\ast }\left( \theta ,b_{2}\right)
=\theta +b_{2}$ by sending the message $m_{2}=\theta +b_{2}.$ If
$m_{1}>\theta -b_{2},$ expert $2$ can induce $m_{1}+2b_{2}$
by disagreeing. This is indeed the best outcome $2$ can obtain by
disagreeing and is preferred to the action $m_{1}$ (since $%
m_{1}+2b_{2}-\left( \theta +b_{2}\right) <\left( \theta +b_{2}\right) -m_{1}$%
). Hence, in either case, any attempt by expert $1$ to deviate by suggesting
a lower action will fail since expert $2$ will disagree, thus saddling him
with an even \emph{higher} action than that called for in equilibrium. In
this manner, it is possible to play the experts off against one another to
obtain complete revelation in the interval $[0,1-2b_{2}).$

For states $\theta >1-2b_{2},$ however, this construction fails since now
there is no rationalizable action $z\leq 1$ such that expert $2$ is
indifferent between $y=\theta $ and $z.$ Thus in states $\theta >1-2b_{2}$
complete revelation is not possible. The equilibrium strategies call for
expert $1$ to suggest $m_{1}=1-b_{2}.$ The decision maker follows $1$'s
advice whenever $m_{1}>1-2b_{2}$ and expert $2$'s advice is irrelevant. If $%
m_{1}\leq 1-2b_{2}$ then $2$ disagrees and can induce the action $1.$
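The deviation logic behind this construction can be checked numerically; a sketch with illustrative parameter values (the particular $b_{1},$ $b_{2},$ $\theta $ and $m_{1}$ are ours):

```python
from fractions import Fraction as F

# Checking the deviation logic with illustrative parameters
# (b1 < 0 < b2 < 1/2; theta below 1 - 2*b2; m1 in (theta - b2, theta)).
b1, b2 = F(-1, 5), F(1, 9)
theta = F(1, 2)                        # 1 - 2*b2 = 7/9 here

def U(y, b):                           # quadratic loss around theta + b
    return -(y - (theta + b)) ** 2

eq_payoff = U(theta, b1)               # truthful play yields y = theta

m1 = theta - F(1, 20)                  # expert 1 understates the state
# Expert 2 disagrees and induces m1 + 2*b2, which he prefers to m1...
assert U(m1 + 2 * b2, b2) > U(m1, b2)
# ...and which leaves expert 1 strictly worse off than in equilibrium.
assert U(m1 + 2 * b2, b1) < eq_payoff
```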

It is useful to contrast the PBE constructed above with the equilibrium
construction of Gilligan and Krehbiel (1989) (hereafter, GK). GK consider
the uniform-quadratic case for experts with \emph{equal} but opposing
loyalties ($b_{1}=-b_{2}$). Their construction also differs from ours in
that they examine the benefits of combining reports in a model in which
experts' advice is given \emph{simultaneously}. For extremely biased experts, $b_{2}\in 
\left( \frac{1}{4},\frac{1}{2}\right) $, the GK equilibrium construction
yields a completely non-informative equilibrium: both experts babble. We
have demonstrated that, when experts speak sequentially, it is still
possible to obtain full revelation over the interval $\left[ 0,1-2b_{2}%
\right] .$

With this result in hand, we turn to the general question of when a cabinet
of advisors with opposing biases is helpful.

\subsection{Choosing a Cabinet}

We now show that the semi-revealing equilibrium constructed above is
informationally superior to the most informative equilibrium with a single
expert of bias $b_{2}$ as long as $0<b_{2}<\frac{1}{2}.$

Recall from CS (p. 1441) that any equilibrium partition $\mathcal{P}$
consists of the intervals $[a_{0},a_{1}),$ $[a_{1},a_{2}),$ $...,$ $%
[a_{n-1},a_{n}),$ $...,$ $[a_{N-1},a_{N}]$ where 
\[
a_{n}=\frac{n}{N}-2n\left( N-n\right) b_{2}. 
\]

For $N\geq 2,$ it is convenient to define 
\[
a_{N-1}\left( b_{2}\right) =\frac{N-1}{N}-2\left( N-1\right) b_{2} 
\]
as the last break point in the partition $\mathcal{P}$ when the expert has
bias $b_{2}.$

We will argue that for all $N,$ $1-2b_{2}>a_{N-1}.$ This is certainly
true for $N=1$ and for all $N\geq 2$ it is routine to verify that:\footnote{%
Since $\frac{N-1}{N}<1$ and $2\left( N-1\right) b_{2}\geq 2b_{2}.$} 
\begin{eqnarray*}
1-2b_{2} &>&\frac{N-1}{N}-2\left( N-1\right) b_{2} \\
&=&a_{N-1}\left( b_{2}\right)
\end{eqnarray*}

The information partition $\mathcal{Q}$ generated by the semi-revealing
equilibrium consists of singleton sets $\left\{ \theta \right\} $ for all $%
\theta \leq 1-2b_{2}$ together with the set $(1-2b_{2},1].$ Any equilibrium
information partition $\mathcal{P}$ generated by a single expert with bias $%
b_{2}$ consists of the intervals $[0,a_{1}),$ $[a_{1},a_{2}),$ $...,$ $%
[a_{n-1},a_{n}),$ $...,$ $[a_{N-1},1].$ Since $1-2b_{2}>a_{N-1}\left(
b_{2}\right) ,$ $\mathcal{Q}$ is clearly finer than $\mathcal{P}.$
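A quick numerical check of the inequality $1-2b_{2}>a_{N-1}\left( b_{2}\right) $ (an illustrative sketch of ours; the existence bound $2N\left( N-1\right) b_{2}<1$ is the CS condition $a_{1}>0,$ and the grid of $b_{2}$ values is arbitrary):

```python
from fractions import Fraction as F

# Check that 1 - 2*b2 exceeds the last single-expert break point
# a_{N-1}(b2) = (N-1)/N - 2*(N-1)*b2 for every partition size N at which a
# single-expert equilibrium exists (CS: an N-interval equilibrium requires
# 2*N*(N-1)*b2 < 1). The grid of b2 values is illustrative.
for b2 in [F(1, 9), F(1, 40), F(1, 100)]:
    N = 1
    while 2 * (N + 1) * N * b2 < 1:   # largest feasible partition size
        N += 1
    for n in range(1, N + 1):
        a_last = F(n - 1, n) - 2 * (n - 1) * b2
        assert 1 - 2 * b2 > a_last
```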

Finally, observe that the strategies of the semi-revealing equilibrium
depend only on $b_{2}$ and are valid for all $b_{1}<0$ as long as $0<b_{2}<%
\frac{1}{2}.$ The exact value of $b_{1}$ plays no role in the construction.

We have thus established:

\begin{proposition}
\label{superior}Suppose that expert $i$ has bias $b_{i}>0$ and is not an
extremist. Then the addition of another expert with $b_{j}<0$ is \textbf{%
always} informationally superior.
\end{proposition}

Proposition \ref{superior} shows that whenever the more loyal expert is
willing to reveal some information on his own, the addition of a second
expert with opposing bias, regardless of how extreme, creates the
possibility of an equilibrium that is strictly preferred by the decision
maker and both of the experts.

Are there any informational gains from having a cabinet of extremists? In
other words, is there any value to having an extreme diversity of opinion in
the cabinet?

\subsection{Extremists and the ``Crossfire Effect''}

Suppose, without loss of generality, that $b_{1}<0<b_{2}.$ Then:

\begin{proposition}[Crossfire Effect]
\label{crossfire}If both experts are extremists, no information is
transmitted in any monotonic PBE.
\end{proposition}

Notice that since extremists never reveal any information when consulted
alone, the Crossfire Effect also applies to the case of like biased experts.%
\footnote{%
The television talk show \emph{Crossfire} regularly pits an avowed right
wing extremist against an avowed left wing extremist. The debate is
singularly uninformative.} Proposition \ref{crossfire} highlights the limits
to the gains from multiple opposing experts illustrated in the example
above. Recall that in the example the presence of expert $j$ led expert $i$
to reveal more information than he was willing to reveal on his own and
vice-versa. Proposition \ref{crossfire} shows that this does not hold if
both experts are sufficiently extreme in their biases. The key intuition in
deriving this result is the importance of a ``disagreement'' action for
expert 2. When experts are extremists, an appropriately constructed
``disagreement'' action exceeds the highest rationalizable action (namely, $%
y^{\ast }\left( 1\right) $) on the part of the decision maker. Thus, there
is no set of beliefs that the decision maker could announce that would lead
expert 2 to anticipate such extreme actions being taken. As a consequence,
the perfection refinement, this time as applied to the decision maker,
constrains the informativeness of equilibrium.

Proposition \ref{superior} shows that balancing opposing non-extremists may
result in each conveying more information than each would singly. In
contrast, the message of Proposition \ref{crossfire} is that combining the
advice of the two experts is of no value when both experts are extremists.
Finally, we show that pairing an extremist with a non-extremist can lead to
an interval in which the state is completely revealed only if the
non-extremist is consulted second.

\subparagraph{\textbf{Example 3}}

Consider the uniform-quadratic case when $b_{1}=-\frac{1}{2},$ $b_{2}=\frac{1%
}{3}$. Then the semi-revealing PBE constructed earlier is preferred by all
to consulting either expert singly.
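
In fact, one can check directly (a sketch, using the CS formula for the number of partition elements in the uniform-quadratic case) that an expert with bias $\frac{1}{3}$ conveys no information on his own, whereas the semi-revealing PBE fully reveals every state below $1-2b_{2}=\frac{1}{3}$:

```python
import math

b2 = 1 / 3
# Size of the most informative single-expert CS partition:
# the largest N with 2*N*(N-1)*b2 < 1.
N = math.ceil(-0.5 + 0.5 * math.sqrt(1.0 + 2.0 / b2))
assert N == 1  # a single expert with bias 1/3 can only babble
# The semi-revealing PBE instead reveals every state up to 1 - 2*b2 = 1/3.
assert abs((1.0 - 2.0 * b2) - 1 / 3) < 1e-12
```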

Now suppose that the order of polling is reversed. This is equivalent to
setting $b_{1}=-\frac{1}{3}$ and $b_{2}=\frac{1}{2}.$ In this case, the
unique monotonic equilibrium is babbling. To see this, suppose temporarily
that every monotonic PBE consists of finitely many equilibrium actions; one
can then show that one of the experts must be indifferent at points of
discontinuity.
Then, since neither expert will reveal any information when polled alone, it
follows that there can be no points of discontinuity where one of the
experts is indifferent. We can use the first part of the proof of
Proposition \ref{crossfire} to show that a continuum (or countably infinite
number) of actions cannot occur. Finally, suppose that we are in the like
bias case, that is $b_{1}=\frac{1}{2},$ $b_{2}=\frac{1}{3}.$ From
Proposition \ref{one}, we know that babbling is the only monotonic
equilibrium with this cabinet composition.

Thus, we have shown that both the composition of the cabinet and the order
of polling can have a profound impact on the information content of the
most informative equilibrium.

\section{Extensions}

In this section, we indicate some possible extensions to our basic model.

Our assumption that both experts are perfectly informed about $\theta $ and
that biases are commonly known ensures that any improvement in information
from combining the advice of the experts arises solely from the strategic
interaction. In our model, the introduction of a second expert may be
useful, not because his information augments that held by the first expert,
but because of the strategic interaction between them. Indeed, were the
first expert simply to disclose his information honestly, introducing a
second expert would obviously have no value.

Of course, in practice the information of experts is neither perfect nor
identical. Hence, in addition to the strategic motives highlighted in this
paper, familiar information aggregation motives are also likely to influence
the optimal number and composition of experts. Thus, our model should be
thought of as only a partial description of the problem of choosing a
cabinet. Incorporating both motives would obviously enhance the realism of
the model but at an increase in complexity that takes it beyond the scope of
the present analysis. Such an extension would also obscure the circumstances
in which the pure strategic-interaction effect of the two experts is helpful
and those in which it is not.

We also assumed that the two experts offer advice \emph{sequentially }and
speak exactly once. Obviously this is a departure from the reality of
give-and-take discussions between decision makers and experts. One
possibility that captures this ``conversational'' flavor of consulting
experts is to model message sending in a manner analogous to continuous time
bargaining games where, at each instant, any one of the parties is free to
send a message. Obviously, this complicates equilibrium characterization
significantly and remains for future research.

Another alternative to our extensive form is to model the messages of the
experts as occurring simultaneously. It is straightforward to show that the
most informative equilibrium under such an extensive form is full
revelation. Thus, the introduction of a second expert has a dramatic effect
on information transmission. This equilibrium, however, is not robust to a
number of perturbations of the information structure and the extensive form
of the game. For instance, were we to assume that instead of observing $%
\theta $ perfectly, the experts instead received noisy signals of $\theta $,
then full revelation is no longer an equilibrium even as the noise term
becomes arbitrarily small. Sequential moves have the same effect but are
much more analytically tractable and preserve the original CS analysis as a
special case.

We have relied in a significant way on the specific extensive form, in
particular that the order in which the experts are polled is fixed. One
might reasonably argue that equilibria should be robust to reversing the
order of polling. Our conclusions about the like bias case are robust to
this perturbation. One can also show that the opposite bias results are
likewise robust. In particular, it is possible to construct equilibria in
which (a) the strategies of the experts are independent of the order of
polling; and (b) the information conveyed is superior to that obtained by
consulting either expert alone.\footnote{%
The equilibrium construction is available upon request from the authors.}

Finally, our analysis only concerns itself with cabinets consisting of two
experts. Obviously, the sequential framework we adopt is not particularly
conducive to exercises where more and more experts are added. Nonetheless,
we believe that the basic intuition that satisfying the incentive
constraints of the most loyal agent leads to the most informative
equilibrium in the like bias case will carry over into the $n$ agent case.
In the case of opposite bias, again it is the most loyal agent who
determines the length of the revealing interval in our construction of a
semi-revealing equilibrium. Thus, we expect that our construction would
continue to be an equilibrium provided that the most loyal expert does not
speak first. Whether this can be improved upon by combining the information
of more experts remains an open question.

\section{Conclusion}

Self-interested experts influence the decision process by strategically
withholding information that they possess. In a single expert setting, less
biased experts offer more precise information. However, we show that when
there are multiple experts with like biases, the combining of advice from
both experts is detrimental. Naturally, the less loyal expert offers less
precise information. While this by itself is harmful to the decision
process, it has a secondary effect. It also causes the more loyal expert to
strategically respond by reducing the precision of the information he
conveys. The presence of the disloyal advisor contaminates the advice of the
loyal advisor. With like biases, a kind of ``Gresham's Law'' operates: Bad
advice drives out good advice. Thus, a single advisor is superior to a
cabinet composed of advisors from the same side of the spectrum.

When experts have opposing biases, the situation is different. Now unless
the more loyal expert is an extremist, combining his advice with that of a
second advisor, even if less loyal, is always beneficial. The disloyal
expert, even if of little use by himself, can be used to discipline the more
loyal expert. With opposing biases, even bad advice can enhance good advice.
Thus, a cabinet composed of advisors from opposite sides of the spectrum is
superior to a single advisor. But there are limits to how much additional
information may be garnered from a cabinet. Full revelation is still not
possible. Moreover, if the cabinet consists of opposing extremists, no
information is conveyed. \newpage

\appendix

\section{Proofs}

\noindent \textbf{Proof of Proposition \ref{fullrev}. }Suppose not. Then
there is a PBE in which the state is fully revealed and thus for all $\theta
,$ the equilibrium action $Y\left( \theta \right) =y^{\ast }\left( \theta
\right) .$ We first consider the case of opposing biases.

\textbf{Case 1: Opposing biases}

First, consider the sub-case where $b_{1}<0<b_{2}.$

Let $\overline{\theta }<1$ be such that $y^{\ast }\left( \overline{\theta }%
,b_{2}\right) =y^{\ast }\left( 1\right) .$

Let $\theta \in \left( \overline{\theta },1\right) .$ Since $b_{1}<0$ we
have that $y^{\ast }\left( \theta ,b_{1}\right) <y^{\ast }\left( \theta
\right) .$ Choose a $\theta ^{\prime }>\theta $ close enough to $\theta $ so
that $y^{\ast }\left( \theta ^{\prime },b_{1}\right) <y^{\ast }\left( \theta
\right) .$ Suppose that $m_{1}$ and $m_{2}$ are the equilibrium messages in
state $\theta .$ Since the equilibrium is fully revealing, $y\left(
m_{1},m_{2}\right) =y^{\ast }\left( \theta \right) .$

Let $m_{2}^{\prime }=\mu _{2}\left( \theta ^{\prime },m_{1}\right) $ be
expert $2$'s best response to the message $m_{1}$ in state $\theta ^{\prime
}.$ Then by definition, $U\left( y\left( m_{1},m_{2}^{\prime }\right)
,\theta ^{\prime },b_{2}\right) \geq U\left( y\left( m_{1},m_{2}\right)
,\theta ^{\prime },b_{2}\right) \ $and since $y\left( m_{1},m_{2}\right)
=y^{\ast }\left( \theta \right) <y^{\ast }\left( \theta ,b_{2}\right)
<y^{\ast }\left( \theta ^{\prime },b_{2}\right) ,$ $U_{1}\left( y\left(
m_{1},m_{2}\right) ,\theta ^{\prime },b_{2}\right) >0$ and so $y\left(
m_{1},m_{2}^{\prime }\right) \geq y^{\ast }\left( \theta \right) .$

Next observe that $y\left( m_{1},m_{2}^{\prime }\right) \geq y^{\ast }\left(
\theta ^{\prime }\right) .$ Suppose that $y\left( m_{1},m_{2}^{\prime
}\right) <y^{\ast }\left( \theta ^{\prime }\right) $. Then by sending the
message $m_{1}$ in state $\theta ^{\prime }$ expert $1$ can induce the
action $y\left( m_{1},m_{2}^{\prime }\right) $ and since $y^{\ast }\left(
\theta ^{\prime },b_{1}\right) <y^{\ast }\left( \theta \right) \leq y\left(
m_{1},m_{2}^{\prime }\right) <y^{\ast }\left( \theta ^{\prime }\right) $
this is a profitable deviation for $1.$ This is a contradiction and so $%
y\left( m_{1},m_{2}^{\prime }\right) \geq y^{\ast }\left( \theta ^{\prime
}\right) >y^{\ast }\left( \theta \right) .$

By the definition of a PBE, it must be the case that the out of equilibrium
action $y\left( m_{1},m_{2}^{\prime }\right) \leq y^{\ast }\left( 1\right)
=y^{\ast }\left( \overline{\theta },b_{2}\right) .$ Now since $\overline{%
\theta }<\theta ,$ we also have $y\left( m_{1},m_{2}^{\prime }\right)
<y^{\ast }\left( \theta ,b_{2}\right) .$

Thus we have deduced that $y^{\ast }\left( \theta \right) <y\left(
m_{1},m_{2}^{\prime }\right) <y^{\ast }\left( \theta ,b_{2}\right) .$ Since $%
b_{2}>0,$ this implies that $U\left( y^{\ast }\left( \theta \right) ,\theta
,b_{2}\right) <U\left( y\left( m_{1},m_{2}^{\prime }\right) ,\theta
,b_{2}\right) .$ But this contradicts the assumption that $y^{\ast }\left(
\theta \right) $ is an equilibrium action in state $\theta $. Thus full
revelation cannot be an equilibrium.

The sub-case where $b_{2}<0<b_{1}$ is treated similarly.

\textbf{Case 2: Like Biases}

The proof for the case of like biases is analogous. We omit the proof
because, in the case of like biases, the conclusion also follows from a more
general result to come (Lemma \ref{finite}). $\blacksquare \bigskip $

\noindent \textbf{Proof of Lemma \ref{nonindiff}. }In order to economize on
notation, in what follows, we will denote $\theta -\varepsilon $ by $\theta
^{-}$ and $\theta +\varepsilon $ by $\theta ^{+}.$

\textbf{Case 1.} $b_{1}\leq b_{2}.$

To establish (\ref{min}), suppose the contrary, that is, suppose $U\left(
y^{-},\theta ,b_{1}\right) <U\left( y^{+},\theta ,b_{1}\right) $. Then by
continuity, for all $\varepsilon >0$ small enough, 
\begin{equation}
U\left( Y\left( \theta ^{-}\right) ,\theta ^{-},b_{1}\right) <U\left(
Y\left( \theta ^{+}\right) ,\theta ^{-},b_{1}\right) .  \label{1low}
\end{equation}
Now suppose that in state $\theta ^{-}$, expert $1$ were to send the message 
$m_{1}^{+}=\mu _{1}\left( \theta ^{+}\right) $ and let $m_{2}$ be expert $2$%
's best response to this off-equilibrium message in state $\theta ^{-}$ so
that: 
\[
U\left( y\left( m_{1}^{+},m_{2}\right) ,\theta ^{-},b_{2}\right) \geq
U\left( y\left( m_{1}^{+},m_{2}^{+}\right) ,\theta ^{-},b_{2}\right) . 
\]
This implies that $y\left( m_{1}^{+},m_{2}\right) \leq y\left(
m_{1}^{+},m_{2}^{+}\right) $ since otherwise we would have that $U\left(
y\left( m_{1}^{+},m_{2}\right) ,\theta ^{+},b_{2}\right) >U\left( y\left(
m_{1}^{+},m_{2}^{+}\right) ,\theta ^{+},b_{2}\right) $ contradicting the
fact that $Y\left( \theta ^{+}\right) =y\left( m_{1}^{+},m_{2}^{+}\right) $
is the equilibrium action in state $\theta ^{+}.$

But now since $y\left( m_{1}^{+},m_{2}\right) \leq y\left(
m_{1}^{+},m_{2}^{+}\right) $ and expert $2$ weakly prefers the former in
state $\theta ^{-},$ the fact that $b_{1}\leq b_{2}$ implies that expert $1$
also weakly prefers the former. Thus $U\left( y\left( m_{1}^{+},m_{2}\right)
,\theta ^{-},b_{1}\right) \geq U\left( Y\left( \theta ^{+}\right) ,\theta
^{-},b_{1}\right) $ and hence by (\ref{1low}) 
\[
U\left( y\left( m_{1}^{+},m_{2}\right) ,\theta ^{-},b_{1}\right) >U\left(
Y\left( \theta ^{-}\right) ,\theta ^{-},b_{1}\right) . 
\]
Thus by sending the message $m_{1}^{+}$ in state $\theta ^{-}$ expert $1$
can induce an action that he prefers to the equilibrium action. This is a
contradiction and thus (\ref{min}) holds.

To establish (\ref{max}), again suppose the contrary, that is, $U\left(
y^{-},\theta ,b_{2}\right) >U\left( y^{+},\theta ,b_{2}\right) .$ Then since 
$b_{1}\leq b_{2},$ $U\left( y^{-},\theta ,b_{1}\right) >U\left( y^{+},\theta
,b_{1}\right) .$

Then by continuity, for small enough $\varepsilon >0,$%
\[
U\left( Y\left( \theta ^{-}\right) ,\theta ^{+},b_{1}\right) >U\left(
Y\left( \theta ^{+}\right) ,\theta ^{+},b_{1}\right) 
\]
and 
\[
U\left( Y\left( \theta ^{-}\right) ,\theta ^{+},b_{2}\right) >U\left(
Y\left( \theta ^{+}\right) ,\theta ^{+},b_{2}\right) . 
\]
Hence if in state $\theta ^{+},$ expert $1$ were to send the message $%
m_{1}^{-}=\mu _{1}\left( \theta ^{-}\right) $ expert $2$ will induce an
action $y\left( m_{1}^{-},m_{2}\right) $ that is strictly lower than $%
Y\left( \theta ^{+}\right) .$ This is a profitable deviation for $1$ and
hence a contradiction. Thus (\ref{max}) holds.

\textbf{Case} \textbf{2.} $b_{1}\geq b_{2}.$

The proof for this case is similar. If either (\ref{min}) or (\ref{max})
does not hold then expert $1$ has a profitable deviation. $\blacksquare
\bigskip $

\noindent \textbf{Proof of Lemma \ref{finite}. }Let $\varepsilon
=\min_{j}\min_{\theta }\left[ y^{\ast }\left( \theta ,b_{j}\right) -y^{\ast
}\left( \theta \right) \right] >0.$

Suppose $\theta ^{\prime }<$ $\theta ^{\prime \prime }$ are two states such
that $Y\left( \theta ^{\prime }\right) \equiv y^{\prime }<y^{\prime \prime
}\equiv Y\left( \theta ^{\prime \prime }\right) .$ Then there exist $%
m_{1}^{\prime },m_{2}^{\prime }$ satisfying $m_{1}^{\prime }=\mu _{1}\left(
\theta ^{\prime }\right) $, $m_{2}^{\prime }=\mu _{2}\left( \theta ^{\prime
},m_{1}^{\prime }\right) $ and $y\left( m_{1}^{\prime },m_{2}^{\prime
}\right) =y^{\prime }$ and similarly for the double primes. We will argue
that $y^{\prime \prime }-y^{\prime }\geq \varepsilon .$

Suppose that $y^{\prime \prime }-y^{\prime }<\varepsilon .$

Let $\sigma ^{\prime },\sigma ^{\prime \prime }$ be such that $y^{\ast
}\left( \sigma ^{\prime }\right) =y^{\prime }$ and $y^{\ast }\left( \sigma
^{\prime \prime }\right) =y^{\prime \prime }.$ Then clearly $\sigma ^{\prime
}<\sigma ^{\prime \prime }.\medskip $

\noindent \textsc{claim.} $\sigma ^{\prime }\in Y^{-1}\left( y^{\prime
}\right) $ and $\sigma ^{\prime \prime }\in Y^{-1}\left( y^{\prime \prime
}\right) .$

\noindent \textsc{proof of claim.} Let $\underline{\theta }=\min
Y^{-1}\left( y^{\prime }\right) $ and $\overline{\theta }=\max Y^{-1}\left(
y^{\prime }\right) .$ Then $y^{\ast }\left( \underline{\theta }\right) \leq
y^{\prime }\leq y^{\ast }\left( \overline{\theta }\right) .$ If $y^{\prime
}<y^{\ast }\left( \underline{\theta }\right) $ then 
$U\left( y^{\prime },\underline{\theta }\right) <$ $U\left( y^{\ast }\left( 
\underline{\theta }\right) ,\underline{\theta }\right) $ and since $%
U_{12}>0, $ for all $t\in \left[ \underline{\theta },\overline{\theta }%
\right] ,$ $U\left( y^{\prime },t\right) <$ $U\left( y^{\ast }\left( 
\underline{\theta }\right) ,t\right) .$ If $y^{\prime }>y^{\ast }\left( 
\overline{\theta }\right) $ a similar argument holds.

Now since $y^{\ast }\left( \cdot \right) $ is increasing and $Y\left( \cdot
\right) $ is monotonic, $\underline{\theta }\leq \sigma ^{\prime }\leq 
\overline{\theta }$ and hence $\sigma ^{\prime }\in Y^{-1}\left( y^{\prime
}\right) .$ The argument for $\sigma ^{\prime \prime }$ is identical.

This establishes the claim. $\Box \medskip $

Now since $U_{1}\left( y^{\prime },\sigma ^{\prime }\right) =0$, $U_{13}>0$
implies that for $j=1,2,$ $U_{1}\left( y^{\prime },\sigma ^{\prime
},b_{j}\right) >0$ and since $y^{\prime \prime }-y^{\prime }<\varepsilon ,$ $%
U_{1}\left( y^{\prime \prime },\sigma ^{\prime },b_{j}\right) >0$ also.
Similarly, since $U_{1}\left( y^{\prime \prime },\sigma ^{\prime \prime
}\right) =0,U_{13}>0$ implies that $U_{1}\left( y^{\prime \prime },\sigma
^{\prime \prime },b_{j}\right) >0$ and since $y^{\prime }<y^{\prime \prime
}, $ $U_{1}\left( y^{\prime },\sigma ^{\prime \prime },b_{j}\right) >0$ also.

Now let $z^{\prime }$ solve 
\[
U\left( y^{\prime \prime },\sigma ^{\prime },b_{2}\right) =U\left( z^{\prime
},\sigma ^{\prime },b_{2}\right) 
\]
and let $z^{\prime \prime }$ solve 
\[
U\left( y^{\prime \prime },\sigma ^{\prime \prime },b_{2}\right) =U\left(
z^{\prime \prime },\sigma ^{\prime \prime },b_{2}\right) . 
\]

Since $U_{1}\left( y^{\prime \prime },\sigma ^{\prime \prime },b_{2}\right)
>0$ and $U_{11}<0,$ $U_{1}\left( z^{\prime \prime },\sigma ^{\prime \prime
},b_{2}\right) <0$ and so $z^{\prime \prime }>y^{\prime \prime }.$ Next
since $U_{12}>0,$ $U\left( y^{\prime \prime },\sigma ^{\prime \prime
},b_{2}\right) <U\left( z^{\prime },\sigma ^{\prime \prime },b_{2}\right) $
and so $z^{\prime \prime }>z^{\prime }.$

Now in state $\sigma ^{\prime },$ if expert $1$ sent the message $%
m_{1}^{\prime \prime }$ in lieu of $m_{1}^{\prime },$ then we claim that
expert $2$ could do no better than sending message $m_{2}^{\prime \prime }$
resulting in action $y^{\prime \prime }.$ This is because no action in the
interval $\left( y^{\prime \prime },z^{\prime \prime }\right) $ can be
induced by expert $2$ following $m_{1}^{\prime \prime },$ that is, there does
not exist an $m_{2}$ such that $y\left( m_{1}^{\prime \prime },m_{2}\right)
\in \left( y^{\prime \prime },z^{\prime \prime }\right) .$ If there were
such a message then $y^{\prime \prime }$ would not be the equilibrium action
in state $\sigma ^{\prime \prime }.$ Thus, following $m_{1}^{\prime \prime
}, $ no action greater than $y^{\prime \prime }$ is preferred by expert $2$
to $y^{\prime \prime }.$ Thus if expert $1$ sends the message $m_{1}^{\prime
\prime }$ in state $\sigma ^{\prime },$ expert $2$ will respond by sending
the message $m_{2}^{\prime \prime },$ thereby resulting in action $y^{\prime
\prime }.$ This deviation is then profitable for expert $1$. $\blacksquare
\bigskip $

\noindent \textbf{Proof of Proposition \ref{one}. }Suppose $%
a_{1},a_{2},...,a_{N-1}$ are points where the function $Y$ is discontinuous.
Let $c=\min \left\{ b_{1},b_{2}\right\} .$ Lemma \ref{nonindiff} implies
that these points satisfy the following system of inequalities 
\begin{eqnarray*}
\left( a_{1}+c\right) -\frac{a_{1}}{2} &\leq &\frac{a_{1}+a_{2}}{2}-\left(
a_{1}+c\right) \\
\left( a_{2}+c\right) -\frac{a_{1}+a_{2}}{2} &\leq &\frac{a_{2}+a_{3}}{2}%
-\left( a_{2}+c\right) \\
&&\vdots \\
\left( a_{n}+c\right) -\frac{a_{n-1}+a_{n}}{2} &\leq &\frac{a_{n}+a_{n+1}}{2}%
-\left( a_{n}+c\right) \\
&&\vdots \\
\left( a_{N-1}+c\right) -\frac{a_{N-2}+a_{N-1}}{2} &\leq &\frac{a_{N-1}+1}{2}%
-\left( a_{N-1}+c\right)
\end{eqnarray*}
This system is equivalent to 
\begin{eqnarray*}
a_{1} &\leq &\frac{a_{2}}{2}-2c \\
a_{2} &\leq &\frac{a_{1}+a_{3}}{2}-2c \\
&&\vdots \\
a_{n} &\leq &\frac{a_{n-1}+a_{n+1}}{2}-2c \\
&&\vdots \\
a_{N-1} &\leq &\frac{a_{N-2}+1}{2}-2c
\end{eqnarray*}
which results in the following recursive system: 
\begin{eqnarray*}
a_{1} &\leq &\frac{1}{2}a_{2}-2c \\
a_{2} &\leq &\frac{2}{3}a_{3}-4c \\
&&\vdots \\
a_{n} &\leq &\frac{n}{n+1}a_{n+1}-2nc \\
&&\vdots \\
a_{N-1} &\leq &\frac{N-1}{N}-2\left( N-1\right) c
\end{eqnarray*}

Now let $\overline{a}_{1},\overline{a}_{2},...,\overline{a}_{N-1}$ be the
solution to the corresponding system of equations. Then clearly we have that 
$a_{1}\leq \overline{a}_{1},a_{2}\leq \overline{a}_{2},...,a_{N-1}\leq 
\overline{a}_{N-1}.$ We can now directly apply Theorem 4 of CS. This implies
that the single expert equilibrium is informationally superior. $%
\blacksquare \bigskip $
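
For the uniform-quadratic case, the equality version of the recursive system is solved exactly by the CS break points with bias $c,$ i.e., $\overline{a}_{n}=\frac{n}{N}-2n\left( N-n\right) c.$ A small numerical check (with hypothetical values $N=4$ and $c=0.01$, chosen only for illustration) confirms this:

```python
# Verify (for hypothetical values N = 4, c = 0.01) that the CS break points
# abar_n = n/N - 2n(N - n)c solve the equality version of the system:
# a_n = (a_{n-1} + a_{n+1})/2 - 2c, with a_0 = 0 and a_N = 1.
N, c = 4, 0.01
abar = [n / N - 2.0 * n * (N - n) * c for n in range(N + 1)]
assert abar[0] == 0.0 and abs(abar[-1] - 1.0) < 1e-12
for n in range(1, N):
    assert abs(abar[n] - ((abar[n - 1] + abar[n + 1]) / 2.0 - 2.0 * c)) < 1e-12
```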

\noindent \textbf{Proof of Proposition \ref{crossfire}. }We first show that
a continuum of equilibrium actions cannot occur. We then establish that the
only monotonic PBE with finite equilibrium actions involves babbling.

Consider an interval of states $\left( \sigma ,\tau \right) $ such that for
all $\theta \in \left( \sigma ,\tau \right) ,$ $Y\left( \theta \right)
=y^{\ast }\left( \theta \right) $ so that the state is completely revealed
in this interval. Since expert 2 is an extremist, for all $\theta \in \left(
\sigma ,\tau \right) ,$ $U\left( y^{\ast }\left( \theta \right) ,\theta
,b_{2}\right) <U\left( y^{\ast }\left( 1\right) ,\theta ,b_{2}\right) .$
Hence, $y^{\ast }\left( \theta \right) $ must be the highest action
inducible by expert 2 following $\mu _{1}\left( \theta \right) .$

Since $b_{1}<0,$ for small $\varepsilon >0,$ $U\left( y^{\ast }\left( \theta
\right) ,\theta ,b_{1}\right) <U\left( y^{\ast }\left( \theta -\varepsilon
\right) ,\theta ,b_{1}\right) .$ Hence, in state $\theta ,$ if expert 1
plays $\mu _{1}\left( \theta -\varepsilon \right) ,$ expert 2 can do no
better than to induce $y^{\ast }\left( \theta -\varepsilon \right) $ by
playing $\mu _{2}\left( \theta -\varepsilon ,\mu _{1}\left( \theta
-\varepsilon \right) \right) ,$ but this is a profitable deviation for
expert 1.

Hence, no interval of the form $\left( \sigma ,\tau \right) $ can exist and
hence there cannot be a continuum of equilibrium actions. Essentially the
same argument also rules out a countably infinite number of equilibrium
actions.

Thus there must be a finite number of equilibrium actions.

Suppose $Y$ has an upward jump at $\theta .$ Then since $b_{1}<b_{2},$
Lemma \ref{nonindiff} implies that $U\left( y^{-},\theta ,b_{1}\right)
\geq U\left( y^{+},\theta ,b_{1}\right) $ and $U\left( y^{-},\theta
,b_{2}\right) \leq U\left( y^{+},\theta ,b_{2}\right) .$ \medskip

\noindent \textsc{claim.} If $\lim_{\varepsilon \downarrow 0}Y\left( \theta
-\varepsilon \right) =y^{-}<\lim_{\varepsilon \downarrow 0}Y\left( \theta
+\varepsilon \right) =y^{+},$ then 
\[
\text{either }U\left( y^{-},\theta ,b_{1}\right) =U\left( y^{+},\theta
,b_{1}\right) \text{ or }U\left( y^{-},\theta ,b_{2}\right) =U\left(
y^{+},\theta ,b_{2}\right) \text{.} 
\]

\noindent \textsc{proof of claim.} Suppose neither is an equality. Since for
all small $\varepsilon >0,$ $Y\left( \theta -\varepsilon \right) <y^{\ast
}\left( \theta -\varepsilon \right) $ the fact that expert 2 is an extremist
then implies that $U\left( Y\left( \theta -\varepsilon \right) ,\theta
-\varepsilon ,b_{2}\right) <U\left( y^{\ast }\left( 1\right) ,\theta
-\varepsilon ,b_{2}\right) .$ Therefore the highest action that $2$ can
induce following the message $\mu _{1}\left( \theta -\varepsilon \right) $
is $Y\left( \theta -\varepsilon \right) .$ Now if, in some state $\theta
+\varepsilon ,$ expert $1$ were to send the message $\mu _{1}\left( \theta
-\varepsilon \right) ,$ this would result in the action $Y\left( \theta
-\varepsilon \right) $ and would be a profitable deviation for $1.$ Since
this is a contradiction, the claim is established. $\square \medskip $

Thus we have argued that at every point of discontinuity at least one expert
is indifferent between $y^{-}$ and $y^{+}.$

Finally, we establish that points of discontinuity where one of the experts
is indifferent cannot occur in any monotonic PBE. Suppose that the contrary
is true, then there exist at least three break points, $a_{n-1},a_{n}$ and $%
a_{n+1},$ such that 
\begin{equation}
U\left( \overline{y}\left( a_{n-1},a_{n}\right) ,a_{n},b_{i}\right) =U\left( 
\overline{y}\left( a_{n},a_{n+1}\right) ,a_{n},b_{i}\right)  \label{oppind}
\end{equation}
where $\bar{y}\left( \sigma ,\tau \right) $ is the action that maximizes $E%
\left[ U\left( y,\theta \right) |\theta \in \left( \sigma ,\tau \right) %
\right] .$

First, we show that $a_{n-1}>0$ and $a_{n+1}<1.$ Suppose that $a_{n-1}=0$,
then we have that $U\left( \overline{y}\left( 0,a_{n}\right)
,a_{n},b_{i}\right) =U\left( \overline{y}\left( a_{n},a_{n+1}\right)
,a_{n},b_{i}\right) .$

For $\alpha \in \left( 0,1\right) $ define $\beta \left( \alpha \right) $ by 
\[
U\left( \overline{y}\left( 0,\alpha \right) ,\alpha ,b_{i}\right) =U\left( 
\overline{y}\left( \alpha ,\beta \left( \alpha \right) \right) ,\alpha
,b_{i}\right) . 
\]
But since the most informative equilibrium with expert $i$ alone involves no
information revelation we know that for all $\alpha \in \left( 0,1\right) $, 
$\beta \left( \alpha \right) >1$ contradicting (\ref{oppind}). Thus, $%
a_{n-1}>0.$ A similar argument shows that $a_{n+1}<1.$

Now for $\varepsilon >0$ define $\phi \left( \varepsilon \right) $ by 
\[
U\left( \overline{y}\left( a_{n-1}-\phi \left( \varepsilon \right)
,a_{n}\right) ,a_{n},b_{i}\right) =U\left( \overline{y}\left(
a_{n},a_{n+1}+\varepsilon \right) ,a_{n},b_{i}\right) . 
\]
Note that $\phi \left( \varepsilon \right) $ is well-defined since $U$ is
concave and $\overline{y}$ is increasing in both arguments. Furthermore, $%
\phi $ is increasing. Let $\overline{\varepsilon }=\min \left\{
1-a_{n+1},\phi ^{-1}\left( a_{n-1}\right) \right\} .$ Thus either $a_{n+1}+%
\overline{\varepsilon }=1$ or $a_{n-1}-\phi \left( \overline{\varepsilon }%
\right) =0.$ This contradicts the first observation.

Hence, there are no points of discontinuity at which one of the experts is
indifferent. The only remaining monotonic PBE is babbling by both experts. $%
\blacksquare \bigskip $

\section{Non-monotonic Equilibria}

Throughout, we have restricted attention to the case where equilibria were
monotonic. Indeed, our results on welfare analysis for like biased experts
relied essentially on this. We now study some features of non-monotonic
equilibria and provide some sufficient conditions for all equilibria to be
monotonic for like biased experts in the uniform-quadratic case.

We begin by presenting an explicit example of a non-monotonic equilibrium.

\subsection{Example of Non-monotonic Equilibria}

\subparagraph{Example 4}

Once again, consider the uniform-quadratic case. Suppose that $b_{1}=\frac{11%
}{160}$ and $b_{2}=\frac{3}{20}$ are the biases of the two experts. A PBE
for this game is depicted in Figure 3, where the states $a_{1}=.1,$ $%
a_{2}=.28,$ $a_{3}=.34$ and the actions $y_{1}=.1475,$ $y_{2}=.19$ and $%
y_{3}=.67.$

\begin{figure}[t]
\unitlength 1mm
\linethickness{0.4pt}
\begin{picture}(120,105)
\put(20,20){\line(0,1){85}}
\put(20,20){\line(1,0){100}}
\linethickness{1.6pt}
\put(20,35){\line(1,0){10}}
\put(30,39){\line(1,0){18}}
\put(48,35){\line(1,0){6}}
\put(54,87){\line(1,0){66}}
\linethickness{0.4pt}
\put(30,37){\circle*{1}}
\put(30,37){\vector(0,1){1.5}}
\put(30,37){\vector(0,-1){1.5}}
\put(48,63){\circle*{1}}
\put(48,63){\vector(0,1){23}}
\put(48,63){\vector(0,-1){23}}
\put(54,61){\circle*{1}}
\put(54,61){\vector(0,1){25}}
\put(54,61){\vector(0,-1){25}}
\put(47,87){\line(1,0){2}}
\multiput(20,26.8)(1,1){69}{\makebox(0,0){.}}
\put(76,98){\makebox(0,0)[cc]{$y^*(\cdot,b_2)$}}
\multiput(20,35)(1,1){61}{\makebox(0,0){.}}
\put(94,98){\makebox(0,0)[cc]{$y^*(\cdot,b_1)$}}
\put(30,20){\line(0,-1){2}}
\put(30,15){\makebox(0,0)[cc]{$a_1$}}
\put(48,20){\line(0,-1){2}}
\put(48,15){\makebox(0,0)[cc]{$a_2$}}
\put(54,20){\line(0,-1){2}}
\put(54,15){\makebox(0,0)[cc]{$a_3$}}
\put(70,15){\makebox(0,0)[cc]{$\theta$}}
\put(120,20){\line(0,-1){2}}
\put(120,15){\makebox(0,0)[cc]{1}}
\put(15,35){\makebox(0,0)[cc]{$y_1$}}
\put(20,35){\line(-1,0){2}}
\put(15,39){\makebox(0,0)[cc]{$y_2$}}
\put(20,39){\line(-1,0){2}}
\put(15,87){\makebox(0,0)[cc]{$y_3$}}
\put(20,87){\line(-1,0){2}}
\end{picture}
\caption{A Non-monotonic PBE}
\end{figure}

The outcome function $Y$ associated with this equilibrium is depicted above
and is clearly non-monotonic. Notice that in state $a_{1}$ expert 1 is
indifferent between $y_{1}$ and $y_{2};$ hence, for all $\theta >a_{1},$
expert 1 prefers $y_{2}$ to $y_{1}.$ Likewise, in state $a_{3},$ expert 1 is
indifferent between $y_{1}$ and $y_{3}.$ Finally, in state $a_{2}$ expert $2$
is indifferent between $y_{2}$ and $y_{3}.$

To induce action $y_{2},$ expert 1 must suggest the action $m_{1}=y_{2},$
and expert 2 must agree. If, on the other hand, expert 2 disagrees, action $%
y_{3}$ is induced. Since expert $2$ (weakly) prefers $y_{2}$ to $y_{3}$ if
and only if $\theta \leq a_{2},$ when expert $1$ suggests $y_{2},$ expert $%
2$ will ``agree'' only if $\theta \leq a_{2}$. Thus, despite the fact
that both experts prefer $y_{2}$ to $y_{1}$ for $\theta \in \left(
a_{2},a_{3}\right) ,$ expert 1 cannot obtain $y_{2}$ since expert 2 will
then ``disagree'' and induce the higher action $y_{3}$ that he prefers to $%
y_{2}$. It is the threat of ``overshooting'' by the more biased expert 2
that sustains the downward jump in the outcome function despite the fact
that both experts and the decision maker prefer the higher action $y_{2}$ to 
$y_{1}$ when $\theta >a_{2}.$
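In the uniform-quadratic specification, for instance, these indifference
conditions take a simple form: an expert weakly prefers the lower of two
actions exactly when his ideal point lies weakly below their midpoint,
\[
-\left( y_{2}-\theta -b_{2}\right) ^{2}\geq -\left( y_{3}-\theta
-b_{2}\right) ^{2}\iff \theta +b_{2}\leq \frac{y_{2}+y_{3}}{2},
\]
so that $a_{2}=\frac{1}{2}\left( y_{2}+y_{3}\right) -b_{2},$ and similarly $%
a_{1}=\frac{1}{2}\left( y_{1}+y_{2}\right) -b_{1}$ and $a_{3}=\frac{1}{2}%
\left( y_{1}+y_{3}\right) -b_{1}.$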

\subsection{Sufficient Conditions for Monotonicity}

We now turn to establishing sufficient conditions for equilibria to be
monotonic when experts have like biases.

\begin{lemma}
\label{evenmono}There exists a $\overline{\theta }<1$ such that $Y\left(
\cdot \right) $ is monotone over $\left[ \overline{\theta },1\right] .$
\end{lemma}

\noindent \textbf{Proof. }Observe that since $U_{1}\left( y^{\ast }\left(
1\right) ,1\right) =0,$ and $U_{13}>0,$ $U_{1}\left( y^{\ast }\left(
1\right) ,1,b_{i}\right) >0.$ Let $\overline{\theta }=\inf \left\{ \theta
:U_{1}\left( y^{\ast }\left( 1\right) ,\theta ,\min \left\{
b_{1},b_{2}\right\} \right) >0\right\} .$ Since $U_{1}\left( y^{\ast }\left(
1\right) ,1,b_{i}\right) >0$ for $i=1,2,$ it follows that $\overline{\theta }%
<1.$ Then for all $\theta >\overline{\theta }$ and all $y\leq y^{\ast
}\left( 1\right) ,$ $U_{1}\left( y,\theta ,b_{i}\right) >0$ for $i=1,2.$

Suppose there exist states $\theta ^{\prime },\theta ^{\prime \prime }$
satisfying $\overline{\theta }<\theta ^{\prime }<\theta ^{\prime \prime }$
such that $Y\left( \theta ^{\prime }\right) =y^{\prime }>Y\left( \theta
^{\prime \prime }\right) =y^{\prime \prime }.$ Suppose $\left( m_{1}^{\prime
},m_{2}^{\prime }\right) $ is sent in state $\theta ^{\prime }$ and $%
y^{\prime }=y\left( m_{1}^{\prime },m_{2}^{\prime }\right) .$ Similarly,
suppose $\left( m_{1}^{\prime \prime },m_{2}^{\prime \prime }\right) $ is
sent in state $\theta ^{\prime \prime }$ and $y^{\prime \prime }=y\left(
m_{1}^{\prime \prime },m_{2}^{\prime \prime }\right) .$ Observe that since $%
y^{\prime }$ and $y^{\prime \prime }$ are equilibrium actions $y^{\prime
}\leq y^{\ast }\left( 1\right) $ and $y^{\prime \prime }\leq y^{\ast }\left(
1\right) .$

Now suppose that expert $1$ deviates and sends message $m_{1}^{\prime }$ in $%
\theta ^{\prime \prime }.$ If expert 2 sends $m_{2}^{\prime },$ then
action $y^{\prime }$ occurs and, since $\theta ^{\prime \prime }>\overline{%
\theta },$ it follows that 
\[
U\left( y^{\prime \prime },\theta ^{\prime \prime },b_{i}\right) <U\left(
y^{\prime },\theta ^{\prime \prime },b_{i}\right) 
\]
for $i=1,2$ since $y^{\prime }\leq y^{\ast }\left( 1\right) .$ Thus expert $%
2 $'s best response to $m_{1}^{\prime }$ in state $\theta ^{\prime \prime }$
must yield him at least $U\left( y^{\prime },\theta ^{\prime \prime
},b_{2}\right) .$ Since $U_{1}\left( y^{\prime },\theta ^{\prime \prime
},b_{2}\right) >0$ this best response, say $m_{2}^{\prime \prime \prime },$
cannot result in an action $y\left( m_{1}^{\prime },m_{2}^{\prime \prime
\prime }\right) <y^{\prime }.$ Thus $y\left( m_{1}^{\prime },m_{2}^{\prime
\prime \prime }\right) >y^{\prime }$ and since $U_{1}\left( y^{\prime
},\theta ^{\prime \prime },b_{1}\right) >0,$%
\[
U\left( y\left( m_{1}^{\prime },m_{2}^{\prime \prime \prime }\right) ,\theta
^{\prime \prime },b_{1}\right) >U\left( y^{\prime },\theta ^{\prime \prime
},b_{1}\right) . 
\]
Thus it is profitable for $1$ to deviate to $m_{1}^{\prime }$ in state $%
\theta ^{\prime \prime }$ and we have obtained a contradiction. $%
\blacksquare \bigskip $

For the uniform-quadratic case, we can extend Lemma \ref{finite} so that the
assumption of monotonicity is unnecessary.

\begin{lemma}
\label{uqfinite}In the uniform-quadratic case with like biased experts,
there are a finite number of equilibrium actions in any PBE.
\end{lemma}

The proof is tedious and offers no new insight into the problem, so it is
omitted; it is available from the authors upon request.

\begin{lemma}
\label{gap}In the uniform-quadratic case, suppose that $Y$ has a downward
jump at $\theta ,$ that is, $\lim_{\varepsilon \downarrow 0}Y\left( \theta
-\varepsilon \right) =y^{-}>\lim_{\varepsilon \downarrow 0}Y\left( \theta
+\varepsilon \right) =y^{+}.$ Then 
\[
y^{-}-y^{+}\leq 2\left| b_{2}-b_{1}\right| . 
\]
\end{lemma}

\noindent \textbf{Proof. }Let $\theta $ be the largest state at which there
is a downward jump in $Y.$ Such a $\theta $ exists since $Y$ is eventually
monotone (Lemma \ref{evenmono}) and there are only a finite number of
equilibrium actions (Lemma \ref{uqfinite}).

Suppose $b_{1}<b_{2}.$ As before, in what follows, we will denote $\theta
-\varepsilon $ by $\theta ^{-}$ and $\theta +\varepsilon $ by $\theta ^{+}.$

First observe that there does not exist a state $\sigma >\theta $ such that $%
Y\left( \sigma \right) =y^{-}.$ To see this note that if $\tau =\sup \left\{
t:Y\left( t\right) =y^{+}\right\} $ then by Lemma \ref{nonindiff} $U\left(
y^{+},\tau ,b_{1}\right) \geq U\left( y^{-},\tau ,b_{1}\right) $ (where we
use the fact that the conclusion of Lemma \ref{nonindiff} holds as long as $%
Y $ is monotonic on the interval $\left[ \theta ,1\right] $)$.$ Hence there
is an $\varepsilon >0$ small enough so that in state $\theta ^{-},$ $U\left(
y^{+},\theta ^{-},b_{1}\right) >U\left( y^{-},\theta ^{-},b_{1}\right) .$ If
in state $\theta ^{-},$ expert $1$ were to send the message $m_{1}=\mu
_{1}\left( \theta ^{+}\right) $ then expert $2$ cannot do better than to
send the message $m_{2}=\mu _{2}\left( \theta ^{+},\mu _{1}\left( \theta
^{+}\right) \right) .$ This is a profitable deviation for $1.$

Thus we know that $y^{-}\leq y^{\ast }\left( \theta \right) $ since
otherwise the action $y^{-}$ could not be a best response to any beliefs
held by the decision maker. Since $y^{+}<y^{-}\leq y^{\ast }\left( \theta
\right) <\theta +b_{1},$ it follows that $U\left( y^{-},\theta ,b_{1}\right)
>U\left( y^{+},\theta ,b_{1}\right) .$

Suppose in some state $\theta ^{+}$ expert $1$ were to send the message $%
m_{1}=\mu _{1}\left( \theta ^{-}\right) .$ Then there must be a message $%
m_{2}$ such that $y\left( m_{1},m_{2}\right) =z$ (say) is such that $U\left(
z,\theta ^{+},b_{1}\right) <U\left( y^{+},\theta ^{+},b_{1}\right) $ and so
by continuity $U\left( z,\theta ,b_{1}\right) \leq U\left( y^{+},\theta
,b_{1}\right) .$ In the uniform-quadratic case this reduces to 
\begin{equation}
z-\left( \theta +b_{1}\right) \geq \left( \theta +b_{1}\right) -y^{+}
\label{gap1}
\end{equation}

But for $m_{2}$ to be a best response for expert $2$ in state $\theta ^{+}$
requires that $U\left( z,\theta ^{+},b_{2}\right) \geq U\left( y^{-},\theta
^{+},b_{2}\right) .$ On the other hand, the equilibrium condition implies $%
U\left( z,\theta ^{-},b_{2}\right) \leq U\left( y^{-},\theta
^{-},b_{2}\right) .$ Thus, $U\left( z,\theta ,b_{2}\right) =U\left(
y^{-},\theta ,b_{2}\right) .$ In the uniform-quadratic case, this reduces to 
\begin{equation}
z=2\left( \theta +b_{2}\right) -y^{-}  \label{gap2}
\end{equation}

Combining (\ref{gap1}) and (\ref{gap2}) yields the required inequality.
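Explicitly, (\ref{gap1}) may be rewritten as $z\geq 2\left( \theta
+b_{1}\right) -y^{+},$ and substituting (\ref{gap2}) for $z$ gives 
\[
2\left( \theta +b_{2}\right) -y^{-}\geq 2\left( \theta +b_{1}\right) -y^{+},
\]
which rearranges to $y^{-}-y^{+}\leq 2\left( b_{2}-b_{1}\right) =2\left|
b_{2}-b_{1}\right| $ since $b_{1}<b_{2}.$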

The proof for the case when $b_{1}>b_{2}$ is similar. $\blacksquare \bigskip 
$

Using Lemma \ref{uqfinite}, we are now in a position to state a sufficient
condition for monotonicity of all PBE in case of like biases. We show that
if the first expert is more biased than is the second, \emph{all} equilibria
are monotonic.

\begin{proposition}
\label{uqmono}In the uniform-quadratic case, if $b_{1}\geq b_{2}>0$ then all
PBE are monotonic.
\end{proposition}

\noindent \textbf{Proof. }Let $\theta $ be the largest state at which there
is a downward jump in $Y.$ Such a $\theta $ exists since $Y$ is eventually
monotone (Lemma \ref{evenmono}) and there are only a finite number of
equilibrium actions (Lemma \ref{uqfinite}). Define $y^{-}$ and $y^{+}$ as
follows: 
\[
\lim_{\varepsilon \downarrow 0}Y\left( \theta -\varepsilon \right)
=y^{-}>\lim_{\varepsilon \downarrow 0}Y\left( \theta +\varepsilon \right)
=y^{+} 
\]

As before, in what follows, we will denote $\theta -\varepsilon $ by $\theta
^{-}$ and $\theta +\varepsilon $ by $\theta ^{+}.$

There are two cases to consider. First, suppose that $U\left( y^{-},\theta
,b_{1}\right) \geq U\left( y^{+},\theta ,b_{1}\right) $. Then for $%
\varepsilon >0$ small enough, $U\left( y^{-},\theta ^{+},b_{1}\right)
>U\left( y^{+},\theta ^{+},b_{1}\right) $. In state $\theta ^{+}$ if $1$
sends the message $m_{1}=\mu _{1}\left( \theta ^{-}\right) ,$ then $2$
cannot do better than to send $m_{2}=\mu _{2}\left( \theta ^{-},\mu
_{1}\left( \theta ^{-}\right) \right) $ resulting in $y^{-}.$ This is a
profitable deviation for $1.$

Next, suppose that $U\left( y^{-},\theta ,b_{1}\right) <U\left( y^{+},\theta
,b_{1}\right) .$ Define $\tau =\sup \left\{ \sigma :Y\left( \sigma \right)
=y^{+}\right\} .$ Notice that $\tau <1$ and there exists a $\sigma >\tau $
such that $Y\left( \sigma \right) =y^{-}.$ If $U\left( y^{-},\tau
,b_{1}\right) >U\left( y^{+},\tau ,b_{1}\right) $ then sending message $\mu
_{1}\left( \theta ^{-}\right) $ in state $\tau $ induces action $Y\left(
\theta ^{-}\right) =y^{-}$ which is a profitable deviation. If $U\left(
y^{-},\tau ,b_{1}\right) \leq U\left( y^{+},\tau ,b_{1}\right) ,$ then since 
$y^{\ast }\left( \tau \right) \geq y^{+}$ and thus $y^{\ast }\left( \tau
,b_{1}\right) >y^{+},$ we have in the uniform-quadratic case 
\[
y^{-}-y^{+}\geq 2b_{1} 
\]
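This follows since, with quadratic loss, $U\left( y^{-},\tau ,b_{1}\right)
\leq U\left( y^{+},\tau ,b_{1}\right) $ and $y^{+}<y^{\ast }\left( \tau
,b_{1}\right) =\tau +b_{1}$ imply that $y^{-}$ lies weakly above the
reflection of $y^{+}$ about expert 1's ideal point, 
\[
y^{-}-\left( \tau +b_{1}\right) \geq \left( \tau +b_{1}\right) -y^{+},
\]
and hence $y^{-}-y^{+}\geq 2\left( \tau -y^{+}\right) +2b_{1}\geq 2b_{1}$
because $\tau =y^{\ast }\left( \tau \right) \geq y^{+}$ in the
uniform-quadratic case.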

But this contradicts Lemma \ref{gap}, since $b_{2}>0$ implies $2b_{1}>2\left(
b_{1}-b_{2}\right) =2\left| b_{2}-b_{1}\right| ,$ and thus $Y$ has no
downward jumps.\allowbreak\ $\blacksquare $\newpage

\begin{thebibliography}{99}
\bibitem{Austen-Smith}  Austen-Smith, D. (1990): ``Information Transmission
in Debate,'' \emph{American Journal of Political Science}, 34, pp. 124-152.

\bibitem{Baliga}  Baliga, S., L. Corchon and T. Sj\"{o}str\"{o}m (1997):
``The Theory of Implementation When the Planner is a Player,'' \emph{Journal
of Economic Theory}, 77, pp. 15-33.

\bibitem{Banerjee}  Banerjee, A. and R. Somanathan (1997): ``A Simple Model
of Voice,'' mimeo, MIT.

\bibitem{Crawford}  Crawford, V. and J. Sobel (1982): ``Strategic
Information Transmission,'' \emph{Econometrica}, 50, pp. 1431-1451.

\bibitem{Dew-Tir}  Dewatripont, M. and J. Tirole (1998): ``Advocates,'' 
\emph{Journal of Political Economy}, forthcoming.

\bibitem{Farrell}  Farrell, J. (1993): ``Meaning and Credibility in Cheap
Talk Games,'' \emph{Games and Economic Behavior}, 5, pp. 514-531.

\bibitem{Farrell and Rabin}  Farrell, J. and M. Rabin (1996): ``Cheap
Talk,'' \emph{Journal of Economic Perspectives}, 10, pp. 103-118.

\bibitem{Friedman}  Friedman, E. (1998): ``Public Debate among Experts,''
mimeo, Northwestern University.

\bibitem{G-K1}  Gilligan, T. and K. Krehbiel (1989): ``Asymmetric
Information and Legislative Rules with a Heterogeneous Committee,'' \emph{%
American Journal of Political Science}, 33, pp. 459-490.

\bibitem{Green and Stokey}  Green, J. and N. Stokey (1980): ``Two-person
Games of Information Transmission,'' mimeo, Harvard and Northwestern.

\bibitem{Matthews}  Matthews, S., M. Okuno-Fujiwara and A. Postlewaite
(1991): ``Refining Cheap Talk Equilibria,'' \emph{Journal of Economic Theory}%
, 55, pp. 247-273.

\bibitem{Milgrom}  Milgrom, P. and J. Roberts (1986): ``Relying on the
Information of Interested Parties,'' \emph{RAND Journal of Economics}, 17,
pp. 350-391.

\bibitem{Morgan}  Morgan, J. and P. Stocken (1998): ``An Analysis of Stock
Recommendations,'' mimeo, Princeton University.

\bibitem{Morris}  Morris, S. (1997): ``An Instrumental Theory of Political
Correctness,'' mimeo, University of Pennsylvania.

\bibitem{Otta}  Ottaviani, M. and P. Sorensen (1997): ``Information
Aggregation in Debate,'' mimeo, University College, London.

\bibitem{Shin}  Shin, H. (1994): ``The Burden of Proof in a Game of
Persuasion,'' \emph{Journal of Economic Theory}, 64, pp. 253-264.

\bibitem{Sobel}  Sobel, J. (1985): ``A Theory of Credibility,'' \emph{Review
of Economic Studies}, 52, pp. 557-573.
\end{thebibliography}

\end{document}
