%Paper: ewp-game/9703001
%From: shalev@core.ucl.ac.be
%Date: Tue, 11 Mar 97 08:02:22 CST
%Date (revised): Tue, 11 Mar 97 11:28:17 CST
%Date (revised): Mon, 17 Mar 97 11:19:18 CST
%Date (revised): Sat, 19 Jul 97 09:59:02 CDT

% 07/02/97 - Jonny Shalev LAE1.TEX
% 10/02/97 - v0.02
% 11/02/97 - v0.03
% 11/02/97 - v0.04 - proof of existence from Mertens
% 11/02/97 - v0.05 - better intro
% 12/02/97 - v0.06 - more on intro, Rabin,...
% 13/02/97 - v0.07 - women/men...
% 23/02/97 - v0.08 - examples..
% 24/02/97 - v0.09 - comp. stat.
% 24/02/97 - v0.10 - abstract
% 24/02/97 - v0.11 
% 25/02/97 - v0.12 - zero?-sum
% 25/02/97 - v0.13 - psych.games
% 25/02/97 - v0.14 - myopic / non-myopic
% 25/02/97 - v0.15 - proof for myopic=non-myopic etc
% 27/02/97 - v0.16 - changes 
% 27/02/97 - v0.17 - little changes
% 28/02/97 - v0.18 - allais paradox
% 10/03/97 - v1.00 - with Massimo's comments for CORE Discussion paper.
% 11/03/97 - v1.01 - with figure
% 11/03/97 - v1.02 - small update
% 14/03/97 - v1.03 - references to Gul and Dekel.
% 11/07/97 - v2.01 - LAE2.TEX - redo for extensive games
% 12/07/97 - v2.02 - more
% 14/07/97 - v2.03 - changes - moved lemma, more "extensive form"
% 15/07/97 - v2.04 - extensive/normal changes - remove most of GPS...
% 16/07/97 - v2.05 - starting with the problem of non-existence of NMLAE
% 17/07/97 - v2.06 - small changes all through

%\documentstyle[12pt,fleqn,twoside]{article}    % Specifies the document style.
%\documentstyle[bezier,emlines2,12pt,fleqn,titlepage]{article} 
\documentstyle[12pt,fleqn]{article} 
%\documentstyle[12pt,fleqn,titlepage]{article} %%%For discussion paper

%\newcommand{\newblock}{}           %ignore \newblock in bbl
\setlength{\evensidemargin}{0in}
\setlength{\oddsidemargin}{0in}
\setlength{\textwidth}{6.25in}
%\setlength{\textheight}{8.50in}
\setlength{\textheight}{9.00in}
%\setlength{\textheight}{9.50in}
\setlength{\topmargin}{0in}
%\setlength{\headheight}{0.3in}
\setlength{\headheight}{0in}
%\setlength{\headsep}{0.3in}
\setlength{\headsep}{0in}
\setlength{\parskip}{\medskipamount}
\addtolength{\baselineskip}{.5\baselineskip}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%  Line Spacing (e.g., \ls{1} for single, \ls{2} for double, even
%% \ls{1.5})
\newcommand{\ls}[1]
   {\dimen0=\fontdimen6\the\font
    \lineskip=#1\dimen0
    \advance\lineskip.5\fontdimen5\the\font
    \advance\lineskip-\dimen0
    \lineskiplimit=.9\lineskip
    \baselineskip=\lineskip
    \advance\baselineskip\dimen0
    \normallineskip\lineskip
    \normallineskiplimit\lineskiplimit
    \normalbaselineskip\baselineskip
    \ignorespaces
   }
%%%%%%%%%%%%%%%%%%%%%%%
\newtheorem{theorem}{Theorem} %%%%[section]
\newtheorem{lemma}{Lemma}%%%%%%[section]
\newtheorem{cor}{Corollary}%%%%%[section]
\newtheorem{claim}{Claim}
\newtheorem{proposition}{Proposition}%%%%%%[section]
\newtheorem{conjecture}{Conjecture}
\newtheorem{example}{Example}
\newtheorem{definition}{Definition}
\newcommand{\lora}{\longrightarrow}
\newcommand{\Lora}{\Longrightarrow}
\newcommand{\Lola}{\Longleftarrow}
\newcommand{\ovl}{\overline}
\newcommand{\qed}{\hspace*{\fill}~\rule{1ex}{1ex}}
\newcommand{\half}{\frac{1}{2}}
\newcommand{\wrt}{with respect to }
\newcommand{\wlg}{without loss of generality }
\newcommand{\la}{\lambda}
\newcommand{\al}{\alpha}
\newcommand{\be}{\beta}
\newcommand{\de}{\delta}
\newcommand{\ep}{\epsilon}
\newcommand{\varep}{\varepsilon}
\newcommand{\sig}{\sigma}
\newcommand{\Sig}{\Sigma}
\newcommand{\Ral}{I \hspace{-0.26em}R}
\newcommand{\lsls}{\ls{1.5}}
\newcommand{\scri}{{\cal I}}
\newcommand{\olr}{\overline{r}}
\newcommand{\ulr}{\underline{r}}
\newcommand{\lae}{loss-aversion equilibrium}
\newcommand{\laes}{loss-aversion equilibria}

%*********************************************************************
%##### start #####
%*********************************************************************
\title{Loss Aversion Equilibrium
   \thanks{Version 2.06, 17/07/97 (First version - 2/97).}   
   \thanks{I am grateful to 
   Jean-Fran\c{c}ois Mertens, 
   Massimo Morelli,
   Heracles Polemarchakis 
   and
   Dov Samet
   for helpful discussions.
   A previous version of this paper appeared as CORE discussion paper
   number 9723.
   }
} %end title
%*********************************************************************

\author{
     {\Large \bf Jonathan Shalev} 
     \thanks{
     CORE, 34 voie du Roman Pays, B-1348 Louvain-la-Neuve, Belgium.
     Fax: +32-10-474301, Phone: +32-10-478186,
     E-mail: SHALEV@CORE.UCL.AC.BE.
     } 
} % end author
 
%%%\date{}   
\begin{document}           % End of preamble and beginning of text.
\maketitle                 % Produces the title.

%*******************************************************************
% Abstract
%*******************************************************************
\begin{abstract}
\lsls                
The Nash equilibrium solution concept for games is
based on the assumption of expected utility maximization. 
Reference dependent utility
functions (in which utility is determined not only by an outcome, but
also by the relationship of the outcome to a reference point)
are a better predictor of behavior than expected
utility. In particular, loss aversion is an important element of such
utility functions.

We extend games to include loss aversion
characteristics of the players. We define
two types of loss-aversion equilibrium, a
solution concept endogenizing reference points.
The two types reflect different types of updating of reference points
during the game.
In equilibrium, reference points emerge as expressions of 
anticipation which are fulfilled.

We show existence of myopic loss-aversion equilibrium
for any extended game, and compare it to
Nash equilibrium. Comparative statics show that an increase in loss
aversion of one player can affect her and other players'
payoffs in different directions. 
\\ %
{\bf Keywords}: loss aversion, reference dependence, equilibrium.
\\ %
JEL Classification: {\bf C72}.
\end{abstract}

%************************************************************
\section{Introduction}  
%************************************************************
\lsls        % FOR 1.5 SPACING IN ARTICLE

Expected utility dominates the analysis of game-theoretic situations,
despite overwhelming evidence that it fails to adequately describe or
predict human behavior. 
Kahneman and Tversky's (1979) prospect theory proposes an alternative
to expected utility in which outcomes are evaluated with respect to a
reference point. Such {\em reference dependent} utility functions are
successful in explaining many systematic deviations from the
maximization of expected utility. 
Rabin (1996) writes that ``reference dependence deserves to be, and
is gradually becoming, an important part of economic modeling.''

The most striking result of the investigation of reference-dependent
utility functions is the existence of loss aversion.
Experimental work in both the psychological and the economic
literature suggests that people are motivated to minimize losses
(relative to a reference point) much more than they are motivated to
maximize gains. For example, Fishburn and Kochenberger (1979)
empirically assessed utility functions over changes in wealth. They
found that the slope of the utility function below the reference
point was on average almost five times as steep as the slope above
the reference point.
Other examples emphasizing the different treatment of losses and
gains (and implicitly or explicitly implying reference dependence)
are 
De Dreu, Emans and Van de Vliert~(1992),
Kahneman and Tversky (1979),
Kahneman, Knetsch and Thaler~(1990,~1991), 
Kramer~(1989), 
Taylor~(1991),
and Tversky and Kahneman~(1992). 
Gul~(1991) axiomatizes disappointment aversion, which is closely
related to loss aversion. Gul's formula is the one obtained when
the reference point for a lottery is 
based on the evaluation of the lottery itself. 
This is the path we take in the definition of consistent reference
points in Section~\ref{sec:model}.

The traditional definition of games ignores 
the possibility of reference dependent
utility functions, assigning for each player 
a single number to represent each possible (pure) outcome resulting
from a profile of pure strategies.
These numbers are the von Neumann-Morgenstern utilities
of the players for the outcome given by the strategy profile. 
For any pair of lotteries over
outcomes, each player is assumed to 
prefer the lottery giving her a higher expected
utility. This embodies the risk-aversion characteristics of the
players, but not the loss-aversion characteristics, as can be seen
from the following example. Assume that the possible (utility) payoffs are 0,
2, and 4. With expected utility,
the player is assumed to be indifferent between a lottery
giving probability 0.5 each of the outcomes 0 and 4, and receiving
the outcome 2 for sure. Assume the player was indifferent (ex
ante) and chose the lottery. If the outcome was 0, ex post the player
may ``suffer'' from the effects of loss aversion if her reference point
was above 0. The utility of the outcome may therefore be lower than it
would have been had the same outcome been expected for sure rather
than received as part of a lottery, in which case the reference
point would have been zero.
The loss-aversion characteristics cannot simply be embodied
in the payoff numbers of the game,
because the utility of an outcome in a lottery depends on the reference
point, which in turn usually depends on all the possible outcomes of the
lottery. Thus an outcome may have different utility values
in different lotteries of which it is a component.
An example where reference dependence can help is the Allais~(1953) 
paradox, which demonstrates a systematic violation of expected
utility maximization. 
Using a reference dependent utility function with loss aversion
provides a robust justification for the modal choices in the
Allais paradox, as demonstrated by Example~\ref{ex:allais}
in Section~\ref{sec:examples}.

We extend the analysis of games to incorporate
reference dependence and loss aversion. We first
give a formula, based on experimental results, that 
systematically relates outcomes
and reference points to utility. We assume an underlying utility
function that translates outcomes into values, and a loss aversion
coefficient for each player that captures her level of loss aversion.
The values of outcomes
are modified according to the reference point 
(and whether they are gains or losses) and the loss-aversion
coefficient to give the final utility of the 
outcome for the player
with respect to the reference point. The players' preferences 
over lotteries are assumed to be
represented by the expectation of these final utilities (which depend
on the reference point).
In our solution concept we implicitly
assume that the loss-aversion coefficients of all players
are common knowledge.
While this may seem unrealistic, it is in fact no more so than the
standard assumption that the utilities of all possible outcomes for
all players are common knowledge.

Thus, given an {\em extended game}, which includes a
game and a loss aversion coefficient for each player, 
and given reference points for all the players, 
we can transform the game into a new (standard) game with
final utilities. This new game could be analyzed in the standard
fashion. However, this method sidesteps the important question of the
significance of the reference points. In experimental situations the
reference points are manipulated by the experimenters according to
the framing of the outcomes. In contrast, in real life the reference
points are ``manipulated'' by experiences and anticipations. 
Kahneman (1992) provides a review of much relevant 
work on reference points.
We endogenize the reference points into the solution concept,
and in a solution the utility of the 
reference points will be equal to the utility of the outcomes. This
captures the ``anticipation'' characteristic of the reference points,
that they represent beliefs about the outcome.
This endogenization evaluates the utility of outcomes with the same
function as the local utility function given in Gul (1991), 
which is a special
case of Dekel's (1986) characterization of preferences with a
weakening of the independence axiom.

We define a {\em loss-aversion equilibrium} 
as a strategy profile in which
for each player the expected outcome (using loss aversion evaluation
and thus giving higher weight to losses than to gains)
is equal to her reference point,
and no unilateral deviation of a player from this strategy can
increase her utility. 
We define two types of \lae:
myopic and non-myopic.
The term myopia refers to the updating of reference points as
situations change. Since in simultaneous games there is no time
involved, the two notions are equivalent for such games, as we show
in Section~\ref{sec:results}. 
In general the two notions can differ in extensive form
games, as shown by Examples~\ref{ex:extensive}
and~\ref{ex:extensive2} in Section~\ref{sec:examples}.
The reference points have a dual interpretation in a 
loss aversion equilibrium:
they are used to evaluate payoffs
(given values and loss aversion coefficients), and these same
reference points are equal to the expectation of the evaluated
payoffs for each player.

%Geanakoplos, Pearce, and Stachetti (1989) defined and investigated
%psychological games, in which payoffs are a function of both actions
%and beliefs. In Section \ref{sec:psychgames} we show how to define a
%psychological game based on an extended game. In such a game the
%reference point is a function of the beliefs. A psychological Nash
%equilibrium of this game is a 
%CHECK CHECK CHECK ???non-myopic loss-aversion equilibrium of the
%extended game. DID GPS DO EXTENSIVE FORM GAMES AND WHAT HAPPENS
%THERE???

The following results are proved in Section \ref{sec:results}.
Proposition~\ref{pr:myopicnonmyopic} states that for simultaneous games,
the set of myopic \laes\ is equal to the set of non-myopic \laes.
We show in Proposition~\ref{pr:pseq}
that any pure strategy Nash equilibrium of a game 
with perfect information is also a loss
aversion equilibrium (both myopic and non-myopic)
of any corresponding extended game. 
We prove existence of myopic loss-aversion equilibria for any extended game
(Proposition~\ref{pr:exist}).

If none of the players is loss averse then the loss aversion equilibria
of an extended game coincide with the Nash equilibria of the
underlying standard game.

Loss-aversion equilibrium, similarly to Nash
equilibrium, does not select a unique solution for each game, but a
set that may contain multiple elements. 
The number of loss aversion
equilibria in an extended game may be either higher, lower, or the
same as the number of Nash equilibria of the corresponding game
before the extension (i.e. with no loss aversion). Examples of all
these cases are given, along with other
examples of games and their loss-aversion equilibria,
in Section \ref{sec:examples}. 

In Section \ref{sec:compstat}
we perform some comparative statics, and show that an increase in
loss aversion of a player can affect both her and other players'
payoffs in both directions. 
We conclude in Section~\ref{sec:tests}
with some thoughts about possible experimental tests of the
theoretical results of the paper.

%*****************************************************************
\section{The Model}   %2
%*****************************************************************
\label{sec:model}

We deal mainly with games in extensive form. A 
formulation of games in extensive form can be found in Chapter~3 of
Fudenberg and Tirole~(1991).
We denote a game by $G$. The set of 
{\em players} is $\scri$, which we take to be a finite set, 
the {\em pure-strategy space} is $S_i$ for each player $i$, and {\em
payoff functions} $u_i$ give player $i$'s von
Neumann-Morgenstern utility $u_i(s)$ for each pure strategy profile
$s\in S$, where $S=\prod_{i\in\scri}S_i$.
We assume that these utilities $u_i(s)$ are the basic values of
the outcomes%
\ls{1}%
\footnote{The basic value of an outcome is
the utility of the outcome when the reference point is 
equal to the outcome.},
\lsls
which should then be modified according to their relation
to the reference points and the loss aversion characteristics of the
players, as explained in the next paragraphs. The outcomes are
assumed to be pure outcomes and not lotteries. This 
is without loss of generality, as any lottery over pure
outcomes can be represented by an additional level in the tree 
and a move by nature.

An {\em extended game} 
$(G,(\la_i)_{i\in\scri})$
has an additional element, the {\em loss-aversion
coefficients} of the players. For player $i$, $\la_i \in \Ral_+$
specifies the player's degree of loss aversion. 
Higher values of $\la_i$ represent more loss aversion. 
A value of $\la_i=0$ characterizes
an expected utility maximizer (player $i$'s 
utility function is not reference dependent if $\la_i=0$).
Given a reference
point $r_i \in \Ral$ and a basic utility value 
$x_i \in \Ral$,
the final (loss-aversion) utility of the player is given by
\begin{equation}
\label{eq:la}
   v_i(x_i,r_i)=
     \left\{ 
       \begin{array}{ll}
         x_i & \mbox{if } x_i \geq r_i \\
         x_i-\la_i (r_i-x_i) & \mbox{if } x_i<r_i
       \end{array}
     \right..
\end{equation}
The utility function given by (\ref{eq:la}) is similar to the value
function found experimentally by Tversky and Kahneman~(1992) for
monetary prospects. Tversky and Kahneman found that the value
function (when the reference point is zero) has the approximate form
$x^\al$ for $x\geq0$ and $-\la(-x)^\al$ for $x<0$. They found the
median values of $\al$ and $\la$ to be 0.88 and 2.25 respectively.
By using von Neumann-Morgenstern utility values instead of monetary
values and using Formula~(\ref{eq:la}), 
we retain the
loss-aversion aspect of the utility function, which is that the 
function is steeper for losses (relative to the reference point) than
for gains. 
However, we cannot get the ``S''-shaped value function of prospect
theory.
Risk aversion and risk seeking are both included as possibilities of
our specification, but not as a function of the reference point. 
We allow $\la$ to vary for different players to reflect
the heterogeneity of loss aversion. 
The existence and range of this heterogeneity have not been directly
investigated empirically,
but as discussed in Section~\ref{sec:tests} there is evidence
that such heterogeneity exists.
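As a computational illustration, Formula~(\ref{eq:la}) can be sketched in a few lines of Python (the function name \texttt{v} and the sample numbers are ours, not part of the model):

```python
def v(x, r, lam):
    """Final loss-aversion utility of Formula (1): basic value x,
    reference point r, loss-aversion coefficient lam >= 0."""
    if x >= r:
        return x                  # gains and neutral outcomes are unchanged
    return x - lam * (r - x)      # losses are penalized at the extra rate lam

# With lam = 0 the player is a standard expected-utility maximizer:
print(v(0, 2, 0))       # 0
# With lam = 2.25 (Tversky and Kahneman's median estimate), an outcome
# of 0 against a reference point of 2 is evaluated as 0 - 2.25*2 = -4.5:
print(v(0, 2, 2.25))    # -4.5
print(v(4, 2, 2.25))    # 4: outcomes at or above the reference point keep their value
```

Note that $v_i$ is strictly increasing in $x_i$ for any fixed $r_i$, with slope $1$ above the reference point and $1+\la_i$ below it.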

For exogenously given reference points, we can transform
an extended game into a standard game by evaluating
the utility of each outcome of the game according to 
Formula~(\ref{eq:la}).
Given an extended game
$(G,(\la_i)_{i\in\scri})$
and a vector of reference points $r \in \Ral^{\scri}$,
we define the transformation (to a standard game)
$L(G,\la,r)=(G')$, where $G'$ differs from $G$ only in the utility of
the outcomes for the players. The utility of each outcome for each
player is transformed according to Formula~(\ref{eq:la}), using the
appropriate reference points and loss-aversion coefficients. Thus,
if $S$ is the set of pure strategy profiles, then
for $s \in S$,
$u'_i(s)=v_i(u_i(s),r_i)$.
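For a game in strategic form, the transformation $L(G,\la,r)$ amounts to applying Formula~(\ref{eq:la}) entry by entry to each player's payoffs. A minimal Python sketch, using a made-up $2\times2$ game (all names and numbers are ours):

```python
def v(x, r, lam):
    """Formula (1): loss-aversion utility of basic value x given r and lam."""
    return x if x >= r else x - lam * (r - x)

def transform(payoffs, r, lams):
    """L(G, lam, r): evaluate each player's payoff at every pure
    strategy profile against her own reference point."""
    return {s: tuple(v(x, r[i], lams[i]) for i, x in enumerate(u))
            for s, u in payoffs.items()}

# A 2x2 game: profiles map to (player 1, player 2) basic values.
G = {('T', 'L'): (3, 3), ('T', 'R'): (0, 4),
     ('B', 'L'): (4, 0), ('B', 'R'): (1, 1)}
G_prime = transform(G, r=(1, 1), lams=(2.0, 2.0))
# The 0 payoffs fall below both reference points and become 0 - 2*(1-0) = -2.
```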

We extend the utility function to include mixed strategies as
follows. Denote by $\Sig_i$ the set of player $i$'s mixed strategies.
Denote $\Sig=\prod_{i\in\scri}\Sig_i$. If a mixed strategy 
profile $\sig\in\Sig$
gives probability $p_\sig(s)$ to each pure action
profile $s \in S$, then the utility 
of player $i$ from $\sig$ when she
has a reference point $r_i$ is given by
\begin{equation}
  w_i(\sig,r_i)=\sum_{s\in S} p_\sig(s) v_i(u_i(s),r_i).
\end{equation}
Note that the payoffs for player $i$ are
first defined on the outcomes (as a function of the player's
reference point) and then defined for mixtures. This sequence 
is important for
extended games, as the payoffs are not linear in the reference
points, and the reference point is used to evaluate each pure
outcome, and not the expected value of the outcome. Note also the
implicit assumption that the reference point is fixed. This
assumption may not be valid when we deal with extensive games that
have more than one information set for a player,
as the reference points might change as the
player receives new information about the actions of the other
players (possibly including moves by nature). We address this
issue below, when we distinguish between the two types of
loss-aversion equilibria.
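The order of operations matters: each pure outcome is evaluated against the reference point first, and expectations are taken only afterwards. A short Python sketch of $w_i$, reusing the 50/50 lottery over 0 and 4 from the introduction (the coefficient 2.25 is Tversky and Kahneman's median estimate; function names are ours):

```python
def v(x, r, lam):
    """Formula (1): loss-aversion utility of basic value x."""
    return x if x >= r else x - lam * (r - x)

def w(lottery, r, lam):
    """Equation (2): expected loss-aversion utility of a lottery,
    given as (probability, basic value) pairs, for reference point r."""
    return sum(p * v(x, r, lam) for p, x in lottery)

fifty_fifty = [(0.5, 0.0), (0.5, 4.0)]
# Against reference point 2, the lottery is worth less than its expected
# basic value of 2, because the loss branch is overweighted:
print(w(fifty_fifty, 2.0, 2.25))   # 0.5*(-4.5) + 0.5*4 = -0.25
print(w(fifty_fifty, 2.0, 0.0))    # 2.0: no loss aversion recovers expected utility
```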

\begin{definition} %1
We say a reference point is {\em consistent} with a lottery if the
utility evaluation of the lottery with respect to the reference point is
equal to the reference point. Formally, for a lottery $x$ giving
outcomes $x^1,\ldots,x^n$ with respective probabilities
$p^1,\ldots,p^n$, a reference point $r_i$ is consistent for a player
$i$ with loss-aversion coefficient $\la_i$ if 
\begin{equation}
   r_i=\sum_{j=1}^n p^j v_i(u_i(x^j),r_i). 
\end{equation}
\end{definition}
The value of a consistent reference point for a lottery is analogous
to the utility given by the appropriate disappointment averse utility
function as defined in Gul (1991).

For an extended game $(G,\la)$ and a mixed strategy profile 
$\sig \in \Sig$, denote
  $R_i(\sig)=\{r_i\in\Ral ~|~ w_i(\sig,r_i)=r_i \}$. 
This is the set of
reference points that are consistent for player $i$ 
with the lottery over outcomes implied by the strategies $\sig$. 

For an extended game 
$(G,\la)$, define 
$\displaystyle \olr=\max_{i\in\scri}
               \{\max_{s\in S}\{u_i(s)\}\}$ 
   and
$\displaystyle \ulr=\min_{i\in \scri} \{
                    \min_{s\in S}     \{
               v_i(u_i(s),\olr)  \} \}.$ 
Thus, for any lottery over outcomes, if the reference points are all
in the interval $[\ulr,\olr]$, the evaluated utilities of all players
will also be in this interval.
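Because $w_i(\sig,r_i)$ is continuous and non-increasing in $r_i$ while the identity map is strictly increasing, the consistent reference point can be computed numerically by bisection over $[\ulr,\olr]$. A Python sketch under these observations (function names are ours):

```python
def v(x, r, lam):
    """Formula (1)."""
    return x if x >= r else x - lam * (r - x)

def w(lottery, r, lam):
    """Equation (2): lottery is a list of (probability, basic value) pairs."""
    return sum(p * v(x, r, lam) for p, x in lottery)

def consistent_r(lottery, lam, tol=1e-10):
    """The unique r with w(lottery, r) = r, found by bisection.
    The bracket follows the text: hi plays the role of r-bar (the maximal
    basic value), lo that of r-underline (minimal value evaluated at r-bar)."""
    hi = max(x for _, x in lottery)
    lo = min(v(x, hi, lam) for _, x in lottery)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if w(lottery, mid, lam) > mid:
            lo = mid     # w(mid) - mid > 0: the fixed point lies above mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# For the 50/50 lottery over 0 and 4 with lam = 2, solving
# r = 0.5*(-lam*r) + 2 by hand gives r = 4/(2+lam) = 1,
# which the bisection reproduces:
print(consistent_r([(0.5, 0.0), (0.5, 4.0)], 2.0))
```

Note that the consistent reference point (1 here) lies strictly below the expected basic value of the lottery (2), reflecting the extra weight on losses.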


The following lemma shows that for any $i\in\scri$ and any $\sig \in
\Sig$, $R_i(\sig)$ contains a single value:
\begin{lemma}
\label{le:sglval}
  If 
  $(G,(\la_i)_{i \in \scri})$ is an extended game,
  then for all $i\in \scri$, and for all $\sig \in \Sig$, the
  correspondence $R_i(\sig)$ is single-valued, and the value 
  is in the interval $[\ulr,\olr]$.
\end{lemma}
{\bf Proof:}
Take $i \in \scri$ and $\sig \in \Sig$. $w_i(\sig,r_i)$, viewed as a
function of $r_i\in\Ral$,
is non-increasing and continuous.
The following three relations hold, defining
$u_i(\sig)=\sum_{s\in S}p_\sig(s)u_i(s)$:
\begin{eqnarray}
  u_i(\sig) & \in & [\ulr,\olr] \\
  w_i(\sig,\ulr)&=&u_i(\sig) \geq \ulr \\
  w_i(\sig,\olr)&\leq&u_i(\sig) \leq \olr
\end{eqnarray}
Therefore, since $w_i(\sig,r_i)$ is non-increasing and continuous
in $r_i$ while the identity map is strictly increasing,
there exists a unique $r_i^* \in \Ral$ satisfying 
$w_i(\sig,r_i^*)=r_i^*$. Moreover, $r_i^* \in [\ulr,\olr]$.
\qed(Lemma~\ref{le:sglval})

As a consequence of Lemma~\ref{le:sglval} we can define 
$r_i(\sig)$ as a function with the
value of the unique element of $R_i(\sig)$. This function can be
evaluated not just at the root of a game tree, but also
at any information set, and will give the consistent
reference point for a player at that information set, given his
belief that $\sig$ is the strategy profile being played.

\begin{definition} %2
\label{def:myopic}
A strategy profile
$\sig\in\Sig$ is a {\em myopic loss-aversion equilibrium} of $(G,\la)$ 
if there exists $r \in \Ral^{\scri}$ such that 
$\sig$ is a Nash equilibrium of the transformed game $L(G,\la,r)$, 
and the payoff to the players from using $\sig$ in $L(G,\la,r)$ is $r$.
\end{definition}

There are two aspects of myopia in this definition. 
The first is that all evaluation is done at the root of the tree. 
The players do not take into account possible changes of the
reference point as the game proceeds. This might be a reasonable
assumption for situations where reference points adjust slowly
relative to the duration of the game (or the actions are played by
agents of the player, who therefore does not update her expectations
during the play).
The second aspect is that when evaluating a deviation, the
player does not change her reference point, even though the
distribution of outcomes may change if she deviates. Here too, there
are situations where we might consider the reference point to be
fixed and not shift in line with contemplated deviations. 
Kahneman (1992) discusses how multiple reference points might be
used, and suggests that an important problem for future research is how
multiple reference points compete and combine.
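Definition~\ref{def:myopic} can be checked mechanically for small strategic-form games: guess $r$, transform the game, and verify that $\sig$ is a Nash equilibrium of the transformed game whose payoffs equal $r$. A Python sketch on a made-up prisoner's-dilemma-type game (all names and numbers are ours):

```python
def v(x, r, lam):
    """Formula (1)."""
    return x if x >= r else x - lam * (r - x)

def is_myopic_lae(payoffs, rows, cols, s, r, lams):
    """Check Definition 2 for a pure profile s in a two-player game:
    the transformed payoffs at s must equal r, and no unilateral pure
    deviation may be profitable in the transformed game L(G, lam, r)."""
    u = lambda prof, i: v(payoffs[prof][i], r[i], lams[i])
    if (u(s, 0), u(s, 1)) != r:
        return False
    row_ok = all(u(s, 0) >= u((a, s[1]), 0) for a in rows)
    col_ok = all(u(s, 1) >= u((s[0], b), 1) for b in cols)
    return row_ok and col_ok

G = {('C', 'C'): (3, 3), ('C', 'D'): (0, 4),
     ('D', 'C'): (4, 0), ('D', 'D'): (1, 1)}
# (D, D) with reference points (1, 1): since v is increasing in its first
# argument, defection stays dominant after the transformation, so this is
# a myopic loss-aversion equilibrium for any loss-aversion coefficients.
print(is_myopic_lae(G, ['C', 'D'], ['C', 'D'], ('D', 'D'), (1, 1), (2.0, 2.0)))
```

Here the candidate $r$ is simply the Nash payoff vector; in general, existence of a suitable $(\sig,r)$ pair is what the fixed-point argument of the existence proposition below delivers.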

\begin{definition} %3
\label{def:nonmyopic}
A strategy profile $\sig$ is a {\em non-myopic \lae} of $(G,\la)$
if for all $i \in \scri$, all $\sig'_i \in \Sig_i$,
and for all information sets $\mu$ of player $i$
that are reached with positive probability under $\sig$, the evaluation
at $\mu$ satisfies
\begin{equation}
  r_i(\sig) \geq r_i((\sig_{-i},\sig'_i)),
\end{equation}
where $(\sig_{-i},\sig'_i)$ signifies that all players $j \in \scri
\setminus \{i\}$ play $\sig_j$ and player $i$ plays $\sig'_i$.
\end{definition}

We call these
non-myopic \laes, as a player considering a deviation takes
into account an appropriate change in her reference point that is
consistent with her deviation. Evaluation is done at each information
set that might be reached,
so all available information is used when evaluating the
lottery over outcomes implied by the strategies. Non-myopic \laes\ are
therefore appropriate for situations where we would expect reference
points to adjust swiftly (relative to the duration of the game), and
where the players are sophisticated and take into account these
future expected changes in the reference point.

Both these definitions endogenize the reference points into 
the model, and the reference points
serve both as comparison values to determine gains and losses,
and also as anticipation values which are rational, as they 
are reached in an equilibrium.

We show in Section~\ref{sec:results} that the set of myopic \laes\
coincides with the set of non-myopic \laes\ for 
games with only one information set for each player. These are
simultaneous games, whose form is essentially captured by the
strategic-form representation.
The fact that there is no time for adjustment of reference points
gives the intuition for this result.
In such games, all decisions must be
taken before any relevant information about other 
players' decisions or realizations of chance moves is received. 
This equivalence result does not hold in general
for extensive form games as shown by Example~\ref{ex:extensive} in
Section~\ref{sec:examples}.


%*****************************************************************
\section{Results} %3
%*****************************************************************
\label{sec:results}
%
The first proposition we prove shows that non-myopic and myopic \lae\
are identical for games where each player has only one information
set, which is always reached. 
For such games, time is not of the essence. The information
available to a player when she has to choose her action is no
different from the information she had at the root of the game tree.
This proposition shows that the difference between myopic and
non-myopic \laes\ comes from the differences in timing the updating
of reference points, and not from re-evaluating reference points when
considering deviations.

\begin{proposition} %1
\label{pr:myopicnonmyopic}
   For any extended game 
   where each player has exactly one information set, which is
   reached on every path of play (a simultaneous game),
   the set of myopic \laes\ coincides with
   the set of non-myopic \laes.
\end{proposition}
%
{\bf Proof:}
Take an extended game $(G,\la)$ satisfying the requirements of the
proposition. 
From Definition~\ref{def:myopic} and
Lemma~\ref{le:sglval}, the set of myopic \lae\ is the set of
$\sig\in\Sig$ that satisfy
\begin{equation}
\label{eq:myopic}
  w_i(\sig,r_i(\sig)) \geq w_i((\sig_{-i},\sig'_i),r_i(\sig))~~~
       \forall i\in\scri, ~\forall \sig'_i \in \Sig_i.
\end{equation}
From Definition~\ref{def:nonmyopic} and 
Lemma~\ref{le:sglval}, the set of non-myopic
\lae\ is the set of $\sig\in\Sig$ that satisfy
\begin{equation}
\label{eq:nonmyopic}
  w_i(\sig,r_i(\sig)) \geq
          w_i((\sig_{-i},\sig'_i),r_i((\sig_{-i},\sig'_i)))~~~
       \forall i\in\scri, ~\forall \sig'_i \in \Sig_i.
\end{equation}
For this inequality we used the fact that each player has only one
information set, and it is always reached. Therefore, the information
at this point is the same as the player had at the root of the tree.
From the definition of the function $r_i(\sig)$, we have
\begin{equation}
\label{eq:rsig1}
w_i(\sig,r_i(\sig))=r_i(\sig)
       ~~~\forall i\in\scri
\end{equation}
and
\begin{equation}
\label{eq:rsig2}
w_i((\sig_{-i},\sig'_i),r_i((\sig_{-i},\sig'_i)))=r_i((\sig_{-i},\sig'_i))
       ~~~\forall i\in\scri, ~\forall \sig'_i \in \Sig_i.
\end{equation}

We first show that (\ref{eq:nonmyopic}) implies
(\ref{eq:myopic}).
Inequality (\ref{eq:nonmyopic}) together with 
Equations~(\ref{eq:rsig1}) and~(\ref{eq:rsig2}) implies
$r_i(\sig) \geq r_i((\sig_{-i},\sig'_i))$. Since $w_i$ is 
continuous and monotonically non-increasing in its second parameter,
this implies
\begin{equation}
\label{eq:ge1}
   w_i((\sig_{-i},\sig'_i),r_i((\sig_{-i},\sig'_i))) \geq
   w_i((\sig_{-i},\sig'_i),r_i(\sig))
       \forall i\in\scri, ~\forall \sig'_i \in \Sig_i,
\end{equation}
which together with (\ref{eq:nonmyopic}) implies (\ref{eq:myopic}).

We now show the other direction. Inequality (\ref{eq:myopic}) and
Equation~(\ref{eq:rsig1}) imply 
$r_i(\sig)\geq w_i((\sig_{-i},\sig'_i),r_i(\sig))$, 
for all $i\in\scri$ and for all $\sig'_i \in \Sig_i$,
so from continuity
and monotonicity of $w_i$ with respect to its second parameter we 
can conclude that $r_i(\sig) \geq r_i((\sig_{-i},\sig'_i))$, which
implies (\ref{eq:nonmyopic}), using 
Equations~(\ref{eq:rsig1}) and~(\ref{eq:rsig2}).
\qed(Proposition~\ref{pr:myopicnonmyopic})
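The two tests compared in this proof can also be checked numerically over pure deviations: inequality (\ref{eq:myopic}) evaluates the deviation at the fixed equilibrium reference point, while (\ref{eq:nonmyopic}) recomputes a consistent reference point for the deviation. A Python sketch for one player and one deviation (degenerate lotteries stand in for pure strategy profiles; all numbers are a made-up illustration):

```python
def v(x, r, lam):
    """Formula (1)."""
    return x if x >= r else x - lam * (r - x)

def w(lottery, r, lam):
    """Equation (2): lottery is a list of (probability, basic value) pairs."""
    return sum(p * v(x, r, lam) for p, x in lottery)

def consistent_r(lottery, lam, tol=1e-10):
    """Unique fixed point w(lottery, r) = r, by bisection (cf. the lemma above)."""
    hi = max(x for _, x in lottery)
    lo = min(v(x, hi, lam) for _, x in lottery)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if w(lottery, mid, lam) > mid else (lo, mid)
    return 0.5 * (lo + hi)

lam = 2.0
sigma = [(1.0, 1.0)]        # equilibrium profile pays this player the basic value 1
deviation = [(1.0, 0.0)]    # a unilateral deviation pays her 0
r_sigma = consistent_r(sigma, lam)      # = 1: a sure outcome is its own reference point
r_dev = consistent_r(deviation, lam)    # = 0
# Myopic test (5): the deviation is evaluated at the equilibrium reference point.
myopic_ok = w(sigma, r_sigma, lam) >= w(deviation, r_sigma, lam)
# Non-myopic test (6): the deviation carries its own consistent reference point.
nonmyopic_ok = r_sigma >= r_dev
print(myopic_ok, nonmyopic_ok)
```

As the proposition asserts for simultaneous games, the two tests accept and reject the same profiles; here both accept.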

%
The next proposition states that all pure strategy equilibria of an
underlying game with perfect information (no moves by nature)
are \laes\ of any extension of this game.

\begin{proposition} %2
\label{pr:pseq}
  For any game $G$ 
  with perfect information, any pure-strategy Nash equilibrium of $G$ is
  both a myopic and a non-myopic \lae\ of $(G,\la)$ for any $\la$.
\end{proposition}

{\bf Proof:} 
If $\sig$ is a pure-strategy equilibrium giving payoffs 
$x=(x_i)_{i \in \scri}$, then $\sig$ is also a pure-strategy
equilibrium in $L(G,\la,x)$. This is true since any deviation 
from $\sig$ by a player $i$ in $G$ is not profitable, i.e., it gives
no more than $x_i$. Therefore, in $L(G,\la,x)$ it also gives
no more than $x_i$, as the payoffs in $L(G,\la,x)$ are no higher
than in $G$. Thus the outcome of $\sig$, which also gives
$x$ in $L(G,\la,x)$, cannot be improved on
by a unilateral deviation, and $\sig$ is therefore
a myopic loss-aversion equilibrium. To show that the proposition holds also
for non-myopic \laes, note first that with a profile of pure
strategies $\sig$ in a game of perfect information, each information set is
reached with probability one or probability zero. Exactly one
terminal node is reached with probability one, and the payoffs at
this node are exactly the consistent reference points of the players
for the profile $\sig$. If a player has a deviation with a consistent
reference point that gives her more, then the same deviation gives a
higher expected payoff (without loss aversion evaluation) than that
of $\sig$, in contradiction with the assumption that $\sig$ is a Nash
equilibrium of $G$.
\qed (Proposition \ref{pr:pseq})

\begin{proposition} %3
\label{pr:exist}
  For any extended game $(G,\la)$, there exists a myopic loss-aversion
  equilibrium.
\end{proposition}

{\bf Proof:}\footnote{
\ls{1}
This proof was suggested by J-F. Mertens.}
\lsls
%
The proof is by the use of Kakutani's fixed point theorem.
Assume an extended game $(G,\la)$.
%Denote by $S$ the space
%of strategy profiles.
We define the correspondence $f$ from $\Sig \times [\ulr,\olr]^{\scri}$
to itself as follows.
$(\sig',r') \in f(\sig,r)$ if 
$\sig_i'$ is a best response to $\sig_{-i}$ in the game
$L(G,\la,r)$ for all $i \in \scri$, and 
$r_i'$ is the payoff to $i$ from $(\sig_i',\sig_{-i})$ in the game
$L(G,\la,r)$ for all $i \in \scri$.

To apply Kakutani's fixed point theorem we need to show that the
domain is non-empty, compact and convex and that the correspondence is
nonempty, convex valued, and has a closed graph.

Both the strategy space and $[\ulr,\olr]^{\scri}$ are non-empty,
compact and convex, and therefore so is their product. The
correspondence is non-empty as for every $(\sig,r)$ 
and each $i$ there exists
a best response (at least one pure strategy is a best response)
$\sig_i'$ to $\sig_{-i}$ in $L(G,\la,r)$, and taking $r'_i$ as the
payoff from $(\sig_i',\sig_{-i})$ in $L(G,\la,r)$ we have an 
element $(\sig',r')$
in $f(\sig,r)$. If there is more than one best response for
a player $i$, then all give the same payoff, and so does any
convex combination of the best responses, therefore the
correspondence is convex valued.
The correspondence has a closed graph, since the payoffs are
continuous in $r$ and the best-response correspondence has a
closed graph.

Therefore, applying Kakutani's fixed point theorem, there exist 
$\sig^*$ and $r^*$ such that $(\sig^*,r^*) \in f(\sig^*,r^*)$. 
From the definition of $f$, we have that $\sig^*$ is a 
myopic loss-aversion
equilibrium of $(G,\la)$, giving payoffs of $r^*$.
\qed(Proposition~\ref{pr:exist})

The existence of non-myopic \lae\ is not guaranteed for
non-simultaneous games. An example of a game with no non-myopic \laes\ 
is Example~\ref{ex:extensive2} in Section~\ref{sec:examples}. 
This might lead us to believe that we have used too restrictive a
definition, as non-emptiness is surely a desirable property of any
solution concept.
The basic cause of the non-existence is as follows. The value of a
final outcome that is possible given a strategy profile could be
different when evaluated as part of different lotteries (depending on
how disappointing the outcome is, relative to the entire lottery).
Requiring that the strategy of each player
be preferred to any other strategy (taking the strategies of the
other players as fixed) at more than one information set can lead to
no strategies being preferred at all information sets. Without an
appropriate notion of timing of the game, it is difficult to suggest
how to balance different preferences at different nodes. 
This problem
is sidestepped with the myopic concept, as time doesn't enter in the
evaluation and the reference point is fixed (as a function of the
strategy profile) before play starts. 
Another approach is that of Ferreira, Gilboa and Maschler~(1995),
which deals with games with preferences that change during the play
of the game. They give a solution concept (credible equilibrium) that
always exists. However, this concept makes assumptions about the
updating of preferences that are not suitable for the
reference-dependent utility function we use.

{\bf Remark:}
For any extended game $(G,\la)$ with
$\la_i=0$ for all $i\in \scri$, the set of loss aversion
equilibria (either myopic or non-myopic) 
of $(G,\la)$ coincides with the set of Nash equilibria of
$G$. This is obvious from the definitions, as the evaluation of any
lottery for a player with $\la_i=0$ 
gives the expected utility of this lottery.

%*****************************************************************
\section{Examples} % 4
%*****************************************************************
\label{sec:examples}

This section provides some examples that clarify and exemplify the
previous sections. Example~\ref{ex:allais} demonstrates that the
Allais paradox is no longer a paradox when loss aversion is taken
into account. Examples~\ref{ex:bos} and \ref{ex:mp} examine two
classical games and show how different levels of loss aversion
affect the equilibria of these games. Examples~\ref{ex:ne1} and 
\ref{ex:ne2} are extreme cases showing
that the number of Nash equilibria can be either greater or smaller
than the number of loss aversion equilibria. In Example~\ref{ex:ne1}
there is a unique loss aversion equilibrium and a continuum of Nash
equilibria. Example~\ref{ex:ne2} gives the opposite situation, with a
unique Nash equilibrium and a continuum of loss aversion equilibria.
Example~\ref{ex:extensive} is an extensive form game where there
exists a unique non-myopic loss aversion equilibrium and a unique
myopic loss-aversion equilibrium, but they are completely different.
Except for Examples~\ref{ex:extensive} and~\ref{ex:extensive2}, 
all the examples are given in
normal form, with the understanding that they represent simultaneous
games, where each player has a single information set.

{\samepage
  \begin{example}
  \label{ex:allais}
  \end{example}
  We start with a single-player decision problem, the Allais paradox,
}%end of \samepage
  from Allais (1953), and demonstrate that if we assume subjects are
  loss-averse there is no paradox. 
This was done in Gul (1991), with disappointment aversion.
Subjects were presented with two
pairs of lotteries, and asked to choose one from each pair.
The first pair, lotteries A and B, are the following (in Francs):

%\begin{description}
{\bf Lottery A} \\
\[  \begin{array}{l|ccc}
    \mbox{Outcome}      & 0    & 100m & 500m \\
    \mbox{Probability}  & 0    & 1    & 0
  \end{array} \]
{\bf Lottery B} \\
\[  \begin{array}{l|ccc}
    \mbox{Outcome}      & 0    & 100m & 500m \\
    \mbox{Probability}  & 0.01 & 0.89 & 0.1
  \end{array} \]

The second pair, lotteries C and D, are the following: \\
{\bf Lottery C} \\
\[  \begin{array}{l|ccc}
    \mbox{Outcome}      & 0    & 100m & 500m \\
    \mbox{Probability}  & 0.89 & 0.11 & 0
  \end{array} \] 
{\bf Lottery D}  \\
\[  \begin{array}{l|ccc}
    \mbox{Outcome}      & 0    & 100m & 500m \\
    \mbox{Probability}  & 0.9  & 0    & 0.1
  \end{array} \]

The modal choice was lotteries A and D, even though this selection 
is not consistent with expected utility maximization for any utility values
of the outcomes. However, for a subject with a loss-aversion
coefficient in the range $(\frac{1}{9},10)$, which is the case with
virtually all experimental results, this selection of lotteries does not
cause any contradictions. 

We assume that a subject will evaluate a lottery using a reference
point that is consistent with the lottery (which is unique from
Lemma~\ref{le:sglval}). Thus, a lottery that has a higher consistent
reference point is preferred over a lottery with a lower consistent
reference point. 
If the values of the outcomes are $0$ for
$0m$, $1$ for $500m$ and $0.9$ for $100m$, then any subject with a
loss aversion coefficient above $\frac 19$ will choose A over B, and
any subject with a loss aversion coefficient below 10 will choose D
over C. 
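
These computations can be checked numerically. The following sketch (ours, not part of the paper) computes the consistent reference point of a lottery by bisection, assuming the evaluation $u(x \mid r) = x - \la \max(r-x,0)$ and the consistency condition $r = {\rm E}[u(X \mid r)]$; the outcome values $0$, $0.9$ and $1$ are the ones that reproduce the thresholds $\frac 19$ and $10$ exactly.

```python
# A sketch (not from the paper): consistent reference point of a finite
# lottery, with u(x | r) = x - lam*max(r - x, 0) and r = E[u(X | r)].
def consistent_ref(lottery, lam):
    """lottery: list of (outcome value, probability) pairs."""
    lo = min(x for x, _ in lottery)
    hi = max(x for x, _ in lottery)
    for _ in range(200):  # E[u(X | r)] - r is decreasing in r: bisect
        r = (lo + hi) / 2
        if sum(p * (x - lam * max(r - x, 0)) for x, p in lottery) > r:
            lo = r
        else:
            hi = r
    return (lo + hi) / 2

# Assumed outcome values: 0 for 0, 0.9 for 100m, 1 for 500m (chosen so
# that the thresholds 1/9 and 10 come out exactly).
A = [(0.9, 1.0)]
B = [(0.0, 0.01), (0.9, 0.89), (1.0, 0.10)]
C = [(0.0, 0.89), (0.9, 0.11)]
D = [(0.0, 0.90), (1.0, 0.10)]

lam = 1.0   # 1/9 < 1 < 10: the modal (Allais) choices A and D
assert consistent_ref(A, lam) > consistent_ref(B, lam)
assert consistent_ref(D, lam) > consistent_ref(C, lam)
lam = 0.05  # below 1/9: B is preferred to A
assert consistent_ref(B, lam) > consistent_ref(A, lam)
```

For $\la=1$, for instance, the computed reference points are $0.9$ for A against roughly $0.892$ for B, and roughly $0.0526$ for D against $0.0524$ for C.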


The remaining examples are of games with more than one player. 
The elements in each square
of the payoff matrices in the following examples are the
values of the outcomes of the relevant pure strategies. These values
are transformed into final utilities 
(for given reference points) according to Formula~(\ref{eq:la}).

{\samepage
\begin{example}
\label{ex:bos}
  The Battle of the Sexes
\end{example}
\hspace{2cm}
\begin{tabular}{c|c|c|}
      & Boxing & Ballet \\
 \hline
 Boxing  & 2,~1 & 0,~0 \\
 \hline
 Ballet & 0,~0 & 1,~2 \\
 \hline
\end{tabular}
\\
}%end of \samepage

The battle of the sexes has two pure-strategy Nash equilibria,
(Boxing,Boxing) and (Ballet,Ballet), and 
one mixed strategy equilibrium with each player playing the strategy
with his/her most preferred outcome with probability $\frac{2}{3}$.

Both pure strategy equilibria are also loss aversion equilibria
(Proposition~\ref{pr:pseq}).
There is also a mixed strategy
equilibrium. It can be calculated by solving the following equations,
which specify that each player is indifferent between his/her two
strategies, given his/her reference point, and that the reference
point equals the expected loss-adjusted utility of the resulting
lottery. Here $p$ denotes the probability that player 1 plays Boxing
and $q$ the probability that player 2 plays Boxing.

\begin{equation}
 2 q + (-\la_1 r_1)(1-q) = (-\la_1 r_1) q + (1-q) = r_1
\end{equation}
\begin{equation}
 p + (-\la_2 r_2)(1-p) = (-\la_2 r_2) p + 2(1-p) = r_2
\end{equation}
With the restriction that $p$ and $q$ are in $[0,1]$, there
is a unique solution to these equations, 
which is given (for $\la \gg 0$) by%
\ls{1}%
\footnote{There is a unique solution also for the case where
$\la_i=0$, which is the limit of Equations (\ref{eq:p})-(\ref{eq:r})
as $\la_i$ tends to zero.}
\lsls
\begin{eqnarray}
  \label{eq:p}
  p&=&1-\frac{-3-2\la_2 + \sqrt{9+8\la_2(2+\la_2)}}{2 \la_2} \\
  q&=&\frac{-3-2\la_1 + \sqrt{9+8\la_1(2+\la_1)}}{2 \la_1} \\
  \label{eq:r}
  r_i&=&\frac{-3+\sqrt{9+8\la_i(2+\la_i)}}{2\la_i(2+\la_i)},~~~
                                                    i=1,2.
\end{eqnarray}
It can be verified that $p$ is decreasing as a function of $\la_2$
and $q$ is increasing as a function of $\la_1$. Each $r_i$ is
decreasing as a function of $\la_i$. This means that a player who
becomes more loss averse has a higher probability of receiving
his/her preferred outcome in the mixed-strategy equilibrium, but
receives a lower utility. A change in a player's loss-aversion
coefficient does not affect the other player's payoff in the mixed
strategy loss-aversion equilibrium.
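
As a quick numerical check (a sketch of ours, for $\la_1=\la_2=1$), the closed-form solution can be substituted back into the indifference conditions:

```python
from math import sqrt

# Substitute the closed-form p, q, r_i back into the indifference
# conditions of the extended battle of the sexes (here la1 = la2 = 1).
la1 = la2 = 1.0
p = 1 - (-3 - 2*la2 + sqrt(9 + 8*la2*(2 + la2))) / (2*la2)
q = (-3 - 2*la1 + sqrt(9 + 8*la1*(2 + la1))) / (2*la1)
r1 = (-3 + sqrt(9 + 8*la1*(2 + la1))) / (2*la1*(2 + la1))
r2 = (-3 + sqrt(9 + 8*la2*(2 + la2))) / (2*la2*(2 + la2))

# Player 1: Boxing and Ballet both give exactly the reference payoff r1.
assert abs(2*q - la1*r1*(1 - q) - r1) < 1e-9
assert abs(-la1*r1*q + (1 - q) - r1) < 1e-9
# Player 2: Boxing and Ballet both give exactly r2.
assert abs(p - la2*r2*(1 - p) - r2) < 1e-9
assert abs(-la2*r2*p + 2*(1 - p) - r2) < 1e-9
```

For $\la_1=\la_2=1$ this gives $p\approx 0.628$, $q\approx 0.372$ and $r_1=r_2\approx 0.457$.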

{\samepage
\begin{example}
\label{ex:mp}
  Matching Pennies
\end{example}
\hspace{2cm}
\begin{tabular}{c|c|c|}
      & H & T \\
 \hline
 H  & 1,~0 & 0,~1 \\
 \hline
 T & 0,~1 & 1,~0 \\
 \hline
\end{tabular}
\\
}%end of \samepage

For any values of $\la_1$ and $\la_2$, the only 
loss-aversion equilibrium strategies in extended matching pennies
are for each player to play each
pure strategy with probability $\frac 12$. The payoffs and the
reference points are $r_i=\frac{1}{2+\la_i}$, thus as a player
becomes more loss averse, she receives a lower payoff. The payoff of
each player is not affected by 
a change in the loss-aversion coefficient of the other player.
We show in Section \ref{sec:compstat} that 
neither of these properties holds in general. 
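
The claimed payoffs can be verified directly (a sketch of ours): with the opponent mixing half-half, either pure strategy yields the lottery $(1,\frac 12;~0,\frac 12)$, whose consistent reference point solves a linear equation.

```python
# With the opponent mixing 1/2-1/2, either pure strategy is the lottery
# (1, 1/2; 0, 1/2); its consistent reference point r solves
# r = 1/2 - (lam/2)*r, i.e. r = 1/(2 + lam).
for lam in (0.0, 0.5, 1.0, 3.0):
    r = 1 / (2 + lam)
    assert abs(0.5 * 1 + 0.5 * (0 - lam * r) - r) < 1e-12
```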

We now give two examples to show that the number of loss-aversion
equilibria in an extended game can be either higher or lower than 
the number of Nash equilibria in the underlying game.
%

{\samepage
\begin{example}
\label{ex:ne1}
\end{example}
\hspace{2cm}
\begin{tabular}{c|c|c|}
      & L & R \\
 \hline
 A  & 1,~0 & 0,~1 \\
 \hline
 B & 0,~1 & 1,~0 \\
 \hline
 C  & 2,~0 & -1,~1 \\
 \hline
 D & -1,~1 & 2,~0 \\
 \hline
\end{tabular}
\\
}%end of \samepage

This game has a continuum of Nash equilibria. 
In all of them player 2 mixes his two strategies with probabilities 
$(\frac 12, \frac 12)$. Player
1's strategy is of the form $(\al,\be,\frac 12 - \al, \frac 12 -
\be)$, where $\al$ and $\be$ are in $[0,\frac 12]$.
When $\la=(1,1)$, there is only one loss-aversion equilibrium. Player
2 still mixes with probabilities 
$(\frac 12, \frac 12)$. Player 1 uses the mixed strategy 
$(\frac 12, \frac 12, 0,0)$. 
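
One can verify directly that the stated profile is a loss-aversion equilibrium; the sketch below (ours) checks only the equilibrium property, not uniqueness.

```python
lam = 1.0                      # la = (1, 1)
q = 0.5                        # player 2's probability of L
r = 1/3                        # solves r = 1/2*1 + 1/2*(0 - lam*r)
assert abs(0.5*1 + 0.5*(0 - lam*r) - r) < 1e-12

def u(x, ref):                 # loss-averse evaluation of outcome x
    return x - lam * max(ref - x, 0)

# Player 1's pure strategies against (1/2, 1/2), reference point 1/3:
payoff1 = {'A': q*u(1, r) + (1-q)*u(0, r),
           'B': q*u(0, r) + (1-q)*u(1, r),
           'C': q*u(2, r) + (1-q)*u(-1, r),
           'D': q*u(-1, r) + (1-q)*u(2, r)}
assert abs(payoff1['A'] - r) < 1e-12 and abs(payoff1['B'] - r) < 1e-12
assert payoff1['C'] < r and payoff1['D'] < r   # C and D strictly worse
# Player 2 faces (1/2, 1/2, 0, 0), so L and R both give her exactly 1/3:
# mixing 1/2-1/2 is a best response.
```

With loss aversion, the $-1$ outcomes make C and D strictly worse than the reference point $\frac 13$, which is why the Nash continuum collapses.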

{\samepage
\begin{example}
\label{ex:ne2}
\end{example}
\hspace{2cm}
\begin{tabular}{c|c|c|}
      & L & R \\
 \hline
 A  & 1,~0 & 0,~1 \\
 \hline
 B & 0,~1 & 1,~0 \\
 \hline
 C  & 0.4,~2 & 0.4,~2 \\
 \hline
\end{tabular}
\\
}%end of \samepage

This game has a unique Nash equilibrium. 
Player 2 mixes
his two strategies with probabilities 
$(\frac 12, \frac 12)$. Player
1's strategy is $(\frac 12 , \frac 12 ,0)$.
The extended game with $\la=(1,1)$ has 
a continuum of loss-aversion equilibria. 
In these,
player 2 mixes with probabilities $(\al,1-\al)$, where
$\frac 37 < \al < \frac 47$, and player 1 
plays the pure strategy C.
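
The range $\frac 37 < \al < \frac 47$ can be checked directly (a sketch, ours): player 1's reference point from the sure payoff of C is $0.4$, and neither deviation may exceed it.

```python
lam = 1.0                                  # la = (1, 1)
r1 = 0.4                                   # sure payoff from C
for alpha in (3/7 + 0.01, 0.5, 4/7 - 0.01):
    dev_A = alpha * 1 + (1 - alpha) * (0 - lam * r1)   # = 1.4*alpha - 0.4
    dev_B = alpha * (0 - lam * r1) + (1 - alpha) * 1   # = 1 - 1.4*alpha
    assert dev_A < r1 and dev_B < r1
# At the endpoint alpha = 4/7 the deviation to A exactly ties:
assert abs(1.4 * (4/7) - 0.4 - r1) < 1e-12
# Player 2 gets 2 from both L and R against C, so any mix is optimal.
```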

In Examples~\ref{ex:bos} and~\ref{ex:mp} the number of Nash
equilibria is equal to the number of loss aversion equilibria in any
extension of these games. We therefore see that there is no fixed
relationship between the quantity of Nash equilibria in the
underlying basic game and the number of loss aversion equilibria in
the extended games.

The following is an example of a one-player extensive form game which
has different myopic \laes\ and non-myopic \laes.

\begin{example}   %6
\label{ex:extensive}
\end{example}
The tree for this example is given in Figure~\ref{fig:ex}.
 \begin{figure}
\caption{The tree for Example~\ref{ex:extensive}}
\label{fig:ex}
%\input{exten.pic}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% START OF exten.pic
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%TexCad Options
%\grade{\on}
%\emlines{\off}
%\beziermacro{\off}
%\reduce{\on}
%\snapping{\off}
%\quality{2.00}
%\graddiff{0.01}
%\snapasp{1}
%\zoom{1.00}
\unitlength 1mm
\linethickness{0.4pt}
%\begin{picture}(63.33,81.28)
\begin{picture}(63.33,60)(0,21.28)
\put(50.00,80.33){\circle{1.89}}
%\emline(50.00,79.67)(62.67,65.33)
\multiput(50.00,79.67)(0.12,-0.14){106}{\line(0,-1){0.14}}
%\end
%\emline(50.00,79.33)(37.67,65.00)
\multiput(50.00,79.33)(-0.12,-0.14){103}{\line(0,-1){0.14}}
%\end
\put(37.00,64.00){\rule{1.00\unitlength}{1.00\unitlength}}
%\emline(37.33,64.00)(28.33,53.00)
\multiput(37.33,64.00)(-0.12,-0.15){75}{\line(0,-1){0.15}}
%\end
%\emline(37.33,63.67)(46.67,52.67)
\multiput(37.33,63.67)(0.12,-0.14){78}{\line(0,-1){0.14}}
%\end
\put(28.67,52.33){\circle{2.00}}
%\emline(28.67,51.67)(20.33,40.67)
\multiput(28.67,51.67)(-0.12,-0.16){70}{\line(0,-1){0.16}}
%\end
%\emline(28.33,51.00)(36.00,40.33)
\multiput(28.33,51.00)(0.12,-0.17){64}{\line(0,-1){0.17}}
%\end
\put(42.33,75.33){\makebox(0,0)[cc]{0.5}}
\put(59.00,76.00){\makebox(0,0)[cc]{0.5}}
\put(31.00,60.67){\makebox(0,0)[cc]{L}}
\put(43.67,60.67){\makebox(0,0)[cc]{R}}
\put(21.67,48.33){\makebox(0,0)[cc]{0.5}}
\put(34.00,48.67){\makebox(0,0)[cc]{0.5}}
\put(63.33,61.00){\makebox(0,0)[cc]{0}}
\put(46.67,49.67){\makebox(0,0)[cc]{21}}
\put(35.67,37.67){\makebox(0,0)[cc]{10}}
\put(20.67,37.67){\makebox(0,0)[cc]{40}}
\end{picture}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% END OF exten.pic
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%     chance node at top with 0.5 for outcome zero, and 0.5 for A
%     Node A - R gives 21 for sure
%              L gives a lottery with 0.5 to get 40 and 0.5 for 10.
\end{figure}

There is one player, with $\la=1$. There are two nodes belonging to
nature (the hollow circles), and one decision node of the player
(the solid square). 
We first calculate the myopic \laes. If the player chooses R, she
faces the lottery giving 0 with probability $\frac 12$ and 21 with
probability $\frac 12$, i.e. $(0,0.5; ~21,0.5)$. The reference point
consistent with this lottery is 7. If the player chooses L she faces
the lottery  $(0,0.5; ~40,0.25; ~10,0.25)$. The consistent reference
point of this lottery is $8 \frac 13$, so choosing L is the unique
myopic \lae. We now calculate the non-myopic \laes. Starting at the
subtree headed by the decision node, R gives 21 for sure, so has a
consistent reference point of 21. L gives $(40,0.5; ~10,0.5)$, with a
consistent reference point of 20, so the only non-myopic \lae\ is to
choose R. We therefore see that the game has different non-myopic and
myopic \laes. 
This difference can be understood as follows. In the
non-myopic equilibrium, when the player is called on to choose 
at her decision node,
she adjusts her expectations to reflect the fact that she will not
receive 0, and her reference point takes this into account. In the
myopic equilibrium, the 0 still figures in the
calculation of her reference point
when she chooses between L and R, even
though she will not receive it. This happens when reference points
adjust slowly. Another interpretation is that
a player (the principal) sends agents to play for her 
at her information sets, and
they are given their instructions in advance. 
If the principal is not sophisticated, she does not take into account
that the very fact that an agent is called upon conveys information
about what has occurred in the game; this leads to myopic behavior.
In this example, using a reference point that 
still allows for the possibility of receiving 0
makes the 10 of the second
lottery more attractive than it would be with a higher reference
point, and therefore the lottery is preferred to the sure 21 in this
case.
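
All four consistent reference points in this example reduce to linear equations, since which outcomes are losses is known in advance; a numerical sketch (ours):

```python
lam = 1.0
# Myopic, choosing R: (0, .5; 21, .5), so r = .5*21 - .5*lam*r.
r_R = 10.5 / (1 + 0.5 * lam)
# Myopic, choosing L: (0, .5; 40, .25; 10, .25); with r < 10 only the 0
# is a loss, so r = 12.5 - .5*lam*r.
r_L = 12.5 / (1 + 0.5 * lam)
# Non-myopic, at the decision node: L is (40, .5; 10, .5) with
# 10 < r < 40, so r = .5*40 + .5*(10 - lam*(r - 10)).
r_L_sub = (25 + 5 * lam) / (1 + 0.5 * lam)
assert abs(r_R - 7) < 1e-12
assert abs(r_L - (8 + 1/3)) < 1e-9
assert r_L > r_R            # myopic: choose L
assert abs(r_L_sub - 20) < 1e-12
assert 21 > r_L_sub         # non-myopic: choose R (a sure 21)
```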

%example 7 - non-existence of NMLAE
The following is an example of a one-player extensive form game which
has no non-myopic \laes.
\begin{example} %7
\label{ex:extensive2}
\end{example}
The tree for this example is given in Figure~\ref{fig:ex2}.
 \begin{figure}
\caption{The tree for Example~\ref{ex:extensive2}}
\label{fig:ex2}
%\input{exten2.pic}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% START OF exten2.pic
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%TexCad Options
%\grade{\on}
%\emlines{\off}
%\beziermacro{\off}
%\reduce{\on}
%\snapping{\off}
%\quality{2.00}
%\graddiff{0.01}
%\snapasp{1}
%\zoom{1.00}
\unitlength 1.00mm
\linethickness{0.4pt}
%\begin{picture}(68.00,92.33)(0,21.28)
\begin{picture}(68.00,72.33)(0,21.28)
\put(50.00,80.33){\circle{1.89}}
%\emline(50.00,79.67)(62.67,65.33)
\multiput(50.00,79.67)(0.12,-0.14){106}{\line(0,-1){0.14}}
%\end
%\emline(50.00,79.33)(37.67,65.00)
\multiput(50.00,79.33)(-0.12,-0.14){103}{\line(0,-1){0.14}}
%\end
\put(37.00,64.00){\rule{1.00\unitlength}{1.00\unitlength}}
%\emline(37.33,64.00)(28.33,53.00)
\multiput(37.33,64.00)(-0.12,-0.15){75}{\line(0,-1){0.15}}
%\end
%\emline(37.33,63.67)(46.67,52.67)
\multiput(37.33,63.67)(0.12,-0.14){78}{\line(0,-1){0.14}}
%\end
\put(28.67,52.33){\circle{2.00}}
%\emline(28.67,51.67)(20.33,40.67)
\multiput(28.67,51.67)(-0.12,-0.16){70}{\line(0,-1){0.16}}
%\end
%\emline(28.33,51.00)(36.00,40.33)
\multiput(28.33,51.00)(0.12,-0.17){64}{\line(0,-1){0.17}}
%\end
\put(42.33,75.33){\makebox(0,0)[cc]{0.5}}
\put(59.00,76.00){\makebox(0,0)[cc]{0.5}}
\put(31.00,60.67){\makebox(0,0)[cc]{$L_2$}}
\put(43.67,60.67){\makebox(0,0)[cc]{$R_2$}}
\put(21.67,48.33){\makebox(0,0)[cc]{0.5}}
\put(34.00,48.67){\makebox(0,0)[cc]{0.5}}
\put(63.33,61.00){\makebox(0,0)[cc]{0}}
\put(46.67,49.67){\makebox(0,0)[cc]{21}}
\put(35.67,37.67){\makebox(0,0)[cc]{10}}
\put(20.67,37.67){\makebox(0,0)[cc]{40}}
%\emline(58.66,92.00)(49.66,81.00)
\multiput(58.66,92.00)(-0.12,-0.15){75}{\line(0,-1){0.15}}
%\end
%\emline(58.66,91.67)(68.00,80.67)
\multiput(58.66,91.67)(0.12,-0.14){78}{\line(0,-1){0.14}}
%\end
\put(52.33,88.67){\makebox(0,0)[cc]{$L_1$}}
\put(65.00,88.67){\makebox(0,0)[cc]{$R_1$}}
\put(68.00,77.67){\makebox(0,0)[cc]{8}}
\put(58.33,91.33){\rule{1.00\unitlength}{1.00\unitlength}}
\end{picture}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% END OF exten2.pic
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\end{figure}

There is one player, with $\la=1$. There are two nodes belonging to
nature (the hollow circles), and two decision nodes of the player
(the solid squares). Note that if the player starts 
with $L_1$ then she is in the
situation of Example~\ref{ex:extensive}. 
The decision tree corresponds to the following story. 
The first decision is whether
to study at Harvard Business School ($L_1$) or to get a job at
McDonalds ($R_1$). If the business school is chosen, with probability
$\frac{1}{2}$ she fails in her studies
(the right branch) and gets a very low
payoff, having spent much time and money. If she 
succeeds, and is awarded her MBA (the
left branch), she now has to decide whether to accept a job on Wall
Street ($R_2$) or try for a PhD ($L_2$), with probability of success
equal to $\frac{1}{2}$.
The player has 4 pure
strategies, which are $\sig^1=(L_1,L_2)$, $\sig^2=(L_1,R_2)$,
$\sig^3=(R_1,L_2)$ and $\sig^4=(R_1,R_2)$. There are no non-myopic
\laes\ in this game, according to the following reasoning. At the
first decision node, the player strictly prefers $\sig^1$ to all
other strategies, as she evaluates it as equivalent to a
sure $8 \frac{1}{3}$ (compared with a sure 8 for $\sig^3$ and
$\sig^4$, and 7 for $\sig^2$). Therefore, $\sig^1$ is the only
possible candidate for a non-myopic \lae.
However, at the second decision node $\sig^2$ is
preferred to $\sig^1$ and therefore $\sig^1$ is not a non-myopic
\lae. The unique {\em credible} equilibrium of this game (see 
Ferreira, Gilboa and Maschler, 1995) is $\sig^4$, which is to take
the job at McDonalds, as the player is sophisticated enough to know
that if she succeeded in getting her MBA, she would go 
for the job on Wall Street, and she prefers the sure thing of
McDonalds to the 50:50 gamble between Wall Street and failure.
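
The numbers behind the non-existence argument can be checked as in the previous example (a sketch, ours):

```python
lam = 1.0
# Seen from the root: sigma^1 = (L1, L2) is (0, .5; 40, .25; 10, .25),
# sigma^2 = (L1, R2) is (0, .5; 21, .5), sigma^3 and sigma^4 give 8.
v1 = 12.5 / (1 + 0.5 * lam)      # 8 1/3
v2 = 10.5 / (1 + 0.5 * lam)      # 7
v34 = 8.0
assert v1 > v34 > v2             # at the root, only sigma^1 is a candidate
# At the second decision node, sigma^1's continuation (40, .5; 10, .5)
# has consistent reference point 20, but sigma^2 gives a sure 21 there:
v_cont = (25 + 5 * lam) / (1 + 0.5 * lam)
assert abs(v_cont - 20) < 1e-12
assert 21 > v_cont               # sigma^2 beats sigma^1: no non-myopic LAE
```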

%*******************************************************************
\section{Comparative Statics}
\label{sec:compstat}
%*******************************************************************

It is not obvious how to compare loss-aversion equilibria 
of games with varying loss-aversion coefficients, 
as the correspondence from loss-aversion coefficients in
extended games to their
loss-aversion equilibria is not continuous
(just as the Nash equilibrium correspondence of
regular games is not continuous).

However, we can provide a number of examples with unique
loss-aversion equilibria showing that increasing the loss aversion 
of a player can either increase or decrease
the payoffs of the player herself and of the other players.

Example~\ref{ex:mp} (matching pennies) 
has a unique loss-aversion equilibrium for each
value of $(\la_1,\la_2)$. This equilibrium gives payoffs of 
$(\frac{1}{2+\la_1},\frac{1}{2+\la_2})$. Thus, in this example, when a
player becomes more loss averse, she receives a lower payoff and the
payoff of the other player remains the same. The same happens in the
payoffs of the mixed equilibrium of Example~\ref{ex:bos} (the battle
of the sexes).

We now give a 3-player game with a unique loss-aversion equilibrium
for most values of $\la$
(there are always unique equilibrium strategies for players 1 and 2,
and for almost all values of $\la$ player 3 has a unique equilibrium
strategy),
and in which when player 1 becomes more
loss-averse, her payoff increases and the payoff of another player 
decreases.

{\samepage
\begin{example}
\end{example}
\hspace{2cm}
\begin{tabular}{c|c|c|p{2cm}c|c|c|}
      & L &   R   & & & L & R \\
 \cline{1-3} \cline{5-7}
 T  & 2,~0,~0 & 0,~1,~1 & & T & 4,~0,~0.45 & 0,~1,~0.45   \\
 \cline{1-3} \cline{5-7}
 B  & 0,~2,~0 & 1,~0,~1 & & B & 0,~2,~0.45 & 2,~0,~0.45   \\
 \cline{1-3} \cline{5-7}
    & \multicolumn{2}{c|}{F} & & & 
      \multicolumn{2}{c|}{S}  \\
\end{tabular}
\\
}%end of \samepage
The strategy sets of this game are as follows:
Player 1 chooses either T or B. Player 2 chooses L or R, and player 3
chooses either F (the first matrix) or S (the second one). 
We now show that
the game extended with $\la=(1,1,1)$ has a unique \lae, and the same
applies for the game extended with $\la=(2,1,1)$. 

There are no pure-strategy equilibria for this game.
We calculate the loss-aversion equilibrium as follows: using the calculations for
Example~\ref{ex:bos} (note that the payoffs for players 1 and 2 are
similar to those of the battle of the sexes for either choice of
player 3), we find that in any loss-aversion equilibrium 
the probability of player 1 playing T is 
\begin{equation}
  p=1-\frac{-3-2\la_2 + \sqrt{9+8\la_2(2+\la_2)}}{2 \la_2}
\end{equation}
and the probability of player 2 playing L is
\begin{equation}
  q=\frac{-3-2\la_1 + \sqrt{9+8\la_1(2+\la_1)}}{2 \la_1}.
\end{equation}
Faced with these strategies of players~1 and~2,
playing F gives player 3 the lottery $(1,1-q;~0,q)$, whose consistent
reference point (with $\la_3=1$) is $\frac{1-q}{1+q}$, while playing S
gives player 3 a sure payoff of 0.45. Therefore, if 
$\frac{1-q}{1+q} \neq 0.45$ there is a unique loss-aversion
equilibrium of the game. 

For $\la=(1,1,1)$, $q=\frac{-5+\sqrt{33}}{2}\approx 0.37228$, so
playing F gives player 3 a payoff of approximately 0.457427, which
is greater than 0.45. 
With player 3 playing F, players 1 and 2 also receive a payoff of
$\frac{-3+\sqrt{33}}{6} \approx 0.457427$.
Therefore, the loss-aversion equilibrium for 
$\la=(1,1,1)$ has $p\approx 0.62772$, $q\approx 0.37228$, player 3
choosing F, and payoffs of approximately 0.457427 for each player.

For $\la=(2,1,1)$, $p$ remains unchanged, and
$q=\frac{-7+\sqrt{73}}{4}\approx 0.38600$, so
playing F gives player 3 a payoff of approximately 0.44300, which
is less than 0.45. Therefore, player 3 will play S.
With player 3 playing S, player 2 receives a payoff of
$\frac{-3+\sqrt{33}}{6} \approx 0.457427$, and
player 1 receives $\frac{-3+\sqrt{73}}{8}\approx 0.69300$.
Therefore, the loss-aversion equilibrium for 
$\la=(2,1,1)$ has $p\approx 0.62772$, 
$q\approx 0.38600$, player 3 choosing S, and
the payoffs are approximately $(0.69300,~0.457427,~0.45)$.
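
These computations can be reproduced numerically (a sketch, ours; $q$, player 2's probability of L, comes from player 1's indifference exactly as in the battle of the sexes):

```python
from math import sqrt

# q (player 2's probability of L) from player 1's indifference; player 3
# compares F's consistent reference point (1-q)/(1+q) with the sure 0.45.
def q_of(la1):
    return (-3 - 2*la1 + sqrt(9 + 8*la1*(2 + la1))) / (2*la1)

q1, q2 = q_of(1.0), q_of(2.0)
assert abs(q1 - 0.37228) < 1e-5 and abs(q2 - 0.38600) < 1e-5
f1 = (1 - q1) / (1 + q1)         # about 0.457427 > 0.45: player 3 plays F
f2 = (1 - q2) / (1 + q2)         # about 0.44300  < 0.45: player 3 plays S
assert f1 > 0.45 > f2
# Player 1's payoff rises with her own loss aversion in this game:
r1_before = (-3 + sqrt(33)) / 6  # la = (1,1,1): about 0.457427
r1_after = (-3 + sqrt(73)) / 8   # la = (2,1,1): about 0.69300
assert r1_after > r1_before
```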

In summary, as player 1's loss aversion increased, her payoff
increased, that of player 3 decreased and that of player 2 remained
the same (by multiplying the payoffs of player 2 in the
second matrix by a positive
constant, we could have her payoff increasing or decreasing). This
is not too surprising: the
result is analogous to what occurs, under regular expected utility,
in the following pair of three-player games.

{\samepage
\begin{example}
\end{example}
{\bf Game 1:}\\
\hspace{2cm}
\begin{tabular}{c|c|c|p{2cm}c|c|c|}
      & L &   R   & & & L & R \\
 \cline{1-3} \cline{5-7}
 T  & {\bf 2},~0,~0 & 0,~1,~1 & & T & {\bf 4},~0,~0.6 & 0,~1,~0.6   \\
 \cline{1-3} \cline{5-7}
 B  & 0,~2,~0 & 1,~0,~1 & & B & 0,~2,~0.6 & 2,~0,~0.6   \\
 \cline{1-3} \cline{5-7}
    & \multicolumn{2}{c|}{F} & & & 
      \multicolumn{2}{c|}{S}  \\
\end{tabular}
}%end of \samepage

{\samepage
{\bf Game 2:}\\
\hspace{2cm}
\begin{tabular}{c|c|c|p{2cm}c|c|c|}
      & L &   R   & & & L & R \\
 \cline{1-3} \cline{5-7}
 T  & {\bf 1},~0,~0 & 0,~1,~1 & & T & {\bf 2},~0,~0.6 & 0,~1,~0.6   \\
 \cline{1-3} \cline{5-7}
 B  & 0,~2,~0 & 1,~0,~1 & & B & 0,~2,~0.6 & 2,~0,~0.6   \\
 \cline{1-3} \cline{5-7}
    & \multicolumn{2}{c|}{F} & & & 
      \multicolumn{2}{c|}{S}  \\
\end{tabular}
\\
}%end of \samepage
These are 3-player games differing only 
in two of player 1's payoffs (shown in bold), 
which are lower in the second game than in the first. Each game has a  
unique Nash equilibrium. 
The payoffs for the Nash equilibrium of game 1 are $(\frac 23,
\frac 23, \frac 23)$.
The payoffs for the Nash equilibrium of game 2 are $(1,
\frac 23, 0.6)$.
Player 1's payoff is higher in
the equilibrium of the second game (in which she had lower payoffs), 
while player 3's payoff is lower in the
equilibrium of the second game.
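
Both equilibria follow from standard indifference arithmetic; a sketch (ours):

```python
# Game 1: in matrix F, player 1's indifference 2q = 1 - q gives q = 1/3
# (q = prob. of L), and player 2's indifference 2(1-p) = p gives p = 2/3.
q, p = 1/3, 2/3
assert abs(2*q - (1 - q)) < 1e-12 and abs(2*(1 - p) - p) < 1e-12
assert 1 - q > 0.6               # F pays player 3 2/3 > 0.6: she plays F
game1 = (2*q, p, 1 - q)          # payoffs (2/3, 2/3, 2/3)
assert max(abs(x - 2/3) for x in game1) < 1e-12
# Game 2: player 1's F payoffs 1,0 / 0,1 give q = 1/2, so F would pay
# player 3 only 1 - q = 1/2 < 0.6 and she plays S; in S, 2q = 2(1 - q)
# again gives q = 1/2, and player 2's indifference still gives p = 2/3.
q, p = 1/2, 2/3
assert 1 - q < 0.6
game2 = (2*q, p, 0.6)            # payoffs (1, 2/3, 0.6)
```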

%*******************************************************************
%*******************************************************************
\section{Directions for future research}
\label{sec:tests}
%*******************************************************************
One of the goals of game theory is to make predictions. 
In the previous sections we have made predictions of outcomes of
games, when reference dependence and loss aversion are taken into
consideration. A first step in testing these results is to measure
the loss aversion of individuals. Virtually all work on loss aversion
has looked for averages, and not dealt with variations between
individuals. 
Significant differences between averages in different experiments
could indicate heterogeneity.
A related project is to try to correlate the level of
loss aversion of an individual with factors such as age, social
status, gender, culture etc.
Such work has been done on risk aversion, and experimental and
empirical evidence shows that women are more risk averse than men.%
\footnote{%
\ls{1}
For examples of experimental evidence, see 
Levin, Snyder and Chapman (1988), Hudgens and Fatkin (1985), Zinkhan
and Karande (1991), Arch (1993), 
%Fagley and Miller (1990), 
Kogan and Wallach (1964), Slovic (1964), Maccoby and Jacklin (1974). 
For empirical evidence see Jianakoplos and Bernasek (1996).}
%
Since many gambles naturally include outcomes both above and below one's
reference point, increased loss aversion would lead to a higher
measure of risk aversion. An interesting hypothesis, therefore, is
that women are more loss averse than men.
Once loss aversion has been measured, experimental games could test
the predictions of this paper. The games would be such that differing
levels of loss aversion would lead to different equilibrium
strategies. Since the results are based
on the assumption that
the loss-aversion characteristics (as well as the payoffs)
are common knowledge, there would
have to be a stage where the players learn about each other's level
of loss aversion.

%*******************************************************************

\lsls
%*******************************************************************
%\begin{thebibliography}{99}
\section*{Bibliography}
%*******************************************************************

\begin{description}
\item 
   Allais, M. (1953):
   ``Le Comportement de l'Homme Rationnel Devant le Risque:
   Critique des Postulats et Axiomes de l'\'{E}cole Am\'{e}ricaine,''
   {\em Econometrica} {\bf 21} 503-546.

\item 
   Arch, E. C. (1993):
   ``Risk-Taking: A Motivational Basis for Sex Differences,''
   {\em Psychological Reports} {\bf 73} 3-11.

\item 
%\bibitem{dedreu92}
    De Dreu, C. K. W., B. J. M. Emans and E. Van de Vliert (1992):
    ``Frames of Reference and Cooperative Social Decision Making,''
    {\em European Journal of Social Psychology} {\bf 22} 297-302.     

%\item 
%    Fagley, N. S., and P. M. Miller (1990):
%    ``The Effect of Framing on Choice: Interactions With
%    Risk-Taking Propensity, Cognitive Style and Sex,''
%    {\em Personality and Social Psychology Bulletin}
%    {\bf 16} (3)

\item 
    Dekel, E. (1986):
    ``An Axiomatic Characterization of Preferences under Uncertainty:
    Weakening the Independence Axiom,''
    {\em Journal of Economic Theory} {\bf 40} 304-318.

\item
    Ferreira, J.-L., I. Gilboa and M. Maschler (1995):
    ``Credible Equilibria in Games with Utilities Changing
    during the Play,''
    {\em Games and Economic Behavior} {\bf 10} 284-317.

\item 
    Fishburn, P. C. and G. A. Kochenberger (1979):
    ``Two-Piece von Neumann--Morgenstern Utility Functions,''
    {\em Decision Sciences} {\bf 10} 503-518.

\item
    Fudenberg, D. and J. Tirole (1991):
    ``Game Theory,''
    The MIT Press, Cambridge, MA and London, England.

%\item
%    Geanakoplos, J., D. Pearce, and E. Stachetti (1989):
%    ``Psychological Games and Sequential Rationality,''
%    {\em Games and Economic Behavior} {\bf 1} 60-79.
%
\item
    Gul, F. (1991):
    ``A Theory of Disappointment Aversion,''
    {\em Econometrica} {\bf 59} (3) 667-686.

\item 
    Hudgens, G. A., and L. T. Fatkin (1985):
    ``Sex Differences in Risk Taking: Repeated Sessions on a 
    Computer-Simulated Task,''
    {\em The Journal of Psychology} {\bf 119} (3) 197-206.

\item 
    Jianakoplos, N. A., and A. Bernasek (1996):
    ``Are Women More Risk Averse?''
    {\em Working Papers in Economics and Political Economy,} 
    Colorado State University.

\item
   Kahneman, D. (1992):
   ``Reference Points, Anchors, Norms and Mixed Feelings,''
   {\em Organizational Behavior and Human Decision Processes}
   {\bf 51} 296-312.

\item 
%\bibitem{kkt90} 
   Kahneman, D., J. L. Knetsch and R. H. Thaler (1990):
   ``Experimental Tests of the Endowment Effect and the Coase
   Theorem,'' {\em Journal of Political Economy} {\bf 98} (6) 1325-1348.

\item 
%  \bibitem{kkt91} 
   Kahneman, D., J. L. Knetsch and R. H. Thaler (1991):
   ``The Endowment Effect, Loss Aversion and Status Quo Bias,''
   {\em Journal of Economic Perspectives} {\bf 5} (1) 193-206.

\item 
%   \bibitem{kt79} 
   Kahneman, D. and A. Tversky (1979):
   ``Prospect Theory: An Analysis of Decision Under Risk,''
   {\em Econometrica} {\bf 47} 263-291.

\item 
   Kogan, N., and M. A. Wallach (1964):
   {\em Risk Taking: A Study in Cognition and Personality,}
   Holt, Rinehart and Winston.

\item 
%   \bibitem{kramer89} 
   Kramer, R. M. (1989):
   ``Windows of Vulnerability or Cognitive Illusions: Cognitive 
   Processes and the Nuclear Arms Race,'' {\em Journal of Experimental
   Social Psychology} {\bf 25} 79-100.

\item 
   Levin, I. P., M. A. Snyder, and D. P. Chapman (1988):
   ``The Interaction of Experiential and Situational Factors and 
   Gender in a Simulated Risky Decision-Making Task,''
   {\em The Journal of Psychology} {\bf 122} (2) 173-181.

\item 
   Maccoby, E. E., and C. N. Jacklin (1974):
   {\em The Psychology of Sex Differences,} 
   Stanford University Press, Stanford, CA.

\item 
%   \bibitem{rabin96} 
    Rabin, M. (1996):
   ``Psychology and Economics,'' Mimeo, University of California,
   Berkeley.

\item 
   Slovic, P. (1964):
   ``Assessment of Risk Taking Behavior,''
   {\em Psychological Bulletin} {\bf 61} (3) 220-233.

\item 
%\bibitem{taylor91} 
   Taylor, S. E. (1991):
   ``Asymmetrical Effects of Positive and Negative Events:
   The Mobilization-Minimization Hypothesis,''
   {\em Psychological Bulletin} {\bf 110} 67-85.

\item 
%\bibitem{tk92} 
   Tversky, A. and D. Kahneman (1992): 
    ``Advances in Prospect Theory: Cumulative Representation
    of Uncertainty,'' {\em Journal of Risk and Uncertainty,}
    {\bf 5} 297-323.

\item 
    Zinkhan, G. M., and K. W. Karande (1991):
    ``Cultural and Gender Differences in Risk-Taking
    Behavior Among American and Spanish Decision Makers,''
    {\em The Journal of Social Psychology} {\bf 131} (5) 741-742.
\end{description}

%\end{thebibliography}


\end{document}             % End of document.

%*****************************************************************
%ENDENDENDEND END END END END 


%scrap from psych. games section... scrapped 15/7/97
%*****************************************************************
\section{Psychological Games and Loss Aversion} %4
\label{sec:psychgames}
%*****************************************************************
FOR THIS SECTION TO HOLD NEED TO GENERALIZE PREVIOUS NOTATION TO
ALLOW FOR "SIMULTANEOUS" NORMAL FORM GAMES...

Geanakoplos, Pearce and Stacchetti (1989) (henceforth GPS) define a
normal-form psychological game in which the utility of each player is
a function of the players' actions and of her beliefs about those
actions. Our endogenization of the reference points into the
solution concept of extended games makes the reference points a
function of the beliefs of the players. In this section we show that
any extended game can be converted into a psychological game in such
a manner that the actions in any psychological Nash equilibrium of
the psychological game constitute a non-myopic loss-aversion
equilibrium of the extended game.


We are now ready to construct a normal-form
psychological game $G^p$ from our extended game $G$.
The players and the actions of $G^p$ are the same as those in $G$. 
The payoffs for a player depend only on the actions of all the
players, and on her (first-order) beliefs over these actions,
as follows.
If the actions are $s \in S$, and player $i$'s (first-order)
beliefs are $\sig_{-i} \in \Sig_{-i}$, then the payoff to player $i$ is
\[ u^p_i(\sig_{-i},s)=w_i(u_i(s),r_i(s_i/\sig_{-i})). \]
The payoff to player $i$ from a mixed strategy profile $\tau$ 
when her beliefs are given by $\sig_{-i}$ is given by
\[ u^p_i(\sig_{-i},\tau)=\sum_{s \in S}p_\tau(s)u^p_i(\sig_{-i},s). \]
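Note that the reference point $r_i(s_i/\sig_{-i})$ depends on player
$i$'s own realized action and on her beliefs, but not on the
opponents' realized actions. Assuming, as the notation suggests, that
the players mix independently, so that
$p_\tau(s)=\tau_i(s_i)\,p_{\tau_{-i}}(s_{-i})$, the expected payoff
can be regrouped as
\[ u^p_i(\sig_{-i},\tau)=\sum_{s_i \in S_i}\tau_i(s_i)
   \sum_{s_{-i} \in S_{-i}}p_{\tau_{-i}}(s_{-i})\,
   u^p_i(\sig_{-i},s_{-i}/s_i). \]
In particular, $u^p_i$ is linear in $i$'s own mixed strategy
$\tau_i$, so best responses against fixed beliefs can be computed
action by action, exactly as in a standard normal-form game.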

Following GPS, 
a psychological Nash equilibrium is a pair $(\hat{b},\hat{\sig})$
of beliefs and strategies, such that the strategies are best
responses given the beliefs, and 
the beliefs of each player $i$ are that ``all his
opponents follow $\hat{\sig}_{-i}$, that each opponent $j \neq i$
believes that her opponents follow $\hat{\sig}_{-j}$ and so on''.

Using our notation and payoff structure, a strategy profile $\sig \in
\Sig$ is a psychological Nash equilibrium if 
\[ u^p_i(\sig_{-i},\sig_{-i}/\sig'_i) \leq 
   u^p_i(\sig_{-i},\sig) 
       ~~~\forall i\in\scri,~~\forall \sig'_i\in\Sig_i.  \]
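Spelling this condition out by direct substitution of the definition
of $u^p_i$: for all $i \in \scri$ and all $\sig'_i \in \Sig_i$,
\[ \sum_{s \in S}p_{\sig_{-i}/\sig'_i}(s)\,
   w_i(u_i(s),r_i(s_i/\sig_{-i})) \leq
   \sum_{s \in S}p_{\sig}(s)\,
   w_i(u_i(s),r_i(s_i/\sig_{-i})). \]
The equilibrium beliefs $\sig_{-i}$ fix the belief argument of the
reference points, while a deviation $\sig'_i$ changes both the
distribution over action profiles and, through the realized action
$s_i$, which reference point applies at each profile.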

Using the definition of the psychological game derived from an
extended game, and the definition of the function $r_i$, one can
verify that the set of psychological Nash equilibria of the derived
psychological game coincides with the set of non-myopic \lae\ of the
extended game. Therefore, by GPS's existence theorem, a non-myopic
\lae\ always exists.

