
Uncertainty Aversion and Equilibrium in Normal Form Games*

Jörn Rothe

London School of Economics,

Department of Management,

Managerial Economics and Strategy Group, Houghton Street, London, WC2A 2AE, United Kingdom E-mail: [email protected]

Abstract This paper presents an analysis of games in which rationality is not necessarily mutual knowledge. We argue that a player who faces a non-rational opponent faces genuine uncertainty that is best captured by non-additive beliefs. Optimal strategies can then be derived from assumptions about the rational player’s attitude towards uncertainty. This paper investigates the consequences of this view of strategic interaction. We present an equilibrium concept for normal form games, called Choquet-Nash Equilibrium, that formalizes this intuition, and study existence and properties of these equilibria. Our results suggest new robustness concepts for Nash equilibria.

Keywords: rationality, normal form game, uncertainty aversion, Choquet expected utility theory, Nash equilibrium, robustness.

1. Introduction

From a classical point of view, game theory is about the question of what constitutes rationality in a situation of strategic interaction (von Neumann and Morgenstern (1944), particularly sections 2.1 and 4.1). The players are assumed to be rational in a decision-theoretic sense, i. e. they act as if they possess a utility function over outcomes and beliefs given by a probability distribution over states, and maximise (subjective) expected utility (von Neumann and Morgenstern (1944), Savage (1954)). Beliefs, in turn, have to be compatible with what the players know. In particular, players are assumed to know that their opponents are themselves rational. Under additional assumptions, the equilibrium concept (Nash (1950)) can then be interpreted as a rationality concept (see, e.g., Tan and Werlang (1988), Aumann and Brandenburger (1995)).

However, the assumptions that players are rational, and that they know that their opponents are rational, are restrictive, both from an introspective and an experimental point of view. This paper addresses the question of what constitutes rationality if rationality is not mutual knowledge. As in Kreps et al. (1982), we distinguish between rational and non-rational players. However, we argue that the possibility that the opponent is not rational leads to uncertainty that cannot be

* First version 1999. This paper continues the research presented in Rothe (1996, 2009) and is based on chapter 2 of my PhD thesis submitted to the University of London in 1999. I am grateful for financial support from the Deutsche Forschungsgemeinschaft (Graduiertenkolleg Bonn), Deutscher Akademischer Austauschdienst, Economic and Social Research Council, the European Doctoral Program and the LSE.

adequately captured by beliefs that are necessarily representable by a probability measure. Thus, the analysis of games without mutual knowledge of rationality has to be based on a weaker definition of decision-theoretic rationality. In particular, Choquet-expected utility theory allows more general beliefs. Thus, we combine the analysis of Kreps et al. (1982) with Choquet-expected utility theory.

Choquet-expected utility theory (henceforth CEU) is due to Schmeidler (1989). Under Choquet-expected utility theory players are maximising expected utility subject to their beliefs, but their beliefs do not have to be additive. CEU is closely related to, but not quite identical with, maxmin expected utility theory (Gilboa and Schmeidler (1989)), which allows sets of additive beliefs. Whereas Savage's subjective expected utility theory reduces uncertainty to risk, CEU and its variants give rise to a qualitative difference between risk and uncertainty.

This difference is important in games if we distinguish between rational and non-rational players as in Kreps et al. (1982). A rational player is one who chooses his strategy so as to maximise utility given his beliefs. A rational player who faces a rational opponent can anticipate her strategy if he knows her utility function and can anticipate her beliefs. Consequently, a rational player who faces a rational opponent faces risk, in the sense that his beliefs are given by objective probabilities determined by best-reply considerations. Thus his beliefs are necessarily additive.

On the other hand, a rational player who faces a non-rational opponent faces true uncertainty, if all he knows is that a non-rational player does not necessarily choose a utility-maximising strategy. Under CEU, a rational player’s beliefs reflect his attitude towards uncertainty. As a result, it becomes possible to base a theory of rational decisions in games not on a player’s theory about how non-rational opponents play, but on his attitude towards uncertainty. Since CEU was motivated by phenomena that can be explained as uncertainty aversion — for instance the Ellsberg paradox — we also make this assumption.

We present an equilibrium concept, called Choquet-Nash equilibrium, that formalizes this intuition and discuss existence and properties of these equilibria in normal form games. We show that

- in normal form games Choquet-Nash equilibria always exist,

- not every rationalizable strategy is a Choquet-Nash equilibrium, and, conversely, non-rationalizable strategies may be equilibria,

- strictly dominated strategies are never rational, but elimination of such strategies cannot be iterated,

- robustness with respect to doubts about the rationality of the opponents is not captured by payoff-dominance or risk-dominance,

- mixed strategies may or may not be robust, depending on the game in question.

On this basis we formulate two equilibrium refinements: A Nash equilibrium is called strictly uncertainty aversion perfect if it continues to be an equilibrium as long as the belief in the opponents' rationality is sufficiently strong. Such equilibria need not exist. A Nash equilibrium is called uncertainty aversion perfect if it can be approximated by equilibria that do not require mutual knowledge of rationality. We show that such equilibria always exist, and that these refinements differ from those that are based on 'trembles' of otherwise fully rational opponents, i. e. trembling-hand perfect, proper and strictly perfect equilibria.

This paper makes three contributions. First, we extend the analysis of Kreps et al. (1982) (henceforth KMRW). In contrast to KMRW, we do not need to specify

a particular belief about the ‘type’ of an irrational opponent. Due to the absence of a theory of non-rational decision-making, such a specification is necessarily ad hoc. Moreover, the uniform distribution does not adequately model the ignorance about an irrational opponent, because it is not invariant under irrelevant changes of the game, for instance when adding a superfluous strategy that is a mere copy of an existing one. In our approach, ignorance can naturally be expressed as a non-additive probability.

More fundamentally, two difficulties arise with interpreting equilibria as rational strategies in the KMRW framework. First, interpreting equilibrium strategies as rational implicitly defines all non-equilibrium strategies as non-rational. Thus, a rational player's beliefs about a non-rational opponent should be consistent with this definition of non-rationality. This means that his beliefs should be consistent with any non-equilibrium strategy of the opponent. Secondly, a 'type' in a game with incomplete information corresponds to a consistent infinite hierarchy of beliefs. Thus, in KMRW the rational player believes that the opponent possesses such beliefs, even if he is not rational. In contrast, in our analysis an irrational opponent is a source of genuine uncertainty, and the question what constitutes a rational strategy is determined by a rational player's attitude towards uncertainty. Consequently, our analysis applies independently of the question whether the opponent can be modelled as a type.

The second contribution of this paper consists in a robustness analysis of Nash equilibria. Applying our solution concept to normal form games allows us to formalize how robust a Nash equilibrium is with respect to doubts about the rationality of the opponent. This robustness concept differs from existing ones, and shows how robustness is not a property of an equilibrium concept in general, but rather a property of specific equilibria in specific games.

The third contribution of this paper is that it extends the equilibrium concept to games in which players have non-additive beliefs. Here we extend solution concepts proposed by Dow and Werlang (1994), Eichberger and Kelsey (1994), Epstein (1997a), Haller (1995), Hendon et al. (1995), Klibanoff (1993), Lo (1995), Lo (1996), Marinacci (1994), Mukerji (1994), Ritzberger (1996), and Ryan (1997). This literature considers games in which players maximise CEU, or some variant of CEU. These papers show that it is possible to capture strategic phenomena that cannot be explained when players maximise subjective expected utility, and have also uncovered the difficulties that an extension of the equilibrium concept has to address. In our analysis we provide an explicit reason for the existence of uncertainty, and on this basis some of these difficulties can be avoided. In particular, it is not necessary to use simple capacities in the definition of an equilibrium, or to decide between the different support concepts that have been proposed for capacities, or to formulate an independence concept for capacities.1

This paper is organized as follows. In section 2 we define the equilibrium concept for two-player games and prove existence of Choquet-Nash equilibria. In section 3

1 After a first version of this paper was completed, I learnt of the related approach of Sujoy Mukerji (1994). His main concern is the consistent introduction of CEU into game theory, and he argues that this requires the KMRW framework. We fully agree with this; in addition, we argue in this paper that the converse also holds, i. e. non-additive beliefs overcome the limitations of the KMRW approach described above. For finer differences see section 5.

we derive properties of Choquet-Nash equilibria, formulate the two refinements of Nash equilibria, and compare them with standard solution concepts. In section 4 we discuss the extension to infinitely many strategies and more than two players. Section 5 compares the equilibrium concept with other equilibrium concepts that are based on Choquet expected utility and uncertainty aversion. Section 6 presents an equilibrium concept that allows players to have a strict preference for mixed strategies. Section 7 concludes.

2. Choquet-Nash Equilibrium

A game in normal form is defined by specifying the set of players I, for each player i a set of strategies Si and each player's von Neumann-Morgenstern utility function ui. In particular, players are assumed to be rational: when faced with uncertainty they maximise subjective expected utility. This concept of rationality has been axiomatized by Savage (1954).

In a game, rational beliefs must not only satisfy Savage’s axioms, but must in addition be consistent with what players know about the structure of the game and about each other’s rationality. In particular, if a player can anticipate which strategies are rational and if he knows that his opponent is rational, then he can anticipate his opponent’s play. Precise arguments along this line are developed, e.g., in Tan and Werlang (1988) and Aumann and Brandenburger (1995).

If rationality is not mutual knowledge, the question thus arises how a rational player should act if he knows that the opponent is not rational. In that case Savage's axioms imply that the rational player should have a belief given by a unique probability measure over the opponent's actions. If neither a theory of bounded rationality nor a stable empirical regularity of non-rational behaviour is available, there seems to be no foundation for this belief. The idea of this paper is that a weaker rationality concept allows further assumptions about the rational player from which rational actions can be derived.

2.1. Uncertainty Aversion

A key axiom in subjective expected utility theory is the independence axiom (Anscombe and Aumann (1963), Samuelson (1952)). Intuitively, the independence axiom says that if a decision maker prefers one act over another then he should also prefer a probability mixture of the first and a third act over the same mixture of the second and the third act: Either this probability mixture will reduce to a choice between the first two acts, or not, in which case the decision-maker is left with the third act in either case.2 The descriptive validity of the independence axiom is questioned by the Allais paradox, the Ellsberg paradox and similar findings. Since its consequence is that a decision maker's beliefs can be represented by a probability measure, it also places a high demand on a player's rationality.

CEU weakens the independence axiom (Schmeidler (1989)). Under CEU, the independence axiom is not assumed to hold for all acts, but only for acts that are 'comonotonic'. Two acts3 f, f' are comonotonic if f(ω) > f(ω') implies f'(ω) ≥ f'(ω'),

2 However, this interpretation equates the probability mixture with a two-stage lottery, i. e. also assumes a version of the 'reduction of compound lotteries axiom'; see Kreps (1988), pp. 50-52, for the expected utility case.

3 Here, acts f ∈ F map states ω ∈ Ω into von Neumann-Morgenstern utilities. The acts are measurable with respect to events E ∈ Σ ⊆ 2^Ω.

i.e. both acts give rise to the same preference ordering over states. In the following figures, acts f, g and h are pairwise comonotonic; f (or g or h) and h' are not.

          ω1   ω2                       ω1   ω2
  f       10    6      ½f + ½h         10    3
  g       16    0      ½g + ½h         13    0
  h       10    0      ½f + ½h'         5    5
  h'       0    4      ½g + ½h'         8    2

        Fig. 1                   Fig. 2

Restricting the sure-thing principle to comonotonic acts means that if the player is indifferent between f and g then he must also be indifferent between ½f + ½h and ½g + ½h, because f, g and h are comonotonic. However, he may, e.g., strictly prefer ½f + ½h' to ½g + ½h'. The reason is that mixtures of non-comonotonic acts can be interpreted as "hedging", i.e. distributing utility across states. Uncertainty aversion means that players may rationally act as if they hedged against uncertainty. Thus, in contrast to subjective expected utility theory, CEU allows the introduction of an additional assumption about rational preferences over acts that characterizes the player's attitude towards uncertainty.4

Schmeidler (1989) has shown that behaviour that is rational in this weaker sense can still be described by expected-utility maximisation. Players still act as if they possess a von Neumann-Morgenstern utility function and beliefs, and take expected values. These beliefs, however, are no longer given by a probability measure over events, but by a capacity, i.e. a non-additive measure over events. Formally, a capacity v maps Σ into [0, 1] such that (i) v(∅) = 0, (ii) v(Ω) = 1 and (iii) E ⊆ E' ⇒ v(E) ≤ v(E'). Property (iii) weakens the finite-additivity requirement for finitely-additive measures: E ∩ E' = ∅ ⇒ v(E ∪ E') = v(E) + v(E'). Note that non-additive beliefs still may, but in general need not, be additive.

The expectation of a real-valued random variable X with respect to a non-additive measure v is defined in Choquet (1953). If X takes finitely many values a_1 > ... > a_n, the Choquet integral is given by5

∫_Ω X dv := Σ_{i=1}^{n} v(X ≥ a_i) · Δa_i,

where Δa_i := a_i - a_{i+1} and a_{n+1} := 0.

4 This preference for randomisation argument exploits the structure of the Anscombe-Aumann model (Eichberger and Kelsey (1996)). Also, comonotonic independence may be too strong a requirement for uncertainty aversion (Epstein (1997b), Ghirardato and Marinacci (1997)). In our game-theoretic context these are side issues, however.

5 As usual, we write v(X ≥ t) for v({ω ∈ Ω | X(ω) ≥ t}). The integrals on the right-hand side are extended Riemann integrals. If v is additive this is the usual expectation.

Formally, uncertainty aversion can be characterized in terms of the capacity v. The capacity v displays uncertainty aversion iff it is supermodular, i.e. v(E) + v(E') ≤ v(E ∩ E') + v(E ∪ E'). The 'probability weights' v(E) of an uncertainty averse decision maker do not, in general, add up to 1. Maximisation of Choquet expected utility under uncertainty aversion then corresponds to allocating the residual probability weight to outcomes that are worst for the player.
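As a small illustration of these two definitions, the following Python sketch computes the Choquet integral of a finite-valued act via the formula above and checks a capacity for supermodularity. The states, capacity values and act are made-up illustrative numbers, not taken from the paper.

```python
from itertools import chain, combinations

states = ["w1", "w2", "w3"]

def powerset(xs):
    return [frozenset(c) for c in chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))]

# A capacity: monotone set function with v(empty) = 0 and v(all states) = 1.
# The particular numbers are purely illustrative.
v = {frozenset(): 0.0,
     frozenset({"w1"}): 0.2, frozenset({"w2"}): 0.1, frozenset({"w3"}): 0.1,
     frozenset({"w1", "w2"}): 0.5, frozenset({"w1", "w3"}): 0.4, frozenset({"w2", "w3"}): 0.3,
     frozenset({"w1", "w2", "w3"}): 1.0}

def choquet(X, v):
    """Choquet integral of X (a dict state -> value >= 0) w.r.t. the capacity v,
    using the finite formula  sum_i v(X >= a_i) * (a_i - a_{i+1})  with a_{n+1} = 0."""
    values = sorted(set(X.values()), reverse=True)        # a_1 > a_2 > ... > a_n
    total = 0.0
    for i, a in enumerate(values):
        upper = frozenset(s for s in X if X[s] >= a)      # the event {X >= a_i}
        a_next = values[i + 1] if i + 1 < len(values) else 0.0
        total += v[upper] * (a - a_next)
    return total

def is_supermodular(v):
    """Check v(E) + v(F) <= v(E & F) + v(E | F) for all events (uncertainty aversion)."""
    events = powerset(states)
    return all(v[E] + v[F] <= v[E & F] + v[E | F] + 1e-12 for E in events for F in events)

X = {"w1": 10.0, "w2": 6.0, "w3": 0.0}   # an act, expressed in utility terms
print("Choquet integral:", choquet(X, v))
print("supermodular (uncertainty averse):", is_supermodular(v))
```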

2.2. Equilibrium

Let (I, S, u) be a finite two-player game in normal form. If player i knew that his opponent was non-rational, CEU implies that his belief is given by a not necessarily additive capacity vj over Sj. Moreover, his expected utility from his pure strategy si is given by the Choquet expectation ui(si, vj) := ∫_{Sj} ui(si, sj) dvj. We define his payoff from a mixed strategy σi ∈ ΔSi as ui(σi, vj) := Σ_{si∈Si} σi(si) · ui(si, vj).

In a game in which rationality is not mutual knowledge, player 1 will thus take both possibilities into account: that the opponent is rational and that he need not be. If he can anticipate the rational strategies, his overall expected utility will be the weighted sum of his expected utility from interacting with a rational opponent, and the Choquet expected utility from interacting with a non-rational opponent. The weight corresponds to his degree of belief in the opponent’s rationality. In a weak Choquet-Nash equilibrium, these rational strategies are determined endogenously.

Definition 1. Let (I, S, u) be a finite two-player game in normal form. Let 0 ≤ ε1, ε2 ≤ 1. Let v1 be a capacity on S1 and v2 be a capacity on S2. Then σ* is a weak Choquet-Nash equilibrium iff (if and only if)

σ1* ∈ argmax_{σ1∈ΔS1} [ (1 - ε1) · u1(σ1, σ2*) + ε1 · u1(σ1, v2) ],

σ2* ∈ argmax_{σ2∈ΔS2} [ (1 - ε2) · u2(σ1*, σ2) + ε2 · u2(v1, σ2) ].

Note that if ε1 = ε2 = 1 then each player believes that he faces a non-rational opponent, and thus the question what constitutes a rational strategy is purely decision-theoretic. On the other hand, if ε1 = ε2 = 0 then rationality is mutual knowledge. Note also that this definition assumes that the rational players know each other's beliefs. Finally, notice that this equilibrium concept makes no assumption about the players' attitudes towards uncertainty; in particular, they may be uncertainty loving.

The difference between this approach and the ‘crazy type’ approach of Kreps et al. (1982) is this: In KMRW, the ‘irrational’ players have a different utility function or a different strategy set. But they are fully rational as ‘types’ of a game with incomplete information. The specification of the utility function corresponds to a belief of the rational player about the ‘irrational’ opponent’s play. In contrast, in the present approach the rational players treat the non-rational players as part of ‘nature’, rather than as ‘types’. The specification of the rational players’ ‘beliefs’ reflect their own attitude towards uncertainty, rather than an assumption about the opponent.

In general, when players are not expected utility maximisers, an equilibrium need not exist (Crawford (1990), Dekel, Safra and Segal (1991)). However, the following proposition shows that this problem does not arise under CEU.6

6 Note that this existence result also holds under uncertainty love. However, this is due to the order of integration, see section 6.

Proposition 1. For all ε1, ε2, v1 and v2 a weak Choquet-Nash equilibrium exists.

Proof. The proof is the standard argument due to Nash (1950). The joint best reply correspondence (σ1, σ2) ↦ (σ1*(σ2), σ2*(σ1)), where σi*(σj) = argmax_{σi∈ΔSi} [ (1 - εi) · ui(σi, σj) + εi · ui(σi, vj) ], maps the compact convex set ΔS1 × ΔS2 into itself. Since the objective function is linear in σi, it is continuous, therefore a maximum exists and the best reply correspondence is non-empty- and convex-valued. Since ui is continuous in σj, it also has a closed graph. By Kakutani's Fixed Point Theorem, (σ1*(σ2), σ2*(σ1)) has a fixed point, which is, by definition, a weak Choquet-Nash equilibrium.

In this generality the equilibrium concept is difficult to apply, because the beliefs εi and vi have to be specified. We therefore make three simplifying assumptions: First, we assume that players share a common prior about the degree of mutual knowledge of rationality. This assumption is for simplicity only, but also has two useful side effects. It avoids any ad hoc asymmetry, and it makes the assumption that players know each other's beliefs less demanding. Secondly, we assume that players are totally ignorant about the behaviour of a non-rational opponent. There are two reasons for this ignorance: Our solution concept specifies rational strategies only, so it does not restrict at all the range of non-rational strategies. Thus, complete ignorance is a consistency requirement. Also, there is no exogenous theory of non-rational decision making. As a consequence, every assumption about the shape of a rational player's beliefs about his non-rational opponent is ad hoc. In addition, a useful side effect is that the assumption that players know the rational opponent's beliefs is less restrictive. Finally, we consider the case that the players are uncertainty averse. Uncertainty aversion is the natural explanation of behavior observed in the Ellsberg paradox.

Complete ignorance can naturally be captured by 'simple capacities': the capacity vj on Sj with

vj(E) = 0 for every event E ≠ Sj, and vj(Sj) = 1.

If player i holds this belief vj about a non-rational opponent, he is only certain that the opponent will choose one of his available actions, but is unable to assign positive probability to any particular set of actions.

The Choquet expectation of a utility function with respect to this simple capacity reflects uncertainty aversion, since all probability is allocated to the worst realization, i. e.

ui(si, vj) = min_{sj∈Sj} ui(si, sj).

A Choquet-Nash equilibrium is a weak Choquet-Nash equilibrium with these additional assumptions.

Definition 2. Let (I, S, u) be a finite two-player game in normal form. Let 0 ≤ ε ≤ 1. Then σ* is a Choquet-Nash equilibrium iff7

σ1* ∈ argmax_{σ1∈ΔS1} [ (1 - ε) · u1(σ1, σ2*) + ε · Σ_{s1∈S1} σ1(s1) · min_{s2∈S2} u1(s1, s2) ],

σ2* ∈ argmax_{σ2∈ΔS2} [ (1 - ε) · u2(σ1*, σ2) + ε · Σ_{s2∈S2} σ2(s2) · min_{s1∈S1} u2(s1, s2) ].

7 In the remaining sections, we also use the notation Σ_{si∈Si} σi(si) · min_{sj∈Sj} ui(si, sj) = ∫_{Si} min_{sj∈Sj} ui(si, sj) dσi.
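To see definition 2 at work, the following Python sketch checks whether a given pair of mixed strategies is a Choquet-Nash equilibrium of a bimatrix game for a given ε. Since each player's objective is linear in his own mixed strategy, it suffices to verify that every pure strategy in his support attains the maximal objective value among his pure strategies. The example game is the one of figure 5 in section 3.2, where the threshold for (T, L) turns out to be 4/5; the code itself is illustrative and not from the paper.

```python
import numpy as np

def cne_value(U, own_pure, sigma_opp, eps):
    """(1 - eps) * expected payoff of a pure strategy against the rational opponent's
    mixed strategy + eps * worst-case payoff against a completely unknown opponent."""
    return (1 - eps) * U[own_pure] @ sigma_opp + eps * U[own_pure].min()

def is_cne(U1, U2, sigma1, sigma2, eps, tol=1e-9):
    """Definition 2: every strategy in a player's support must maximize the
    Choquet-Nash objective; by linearity, pure deviations are the only ones to check."""
    for U, own, opp in ((U1, sigma1, sigma2), (U2.T, sigma2, sigma1)):
        values = np.array([cne_value(U, s, opp, eps) for s in range(U.shape[0])])
        if values[own > tol].min() < values.max() - tol:
            return False
    return True

# The game of figure 5 (row player's and column player's payoff matrices).
U1 = np.array([[5.0, 0.0],
               [1.0, 3.0]])
U2 = np.array([[5.0, 1.0],
               [0.0, 3.0]])

TL = (np.array([1.0, 0.0]), np.array([1.0, 0.0]))   # the profile (T, L)
for eps in (0.0, 0.5, 0.9):
    print(eps, is_cne(U1, U2, *TL, eps))            # True, True, False
```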

It follows from proposition 1 that in every finite two-player game in normal form a Choquet-Nash equilibrium (henceforth CNE) exists. Moreover, every symmetric game also has a symmetric Choquet-Nash equilibrium.

Definition 3. Let (I, S, u) be a finite two-player game in normal form. The game is symmetric iff S1 = S2 and u1(s1, s2) = u2(s2, s1) for all s1, s2. A strategy combination (σ1, σ2) is symmetric iff σ1 = σ2.

Remark 1. For all ε, in a symmetric game a symmetric Choquet-Nash equilibrium exists.

Proof. Again, the proof is standard. The result is proved as in proposition 1, except that the fixed point argument is applied to the own-strategy best reply correspondence σ1*(σ1), which maps ΔS1 into itself.

3. Properties of Choquet-Nash Equilibria

The aim of this section is to present the properties of Choquet-Nash equilibria. Section 3.1 relates them to dominance and rationalizability. In section 3.2, we relate Choquet-Nash equilibria to the robustness of Nash equilibria. This will lead to the definition of two equilibrium refinements (sections 3.3 and 3.4). Section 3.5 compares them with minimax strategies in zero-sum games. Finally, section 3.6 compares them with other equilibrium refinements (trembling-hand perfect, proper and strictly perfect equilibria).

3.1. Dominance and Rationalizability

The following result implies that, independently of the degree of mutual knowledge of rationality, no strictly dominated strategy is rational.

Lemma 1. Let (I, S, u) be a finite two-player game in normal form. Let 0 ≤ ε ≤ 1. Let σ* be a Choquet-Nash equilibrium. Then if σi*(si) > 0, si is a best response to σj* and ε, i. e.

si ∈ argmax_{si'∈Si} [ (1 - ε) · ui(si', σj*) + ε · min_{sj∈Sj} ui(si', sj) ].

Proof. Again, the proof is standard. If si is not a best response, then some other strategy si' gives higher expected utility than si. Thus the player can increase his overall utility relative to σi* by playing σi(si') = σi*(si') + σi*(si), σi(si) = 0 and σi(si'') = σi*(si'') for all other strategies si'', which contradicts the assumption that σi* is a best reply.

It is important to notice, however, that strict dominance cannot be iterated, as the game in Figure 3 shows:

          L          R
   T    1, 1     -99, 0
   B    0, 1       0, 0

        Fig. 3

In this game playing L is a strictly dominant strategy for player 2. Consequently, iterated strict dominance yields T as the unique rational strategy for player 1, if rationality is mutual knowledge. In particular, (T, L) is the unique equilibrium and the unique rationalizable strategy profile of the game.

However, (T, L) is not a plausible profile unless player 1 is convinced that player 2 is rational. The CNE in this game depends on ε. In every CNE, player 2 will play L because this is his strictly dominant strategy. However, unless ε ≤ 1/100, only strategy B is rational for player 1.

Note that this shows that non-rationalizable strategies may be CNE-strategies. The 'Matching Pennies' game in figure 4 shows that, conversely, not every rationalizable strategy is a CNE.


          L          R
   T    1, -1     -1, 1
   B   -1, 1       1, -1

        Fig. 4

Note that the best reply correspondence for a Choquet-Nash equilibrium in the ‘Matching Pennies’ game is given by

σi*(σj) = argmax_{σi∈ΔSi} [ (1 - ε) · ui(σi, σj) + ε · (-1) ],

which differs from the Nash best reply correspondence only by a factor and a constant. Consequently, independently of ε, only the mixed strategies σ1(T) = σ1(B) = σ2(L) = σ2(R) = 1/2 form a CNE. This is also the unique Nash equilibrium, but every strategy profile is rationalizable. We have thus established proposition 2:

Proposition 2. Non-rationalizable strategy profiles may be Choquet-Nash equilibria. Conversely, not every rationalizable strategy profile is a Choquet-Nash equilibrium.

3.2. The Robustness of Nash Equilibria

The definition of a Choquet-Nash equilibrium collapses to the definition of Nash equilibrium if ε = 0. So any Nash equilibrium is a CNE for ε = 0. We will show that a given Nash equilibrium may also be a CNE for ε > 0, and that the highest such ε can be regarded as a measure of robustness of a given Nash equilibrium.8 To establish this claim, we first need the following lemma:

8 See also Eichberger and Kelsey (1994).


Lemma 2. Let (I, S, u) be a finite two-player game in normal form. Let 0 ≤ ε ≤ 1 and let σ* be a Choquet-Nash equilibrium for ε. If σ* is a Nash equilibrium, then it is also a Choquet-Nash equilibrium for all 0 ≤ ε' ≤ ε.

Proof. Let 0 ≤ ε ≤ 1 and 0 ≤ ε' ≤ ε. Since σ* is a Nash equilibrium, ui(σi*, σj*) ≥ ui(σi, σj*) for all σi. Since σ* is a CNE for ε,

(1 - ε) · ui(σi*, σj*) + ε · ∫_{Si} min_{sj∈Sj} ui(si, sj) dσi*
≥ (1 - ε) · ui(σi, σj*) + ε · ∫_{Si} min_{sj∈Sj} ui(si, sj) dσi

for all σi and all i. Consequently, for any α ∈ [0, 1],

α · ui(σi*, σj*) + (1 - α) · [ (1 - ε) · ui(σi*, σj*) + ε · ∫_{Si} min_{sj∈Sj} ui(si, sj) dσi* ]
≥ α · ui(σi, σj*) + (1 - α) · [ (1 - ε) · ui(σi, σj*) + ε · ∫_{Si} min_{sj∈Sj} ui(si, sj) dσi ]

for all σi. So for α = 1 - ε'/ε we have α ∈ [0, 1] and

(1 - ε') · ui(σi*, σj*) + ε' · ∫_{Si} min_{sj∈Sj} ui(si, sj) dσi*
≥ (1 - ε') · ui(σi, σj*) + ε' · ∫_{Si} min_{sj∈Sj} ui(si, sj) dσi

for all σi and for all i ∈ I, i. e. σ* is also a CNE for ε'.

On the basis of lemma 2, we can now define a measure of robustness of a Nash equilibrium with respect to doubts about the rationality of the opponent:

Definition 4. Let (I, S, u) be a finite two-player game in normal form and let σ* be a Nash equilibrium. Then the degree ε̄(σ*) of uncertainty aversion robustness of σ* is given by the largest ε ∈ [0, 1] for which σ* is a Choquet-Nash equilibrium.

Note that ε̄ exists because the expected utility functions are continuous in ε.

As the following game shows, this measure of robustness formalizes a different intuition about robustness than payoff-dominance and risk-dominance. The game in figure 5 has two strict Nash equilibria:

          L          R
   T    5, 5       0, 1
   B    1, 0       3, 3

        Fig. 5

The equilibrium (T, L) dominates the equilibrium (B, R) both with respect to payoff-dominance and with respect to risk-dominance. However, ε̄(T, L) = 4/5, since if a rational opponent plays L (respectively T) then it is only rational to play T (respectively L) as long as ε ≤ 4/5. On the other hand, ε̄(B, R) = 1, since if a rational opponent plays R (respectively B) then it is never rational to deviate from B to T (respectively from R to L).
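The robustness degree of definition 4 can be computed exactly for any Nash equilibrium of a finite bimatrix game: since the objective is linear both in ε and in a player's own mixed strategy, σ* remains a CNE as long as every pure strategy in a player's support beats every pure deviation, which gives a closed-form threshold per deviation. The following Python sketch (illustrative, not from the paper) implements this and reproduces the two values for figure 5; applied to the mixed equilibrium of figure 6 below it returns 0, in line with remark 3.

```python
import numpy as np

def robustness(U1, U2, sigma1, sigma2, tol=1e-9):
    """Largest eps for which the Nash equilibrium (sigma1, sigma2) of the bimatrix
    game (U1, U2) is a Choquet-Nash equilibrium (definition 4).

    For each player, each support strategy s and each pure strategy t we need
        (1 - eps) * [u(s, opp) - u(t, opp)] + eps * [min_row(s) - min_row(t)] >= 0.
    With A = u(s, opp) - u(t, opp) >= 0 (Nash) and B = min_row(s) - min_row(t),
    this holds for eps <= A / (A - B) when B < 0, and for every eps otherwise."""
    eps_bar = 1.0
    for U, own, opp in ((U1, sigma1, sigma2), (U2.T, sigma2, sigma1)):
        exp_pay = U @ opp            # payoff of each own pure strategy vs the rational opponent
        worst = U.min(axis=1)        # worst-case payoff of each own pure strategy
        for s in np.flatnonzero(own > tol):
            for t in range(U.shape[0]):
                A = exp_pay[s] - exp_pay[t]
                B = worst[s] - worst[t]
                if B < -tol:
                    eps_bar = min(eps_bar, A / (A - B))
    return eps_bar

# The game of figure 5.
U1 = np.array([[5.0, 0.0], [1.0, 3.0]])
U2 = np.array([[5.0, 1.0], [0.0, 3.0]])

print(robustness(U1, U2, np.array([1.0, 0.0]), np.array([1.0, 0.0])))   # (T, L): 0.8
print(robustness(U1, U2, np.array([0.0, 1.0]), np.array([0.0, 1.0])))   # (B, R): 1.0
```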

We next show that strict Nash equilibria are robust with respect to doubts of the rationality of the opponent:9


9 Note that strict Nash equilibria are pure.

Remark 2. Let (I, S, u) be a finite two-player game in normal form and let s* be a strict Nash equilibrium. Then there exists an ε̄ > 0 such that s* is a Choquet-Nash equilibrium for all 0 ≤ ε ≤ ε̄.

Proof. Define for each i ∈ I

δi := max_{si∈Si} [ min_{sj∈Sj} ui(si, sj) - min_{sj∈Sj} ui(si*, sj) ],
αi := min_{si∈Si, si≠si*} [ ui(si*, sj*) - ui(si, sj*) ].

Note that αi > 0 (since s* is strict) and δi ≥ 0. Define εi := αi / (αi + δi) and ε̄ := min_{i∈I} εi. Note that ε̄ > 0. Then for any i and any si ≠ si*,

(1 - ε̄) · [ ui(si*, sj*) - ui(si, sj*) ]
≥ (1 - ε̄) · αi
≥ (1 - εi) · αi
= εi · δi
≥ ε̄ · δi
≥ ε̄ · [ min_{sj∈Sj} ui(si, sj) - min_{sj∈Sj} ui(si*, sj) ].

It follows from lemma 1 that only pure strategy deviations are relevant, so s* is a CNE for ε̄, and by lemma 2 for all ε ≤ ε̄.

The requirement that a Nash equilibrium is strict is sufficient for ε̄ > 0, but it is not necessary, as the 'Matching Pennies' game in figure 4 shows. The non-strictness of mixed strategy equilibria is sometimes regarded as a conceptual weakness, because the players, while having no incentive to deviate, still seem to lack a positive incentive to choose their equilibrium strategies. This has led to a justification of mixed strategy equilibria by purification arguments, i. e. in terms of an embedding of the original game into a game with (slight) incomplete information.10 However, both this criticism of mixed equilibria and their defense apply equally to all mixed strategy equilibria. Next, we show that the robustness measure ε̄ formalizes the intuition that mixed strategy equilibria are more plausible in some games than in others.

          L          R
   T    9, 9       0, 7
   B    7, 0       8, 8

        Fig. 6

The game in figure 6 has a mixed strategy Nash equilibrium σ1*(T) = σ2*(L) = 4/5, σ1*(B) = σ2*(R) = 1/5. Given that the rational opponent plays σ*, a player's expected payoff from a rational opponent is independent of his own strategy. Thus a rational player will only take into account the expected payoff from a non-rational opponent. This payoff is 0 when he plays T (respectively L) and 7 when he plays B (respectively R).

10 Note, however, that the justification of Nash equilibria given in Aumann and Brandenburger (1995) is independent of the question whether the equilibrium is pure or mixed.


So a rational player will always deviate to B (respectively R) if he expects a rational opponent to play according to σ* and there is any doubt about the opponent's rationality, however small, i. e. whenever ε > 0.

The relevant stability property of mixed strategy equilibria is given by remark 3:

Remark 3. Let (I, S, u) be a finite two-player game in normal form. Let σ* be a Nash equilibrium. Let i ∈ I, si, si' ∈ Si, σi*(si) > 0 and σi*(si') > 0. Then if min_{sj∈Sj} ui(si, sj) ≠ min_{sj∈Sj} ui(si', sj), then ε̄(σ*) = 0.

Proof. If σ* is a CNE for some ε > 0, both si and si' must be best replies given ε. However, since σ* is also a Nash equilibrium, both si and si' are also best replies to σj* for ε = 0. So if ε > 0 we must have min_{sj∈Sj} ui(si, sj) = min_{sj∈Sj} ui(si', sj), a contradiction.

The following example shows that even for a genuinely mixed Nash equilibrium11 we may have 0 < ε̄ < 1, i. e. the Nash equilibrium is robust, but not trivially so:

          L          R
   T    2, 1       0, 1
   B    1, 0       1, 0

        Fig. 7

Consider a mixed equilibrium in which player 1 plays T and player 2 plays q* = Prob(L) > 1/2. Player 1 then prefers T as long as (1 - ε) · 2q* ≥ 1, i. e. as long as ε ≤ 1 - 1/(2q*). Player 2 is always indifferent between L and R, so ε̄ = 1 - 1/(2q*) > 0. Note that for q* = 1/2 every p* = Prob(T) ∈ [0, 1] is also a Nash equilibrium; however, for any such equilibrium with p* > 0 we have ε̄ = 0, i. e. such equilibria are not robust. The reason is that if there is a positive probability, however small, that player 2 is not rational, player 1 will prefer to play B if a rational opponent plays q* = 1/2.

Note, however, that we cannot have 0 < ε̄ < 1 for Nash equilibria in 2 x 2 games in which both players use genuinely mixed strategies. The following game shows that 0 < ε̄ < 1 is possible even if both players use genuinely mixed strategies:

          L          C          R
   T    4, 4       0, 0       0, 1
   M    0, 0       4, 4       0, 1
   B    1, 0       1, 0       1, 1

        Fig. 8

Consider the mixed strategy Nash equilibrium p1* = Prob(T) = p2* = Prob(M) = 1/2, q1* = Prob(L) = q2* = Prob(C) = 1/2. This is also a CNE as long as ε ≤ ε̄ := 1/2,

11 A Nash equilibrium is genuinely mixed if at least one player chooses a non-degenerate mixed strategy.


because a rational player will receive 2 from a rational opponent whom he meets with probability (1 - ε), but 0 from a non-rational opponent if he plays the equilibrium strategy. Deviating to his third pure strategy will give him 1 in either case.

So far, all robust equilibria were quasi-strict. Recall that a Nash equilibrium is quasi-strict if every pure best reply to the equilibrium strategies of the opponent is in the support of the equilibrium strategy (Harsanyi (1973)). We next show that this is not true in general, i. e. that robustness in our sense neither implies nor is implied by quasi-strictness of a Nash equilibrium.

          L          R
   T    2, 1       1, 0
   B    2, 1       0, 0

        Fig. 9

Consider the Nash equilibrium (T, L) of the game in figure 9. It is not quasi-strict, because B is also a best reply to L. Yet it is robust, i. e. ε̄ = 1, because for player 2 L is strictly dominant. Player 1 knows that a rational opponent will play L, and in case the opponent is non-rational he will strictly prefer T to B. This shows that robustness does not imply quasi-strictness. Conversely, the mixed strategy equilibrium in the game in figure 6 is quasi-strict, yet it is not robust.

We have thus established proposition 3, which shows that our robustness concept differs from quasi-strictness:

Proposition 3. Robustness and quasi-strictness are unrelated, i.e. Nash equilibria may be robust and quasi-strict, non-robust and quasi-strict, robust and non-quasi-strict, or neither.

It remains to consider the most important special case of non-quasi-strict equilibria, namely Nash equilibria in weakly dominated strategies. We show that such equilibria may or may not be robust.

          L          R
   T    2, 2       0, 2
   B    2, 0       1, 1

        Fig. 10

The Nash equilibrium (T, L) is payoff-dominant, but involves weakly dominated strategies and is therefore not quasi-strict. For this equilibrium ε̄ = 0, so it is not robust. However, consider the game in figure 11:





          L          C          R
   T    2, 2       0, 0       0, 1
   B    2, 0       0, 0       1, 1

        Fig. 11

Again (T, L) is a payoff-dominant Nash equilibrium in weakly dominated strategies. However, it is indeed robust. For both T and B, a rational player 1 expects 2 from a rational opponent playing his equilibrium strategy L, and 0 from a non-rational opponent. Player 2, on the other hand, strictly prefers L to R as long as ε ≤ 1/2, so ε̄ = 1/2.

To summarize, we have shown that ε̄ can be interpreted as a measure of robustness of a Nash equilibrium with respect to doubts about the rationality of the opponent. This robustness concept differs from payoff dominance, risk dominance, strictness or quasi-strictness. This leads us to suggest two refinements of Nash equilibrium.

3.3. Strict Uncertainty Aversion Perfection

So far, we have shown how the concept of Choquet-Nash equilibrium sheds light on the robustness of Nash equilibria. This suggests using this robustness analysis as the basis for equilibrium refinements. Intuitively, Nash equilibria are robust if they can be approximated by Choquet-Nash equilibria. Since this approximation can take different forms, we define two equilibrium refinements: strictly uncertainty aversion perfect equilibria (section 3.3) and uncertainty aversion perfect equilibria (section 3.4).

Definition 5. Let (I, S, u) be a finite two-player game in normal form. Let σ* be a strategy combination. Then σ* is a strictly uncertainty aversion perfect Nash equilibrium if and only if there exists a sequence (εk)k∈ℕ, with 0 < εk < 1 and lim_{k→∞} εk = 0, such that σ* is a Choquet-Nash equilibrium for every εk.

We first note that the strictly uncertainty aversion perfect equilibria are those with a strictly positive degree of uncertainty aversion robustness:

Lemma 3. Let (I, S, u) be a finite two-player game in normal form. Let σ* be a Nash equilibrium. Then σ* is a strictly uncertainty aversion perfect Nash equilibrium if and only if ε̄(σ*) > 0.

Proof. Necessity ('only if') is immediate because ε̄(σ*) ≥ εk > 0 for every k. Sufficiency ('if') follows from lemma 2 by considering the sequence εk = ε̄(σ*)/k.

Next, we show that a strictly uncertainty aversion perfect equilibrium is indeed a Nash equilibrium. This establishes that this concept is indeed an equilibrium refinement:

Remark 4. Let (I,S,u) be a finite two-player game in normal form. A strictly uncertainty aversion perfect Nash equilibrium is indeed a Nash equilibrium.


Proof. Let εk > 0. Since σ* is a CNE for εk we have

(1 - εk) · ui(σi*, σj*) + εk · ∫_{Si} min_{sj∈Sj} ui(si, sj) dσi*
≥ (1 - εk) · ui(σi, σj*) + εk · ∫_{Si} min_{sj∈Sj} ui(si, sj) dσi

for all σi and all i ∈ I. These expected utility functions are continuous in εk, so the inequalities also hold in the limit as εk → 0.

We next study the existence question:

Proposition 4. A strictly uncertainty aversion perfect Nash equilibrium need not exist.

Proof. Consider the game in figure 12:

          L          C          R
   T    2, 2       2, 0       0, 1
   B    2, 0       1, 1       1, 0

        Fig. 12

Let p1 = Prob(T), p2 = Prob(B), q1 = Prob(L), q2 = Prob(C), q3 = Prob(R). Any Nash equilibrium of this game takes the form p1* ≥ 1/3, q1* = 1. Each such (p*, q*) is indeed an equilibrium. Conversely, there can be no equilibrium with q3 > 0: R gives player 2 the payoff p1, which never exceeds the payoff 2p1 from L, and if p1 = 0 then C is strictly better than R. There can also be no equilibrium with q2 > 0: C is a best reply only if 1 - p1 ≥ 2p1, i. e. p1 ≤ 1/3, but with q2 > 0 and q3 = 0 player 1 strictly prefers T, so p1 = 1, a contradiction. Hence q1 = 1, and L is then a best reply for player 2 if and only if 2p1 ≥ 1 - p1, i. e. p1 ≥ 1/3.

However, none of these equilibria is strictly uncertainty aversion perfect: Player 1 knows that he can expect 2 from a rational opponent whether he plays T or B, but from a non-rational opponent he will expect 0 from T and 1 from B. As long as εk > 0, he will play B.

This result suggests looking for existence in a subclass of games. Surprisingly, not even 2 x 2 games always possess a strictly uncertainty aversion perfect Nash equilibrium, as the game in figure 13 shows:

          L          R
   T    2, 0       0, 2
   B    1, 2       2, 1

        Fig. 13

This game has a unique Nash equilibrium, in genuinely mixed strategies: p* = Prob(T) = 1/3, q* = Prob(L) = 2/3. However, if player 1 expects a rational opponent to play q*, he will strictly prefer B to T, since he will achieve the same utility from a rational opponent, but a higher utility in case the opponent is non-rational. So ε̄ = 0, and the claim follows from lemma 3.

Finally, the following remark characterises strictly uncertainty aversion perfect equilibria. It will be useful when we study zero-sum games and standard equilibrium refinements in sections 3.5 and 3.6.

Remark 5. Let (I, S, u) be a finite two-player game in normal form. Let σ* be a Nash equilibrium. Then σ* is strictly uncertainty aversion perfect if and only if there are no i ∈ I, si ∈ Si and si' ∈ supp σi* such that

ui(si, σj*) = ui(σi*, σj*) and min_{sj∈Sj} ui(si, sj) > min_{sj∈Sj} ui(si', sj).

Proof. Suppose for some player i ∈ I there exist si ∈ Si and si' ∈ supp σi* such that ui(si, σj*) = ui(σi*, σj*) and min_{sj∈Sj} ui(si, sj) > min_{sj∈Sj} ui(si', sj). Then, as long as ε > 0, shifting probability from si' to si is profitable for player i, because he will expect the same utility from a rational opponent as under σi*, but a higher utility from a non-rational opponent. So ε̄(σ*) = 0, i. e. σ* is not strictly uncertainty aversion perfect. Conversely, if no such triple exists, player i has no profitable deviation for sufficiently small ε > 0.
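The characterization in remark 5 is easy to check mechanically. The following Python sketch (illustrative, with helper names of our own choosing) tests a Nash equilibrium of a bimatrix game for strict uncertainty aversion perfection and, applied to the mixed equilibrium of the game in figure 13, reports that it fails.

```python
import numpy as np

def strictly_uap(U1, U2, sigma1, sigma2, tol=1e-9):
    """Remark 5: sigma* is strictly uncertainty aversion perfect iff no player has a
    pure strategy s with the same payoff against the rational opponent as the
    equilibrium but a strictly higher worst-case payoff than some support strategy."""
    for U, own, opp in ((U1, sigma1, sigma2), (U2.T, sigma2, sigma1)):
        exp_pay = U @ opp                      # payoff of each pure strategy vs sigma_j*
        eq_pay = own @ exp_pay                 # equilibrium payoff u_i(sigma_i*, sigma_j*)
        worst = U.min(axis=1)                  # worst-case payoff of each pure strategy
        support = np.flatnonzero(own > tol)
        for s in np.flatnonzero(np.abs(exp_pay - eq_pay) < tol):   # equally good replies
            if any(worst[s] > worst[t] + tol for t in support):
                return False
    return True

# The game of figure 13.
U1 = np.array([[2.0, 0.0], [1.0, 2.0]])
U2 = np.array([[0.0, 2.0], [2.0, 1.0]])
sigma1 = np.array([1/3, 2/3])     # Prob(T) = 1/3
sigma2 = np.array([2/3, 1/3])     # Prob(L) = 2/3
print(strictly_uap(U1, U2, sigma1, sigma2))   # False: B has a better worst case than T
```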

The following proposition summarizes the above results on the robustness of Nash equilibria, in case a strictly uncertainty aversion perfect equilibrium exists:

Proposition 5. Let (I,S,u) be a finite two-player game in normal form. Let a* be a strictly uncertainty aversion perfect Nash equilibrium.

(1) Every strict equilibrium is strictly uncertainty aversion perfect. However, strictly uncertainty aversion perfect equilibria need not be strict.

(2) Quasi-strict equilibria in general, and mixed strategy equilibria and equilibria in weakly dominated strategies in particular, may be, but need not be, strictly uncertainty aversion perfect.

3.4. Uncertainty Aversion Perfection

Because strictly uncertainty aversion perfect equilibria need not exist, we suggest the following weaker refinement of Nash equilibria:

Definition 6. Let (I, S, u) be a finite two-player game in normal form. Let σ* be a strategy combination. Then σ* is an uncertainty aversion perfect Nash equilibrium if and only if there exists a sequence (εk)k∈ℕ, with 0 < εk < 1 and lim_{k→∞} εk = 0, and a sequence of strategy profiles (σk)k∈ℕ, such that each σk is a Choquet-Nash equilibrium for εk and lim_{k→∞} σk = σ*.

Since this definition allows constant sequences of strategy profiles, every strictly uncertainty aversion perfect equilibrium is indeed uncertainty aversion perfect.

Remark 6. Let (I, S, u) be a finite two-player game in normal form. An uncertainty aversion perfect Nash equilibrium is indeed a Nash equilibrium.

Proof. Let εk > 0. Since σk is a CNE for εk we have

(1 - εk) · ui(σi,k, σj,k) + εk · ∫_{Si} min_{sj∈Sj} ui(si, sj) dσi,k
≥ (1 - εk) · ui(σi, σj,k) + εk · ∫_{Si} min_{sj∈Sj} ui(si, sj) dσi

for all σi and all i ∈ I. These expected utility functions are continuous in εk, σi and σj, so the inequalities also hold in the limit as εk → 0 and σk → σ*.

Proposition 6. Every finite two-player game in normal form has at least one uncertainty aversion perfect Nash equilibrium.

Proof. Consider a sequence εk → 0. By proposition 1, there exists a CNE σk for every εk. Since the strategy sets are compact subsets of finite-dimensional Euclidean spaces, by the Bolzano-Weierstraß Theorem the sequence (σk) has a convergent subsequence. Since the associated sequence of εk also converges to 0, the limit of this subsequence is an uncertainty aversion perfect Nash equilibrium.

We end this section with an example of an equilibrium in pure strategies, none of which is weakly dominated, that is not uncertainty aversion perfect:

          L          C          R
   T    1, 1       2, 0       0, 0
   B    1, 0       1, 0       1, 0

        Fig. 14

Consider the equilibrium (T, L). The strategy T is undominated, and L is weakly dominant. Yet (T, L) is not uncertainty aversion perfect: As long as ε > 0, a rational player 2 will play L because it is weakly dominant. But given L, player 1 will expect utility 1 from a rational opponent whether he plays T or B, and since ε > 0 he will strictly prefer B.

3.5. Zero-Sum Games

Under complete ignorance, an uncertainty averse player will allocate probability weight 1 to the outcome that is worst for himself. Intuitively, this suggests a close relationship of Choquet-Nash equilibria with minimax strategies in zero-sum games.

We next show, however, that this is not the case12. First, consider strictly uncertainty aversion perfect equilibria:

          L          R
   T    0, 0       2, -2
   B    2, -2      1, -1

        Fig. 15

In the game in figure 15, the Nash equilibrium is unique, and since the game is zero-sum the strategies are minimax strategies. However, remark 5 implies that this equilibrium is not strictly uncertainty aversion perfect. This example also shows that even in zero-sum games a strictly uncertainty aversion perfect equilibrium need not exist.


12 This result is due to a lack of preference for uncertainty, see section 6.

However, in the previous game the minimax strategies are uncertainty aversion perfect. The following example shows that not every Nash equilibrium in a zero-sum game is uncertainty aversion perfect:

          L          R
   T    1, -1      1, -1
   B    0, 0       1, -1

        Fig. 16

The pair of minimax strategies (T, R) is not uncertainty aversion perfect: As long as ε > 0, player 2 prefers to play L because L is weakly dominant.

3.6. Equilibrium Refinements

The fact that not all Nash equilibria are robust in the sense of (strict) uncertainty aversion perfection raises the question whether perfect Nash equilibria are more robust with respect to doubt about the rationality of the opponent. In this section we present the relationship between uncertainty aversion perfection and other equilibrium refinements.

First, note that the equilibrium (T, L) in figure 14 is proper, because C and R are equally costly mistakes for player 2. So for εk = 1/k the strategy combination σk with Prob(T) = 1 - Prob(B) = 1 - 1/k, Prob(L) = 1 - 2/k², Prob(C) = Prob(R) = 1/k² is an εk-proper equilibrium (for k ≥ 2), and as k → ∞ we have εk → 0 and σk → (T, L). However,

as shown above, this equilibrium is not uncertainty aversion perfect. So properness does not imply uncertainty aversion perfection.

Note also, however, that this equilibrium is not strictly perfect. Our next example shows that strict perfection does not imply strict uncertainty aversion perfection:

          L          R
   T    2, 0       0, 2
   B    1, 2       2, 1

        Fig. 17


The mixed strategy equilibrium p* = Prob(T) = 1/3, q* = Prob(L) = 2/3 is strictly perfect, because it is completely mixed. But by remark 5, it is not strictly uncertainty aversion perfect.

The next game shows that strictly perfect equilibria even need not be uncertainty aversion perfect:


          L          R
   T    2, 2       0, 0
   M    0, 0       2, 2
   B    1, 1       1, 1

        Fig. 18

Consider the mixed strategy Nash equilibrium p1* = Prob(T) = p2* = Prob(M) = 1/2, q* = Prob(L) = 1/2. This equilibrium is strictly perfect: Let μT, μM, μB, μL, μR be any strictly positive trembles (minimum probabilities). Then the strategy combination σ^μ with p1 = p2 = (1 - μB)/2, p3 = μB, q1 = q2 = 1/2 is a Nash equilibrium of the perturbed game, and as μT, ..., μR → 0 we have σ^μ → σ*. However, the equilibrium is not uncertainty aversion perfect: For player 1, as long as ε > 0, strategy T gives 2(1 - ε)qε and strategy M gives 2(1 - ε)(1 - qε), where qε is the strategy of a rational player 2. In order to find a sequence of mixed strategies for player 1 that converges to p1* = p2* = 1/2, he must be willing to mix between T and M, which implies that in any Choquet-Nash equilibrium qε = 1/2. But then both T and M yield less than B, so for ε > 0 no such equilibrium exists.

Conversely, we can ask whether (strictly) uncertainty aversion perfect equilibria also satisfy refinement criteria for Nash equilibria. However, the equilibrium (T, L) in figure 11 is strictly uncertainty aversion perfect, yet T is weakly dominated for player 1, and therefore (T, L) is not trembling-hand perfect.

We have thus established a lack of relationships between robustness with respect to lack of mutual knowledge of rationality and equilibrium refinements that is summarized by the following proposition:

Proposition 7. Neither a proper equilibrium nor a strictly perfect equilibrium need be uncertainty aversion perfect. Conversely, even a strictly uncertainty aversion perfect equilibrium need not be trembling-hand perfect.

4. Extensions

So far, we have defined the solution concept only for 2-player games with finitely many strategies. Typically, in economic games the strategy spaces are infinite, for instance if firms choose prices, quantities, a location, a point in time, or a certain probability.

The Choquet integral of a general random variable X is defined as

∫ X dv := ∫_0^∞ v(X ≥ t) dt + ∫_{-∞}^0 [ v(X ≥ t) - 1 ] dt.

As before, we define the expected utility from a non-rational opponent as ui(si, vj) := ∫_{Sj} ui(si, sj) dvj and the payoff from a mixed strategy σi ∈ ΔSi as ui(σi, vj) := ∫_{Si} ui(si, vj) dσi.

As before, we can thus define a weak Choquet-Nash equilibrium for 2-player games with possibly infinite strategy spaces. Under the assumptions of a common prior about rationality, complete ignorance about non-rationality and uncertainty aversion this reduces to:


Definition 7. Let (I, S, u) be a two-player game in normal form. Let 0 ≤ ε ≤ 1. Then σ* is a Choquet-Nash equilibrium iff

σ1* ∈ argmax_{σ1∈ΔS1} [ (1 - ε) · u1(σ1, σ2*) + ε · ∫_{S1} min_{s2∈S2} u1(s1, s2) dσ1 ],

σ2* ∈ argmax_{σ2∈ΔS2} [ (1 - ε) · u2(σ1*, σ2) + ε · ∫_{S2} min_{s1∈S1} u2(s1, s2) dσ2 ].

As an example, consider a symmetric duopoly with linear cost and demand curve. Under Bertrand competition, setting price equal to marginal cost is a Choquet-Nash equilibrium independently of ε. Under Cournot competition, however, the firms have an incentive to offer less than the Cournot equilibrium output, and set higher prices, since for any given production there is a small chance that a non-rational opponent swamps the market and drives down profits.
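As a minimal numerical sketch of the Cournot case, the following Python code computes a symmetric Choquet-Nash equilibrium by best-reply iteration on a quantity grid. All numbers (linear inverse demand P = 10 - (q1 + q2), marginal cost 2, and a cap of 8 on quantities) are illustrative assumptions, not taken from the paper; with them the CNE output falls below the Cournot quantity (a - c)/3 ≈ 2.67 once ε > 0.

```python
import numpy as np

a, c = 10.0, 2.0                       # demand intercept and marginal cost (assumed)
grid = np.linspace(0.0, 8.0, 801)      # own and opponent quantities, capped at 8 (assumed)

def profit(q, q_opp):
    return q * (a - q - q_opp - c)     # linear inverse demand P = a - (q1 + q2)

# Worst-case profit of each own quantity over the whole grid (complete ignorance).
worst = np.array([profit(q, grid).min() for q in grid])

def best_reply(q_opp, eps):
    objective = (1 - eps) * profit(grid, q_opp) + eps * worst
    return grid[np.argmax(objective)]

def symmetric_cne(eps, iterations=200):
    q = (a - c) / 3                    # start from the Cournot output
    for _ in range(iterations):
        q = best_reply(q, eps)
    return q

for eps in (0.0, 0.1, 0.3):
    print(eps, round(symmetric_cne(eps), 2))   # output falls as eps rises
```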

The extension to n players is conceptually straightforward. However, it has to take into account that the events that different opponents are non-rational are independent. For instance, if there are three players, then player 1 should maximise13

max_{σ1∈ΔS1} [ (1 - ε)² · u1(σ1, σ2*, σ3*)
+ ε(1 - ε) · Σ_{s1∈S1} σ1(s1) · min_{s2∈S2} u1(s1, s2, σ3*)
+ (1 - ε)ε · Σ_{s1∈S1} σ1(s1) · min_{s3∈S3} u1(s1, σ2*, s3)
+ ε² · Σ_{s1∈S1} σ1(s1) · min_{(s2,s3)∈S2×S3} u1(s1, s2, s3) ].

In general, we can formulate the solution concept in the following way: Let I be the player set, and for J ⊆ I let sJ be a strategy profile that specifies a pure strategy for each player in J. Let SJ be the set of such profiles, i. e. SJ = ×_{i∈J} Si. Let s_{-J} be a strategy profile that specifies a pure strategy for all players not in J.

Definition 8. Let (I, S, u) be a finite game in normal form. Let 0 ≤ ε ≤ 1. Then σ* is a Choquet-Nash equilibrium iff for every player i ∈ I

σi* ∈ argmax_{σi∈ΔSi} [ (1 - ε)^{|I|-1} · ui(σi, σ*_{-i})
+ Σ_{∅≠J⊆I\{i}} ε^{|J|} (1 - ε)^{|I|-1-|J|} · Σ_{si∈Si} σi(si) · min_{sJ∈SJ} ui(si, σ*_{-(J∪{i})}, sJ) ],

where |J| denotes the number of players in J ⊆ I.
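The following Python sketch (illustrative, not from the paper) evaluates the objective of definition 8 for one player in a finite n-player game, enumerating the subsets J of opponents that might be non-rational and taking the worst case over their joint pure strategies.

```python
import itertools
import numpy as np

def cne_objective(i, payoff, sigmas, eps):
    """Player i's objective from definition 8: payoff is an n-dimensional array giving
    player i's payoff at every pure profile, sigmas lists all players' mixed strategies,
    and eps is the common prior probability that any one opponent is non-rational."""
    n = payoff.ndim
    opponents = [j for j in range(n) if j != i]
    total = 0.0
    for own, p_own in enumerate(sigmas[i]):
        if p_own == 0.0:
            continue
        for r in range(len(opponents) + 1):
            for J in itertools.combinations(opponents, r):          # non-rational opponents
                rational = [j for j in opponents if j not in J]
                weight = eps ** r * (1 - eps) ** len(rational)
                worst = None
                # Worst case over the non-rational opponents' pure strategies ...
                for sJ in itertools.product(*(range(payoff.shape[j]) for j in J)):
                    value = 0.0
                    # ... of the expectation over the rational opponents' mixed strategies.
                    for sR in itertools.product(*(range(payoff.shape[j]) for j in rational)):
                        profile = [own] * n
                        prob = 1.0
                        for j, s in zip(J, sJ):
                            profile[j] = s
                        for j, s in zip(rational, sR):
                            profile[j] = s
                            prob *= sigmas[j][s]
                        value += prob * payoff[tuple(profile)]
                    worst = value if worst is None else min(worst, value)
                total += p_own * weight * worst
    return total

# Illustrative 3-player example with two strategies each; payoff0 is player 0's payoff array.
payoff0 = np.arange(8, dtype=float).reshape(2, 2, 2)
sigmas = [np.array([0.5, 0.5]), np.array([1.0, 0.0]), np.array([0.25, 0.75])]
print(cne_objective(0, payoff0, sigmas, eps=0.2))
```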

13 We continue to make the assumptions for Choquet-Nash equilibria: a common prior ε, complete ignorance and uncertainty aversion.

5. Related Literature

The aim of this section is to argue that our equilibrium concept circumvents some of the controversial aspects of previous attempts to generalize the equilibrium concept to non-additive beliefs: the definition of support of a non-additive measure, the requirement that players’ beliefs are simple capacities, and the definition of independence of several non-additive beliefs.

Previous solution concepts — with the exception of Mukerji (1994) and Lo (1995) — have not distinguished between rational and non-rational players. In those models, the rational player is allowed to have non-additive beliefs about the opponent’s play. An equilibrium is then interpreted as an equilibrium in beliefs. However, since beliefs are non-additive, they cannot be correct, so the weaker consistency requirement that players are not wrong is imposed on equilibrium beliefs. Following Dow and Werlang (1994), this is formalised as the requirement that the players anticipate the support of the opponent’s beliefs.14 This raises the question, however, how the support of a non-additive capacity should be defined, and different support concepts give rise to different equilibrium concepts. These issues are surveyed, e.g., in Eichberger and Kelsey (1994) and Haller (1997).

Since defining the support as the smallest set of strategies that has belief 1 under uncertainty aversion does not impose any restriction on the support, Dow and Werlang (1994) define the support as the smallest set of strategies whose complement has belief 0.15 The support, so defined, need not be unique. The approach of Dow and Werlang (1994) models a situation in which rational players lack logical omniscience, in that they do not draw the logical conclusions of their knowledge.

The question how to define the support of a non-additive capacity does not arise in our model. Here, players have additive beliefs about the rational opponents. So their expectations can be correct in the usual, literal, sense. Also, the rational players are assumed to be logically omniscient.

In the Dow and Werlang (1994) model, the support question has a natural answer in the special case in which the non-additive beliefs are 'simple capacities', i. e. capacities that uniformly distort an additive probability measure p:

v(E) = α · p(E) for E ≠ Ω, and v(Ω) = 1,

where uncertainty aversion corresponds to the assumption that α ≤ 1. For such simple capacities, the Choquet integral of a random variable X takes the form

∫ X dv = α · ∫ X dp + (1 - α) · min_{ω∈Ω} X(ω).

Thus, our concept of Choquet-Nash equilibrium corresponds formally to the case where16 α = (1 - ε). However, this analogy is purely formal: A weak Choquet-Nash equilibrium cannot be re-interpreted as a simple capacity, and for non-simple

14 Klibanoff (1993) formalises an equilibrium concept for a more general class of games on the basis of maxmin expected utility theory with set-valued beliefs (Gilboa and Schmeidler (1989)). There, the weaker consistency requirement is that the players consider the equilibrium strategies possible.

15 See Marinacci (1994) for an equilibrium concept with a different definition of support.

16 This is the approach taken by Mukerji (1994). In Lo (1995) it is infinitely more likely that the opponent is rational than that he is not.


capacities the above decomposition does not hold. We are not requiring that rational players’ beliefs about the opponents’ play are simple, but that beliefs about rational opponents are additive, whereas those about non-rational opponents may be nonadditive, but otherwise arbitrary (i. e. non-simple).
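The decomposition above is easy to verify numerically. The sketch below (with purely illustrative values) builds a simple capacity v = α·p on proper events, computes the Choquet integral directly from the finite-sum definition of section 2.1, and compares it with α·E_p[X] + (1 - α)·min X.

```python
import numpy as np

states = list(range(4))
p = np.array([0.4, 0.3, 0.2, 0.1])        # an additive prior (illustrative)
alpha = 0.7                                # uniform distortion factor, alpha <= 1
X = np.array([5.0, 2.0, 8.0, 1.0])         # an act, in utility terms

def simple_capacity(event):
    """v(E) = alpha * p(E) for E != Omega, v(Omega) = 1."""
    if len(event) == len(states):
        return 1.0
    return alpha * p[list(event)].sum()

def choquet(X, v):
    """Finite Choquet integral: sum_i v(X >= a_i) * (a_i - a_{i+1}), with a_{n+1} = 0."""
    values = sorted(set(X), reverse=True)
    total = 0.0
    for i, a in enumerate(values):
        event = {s for s in states if X[s] >= a}
        a_next = values[i + 1] if i + 1 < len(values) else 0.0
        total += v(event) * (a - a_next)
    return total

lhs = choquet(X, simple_capacity)
rhs = alpha * float(p @ X) + (1 - alpha) * X.min()
print(lhs, rhs)    # the two values coincide
```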

Finally, Dow and Werlang (1994) define their equilibrium concept for 2-player games. Eichberger and Kelsey (1994) extend their solution concept to n-player games and allow for the possibility that a rational player believes that his opponents do not act independently. In their approach, imposing such a restriction requires an independence concept for capacities (see, e.g., Ghirardato (1997) and Hendon et al. (1996)).

In our approach, this issue also does not arise. Since rational players have additive beliefs about their rational opponents, the usual independence concept applies, and the equilibrium concept for n-player games assumes that rational players believe that their rational opponents act independently. This is in line with the underlying assumption that the game form models a non-cooperative situation and is common knowledge among the rational players.

6. Preference for Uncertainty

Recall that in section 2 we defined the expected utility from a pure strategy si against a non-rational opponent by the Choquet expectation ui(si, vj) := ∫_{Sj} ui(si, sj) dvj. We then defined the payoff from a mixed strategy σi ∈ ΔSi as ui(σi, vj) := Σ_{si∈Si} σi(si) · ui(si, vj). As a consequence, the overall expected utility is linear in the probabilities σi(si). Since vj is non-additive, the order of integration in ui(σi, vj) matters. In this section we present and analyse an alternative equilibrium concept in which this order is reversed.

We continue to make the assumptions of a common prior about rationality, complete ignorance about non-rational play and uncertainty aversion. Note that then, with the order of integration reversed,

ui(σi, vj) = min_{sj∈Sj} ui(σi, sj) = min_{sj∈Sj} Σ_{si∈Si} σi(si) · ui(si, sj).

First note the following lemma:

Lemma 4. For every mixed strategy ai ∈ ΔSi,

Σ_{si∈Si} ai(si) · min_{sj∈Sj} ui(si, sj) ≤ min_{sj∈Sj} Σ_{si∈Si} ai(si) · ui(si, sj).

The inequality may be strict.

Proof. For all si and sj,

min_{sj∈Sj} ui(si, sj) ≤ ui(si, sj).

Therefore, for all sj,

Σ_{si∈Si} ai(si) · min_{sj∈Sj} ui(si, sj) ≤ Σ_{si∈Si} ai(si) · ui(si, sj),

and since the left-hand side does not depend on sj, it is in particular bounded by the smallest value of the right-hand side. To see that the inequality may be strict, consider the following example:

        L   R
   T    2   0
   B    0   2

Fig. 19

Here, for the uniform mixed strategy ai with ai(T) = ai(B) = 1/2,

Σ_{si∈Si} ai(si) · min_{sj∈Sj} ui(si, sj) = 0 < 1 = min_{sj∈Sj} Σ_{si∈Si} ai(si) · ui(si, sj).
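A direct numerical check of this example (again only an illustrative sketch, with hypothetical variable names) evaluates both sides of Lemma 4 at the uniform mixture on the Fig. 19 game:

```python
# Row player's payoffs in the Fig. 19 game and the uniform mixture.
u = {'T': {'L': 2.0, 'R': 0.0},
     'B': {'L': 0.0, 'R': 2.0}}
alpha = {'T': 0.5, 'B': 0.5}

lhs = sum(alpha[si] * min(u[si].values()) for si in u)                  # mix, then min
rhs = min(sum(alpha[si] * u[si][sj] for si in u) for sj in ('L', 'R'))  # min of the mix
assert lhs == 0.0 and rhs == 1.0 and lhs < rhs                          # strict inequality
```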

Thus, reversing the order of integration allows players to have a strict preference for mixed strategies in a game. The first equilibrium concept that captures this phenomenon in strategic interaction is due to Klibanoff (1993), who based it on the maxmin expected utility theory of Gilboa and Schmeidler (1989), in which players have set-valued beliefs. Allowing a strict preference for mixed strategies gives rise to the following definition:


Definition 9. Let (I, S, u) be a finite two-player game in normal form. Let 0 < ei < 1. Then a* is a strong Choquet-Nash equilibrium iff

a1* ∈ arg max_{a1∈Σ1} [ (1 − e1) · u1(a1, a2*) + e1 · min_{s2∈S2} u1(a1, s2) ],

a2* ∈ arg max_{a2∈Σ2} [ (1 − e2) · u2(a1*, a2) + e2 · min_{s1∈S1} u2(s1, a2) ].

Under uncertainty aversion, a strong Choquet-Nash equilibrium always exists. The argument is essentially the same as in proposition 1, except that the objective function is now quasi-concave, rather than linear, in the probabilities ai. However, the analogous solution concept under uncertainty love is no longer guaranteed to exist: there the objective function need not be quasi-concave, so the best-reply correspondence need not be convex-valued (see, e.g., Crawford (1990) and Dekel, Safra and Segal (1991)).
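The existence claim can be illustrated concretely: the strong Choquet-Nash objective (1 − ei) · ui(ai, aj*) + ei · min_{sj∈Sj} ui(ai, sj) is a convex combination of a function that is linear in ai and a minimum of linear functions of ai, hence concave and a fortiori quasi-concave. The following sketch is illustrative only (the choice e = 0.5 and the assumption that the rational opponent plays L are hypothetical); it evaluates this objective for the row player of the Fig. 19 game on a grid and checks midpoint concavity there.

```python
# Illustrative check (hypothetical parameters): the strong Choquet-Nash
# objective of the row player in the Fig. 19 game, as a function of the
# probability q placed on T, with the rational opponent playing L and
# uncertainty weight e on the non-rational type.

U = {'T': {'L': 2.0, 'R': 0.0},
     'B': {'L': 0.0, 'R': 2.0}}

def strong_cne_objective(q, e, sigma_L=1.0):
    alpha = {'T': q, 'B': 1.0 - q}
    # expected payoff against the rational opponent's (assumed) mixed strategy
    u_rational = sum(alpha[si] * (sigma_L * U[si]['L'] + (1.0 - sigma_L) * U[si]['R'])
                     for si in U)
    # worst case over the non-rational opponent's pure strategies
    u_worst = min(sum(alpha[si] * U[si][sj] for si in U) for sj in ('L', 'R'))
    return (1.0 - e) * u_rational + e * u_worst

values = [strong_cne_objective(q / 10.0, e=0.5) for q in range(11)]
# midpoint concavity on the equally spaced grid
assert all(values[k] >= (values[k - 1] + values[k + 1]) / 2 - 1e-12
           for k in range(1, 10))
```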

The main characteristic of a strong Choquet-Nash equilibrium is that in zero-sum games the solution concept coincides with Nash equilibrium: since playing maxmin strategies is already rational against rational opponents, and is also rational against non-rational opponents, it is rational overall. More generally:

Remark 7. Let (I, S, u) be a finite two-player game in normal form. Let 0 < ei < 1. Then every equilibrium in maxmin strategies is also a strong Choquet-Nash equilibrium, independently of ei.

Proof. If a* is an equilibrium then for all i ∈ I and all ai ∈ Σi

ui(ai*, aj*) ≥ ui(ai, aj*).

Since the ai* are maxmin strategies, for all ai ∈ Σi

min_{sj∈Sj} ui(ai*, sj) = min_{aj∈Σj} ui(ai*, aj) ≥ min_{aj∈Σj} ui(ai, aj) = min_{sj∈Sj} ui(ai, sj).

Combining both inequalities shows that ai* maximises (1 − ei) · ui(ai, aj*) + ei · min_{sj∈Sj} ui(ai, sj) over ai ∈ Σi, which is the condition of Definition 9.
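As a concrete check of Remark 7 (an illustrative computation with hypothetical parameter values, not part of the formal argument), consider matching pennies: the maxmin/Nash mixture that puts probability 1/2 on each pure strategy maximises the row player’s strong Choquet-Nash objective for every value of e.

```python
# Illustrative check of Remark 7 in matching pennies (hypothetical example):
# the maxmin/Nash mixture q* = 1/2 maximises the row player's strong
# Choquet-Nash objective for every uncertainty weight e.

U = {'H': {'h': 1.0, 't': -1.0},
     'T': {'h': -1.0, 't': 1.0}}

def objective(q, e, p_star=0.5):
    """(1 - e) * u1(q, a2*) + e * min over s2 of u1(q, s2), with a2* = (p_star, 1 - p_star)."""
    alpha = {'H': q, 'T': 1.0 - q}
    u_rational = sum(alpha[si] * (p_star * U[si]['h'] + (1.0 - p_star) * U[si]['t'])
                     for si in U)
    u_worst = min(sum(alpha[si] * U[si][sj] for si in U) for sj in ('h', 't'))
    return (1.0 - e) * u_rational + e * u_worst

grid = [q / 100.0 for q in range(101)]
for e in (0.1, 0.5, 0.9):
    best = max(grid, key=lambda q: objective(q, e))
    assert abs(best - 0.5) < 1e-9    # the maxmin strategy remains optimal
```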

7. Conclusion

This paper presented equilibrium concepts that formalise the idea that lack of mutual knowledge of rationality, together with the lack of a theory of non-rationality, creates genuine uncertainty. Nevertheless, on the basis of decision theory with non-additive, or set-valued, beliefs, rational behaviour is still well-defined once the attitude towards uncertainty is specified.

The motivation for developing Choquet expected utility theory was the deviations from subjective expected utility observed in experiments, as in the Ellsberg paradox. This behaviour can be parsimoniously explained as uncertainty aversion, so we also formulated the solution concepts under this assumption. To what extent these solution concepts can model behaviour is an empirical question; this also holds for the question of which solution concept is relevant in a particular situation. For instance, we regard the question of whether players have a strict preference for mixed strategies as an empirical one.

The assumption of extreme uncertainty aversion is rather crude; however, in the absence of a theory of bounded rationality that imposes restrictions on deviations from rational play, it seems the only assumption consistent with the fact that only rational strategies are derived.

Our results suggest robustness concepts for Nash equilibria. In so doing, we consider mutual knowledge of rationality as a limiting case of lack thereof. This is entirely analogous to Selten’s (1975, p. 35) view of “complete rationality as a limiting case of incomplete rationality”. However, we would argue that robustness with respect to ignorance about non-rational play is more plausible than robustness with respect to ‘trembles’ of otherwise fully rational players.

The following are suggestions for future research. First, the question arises whether there are epistemic foundations for Choquet-Nash equilibria in a model similar to that of Aumann and Brandenburger (1995). Secondly, it will be interesting to study the effects of communication and correlation on Choquet-Nash equilibria in the spirit of Aumann’s (1974) correlated equilibrium. Thirdly, the equilibrium concepts could also be weakened to rationalizability concepts along the lines of Bernheim (1984) and Pearce (1984). Finally, combining our robustness concepts with equilibrium refinements for rational players will further narrow down the set of equilibria.

Completely new conceptual issues arise in the extension of this approach to extensive games. There, non-additive beliefs allow the formalisation of the idea that deviations from the equilibrium path are considered evidence of lack of rationality. These issues are treated formally in Rothe (1999a, b).

Acknowledgments. For helpful comments on the first version I would like to thank Jürgen Eichberger, Leonardo Felli, Hans Haller, David Kelsey, Marco Mariotti, Sujoy Mukerji, Matthew Ryan and audiences at the Jahrestagung of the Verein für Socialpolitik in Bern (1997), the 1st World Congress of the Game Theory Society in Bilbao (2000), the 8th World Congress of the Econometric Society in Seattle (2000), and seminar audiences at the universities of Cambridge (2001), Leicester (2005), and Newcastle (2006). Errors are my responsibility.

References

Anscombe, F. J. and R. J. Aumann (1963). A definition of subjective probability. Annals of Mathematical Statistics, 34, 199-205.

Aumann, R. J. (1974). Subjectivity and correlation in randomized strategies. Journal of Mathematical Economics, 1, 67-96.

Aumann, R. J. and A. Brandenburger (1995). Epistemic conditions for Nash equilibrium. Econometrica, 63, 1161-1180.

Bernheim, D. B. (1984). Rationalizable strategic behavior. Econometrica, 52, 1007-1028.

Choquet, G. (1953). Theory of capacities, Annales de l’Institut Fourier (Grenoble), 5, 131-295.

Crawford, V. P. (1990). Equilibrium without independence. Journal of Economic Theory, 50, 127-154.

Dekel, E., Safra, Z. and U. Segal (1991). Existence and dynamic consistency of Nash equilibrium with nonexpected utility preferences. Journal of Economic Theory, 55, 229-246.

Dow, J. and S. R. d. C. Werlang (1994). Nash equilibrium under Knightian uncertainty: Breaking down backward induction. Journal of Economic Theory, 64, 305-324.

Eichberger, J. and D. Kelsey (1994). Non-additive beliefs and game theory. Discussion Paper 9410, Center for Economic Research.

Eichberger, J. and D. Kelsey (1996). Uncertainty aversion and preference for randomisation. Journal of Economic Theory, 71, 31-43.

Epstein, L. G. (1997a). Preference, rationalizability and equilibrium. Journal of Economic Theory, 73, 1-29.

Epstein, L. G. (1997b). Uncertainty aversion. mimeo.

Ghirardato, P. (1997). On independence for non-additive measures, with a Fubini theorem. Journal of Economic Theory, 73, 261-291.

Ghirardato, P. and M. Marinacci (1997). Ambiguity made precise: A comparative foundation and some implications. mimeo, University of Toronto.

Gilboa, I. and D. Schmeidler (1989). Maxmin expected utility with non-unique prior. Journal of Mathematical Economics, 18, 141-153.

Haller, H.H. (1995). Non-additive beliefs in solvable games. mimeo.

Haller, H.H. (1997). Intrinsic uncertainty in strategic games. mimeo.

Harsanyi, J. C. (1973). Games with randomly disturbed payoffs: A new rationale for mixed strategy equilibrium points. International Journal of Game Theory, 1, 1-23.

Hendon, E., Jacobsen, H. J., Sloth, B. and T. Tranaes (1995). Nash equilibrium in lower probabilities. mimeo.

Hendon, E., Jacobsen, H. J., Sloth, B. and T. Tranaes (1996). The product of capacities and belief functions. Mathematical Social Sciences, 32, 95-108.

Klibanoff, P. (1993). Uncertainty, decision and normal form games. mimeo.

Kreps, D. M. (1988). Notes on the Theory of Choice. Westview Press, Boulder, CO.

Kreps, D.M., Milgrom, P., Roberts, J. and R. B. Wilson (1982). Rational cooperation in the finitely repeated prisoners’ dilemma. Journal of Economic Theory, 27, 245-252.

Lo, K. C. (1995). Nash equilibrium without mutual knowledge of rationality, mimeo, University of Toronto.

Lo, K. C. (1996). Equilibrium in beliefs under uncertainty. Journal of Economic Theory, 71, 443-484.

Marinacci, M. (1994). Equilibrium in ambiguous games. mimeo.

Mukerji, S. (1994). A theory of play for games in strategic form when rationality is not common knowledge. mimeo.

Nash, J. F. (1950). Equilibrium points in n-person games. Proceedings of the National Academy of Sciences of the USA, 36, 48-49.

Pearce, D. G. (1984). Rationalizable strategic behavior and the problem of perfection. Econometrica, 52, 1029-1050.

Ritzberger, K. (1996). On games under expected utility with rank dependent probabilities. Theory and Decision, 40, 1-27.

Rothe, J. (1996). Uncertainty aversion and equilibrium. mimeo.

Rothe, J. (1999a). Uncertainty aversion and equilibrium in extensive form games. mimeo.

Rothe, J. (1999b). Uncertainty aversion and backward induction. mimeo.

Rothe, J. (2009). Uncertainty aversion and equilibrium. In: Contributions to game theory and management (Petrosjan, L. A. and N. A. Zenkevich, eds), Vol. II, pp. 363-382.

Ryan, M. (1997). A refinement of Dempster-Shafer equilibrium. mimeo, University of Auckland.

Samuelson, P. A. (1952). Probability, utility, and the independence axiom. Econometrica, 20, 670-678.

Savage, L. J. (1954). The Foundations of Statistics. Wiley: New York, NY (2nd edn.: Dover, 1972).

Schmeidler, D. (1989). Subjective probability and expected utility without additivity. Econometrica, 57, 571-587.

Selten, R. (1975). Re-examination of the perfectness concept for equilibrium points in extensive games. International Journal of Game Theory, 4, 25-55.

Tan, T. C.-C. and S. R. d. C. Werlang (1988). The Bayesian foundations of solution concepts of games. Journal of Economic Theory, 45, 370-391.

von Neumann, J. and O. Morgenstern (1944). Theory of Games and Economic Behavior. Princeton University Press, Princeton, NJ (2nd edn. 1947; 3rd edn. 1953).
