
PRINTED FROM OXFORD REFERENCE (www.oxfordreference.com). (c) Copyright Oxford University Press, 2021. All Rights Reserved. Under the terms of the licence agreement, an individual user may print out a PDF of a single entry from a reference work in OR for personal use (for details see Privacy Policy and Legal Notice).


Game Theory

Source: The Oxford International Encyclopedia of Peace
Author: Håkan Wiberg


Epistemological discussions since the late nineteenth century have revealed two main ways of explaining an event. One is to derive it from some natural law, plus existing particular circumstances. The other is to see it as a result (not necessarily the intended one) of rational behavior of some actor(s), with game theory as a new and particular case. It models strategic behavior in situations in which outcomes depend on what two or more actors do and where they all take this into account. It also makes very strong assumptions about the actors’ knowledge of the outcomes of different combinations of decisions, their preferences between these possible outcomes (often represented by a so-called utility function), their ability to calculate, and their kind of rationality. (There are several kinds of rationality and even of means-end rationality; doing what on average gives the best result may differ from doing what guarantees at least a minimum level of output.) Assumptions on all these things have to be made explicitly and precisely, whereas in traditional actor analysis they tend to be more implicit and vague. Together they are so strong that the decisions of the actors can be derived mathematically.

The first articles on game theory as such were published in the early 1920s by the French mathematician Émile Borel, soon followed in 1928 by the mathematician and polyhistor John von Neumann. We can find even older cases of game-theoretical thinking, but what is generally seen as the first major work, Theory of Games and Economic Behavior, authored by von Neumann and the economist Oskar Morgenstern, appeared in 1944. It dealt primarily with situations with three or more actors and coalition formation among them, trying to model, for example, situations of oligopoly. A solution is defined as a set of actors' choices such that no actor has any motive to change his or her choice as long as it is assumed that the others will not change theirs. The solution may be unique, consisting of a single choice for each actor, but it may also consist of two or more such combinations, among which the theory does not discriminate. If three persons are to divide a dollar by majority decision, then the solution is the set of the three possible combinations whereby two people each get half a dollar and the third gets nothing.
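The divide-the-dollar solution set can be enumerated in a short sketch (an illustration of the result stated above, not part of the original formalism; the player names are arbitrary):

```python
from itertools import combinations

players = ["A", "B", "C"]

# Any two players form a winning majority; each minimal winning
# coalition splits the dollar equally between its members, and the
# excluded player gets nothing.
solution_set = []
for coalition in combinations(players, 2):
    split = {p: (0.5 if p in coalition else 0.0) for p in players}
    solution_set.append(split)

for split in solution_set:
    print(split)
```

The theory does not discriminate among the three splits printed; it only says that the stable outcomes lie in this set.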

Later developments in n-person game theory have used it to explain political coalitions and how they shift, sometimes by adding substantive assumptions, for example, that a coalition is more likely if the political positions of the parties are close to each other. Another assumption soon added was that the coalition actually formed will be minimal: as small as possible under the rules for winning, with no more members than are needed to win and divide the spoils. This was used to explain why national or international grand coalitions tend to dissolve after their victory. The use of game theory has spread widely in the social sciences, ethics, evolutionary biology, and the formal sciences. It was employed as an instrument of analysis from around 1960 onward by such students of peace and conflict as Anatol Rapoport and Thomas Schelling.

Two-person theory soon became the main branch of game theory and developed rapidly in the 1950s. The simplest case is that each actor has only one move, with only two options, and the actors move simultaneously (in the sense that neither knows what the other has done). Depending on the preferences each of them has among the four possible outcomes, games will have different strategic properties. In “zero-sum” games, their preferences are strictly opposed: what is better for one is always worse for the other, so nothing can be gained by cooperation. The rational solution may be a “saddle point,” at which either actor would lose by changing options while the other actor does not. Where there is no saddle point, the actors could try to second-guess each other forever; the theory shows the solution to be that each actor makes a random choice between the two options, with the probabilities derived by calculation from the outcomes (if I do not know what choice I will make, you cannot predict it either). The opposite extreme is a purely cooperative game in which preferences coincide, at least on what is the best outcome. If there is only one such combination of options, the solution is for each to make the choice necessary to get there. If there are two or more, there is a problem of coordination.
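The zero-sum reasoning can be made concrete in a small sketch that, for a 2×2 game given by the row player's payoffs, first looks for a saddle point and otherwise derives the randomization probabilities (the function name and matrix convention are our illustrative assumptions):

```python
def solve_2x2_zero_sum(m):
    """Solve a 2x2 zero-sum game given as the row player's payoffs
    m = [[a, b], [c, d]]; the column player receives the negation."""
    (a, b), (c, d) = m
    # A saddle point is an entry that is the minimum of its row and
    # the maximum of its column: neither player gains by deviating.
    row_minima = [min(a, b), min(c, d)]
    col_maxima = [max(a, c), max(b, d)]
    for i, row in enumerate(m):
        for j, v in enumerate(row):
            if v == row_minima[i] == col_maxima[j]:
                return ("saddle", i, j, v)
    # No saddle point: each player randomizes.  The row player's
    # probability p of playing row 0 equalizes the column player's
    # expected losses: p*a + (1-p)*c = p*b + (1-p)*d.
    p = (d - c) / (a - b - c + d)
    value = (a * d - b * c) / (a - b - c + d)
    return ("mixed", p, value)

# Matching pennies has no saddle point; each player flips a fair coin.
print(solve_2x2_zero_sum([[1, -1], [-1, 1]]))  # ('mixed', 0.5, 0.0)
```

Once the other player randomizes with these probabilities, one's own choice no longer matters, which is exactly why second-guessing stops.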

In between these two extremes are the “mixed-motive” (or variable-sum) games. With two options there are four possible outcomes, which each actor can rank order in twenty-four different ways (four factorial). When mirror images and so on have been excluded, that leaves seventy-eight games with different strategic properties, wherein for at least one pair of outcomes, the actors agree which one is better, and for at least another pair, they disagree. Several of these games were named after the anecdote originally used to illustrate them: the Prisoner’s Dilemma (PD), Chicken, Stag Hunt, Battle of the Sexes. The PD anecdote assumed that two suspected accomplices are in separate cells with no communication. They know that they will both get better outcomes if both deny rather than if both confess, but they also know that if one confesses and the other one denies, the outcome will be that most preferred by the one who confesses and that least preferred by the one who denies. It turns out that both actors are worse off if both act rationally than if neither does, and a vast literature has tried to deal in different ways with this apparently counterintuitive conclusion.
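The dilemma can be checked mechanically. With the conventional ordering temptation > reward > punishment > sucker's payoff (the specific numbers 5, 3, 1, 0 are our illustrative assumption), confessing is strictly better for each prisoner no matter what the other does, yet both confessing is worse for both than both denying:

```python
# Conventional PD payoff ordering: T (temptation) > R (reward) >
# P (punishment) > S (sucker); the numbers are illustrative only.
T, R, P, S = 5, 3, 1, 0

# Row player's payoff for (my_move, other_move);
# "C" = deny/cooperate, "D" = confess/defect.
payoff = {("C", "C"): R, ("C", "D"): S, ("D", "C"): T, ("D", "D"): P}

# Defecting is strictly better whatever the other player does ...
assert payoff[("D", "C")] > payoff[("C", "C")]
assert payoff[("D", "D")] > payoff[("C", "D")]
# ... yet mutual defection leaves both worse off than mutual cooperation.
assert payoff[("D", "D")] < payoff[("C", "C")]
print("dilemma confirmed")
```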

The theory distinguishes between cooperative games (in another sense than this), whereby the actors can communicate and make binding agreements before choosing, and noncooperative games when they cannot. The crucial thing is binding agreements; if the prisoners can only communicate but not make their agreement binding, the options just change names from “deny” and “confess” to “keep the agreement” and “break the agreement.”

In mixed-motive games, no outcome is best for both parties, but they may have a joint interest, as in PD, in excluding some outcomes. In “Battle of the Sexes,” the husband wants to go to a ballet performance in the evening and his wife wants to go to a boxing match, but going to separate places is worse for both. If they can communicate, they will be best off agreeing to make a joint random decision on where to go rather than making separate decisions and risking going to different places; the theory then derives the optimal mix of probabilities from the structures of their preferences.
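Under one common set of illustrative payoffs (2 for being together at one's favorite event, 1 for being together at the other's, 0 for separate places; these numbers are our assumption, not the encyclopedia's), a short simulation shows why the agreed joint coin flip beats separate randomization:

```python
import random

random.seed(0)
TRIALS = 100_000

# Payoffs as (husband, wife): both at the ballet gives (2, 1),
# both at the boxing match (1, 2), separate places (0, 0).
def payoffs(h_choice, w_choice):
    if h_choice == w_choice:
        return (2, 1) if h_choice == "ballet" else (1, 2)
    return (0, 0)

def average(strategy):
    h_total = w_total = 0
    for _ in range(TRIALS):
        h, w = strategy()
        ph, pw = payoffs(h, w)
        h_total += ph
        w_total += pw
    return h_total / TRIALS, w_total / TRIALS

# Joint coin flip agreed in advance: both go to the same place.
def joint_coin():
    place = random.choice(["ballet", "boxing"])
    return place, place

# Separate decisions at the mixed equilibrium: each plays his or her
# favorite with probability 2/3, risking different places.
def separate():
    h = "ballet" if random.random() < 2 / 3 else "boxing"
    w = "boxing" if random.random() < 2 / 3 else "ballet"
    return h, w

print(average(joint_coin))  # about (1.5, 1.5)
print(average(separate))    # about (0.67, 0.67)
```

The joint lottery guarantees coordination and gives each spouse an expected 1.5; deciding separately, they miscoordinate often enough that each expects only about 2/3.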

Game theory in its original form tells us nothing about empirical reality. It is a branch of pure mathematics deriving consequences from a set of axioms defining a game and a rational actor. A game may have any number of actors as well as options for each actor’s moves. The moves, and who is to move when, may be preset, may depend on what happens in the game, or may be decided by some random mechanism. A strategy is a set of prescriptions for how the actor is to respond to any move made by the opponent. Only in single-shot simultaneous games do strategies coincide with options; usually their number is much greater. Games are most often represented by a tree (“extensive form”) or by a matrix (“normal form”). In a tree, each node represents that somebody is to move, and the branches represent the options that person has in that move; at each tip, figures indicate who gets what from the combination of choices leading to that tip. In a matrix, the rows represent the different strategies of one player and the columns those of the other; each cell then contains two figures representing what the players get from that combination of strategies (in a zero-sum game, one figure is enough).

Various versions are defined by whether the game is cooperative or not, by whether we look for best choices for individual players or for a “fair” solution (in the sense that it reflects the balance of power defined by the game matrix), by how often the game is repeated, by whether the players know this or not, and so on. For instance, if both players know that a PD game will be repeated one hundred times, the solution is the same as in the single game, but if they do not know the number of repetitions that will take place, that no longer applies.

Thousands of laboratory experiments have been published since the 1950s, most of them on two-person games (with PD as the most popular). They are not tests of the formal theory (mathematics can never be empirically tested) but, rather, of what can be derived by means of the theory from various assumptions about human perceptions and preferences. Here are some of the results in shorthand version. In three-person games, individual notions of justice or equity often seem to take priority over monetary gains; a dollar tends to be divided equally among all three rather than in the 50-50-0 split indicated by the theory. The analyst misrepresents the game by failing to take these norms into account in the utility functions of the actors. In two-person zero-sum games with merely two or three options, players tend to be able to identify the saddle point if one exists and act accordingly; this is not the case when the complexity is higher. In PD, effects of many background variables of individual players have been sought, usually with quite modest results. If the game is repeated several times, the best predictor by far of what one player will do in a move is not background or personality but what the other player did in the preceding move; interaction counts most. When one player is a secret collaborator of the experimenter, using a predetermined strategy, the strategy inducing the most cooperative behavior from the other player is “tit-for-tat plus”: the collaborator plays cooperatively in the first move and then systematically copies what the other player did in the preceding move, thereby rewarding cooperation and punishing betrayal. When both players (or computers) are preprogrammed by the experimenter, the same holds; their joint winnings are higher than with any other combination of strategies. Context variables may play an important role.
Two players cooperate more when, before the game, they have been irritated by a stooge of the experimenter, which seems to create some kind of bond of the type “the enemy of my enemy is my friend.” A player who is playing on behalf of a group is less cooperative than one playing for himself or herself only, and apparently is less willing to take the risks involved in cooperation if the other player defects.
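The tit-for-tat strategy described above (cooperate first, then copy the other player's previous move) can be reproduced in a minimal repeated-PD simulation; the payoff numbers are illustrative, following the conventional ordering 5 > 3 > 1 > 0:

```python
# Payoffs as (player 1, player 2); "C" = cooperate, "D" = defect.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_hist, other_hist):
    # Cooperate on the first move, then copy the opponent's last move.
    return "C" if not other_hist else other_hist[-1]

def always_defect(my_hist, other_hist):
    return "D"

def play(s1, s2, rounds=10):
    h1, h2, total1, total2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        h1.append(m1); h2.append(m2)
        total1 += p1; total2 += p2
    return total1, total2

# Two tit-for-tat players settle into steady mutual cooperation;
# against a constant defector, tit-for-tat is exploited only once.
print(play(tit_for_tat, tit_for_tat))    # (30, 30)
print(play(tit_for_tat, always_defect))  # (9, 14)
```

The joint winnings of the tit-for-tat pair (60) exceed those of any pairing involving the defector, which is the pattern the preprogrammed-strategy experiments report.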

More generally, the players often seem to play games other than the one defined by the purely monetary rewards and losses set by the experimenter, for example, by letting moral norms override sheer individual gains, or by introducing competition where none was intended. In one experiment, when both players had to play Black in order for both to maximize their winnings in cents, about half of them nevertheless played Red, apparently in an attempt to maximize how much more than the other player they got (in which case the experimenter had misrepresented their utility functions).

Game theory and gaming (empirical experiments) have the same strength as some other radical simplifications: they lay bare the essential logic of a situation and stimulate thinking about it. Yet they are extreme simplifications in more ways than one. When the game theory analysis begins, all difficult problems are, or are assumed to have been, solved already: How many players are there, and what do they believe about this? What options do they believe that they and the other players have? What do they think the outcomes of various combinations of players’ strategies will be? How do they value these outcomes relative to each other, when everything (material and other gains, social relations, moral norms, and so on) is taken into account? What do they assume about other players’ abilities to assess these things and make correct calculations? A strategy in the game theory sense of that term usually does not coincide with a simple option, because it has to specify how to respond to each option available to the other player. If we make an extreme truncation of chess in which the players have only one move each (with rules defining “win,” “draw,” and “lose”), then strategies and options coincide for White, who has twenty. For Black they do not coincide: a strategy must specify a reply to each of White’s twenty possible moves, and Black has twenty replies to each, so the number of strategies is 20^20 = 104,857,600,000,000,000,000,000,000. Chess is a zero-sum game with perfect information (each player knows what options the other player has chosen in all preceding moves) and a finite number of moves (given the rules on mandatory draws). It can be proved that the players therefore have optimal strategies whose combination makes every game end the same way (as in tic-tac-toe). Yet it is not known whether this result is a White victory, a Black victory, or a draw, and no imaginable multiplication of computer power will ever be able to calculate these strategies, even though the best chess programs today are able to give even international champions a hard time.
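The strategy count for the truncated-chess example is a one-line arithmetic check: Black has twenty legal replies available after any of White's twenty opening moves, and a strategy must fix one reply for each opening.

```python
white_strategies = 20
# One of 20 replies chosen independently for each of White's 20 openings.
black_strategies = 20 ** 20
print(black_strategies)  # 104857600000000000000000000
```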

Similar things can be said about macro-level applications of game theory, the classical examples being arms races, modeled by the Prisoner’s Dilemma, and nuclear deterrence strategies, modeled by Chicken (if both are set on winning, both die). These applications may be highly useful for understanding complex situations and processes by presenting their essential features, but they are too simplified to have much predictive value for what states and other actors will actually do (postdiction, when the facts are in, is, as always, a different matter). One main reason for this is that the utility functions defining preferences are even more difficult to establish for collective actors than for individual ones, because collective actors see reasons to keep preferences secret or to lie about them. Since a game theory analysis is a “rational-actor” explanation in rigorous form, alternative explanations of that kind have even greater shortcomings. Abandoning clarity of assumptions and rigor of analysis in favor of traditional, purely verbal, and vaguer accounts may give more leeway for finding what look like plausible explanations after the fact, but certainly not better predictive capability.

[See also Conflict Analysis; Conflict Transformation; Drama Theory; and Reframing and Restructuring Conflicts]

Bibliography

Axelrod, Robert. The Evolution of Cooperation. New York: Basic Books, 2006.

Davis, Morton D. Game Theory: A Non-Technical Introduction. Mineola, N.Y.: Dover, 1997.

Poundstone, William. Prisoner’s Dilemma. New York: Anchor Books, 1993.

Rapoport, Anatol B. Two-Person Game Theory. Mineola, N.Y.: Dover, 1999.

Rasmusen, Eric. Games and Information: An Introduction to Game Theory. Bognor Regis: Wiley-Blackwell, 2006.

Schelling, Thomas C. The Strategy of Conflict. Cambridge, Mass.: Harvard University Press, 2007.

von Neumann, John, and Oskar Morgenstern. Theory of Games and Economic Behavior. Princeton: Princeton University Press, 1944.

Håkan Wiberg