
Game theory


How to split a cab fare fairly using game theory. I came across a fantastic game theory article that appeared in the Wall Street Journal Numbers Guy blog all the way back in 2005. The article is about three friends who agree to share a cab and the possible ways they can split the cost. I highly recommend you read the article.

What I liked most is that the article describes several fair division methods. As I have described before in my article about splitting restaurant bills, fair division is not just a mathematical concept. Fair division depends on social norms and on how people perceive fairness. Therefore, it is useful to understand many methods of fair division and keep them in your toolkit. Below I will describe some of the fair division methods mentioned in the article about splitting cab fares.

The details of the cab ride

The situation is a common one: three friends agree to share a cab to different destinations, and they need to split the costs fairly.

More specifically, let us consider the following situation: 1.

Game theory

Game theory is the study of strategic decision making. Specifically, it is "the study of mathematical models of conflict and cooperation between intelligent rational decision-makers."[1] An alternative term suggested "as a more descriptive name for the discipline" is interactive decision theory.[2] Game theory is mainly used in economics, political science, and psychology, as well as in logic, computer science, and biology. The subject first addressed zero-sum games, in which one person's gains exactly equal the net losses of the other participant or participants. Today, however, game theory applies to a wide range of behavioral relations and has developed into an umbrella term for the logical side of decision science, covering both humans and non-humans (e.g. computers, animals). Modern game theory began with the idea of the existence of mixed-strategy equilibria in two-person zero-sum games and its proof by John von Neumann.
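Returning briefly to the cab-fare problem: one standard cooperative-game approach to cost-sharing problems of this kind is the Shapley value, which charges each rider their average marginal contribution to the total fare over all possible join orders. I am not claiming this is exactly one of the methods the Wall Street Journal article discusses, and the fares below are made up for illustration; this is a minimal sketch under the assumption that the three riders share a single route, so a coalition's cost is the fare to its farthest drop-off.

```python
from itertools import permutations

# Hypothetical solo fares (what each rider would pay travelling alone).
# These numbers are made up for illustration; they are NOT from the WSJ article.
solo_fare = {"A": 6.0, "B": 10.0, "C": 18.0}

def coalition_cost(coalition):
    """Cost of serving a coalition, assuming the riders share one route and
    the cab's total cost is driven by the farthest (most expensive) drop-off."""
    return max(solo_fare[r] for r in coalition) if coalition else 0.0

def shapley_split(players):
    """Average each rider's marginal cost over all join orders (Shapley value)."""
    shares = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        riding = []
        for p in order:
            before = coalition_cost(riding)
            riding.append(p)
            shares[p] += coalition_cost(riding) - before
    return {p: total / len(orders) for p, total in shares.items()}

print(shapley_split(list(solo_fare)))
# With the made-up fares above: A pays 2.0, B pays 4.0, C pays 12.0.
```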

Strategy (game theory)

The strategy concept is sometimes (wrongly) confused with that of a move. A move is an action taken by a player at some point during the play of a game (e.g., in chess, moving White's bishop from a2 to b3). A strategy, on the other hand, is a complete algorithm for playing the game, telling a player what to do for every possible situation throughout the game.

A strategy profile (sometimes called a strategy combination) is a set of strategies for all players which fully specifies all actions in a game. A strategy profile must include exactly one strategy for every player. A player's strategy set defines which strategies are available for them to play. A player has a finite strategy set if they have a finite number of discrete strategies available to them; otherwise, the strategy set is infinite. In a dynamic game, the strategy set consists of the possible rules a player could give to a robot or agent on how to play the game. In a Bayesian game, the strategy set is similar to that in a dynamic game.
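As a minimal illustration of these definitions, here is a short sketch (with made-up strategy names) that builds every strategy profile from the players' finite strategy sets; each profile assigns exactly one strategy to each player.

```python
from itertools import product

# Hypothetical finite strategy sets for a two-player game.
strategy_sets = {
    "Row": ["Stag", "Hare"],
    "Column": ["Stag", "Hare"],
}

# A strategy profile picks exactly one strategy per player.
players = list(strategy_sets)
profiles = [dict(zip(players, combo))
            for combo in product(*(strategy_sets[p] for p in players))]

for profile in profiles:
    print(profile)
# {'Row': 'Stag', 'Column': 'Stag'}, {'Row': 'Stag', 'Column': 'Hare'}, ...
```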

Best response

Best response correspondence. (Figures 1 and 2, not reproduced here, show the reaction correspondences for players Y and X in the Stag Hunt game.) The best response (or reaction) correspondence $b_i$ maps, for each player $i$, the set of opponent strategy profiles into the set of the player's strategies. So, for any given set of opponent strategies $\sigma_{-i}$, $b_i(\sigma_{-i})$ represents player $i$'s best responses to $\sigma_{-i}$.
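To make the correspondence concrete, here is a small sketch that computes a player's set of best responses to a fixed opponent strategy, assuming the usual Stag Hunt payoffs (the specific numbers are my own illustrative choice, not taken from the text).

```python
# Stag Hunt payoffs for the row player: payoff_row[(row, col)].
# The specific numbers are illustrative, not taken from the text.
payoff_row = {
    ("Stag", "Stag"): 3, ("Stag", "Hare"): 0,
    ("Hare", "Stag"): 1, ("Hare", "Hare"): 1,
}

def best_responses(opponent_strategy, strategies=("Stag", "Hare")):
    """Return the set of row strategies that maximize the row payoff
    against a fixed opponent (column) strategy."""
    best = max(payoff_row[(s, opponent_strategy)] for s in strategies)
    return {s for s in strategies if payoff_row[(s, opponent_strategy)] == best}

print(best_responses("Stag"))  # {'Stag'}
print(best_responses("Hare"))  # {'Hare'}
```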

There are three distinctive reaction correspondence shapes, one for each of the three types of symmetric 2x2 games: coordination games, discoordination games, and games with dominated strategies (the trivial fourth case, in which payoffs are always equal for both moves, is not really a game-theoretic problem).

Anti-coordination games

Games such as the game of chicken and the hawk-dove game, in which players score highest when they choose opposite strategies (i.e., discoordinate), are called anti-coordination games.

Strategic dominance

Terminology. When a player tries to choose the "best" strategy among a multitude of options, that player may compare two strategies A and B to see which one is better. The result of the comparison is one of:

- B dominates A: choosing B always gives as good as or a better outcome than choosing A. There are two possibilities: B strictly dominates A (choosing B always gives a better outcome than choosing A, no matter what the other player(s) do), or B weakly dominates A (there is at least one set of opponents' actions for which B is superior, and all other sets of opponents' actions give B the same payoff as A).
- B and A are intransitive: B neither dominates, nor is dominated by, A.
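As a minimal sketch of this terminology, here is some illustrative code (the payoff numbers are my own, not from the text) that checks whether one row strategy strictly or weakly dominates another in a small two-player payoff table.

```python
# Row player's payoffs, indexed by (row_strategy, column_strategy).
# The numbers form a Prisoner's-Dilemma-like table, chosen only for illustration.
payoff_row = {
    ("Defect", "Cooperate"): 5, ("Defect", "Defect"): 1,
    ("Cooperate", "Cooperate"): 3, ("Cooperate", "Defect"): 0,
}
column_strategies = ["Cooperate", "Defect"]

def dominates(b, a):
    """Return 'strict', 'weak', or None depending on how row strategy b
    compares to row strategy a against every column strategy."""
    diffs = [payoff_row[(b, c)] - payoff_row[(a, c)] for c in column_strategies]
    if all(d > 0 for d in diffs):
        return "strict"
    if all(d >= 0 for d in diffs) and any(d > 0 for d in diffs):
        return "weak"
    return None

print(dominates("Defect", "Cooperate"))  # 'strict' for these payoffs
```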

This notion can be generalized beyond the comparison of two strategies.

Mathematical definition. For any player $i$, a strategy $s_i$ weakly dominates another strategy $s_i'$ if $u_i(s_i, s_{-i}) \ge u_i(s_i', s_{-i})$ for all $s_{-i} \in S_{-i}$, with at least one $s_{-i}$ that gives a strict inequality; $s_i$ strictly dominates $s_i'$ if $u_i(s_i, s_{-i}) > u_i(s_i', s_{-i})$ for all $s_{-i} \in S_{-i}$. Here $S_{-i}$ represents the product of all strategy sets other than player $i$'s.

Pareto efficiency

Pareto efficiency, or Pareto optimality, is a state of allocation of resources in which it is impossible to make any one individual better off without making at least one individual worse off. The term is named after Vilfredo Pareto (1848–1923), an Italian economist who used the concept in his studies of economic efficiency and income distribution. The concept has applications in academic fields such as economics and engineering. For example, suppose there are two consumers A and B and only one resource X, with X equal to 20. Let us assume that the resource has to be distributed equally between A and B, and thus can be distributed in the following way: (1,1), (2,2), (3,3), (4,4), (5,5), (6,6), (7,7), (8,8), (9,9), (10,10).

At point (10,10) all resources have been exhausted. No further distribution is possible: if redistribution continues, it will lead to a position such as (11,9) or (9,11) that makes one consumer better off and the other worse off. Such an allocation is therefore Pareto efficient.

Nash equilibrium

In game theory, the Nash equilibrium is a solution concept for a non-cooperative game involving two or more players, in which each player is assumed to know the equilibrium strategies of the other players, and no player has anything to gain by changing only their own strategy.[1] If each player has chosen a strategy and no player can benefit by changing strategies while the other players keep theirs unchanged, then the current set of strategy choices and the corresponding payoffs constitute a Nash equilibrium.
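As a minimal sketch of this definition, the following code finds the pure-strategy Nash equilibria of a two-player game by checking that neither player can gain from a unilateral deviation; the Prisoner's Dilemma payoff table is my own illustrative choice, not one from the text.

```python
from itertools import product

# payoffs[(row, col)] = (row player's payoff, column player's payoff).
# A Prisoner's Dilemma table, chosen only for illustration.
payoffs = {
    ("Cooperate", "Cooperate"): (3, 3), ("Cooperate", "Defect"): (0, 5),
    ("Defect", "Cooperate"): (5, 0),    ("Defect", "Defect"): (1, 1),
}
row_strats = ["Cooperate", "Defect"]
col_strats = ["Cooperate", "Defect"]

def is_nash(row, col):
    """True if neither player can improve by changing only their own strategy."""
    row_ok = all(payoffs[(r, col)][0] <= payoffs[(row, col)][0] for r in row_strats)
    col_ok = all(payoffs[(row, c)][1] <= payoffs[(row, col)][1] for c in col_strats)
    return row_ok and col_ok

equilibria = [p for p in product(row_strats, col_strats) if is_nash(*p)]
print(equilibria)  # [('Defect', 'Defect')] for these payoffs
```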

Whether the Nash equilibrium of a game is played in reality can be tested using the methods of experimental economics. Stated simply, Amy and Will are in Nash equilibrium if Amy is making the best decision she can, taking into account Will's decision while Will's decision remains unchanged, and Will is making the best decision he can, taking into account Amy's decision while Amy's decision remains unchanged. The Nash equilibrium is named after John Forbes Nash, Jr.

Perfect information

Perfect information is a situation in which an agent has all the relevant information with which to make a decision. It has implications for several fields. In game theory, an extensive-form game has perfect information if each player, when making any decision, is perfectly informed of all the events that have previously occurred.[1] Card games where each player's cards are hidden from other players are examples of games with imperfect information.[2][3]

Subgame perfect equilibrium

A subgame perfect equilibrium necessarily satisfies the one-shot deviation principle. The set of subgame perfect equilibria for a given game is always a subset of the set of Nash equilibria for that game; in some cases the sets can be identical. The Ultimatum game provides an intuitive example of a game with fewer subgame perfect equilibria than Nash equilibria. An example of a game possessing an ordinary Nash equilibrium and a subgame perfect equilibrium is shown in Figure 1 (not reproduced here: an extensive-form game together with two different equilibria, a Nash equilibrium which is not subgame perfect and a subgame perfect equilibrium). The payoff matrix of the game is shown in Figure 2 (also not reproduced here).
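Subgame perfect equilibria of finite games of perfect information are typically found by backward induction. Here is a minimal sketch on a tiny, made-up game tree; the tree and payoffs are my own illustration, not the game from the missing figures.

```python
# A tiny extensive-form game of perfect information, as nested tuples:
# ("player", {action: subtree}) for decision nodes, or a payoff tuple
# (payoff_player1, payoff_player2) at the leaves. Made up for illustration.
game = ("1", {
    "L": ("2", {"l": (3, 1), "r": (0, 0)}),
    "R": ("2", {"l": (1, 2), "r": (2, 3)}),
})

def backward_induction(node, history=()):
    """Solve by backward induction: at each decision node the mover picks the
    action that maximizes their own payoff, given optimal play afterwards.
    Returns (payoff_vector, {history_of_moves: chosen_action})."""
    if isinstance(node[0], str):           # decision node: ("player", {action: subtree})
        player, actions = node
        idx = 0 if player == "1" else 1
        plan, best_action, best_payoffs = {}, None, None
        for action, subtree in actions.items():
            payoffs, subplan = backward_induction(subtree, history + (action,))
            plan.update(subplan)
            if best_payoffs is None or payoffs[idx] > best_payoffs[idx]:
                best_action, best_payoffs = action, payoffs
        plan[history] = best_action
        return best_payoffs, plan
    return node, {}                        # leaf: payoff vector, empty plan

payoffs, plan = backward_induction(game)
print(payoffs)  # (3, 1)
print(plan)     # {('L',): 'l', ('R',): 'r', (): 'L'} -- a move prescribed at every node
```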

References: Osborne, M.J., An Introduction to Game Theory, Oxford University Press, USA, 2004.

Folk theorem (game theory)

For an infinitely repeated game, any Nash equilibrium payoff must weakly dominate the minmax payoff profile of the constituent stage game. This is because a player achieving less than his minmax payoff always has an incentive to deviate by simply playing his minmax strategy at every history. The folk theorem is a partial converse of this. A payoff profile is said to be feasible if it lies in the convex hull of the set of possible payoff profiles of the stage game.

The folk theorem states that any feasible payoff profile that strictly dominates the minmax profile can be realized as a Nash equilibrium payoff profile, given a sufficiently large discount factor. For example, in the one-shot Prisoner's Dilemma, both players cooperating is not a Nash equilibrium; the only Nash equilibrium is both players defecting, which is also the mutual minmax profile. Since mutual cooperation strictly dominates the mutual minmax payoffs, the folk theorem implies that it can nevertheless be sustained as a Nash equilibrium of the infinitely repeated game when players are sufficiently patient. In mathematics, the term folk theorem refers generally to any theorem that is believed and discussed but has not been published.

Grim trigger

In game theory, grim trigger (also called the grim strategy or just grim) is a trigger strategy for a repeated game, such as an iterated prisoner's dilemma.

Initially, a player using grim trigger will cooperate, but as soon as the opponent defects (thus satisfying the trigger condition), the player using grim trigger will defect for the remainder of the iterated game. Since a single defection by the opponent triggers defection forever, grim trigger is the most strictly unforgiving of strategies in an iterated game. In iterated prisoner's dilemma strategy competitions, grim trigger performs poorly even without noise, and adding signal errors makes it even worse. Its ability to threaten permanent defection gives it a theoretically effective way to sustain trust, but because of its unforgiving nature and the inability to communicate this threat in advance, it performs poorly.[1]

Repeated game

In game theory, a repeated game (also called a supergame or iterated game) is an extensive-form game which consists of some number of repetitions of a base game (called a stage game).
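To tie the last two definitions together, here is a minimal sketch of a repeated Prisoner's Dilemma stage game in which one player uses grim trigger against an opponent who defects exactly once; the payoff numbers and the opponent's behaviour are my own illustrative assumptions.

```python
# Stage-game payoffs for a Prisoner's Dilemma: payoffs[(my_move, their_move)]
# is my payoff. Numbers are illustrative only.
payoffs = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def grim_trigger(my_history, opponent_history):
    """Cooperate until the opponent has defected once; then defect forever."""
    return "D" if "D" in opponent_history else "C"

def defect_once(my_history, opponent_history):
    """An illustrative opponent that defects only in round 3."""
    return "D" if len(my_history) == 2 else "C"

def play(strategy_a, strategy_b, rounds=6):
    history_a, history_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        score_a += payoffs[(move_a, move_b)]
        score_b += payoffs[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return "".join(history_a), "".join(history_b), score_a, score_b

print(play(grim_trigger, defect_once))
# ('CCCDDD', 'CCDCCC', 21, 11): after the single defection in round 3,
# grim trigger defects for the rest of the game.
```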

The stage game is usually one of the well-studied two-person games. Repetition captures the idea that a player will have to take into account the impact of his current action on the future actions of other players; this is sometimes called his reputation. The repeated game has different equilibrium properties because the threat of retaliation is real, since one will play the game again with the same person. It can be proved that every feasible payoff profile that gives each player more than his minmax payoff can be sustained as a Nash equilibrium, which yields a very large set of equilibria.
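As a small sketch of the minmax payoff that this result refers to, the following code computes each player's pure-strategy minmax value in a two-player stage game; the payoff table is my own illustrative Prisoner's Dilemma, not one from the text.

```python
# payoffs[(row, col)] = (row player's payoff, column player's payoff).
payoffs = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}
moves = ["C", "D"]

def minmax_value(player):
    """The lowest payoff the opponent can force on `player` (0 = row, 1 = col),
    assuming `player` then best-responds with a pure strategy."""
    if player == 0:   # opponent picks the column, row best-responds
        return min(max(payoffs[(r, c)][0] for r in moves) for c in moves)
    else:             # opponent picks the row, column best-responds
        return min(max(payoffs[(r, c)][1] for c in moves) for r in moves)

print(minmax_value(0), minmax_value(1))  # 1 1: mutual defection is the minmax profile here
```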

Finitely vs. infinitely repeated games

Repeated games may be broadly divided into two classes, depending on whether the horizon is finite or infinite.

Bayesian game

In game theory, a Bayesian game is one in which information about the characteristics of the other players (i.e., their payoffs) is incomplete. Following John C. Harsanyi's framework,[1] a Bayesian game can be modelled by introducing Nature as a player in the game. Nature assigns to each player a random variable that can take values called types, and associates probabilities or a probability density function with those types (in the course of the game, Nature randomly chooses a type for each player according to the probability distribution over each player's type space). Harsanyi's approach to modelling a Bayesian game in this way turns games of incomplete information into games of imperfect information (in which the history of the game is not available to all players).
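As a minimal sketch of Harsanyi's device, the code below lets Nature draw a type for the opponent and computes a player's expected payoff for each action before the draw is observed. The two types, the prior, and the payoff numbers are my own made-up assumptions, and the opponent's type-dependent behaviour is folded directly into the payoff table to keep the sketch short.

```python
import random

# Nature's prior over the opponent's type. Made-up numbers for illustration.
type_prior = {"tough": 0.3, "weak": 0.7}

# My payoff given (my_action, opponent_type); the opponent's behaviour is
# folded into these numbers purely to keep the sketch short.
payoff = {
    ("enter", "tough"): -1, ("enter", "weak"): 2,
    ("stay_out", "tough"): 0, ("stay_out", "weak"): 0,
}

def expected_payoff(action):
    """Expected payoff of an action before Nature's draw is observed."""
    return sum(p * payoff[(action, t)] for t, p in type_prior.items())

for action in ("enter", "stay_out"):
    print(action, expected_payoff(action))
# enter 1.1, stay_out 0.0: with this prior, entering is the better gamble.

# Nature's move: draw the opponent's type according to the prior.
drawn_type = random.choices(list(type_prior), weights=type_prior.values())[0]
print("Nature drew:", drawn_type)
```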

The type of a player determines that player's payoff function, and the probability associated with a type is the probability that the player for whom the type is specified is of that type.

Cooperative game

In game theory, a cooperative game is a game where groups of players ("coalitions") may enforce cooperative behaviour, hence the game is a competition between coalitions of players rather than between individual players. An example is a coordination game in which players choose their strategies by a consensus decision-making process. Recreational games are rarely cooperative, because they usually lack mechanisms by which coalitions may enforce coordinated behaviour on the members of the coalition. Mathematically, a cooperative game is given by specifying a value for every coalition.

Formally, a cooperative game consists of a finite set of players $N$, called the grand coalition, and a characteristic function $v : 2^N \to \mathbb{R}$,[1] from the set of all possible coalitions of players to a set of payments, that satisfies $v(\emptyset) = 0$. Conversely, a cooperative game can also be defined with a characteristic cost function $c : 2^N \to \mathbb{R}$.

Normal-form game

In static games of complete, perfect information, a normal-form representation of a game is a specification of the players' strategy spaces and payoff functions.

A strategy space for a player is the set of all strategies available to that player, whereas a strategy is a complete plan of action for every stage of the game, regardless of whether that stage actually arises in play. A payoff function for a player is a mapping from the cross-product of the players' strategy spaces to that player's set of payoffs (normally the set of real numbers, where the number represents a cardinal or ordinal utility, often cardinal in the normal-form representation); that is, the payoff function of a player takes as its input a strategy profile (a specification of strategies for every player) and yields a representation of that player's payoff as its output.
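As a minimal sketch of this definition, the payoff functions below map each strategy profile of a Matching Pennies game to a real-valued payoff for each player; the +1/-1 numbers are the standard convention, used here only as an illustration.

```python
# Matching Pennies in normal form. Each payoff function maps a strategy
# profile (row_choice, col_choice) to a real-valued payoff for that player.
strategy_spaces = {"Row": ("Heads", "Tails"), "Column": ("Heads", "Tails")}

def payoff_row(profile):
    row, col = profile
    return 1 if row == col else -1   # row wins when the pennies match

def payoff_col(profile):
    return -payoff_row(profile)      # zero-sum: column wins when they differ

for row in strategy_spaces["Row"]:
    for col in strategy_spaces["Column"]:
        profile = (row, col)
        print(profile, payoff_row(profile), payoff_col(profile))
```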

Related topics: Extensive-form game, Minimax, Backward induction, Markov decision process, Fictitious play, Regret (decision theory), Prisoner's dilemma, Ultimatum game, Matching pennies, Battle of the sexes, Coordination game.