Imagine you're a soldier posted on a defensive line. Tomorrow, there will be a great battle. There are two possible outcomes of the battle (victory or defeat), and two possible outcomes for you (surviving or dying). Clearly, your preference is to survive.
If your line is breached, you will die. However, even if the defensive line holds, you may die in battle. It seems that your best option is to run away. But if you do, the ones who stay behind and fight may die. You realize that every other person on the defensive line is thinking this very same thing. So if you decide to stay and cooperate but everyone else flees, you'll certainly die.
This problem has plagued military strategists since the beginning of warfare. That's why there is generally a new condition entered into the equation -- if you flee or defect, you will be shot as a traitor. Therefore, the best chance you have of surviving is to keep your position on the line and fight for victory.
How does this relate to game theory?
Game theory isn't the study of how to win a game of chess or how to create a role-playing game scenario. Often, game theory doesn't even remotely relate to what you'd commonly consider to be a game.
At its most basic level, game theory is the study of how people, companies or nations (referred to as agents or players) determine strategies in different situations in the face of competing strategies acted out by other agents or players. Game theory assumes that agents make rational decisions at all times. There's some fault in this assumption: What passes for irrational behavior by most of society (a buildup of nuclear weapons, for instance) is considered quite rational by game theory standards.
However, even when game theory analysis produces counterintuitive results, it still yields surprising insights into human nature. For instance, do members of society only cooperate with each other for the sake of material gain, or is there more to it? Would you help someone in need if it hurt you in the long run?
To learn why a rational person must behave selfishly, continue to the next section.
The Prisoner's Dilemma
One of the best ways to understand some basic game theory principles is to look at a classic game theory example: the prisoner's dilemma. This game examines how two players interact based on an understanding of motives and strategies. The prisoner's dilemma is a game that concerns two players -- both suspects in a crime. They're arrested and brought to a police station. If both suspects protect each other by staying quiet (called cooperation in game theory terms), the police have only enough evidence to put each in jail for five years.
However, each suspect is offered a deal. If either one confesses (defection from a cooperative relationship), and the other suspect doesn't, the defector will be rewarded with freedom, while the tight-lipped suspect will get 20 years in jail. If both confess, both get 10 years in jail.
It seems both players benefit most by cooperating with each other, since receiving a 20-year jail term is an unacceptable outcome. But because there's an opportunity for either of them to go scot-free by defecting, and both players know the other is thinking along these same lines, both must defect out of self-interest. In doing so, the suspects receive 10-year sentences. This isn't the best outcome, but it is the best strategy for the situation the players find themselves in.
Any agreement or heartfelt promise between the two players to cooperate only guarantees that both will, in fact, secretly defect. A mutual promise not to confess actually encourages confession, which leads to freedom (the best individual outcome) for the self-interested.
This is the prisoner's dilemma. Game theorists have determined that confessing is always the answer for both parties in this case. The reason for this is that each party must assume that the other will act with only self-interest in mind.
We can examine the situation by charting it out on a matrix. Matrices allow us to examine all possible strategies and the outcomes that the combinations will produce.
To determine motives, we'll assign a range of preferences to the different outcomes, with 1 representing the worst outcome (20 years' imprisonment) and 4 the best (going free):
- 20 years: 1
- 10 years: 2
- 5 years: 3
- Go free: 4
Now we know our outcomes and preferences, as well as our available strategies: don't confess (cooperation between players) or confess (defection). We can see how different combinations of strategies will create different results. The outcomes are represented by number pairs, with the first number representing Player 1's outcome and the second representing Player 2's.
Now, evaluate your options by examining the outcomes represented in each column. Looking at the first column, we see that 2 is greater than 1, and in the second column, 4 is greater than 3. So, your best strategy, no matter what your partner does, is to defect (confess). Since the outcomes of your confession are better than the outcomes of not confessing, this is called a strictly dominant action.
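The column-by-column comparison can be sketched in a few lines of Python. The table below encodes the preference ranks from the list above (1 = worst, 4 = best); the move names and the helper function are illustrative choices, not standard notation:

```python
# Prisoner's dilemma payoffs as preference ranks (1 = worst, 4 = best),
# following the article's scale: 20 yrs -> 1, 10 yrs -> 2, 5 yrs -> 3, free -> 4.
# payoff[(my_move, their_move)] = (my rank, their rank)
payoff = {
    ("quiet",   "quiet"):   (3, 3),   # both cooperate: 5 years each
    ("quiet",   "confess"): (1, 4),   # I stay quiet, partner defects: 20 yrs / free
    ("confess", "quiet"):   (4, 1),   # I defect, partner stays quiet: free / 20 yrs
    ("confess", "confess"): (2, 2),   # both defect: 10 years each
}

def strictly_dominates(a, b):
    """True if move `a` beats move `b` against every opponent move."""
    return all(payoff[(a, opp)][0] > payoff[(b, opp)][0]
               for opp in ("quiet", "confess"))

print(strictly_dominates("confess", "quiet"))   # True: confessing is strictly dominant
print(strictly_dominates("quiet", "confess"))   # False
```

Because confessing wins against both possible opponent moves (4 > 3 and 2 > 1), the check comes out true no matter what the partner does.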
Why wouldn't a game theorist study solitaire? Keep reading to find out.
John von Neumann and Oskar Morgenstern introduced game theory to the world in 1944 with "Theory of Games and Economic Behavior." They hoped to find mathematical answers to economic problems.
According to economic theory, producers could make a greater profit by reacting to conditions such as supply and demand. But these theories fail to account for the strategies of other producers, and how the anticipation of those strategies affects each producer's moves. Game theory attempted to account for all of these strategic interactions. It didn't take long for military strategists to see the value in this.
When we discuss game theory, we assume a few things:
- A game is considered any scenario in which two players are able to strategically compete against one another, and the strategy chosen by one player will affect the actions of the other player. Games of pure chance don't count, because there's no freedom of choice, and thus no strategy involved. And one-player games, such as solitaire, aren't considered by game theorists to be games, because they don't require strategic interaction between two players.
- Players in a game know every possible action that any player can make. We also know all possible outcomes. All players have preferences regarding these possible outcomes, and, as players, we know not only our own preferences but also those of the other players.
- Outcomes can be measured by the amount of utility, or value, a player derives from them. If you prefer reaching point A to reaching point B, then point A has higher utility. By knowing that you value A over B, and B over C, a player can anticipate your actions, and plan strategies that account for them.
- All players behave rationally. Even seemingly irrational actions are rational in some way. For instance, if you were to play two games of pool, you wouldn't intentionally lose your money on the first game unless you believed that doing so would bolster your opponent's confidence when he or she was deciding how much to bet on game 2 -- a game you anticipate winning. This is an essential difference between one-shot and repeating games. In a one-shot game, you play once; in a repeating game, you play multiple times. (A little later, we'll look at how rational thinking varies between one-shot and repeating games.)
- If no player can reach a better outcome by switching strategies, the game reaches an impasse called the Nash Equilibrium. Essentially, this boils down to players keeping their current strategies (even if they don't have the highest preference) because switching won't accomplish anything.
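The Nash Equilibrium idea can be checked mechanically: a pair of strategies is an equilibrium if neither player can do better by switching alone. Here is a minimal sketch for the prisoner's dilemma, reusing the 1-to-4 preference ranks (the names are illustrative):

```python
# A minimal Nash-equilibrium check for the prisoner's dilemma, using
# preference ranks where 1 is the worst outcome and 4 the best.
moves = ("quiet", "confess")
# payoffs[(p1_move, p2_move)] = (p1 rank, p2 rank)
payoffs = {
    ("quiet",   "quiet"):   (3, 3),
    ("quiet",   "confess"): (1, 4),
    ("confess", "quiet"):   (4, 1),
    ("confess", "confess"): (2, 2),
}

def is_nash(m1, m2):
    """Neither player can improve by unilaterally switching moves."""
    best1 = all(payoffs[(m1, m2)][0] >= payoffs[(alt, m2)][0] for alt in moves)
    best2 = all(payoffs[(m1, m2)][1] >= payoffs[(m1, alt)][1] for alt in moves)
    return best1 and best2

equilibria = [(m1, m2) for m1 in moves for m2 in moves if is_nash(m1, m2)]
print(equilibria)   # [('confess', 'confess')]
```

Note that the only equilibrium is mutual confession, even though both players would prefer the mutual-cooperation outcome: switching away from it unilaterally always tempts each player.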
In the next section, we'll put this information to use and see what we can learn about strategy by plotting it on a game tree.
Using a Game Tree
In a previous section, we examined the prisoner's dilemma and plotted it on a matrix. That was an example of a simultaneous-move game. Games are either simultaneous-move or sequential-move games. In simultaneous-move games, both players make a move without knowledge of the other players' moves. A blind auction works this way, as does bidding for contracts. In sequential-move games, the players take turns moving, as in chess or negotiations.
For games in which both players move simultaneously, we must assume our opponent is going to seek the best outcome possible. Therefore, we must protect ourselves by also making the most advantageous move possible. Would players choose different strategies in the prisoner's dilemma if it were played sequentially instead of simultaneously?
While simultaneous-move games can be plotted on matrices, sequential-move games can be plotted on game trees. The prisoner's dilemma mapped out on a game tree would look like this:
The order of moves is represented top-to-bottom on the tree. The uppermost node represents the first move of Player 1 (confessing or not confessing). The two nodes below it are subgames. Each represents Player 2's possible reaction to the first move made. The four nodes on the bottom row are terminal nodes and represent all four possible payoffs to this particular game.
We're able to look at the game in this form and choose the best strategy for the player making the final move in the game, which is Player 2. This allows us to work backwards to determine what move Player 1 should make, knowing how Player 2 will rationally behave in either scenario. This process of looking at the game from end-to-beginning is called backward induction. We see that Player 2's best strategy is to confess, regardless of Player 1's opening move. By moving our analysis further up the tree, to the beginning of the game, we know that Player 1 must choose defection (confession).
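Backward induction translates directly into code: first solve Player 2's best reply in each subgame, then let Player 1 choose knowing those replies. A minimal sketch, reusing the same 1-to-4 payoff ranks (the names are illustrative):

```python
# Backward induction on the sequential prisoner's dilemma.
# Each terminal node holds (Player 1 rank, Player 2 rank) on the 1-4 scale;
# Player 1 moves first, then Player 2 responds.
terminal = {
    ("quiet",   "quiet"):   (3, 3),
    ("quiet",   "confess"): (1, 4),
    ("confess", "quiet"):   (4, 1),
    ("confess", "confess"): (2, 2),
}
moves = ("quiet", "confess")

# Step 1: for each of Player 1's moves, find Player 2's best reply.
best_reply = {m1: max(moves, key=lambda m2: terminal[(m1, m2)][1]) for m1 in moves}

# Step 2: Player 1 picks the move whose induced outcome is best for Player 1.
p1_move = max(moves, key=lambda m1: terminal[(m1, best_reply[m1])][0])

print(best_reply)                     # {'quiet': 'confess', 'confess': 'confess'}
print(p1_move, best_reply[p1_move])   # confess confess
```

Player 2's best reply is to confess in both subgames, so Player 1, anticipating this, confesses as well.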
We see that in a one-shot game like the prisoner's dilemma, playing simultaneously or sequentially doesn't change the optimal strategy for each player.
In the next section, we'll learn why perfectly rational selfishness must be thrown out the window when we play prisoner's dilemma over and over.
One-shot Games vs. Repeated Games
In a one-shot game, such as our previous example of the prisoner's dilemma, the stakes are high -- but carry no further repercussions. However, when playing a repeated game, a one-shot strategy may not be the best move: You and your opponent can get better returns in the long run by cooperating (not confessing) at times and defecting (confessing) at others. Varying your play this way helps you probe one another's strategies and is known as a mixed strategy.
Let's say that you know your prisoner's dilemma is just one scenario in a series of repeated games. So you choose not to confess on your first move. Instead of taking advantage of this, Player 2 may reciprocate your trust, and also not confess, resulting in the best mutual payoff: five years each in jail. Strategy in repeated games takes into consideration the opponent's reputation and future cooperation, and so these games can play out much differently than one-shot games.
In fact, if you repeat the game but both players know exactly how many games there will be, each will expect the other to maximize utility by defecting on the very last move -- the last game in the series. Knowing this, both players realize they must defect on the second-to-last move. But since both know that will be the optimal strategy, each will play the most self-serving strategy the move before that, and so on, until they're pre-empting each other on the very first game in the series. This is the only chance for either player to do so, lest both immediately fall to a disadvantage, never to recover the lead.
Playing a series of games with no known end, the players can adopt a tit-for-tat strategy, which punishes the opponent for defecting. The players match defection in kind with their own defection for a predetermined number of moves, before attempting to re-establish trust. This is called a trigger strategy. For instance, if Senator 1 cooperates on a bill sponsored by Senator 2, but Senator 2 doesn't reciprocate the cooperation, Senator 1 might refuse to cooperate when Senator 2 proposes his or her next bill: tit-for-tat.
Another trigger strategy is the grim trigger strategy, in which Player 1 cooperates until Player 2 defects, causing Player 1 to defect on every move thereafter regardless of future cooperation on the part of Player 2. While tit-for-tat leaves room for forgiveness, grim trigger strategy is an endless cycle of defection.
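Both trigger strategies are easy to simulate. The sketch below pits tit-for-tat against grim trigger over 10 rounds of the prisoner's dilemma, scoring total years in jail (lower is better); the round count and function names are illustrative choices:

```python
# A sketch of tit-for-tat vs. grim trigger in a repeated prisoner's dilemma.
# "C" = cooperate (stay quiet), "D" = defect (confess); values are years in jail.
YEARS = {("C", "C"): (5, 5), ("C", "D"): (20, 0),
         ("D", "C"): (0, 20), ("D", "D"): (10, 10)}

def tit_for_tat(my_history, their_history):
    # Cooperate first, then copy the opponent's previous move.
    return their_history[-1] if their_history else "C"

def grim_trigger(my_history, their_history):
    # Cooperate until the opponent defects once, then defect forever.
    return "D" if "D" in their_history else "C"

def play(strat1, strat2, rounds=10):
    h1, h2, total1, total2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = strat1(h1, h2), strat2(h2, h1)
        y1, y2 = YEARS[(m1, m2)]
        total1, total2 = total1 + y1, total2 + y2
        h1.append(m1); h2.append(m2)
    return total1, total2

print(play(tit_for_tat, grim_trigger))   # (50, 50): both cooperate every round
```

Since neither strategy defects first, neither trigger ever fires, and both players settle into the mutually best five-years-per-round outcome.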
Sometimes, players threaten a grim-trigger strategy and don't follow through with it. This is known as cheap talk: a threat without commitment. So if your fiancé moves in with you but doesn't break the lease on his apartment, that's cheap talk. If he burns his former home to the ground (and gets a tattoo of your name), that's commitment.
Continue reading to learn how game theorists saved the world -- or nearly ruined it -- on a daily basis for several decades.
Game Theory and the Cold War
Game theory's development accelerated at a record pace during World War II. Though it was intended for economics, both the United States and the Soviet Union quickly saw its value for forming war strategies.
Early in the Cold War, the Eisenhower administration viewed nuclear weapons like any other weapon in the arsenal available for use [source: Spence]. Game theorist Thomas Schelling convinced officials that nuclear weapons were only useful as deterrents. Additionally, he proposed that the U.S. should have a variety of responses it could call upon in relation to the size of the offense against it.
A balance was struck in which neither nation could gain advantage through nuclear attack -- the reprisals would be too devastating. This was known as Mutual Assured Destruction (MAD). This balance required open acknowledgment of each nation's strengths and vulnerabilities. However, as prisoner's dilemma showed us, both players must assume the other is only concerned with self-interest; therefore, each must limit risk by adopting a dominant strategy.
If one nation changed the balance of power (by building a missile-defense shield, for instance), would it lead to a strategic blunder that resulted in nuclear war? Governments consulted game theorists to prevent such imbalances. When one nation built missile silos, the other nation targeted them. The Soviet Union and the U.S. then spread out and hid their launch sites around the globe, which required both nations to commit more missiles to a potential first strike in order to diminish the retaliatory abilities of the other. They also kept nuclear-armed aircraft aloft in the skies at all times to provide a deterrent if the silos were destroyed. As another deterrent, they established nuclear-armed submarines. This pretty much covered all bases: ground, air and sea.
The atmosphere was tense, and there was a constant threat of miscommunication leading to disastrous results. In the midst of such massive distrust, even a defensive move could be interpreted as provocative. Building fallout shelters, for instance, makes it look like you're expecting trouble. Why are you expecting trouble, unless you're planning on starting it?
By no rational or mathematical measure would it make sense to launch nuclear weapons after your nation has already taken a significant hit. What would be the point? World destruction for the sake of revenge? But if revenge isn't a deterrent, what keeps either nation from launching a first strike? To counteract this threat, American and Soviet leaders sometimes adopted a "madman strategy," releasing rumors that they were mentally unstable or blind with rage in order to keep the other side off guard.
Weapons control and disarmament negotiations were essentially repeated games that allowed both parties to reward cooperation and punish defection. Through repeated meetings and increased communication, trust and cooperation led to (some) disarmament and less strategic posturing. This was also due in no small part to the resources required to maintain an ever-growing nuclear capability.
Fortunately, neither nation was willing to play the final stage of a game in which the best possible outcome involved a victory that could only be celebrated by a handful of survivors underground.
So aside from Cold War strategies, how else can game theory be useful? Find out next.
Other Games and Applications
Game theory is also useful for sociological studies. There are different games or scenarios that theorists use to analyze behavior patterns. One of these is the ultimatum game.
In the ultimatum game (a one-shot game), two players start off with nothing. Player 1 is given $10 and is instructed to give a portion of it to Player 2, who can accept or reject the offer. If Player 2 accepts Player 1's offer, both players walk away with something. But if Player 2 rejects the offer, then neither profits.
Theorists initially believed that Player 1 would offer grossly uneven splits (only $2 out of the $10, for example) and that Player 2 would accept, since something is better than nothing. This supported early economic models that suggested a player always acts out of self-interest: even if Player 1 offered just $1, Player 2 should accept. However, studies have shown that even in one-shot scenarios, Player 2 will sometimes reject the offer. It could be interpreted that Player 2 is simply insulted by such a low-ball offer because it's unfair. But is it really?
When accepting $2, the gain can be viewed either as net or as relative. From one point of view, $2 is better than nothing every time. However, a net gain of $2 moves you from a position of equal standing with your opponent (both of you had nothing) to one of great relative disadvantage: in the course of one move, your opponent now has $6 more than you do. Depending on the situation, it may be advantageous to reject any offer that isn't evenly split.
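The accept-or-reject logic can be sketched as a small function. The responder's minimum acceptable offer is a purely illustrative assumption -- a stand-in for the fairness concerns described above:

```python
# Net vs. relative gain in the $10 ultimatum game.
# The fairness threshold `min_acceptable` is an illustrative assumption.
def ultimatum(offer, min_acceptable=0, pot=10):
    """Return (proposer's take, responder's take) for a given offer."""
    if offer >= min_acceptable:
        return pot - offer, offer     # offer accepted
    return 0, 0                       # offer rejected: neither player profits

print(ultimatum(2))                   # (8, 2): pure self-interest accepts anything
print(ultimatum(2, min_acceptable=4)) # (0, 0): a fairness-minded responder rejects
print(ultimatum(5, min_acceptable=4)) # (5, 5): an even split satisfies both
```

The middle case captures the puzzle: a responder who cares about relative standing walks away from $2 to deny the proposer an $8 advantage.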
What good is this? Well, by studying how players act within this game, we may learn what truly motivates people. Economic theory maintains that making as much money as possible is the most important thing. We know, however, that life is about more than just that one pursuit. But money is important to survival, and our use of it reflects other values. Study participants belonging to cultures that value gift-giving are more likely to make offers that favor the recipient. Other cultures may decline a favorable offer because acceptance would bring with it an obligation to the gift-giver [source: Henrich].
In another application of game theory, called evolutionary game theory, each player is viewed as a strategy in his or her own right. That is, you represent the result of your ancestors' decisions: if your ancestors chose to steal from their neighbors, you're the walking embodiment of that survival strategy. As these strategies compete for dominance, the successful ones replicate -- in the form of children -- and eventually crowd out other strategies by sheer numbers.
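This "player as strategy" idea can be caricatured with a toy replicator model, in which a strategy's population share grows in proportion to its average payoff. The payoffs below reuse the prisoner's dilemma preference ranks from earlier; the update rule and starting mix are illustrative assumptions, not a standard model:

```python
# Toy replicator sketch: cooperators vs. defectors in a large population.
# Each generation, a strategy's share grows in proportion to its average
# payoff (the 1-4 preference ranks) against the current population mix.
def step(coop_share, generations=1):
    for _ in range(generations):
        d = 1.0 - coop_share
        # Average rank earned by each strategy against the current population.
        coop_payoff = coop_share * 3 + d * 1     # meet C: rank 3, meet D: rank 1
        defect_payoff = coop_share * 4 + d * 2   # meet C: rank 4, meet D: rank 2
        avg = coop_share * coop_payoff + d * defect_payoff
        coop_share = coop_share * coop_payoff / avg
    return coop_share

print(round(step(0.9, generations=50), 4))  # cooperators are driven toward zero
```

Because defection outscores cooperation against any mix, the cooperative strategy shrinks every generation -- the population-level echo of the dominant strategy from the one-shot game.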
A scenario called public goods tests players' rationality. In this game, a group of six players is given $10 each. They are then told that any money contributed to a general pool will be tripled and divided evenly among all players, regardless of how many contribute or how much. The rational course of action is to defect -- not to contribute -- and benefit from whatever dividend may come your way. Fortunately for us, in real-life situations, people sometimes deviate from the rational course and contribute to the pot. One real-life example of the public goods game is the environment. Whether or not an individual invests money or effort into environmental stewardship, that individual will benefit from any contribution made by others.
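The arithmetic makes the dilemma plain: each dollar you contribute comes back to you as only $3/6 = $0.50, so free riding always pays more. A minimal sketch (the function name is illustrative):

```python
# Payoffs in the six-player public goods game: contributions are tripled
# and split evenly among all six players, contributors and defectors alike.
def payoffs(contributions, endowment=10, multiplier=3):
    n = len(contributions)
    dividend = multiplier * sum(contributions) / n
    return [endowment - c + dividend for c in contributions]

print(payoffs([10] * 6))                 # everyone contributes: $30 each
print(payoffs([0] * 6))                  # everyone defects: $10 each
print(payoffs([0, 10, 10, 10, 10, 10]))  # the lone defector earns the most
```

Full cooperation triples everyone's money, yet in the mixed case the lone defector ends up with $35 against the contributors' $25 -- which is exactly why defection is the "rational" course.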
Is game theory just an excuse to look out for No.1? Continue reading to find out.
Criticisms of Game Theory
Despite its practical applications, game theory isn't without criticism. It's been pointed out that game theory can help only so much if you're trying to predict realistic behavior, since every action, good or bad, can be rationalized in the name of self-interest.
A constant difficulty with game theory modeling is defining, limiting, isolating or accounting for every set of factors and variables that influence strategy and outcome. There's always an X-factor that simply cannot be accounted for. For instance, no strategy can predict the actions of a negotiator who is in the throes of a religious revelation.
Game theory is based on rationality. And in traditional economic models, rationality is the maximization of one's own payoff. Therefore, in every situation, you'll always act to gain as much as possible, regardless of how it affects others. Interestingly, studies have found that the subjects most likely to fully embrace the economic model of a self-serving, payoff-maximizing agent are kindergarten students, but that by the fourth grade, their behavior begins to favor cooperative strategies [source: Henrich].
Game theory argues that cooperation between players is always the rational strategy, at least when participating in a game-theory experiment (even if it means losing the game). Consider this scenario: You participate in what you are told is a one-shot game. To win this game, you must take advantage of the other player. After doing so and winning, you learn that this game is actually one of two games in a series.
Now the roles are reversed. The test-givers want to see how Player 2 will behave after Player 1 defects in the first game -- this is the true purpose of the study. Your rational, self-maximizing action in the first game is now irrational outside the framework of a one-shot game.
Test-givers often trick test-takers as a strategy to obtain the optimal outcome: full knowledge of players' strategic choices in different game scenarios. A test-giver's strategy of concealing the true nature of the game itself will dominate any player's strategy within the game. The test-giver receives maximum information (which offers the most utility within a larger framework of test-giving). This information comes, however, at the expense of the player, who reveals to a fellow citizen his or her willingness to defect within the larger framework of life.
The prisoner's dilemma shows us we must assume agents always play dominant strategies. Therefore, the best strategy for a game theory experiment is to assume the test-giver is manipulating the game to make players reveal information. In a game, then, it's always better to cooperate -- even if it means losing the game. The worst outcome from this strategy is still an acceptable outcome. Essentially, losing an experimental game when you've been tricked isn't such a loss -- as long as you maintain your reputation within a much larger series of life scenarios.
Is it rational to take advantage of a player within the hypothetical (and possibly misleading) parameters of a game when you might have to share an elevator with them afterward? Ask yourself that before your next board meeting.
For more information on game theory, visit the links on the next page.
More Great Links
- Brañas-Garza, Pablo. "Promoting helping behavior with framing in dictator games." Journal of Economic Psychology. August 2007 (May 9, 2008). http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6V8H-4MFCVST-1&_user=10&_rdoc=1&_fmt=&_orig=search&_sort=d&view=c&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=89443199abdab66180afdce858c5ea5b
- Chaplin, Virginia. "Princeton & Mathematics: A Notable Record." Princeton Alumni Weekly. May 9, 1958. http://www34.homepage.villanova.edu/robert.jantzen/princeton_math/pmcxpaw.htm
- Derderian, Hovnan. "A Message from Archbishop Hovnan Derderian on the occasion of 4th of July." Western Diocese of the Armenian Church. June 29, 2007. http://www.armenianchurchwd.com/A-Message-from-Archbishop-Hovnan-Derderian-on-the-occasion-of-4th-of-July/
- Economist. "Money isn't everything." July 5, 2007. http://www.economist.com/science/displaystory.cfm?story_id=9433782
- Glossary of Research Economics. Ed. Peter B. Meyer. (May 9, 2008). http://econterms.com
- Hart, Sergiu. "An Interview with Robert Aumann." Center for the Study of Rationality, the Hebrew University of Jerusalem. January 2005 (May 15, 2008). http://www.ma.huji.ac.il/hart/papers/md-aumann.pdf
- Hauert, Christoph. "Public Goods." January 2005 (May 9, 2008). http://www.univie.ac.at/virtuallabs/PublicGoods
- Henrich, Joseph. "Foundations of Human Sociality: Experiments in 15 Small-Scale Societies." California Institute of Technology. (May 9, 2008). http://www.hss.caltech.edu/roots-of-sociality/phase-i
- Internet Movie Database. "Synopsis for Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb." (May 9, 2008). http://www.imdb.com/title/tt0057012/synopsis
- James, Oliver. "Think again: New research on schizophrenia suggests that the drugs won't always work." The Guardian. October 22, 2005. http://lifeandhealth.guardian.co.uk/experts/oliverjames/story/0,,1619897,00.html
- Journal of Economic Behavior and Organization. "Reciprocity in a Two-part Dictator Game." Avner Ben-Ner, Louis Putterman, Fanmin Kon and Dan Magan. December 3, 2002 (May 9, 2008). http://www.econ.brown.edu/fac/Louis_Putterman/working/pdfs/chig_aer.pdf
- Krieger, Lou. "A Little Game Theory." Lou Krieger Online. (May 9, 2008). http://www.loukrieger.com/articles/gametheory.htm
- Mandel, Michael. "A Nobel Letdown in Economics." Businessweek. Oct. 11, 2005. http://www.businessweek.com/bwdaily/dnflash/oct2005/nf20051011_3028_db084.htm
- McCabe, Kevin. "What is the Ultimatum Game?" Neuroeconomics. September 24, 2003 (May 9, 2008). http://neuroeconomics.typepad.com/neuroeconomics/2003/09/what_is_the_ult.html
- PBS. "Hernan Cortes Arrives in Mexico." (May 9, 2008). http://www.pbs.org/kpbs/theborder/history/timeline/1.html
- Princeton University Library. "The Princeton Mathematics Community in the 1930s: An Oral History Project." (May 9, 2008). http://www.math.princeton.edu/quicklinks/
- Reynolds, Winston A. "The Burning Ships of Hernán Cortés." Hispania, Vol. 42, No. 3, pp. 317-324. September 1959. http://www.jstor.org/pss/335707
- Shor, Mike. "Game theory in film, music, and fiction: The Simpsons." (May 9, 2008). http://www.gametheory.net/popular/reviews/Simpsons.html
- Spence, Michael. "Mr. Counterintuition: America is safer with sophisticated enemies." Wall Street Journal. February 17, 2007. http://www.opinionjournal.com/editorial/feature.html?id=110009690
- Stanford Encyclopedia of Philosophy. "Game Theory." March 10, 2006 (May 9, 2008). http://plato.stanford.edu/entries/game-theory/
- Suri, Jeremi. "The Nukes of October: Richard Nixon's Secret Plan to Bring Peace to Vietnam." Wired.com. February 25, 2008 (May 9, 2008). http://www.wired.com/politics/security/magazine/16-03/ff_nculearwar?currentPage=all/
- Weisstein, Eric W. "Game of Hex." MathWorld -- a Wolfram Web Resource. http://mathworld.wolfram.com/GameofHex.html
- Whitaker, Robert. "Mind drugs may hinder recovery." USA Today. March 3, 2002. http://www.usatoday.com/news/opinion/2002/03/04/ncguest2.htm