
 

Game Theory 101: Bargaining
By William Spaniel

 

Copyright 2014 William Spaniel. All rights reserved.

 

Acknowledgements

I thank Danielle Rivera Doi, Myles Mack-Mercer, and Kenny Oyama for their comments and suggestions as I compiled this book. Special thanks go to Lacey Piekarz of Peak Composition, LLC, for editing and proofreading services. I originally learned game theory from Branislav Slantchev, John Duggan, Mark Fey, and Avidit Acharya. Please send feedback and report possible errors to [email protected].

 

About the Author

William Spaniel is a PhD candidate in political science at the University of Rochester, author of Game Theory 101: The Complete Textbook and The Rationality of War, creator of the popular YouTube series Game Theory 101, and founder of gametheory101.com. You can email him at [email protected] or follow him on Twitter @gametheory101.

 

Table of Contents

Chapter 1: Who Wins?
  Why Bargaining Matters
  What Is Bargaining Power?
  Why Game Theory Is Awesome
  The Limitations of Game Theory
  Outline of the Book

Chapter 2: The Ultimatum Game
  Application: Why Renters Get Screwed out of Security Deposits
  Interpreting the Ultimatum Game
  Appendix: Other Solutions to the Ultimatum Game

Chapter 3: Proposal Power and Continuous Bargaining Spaces
  The Continuous Bargaining Space Ultimatum Game
  Application: Agenda Setting, the Hastert Rule, and Why the Majority Does Not Rule in Congress
  Application: The Federalist No. 78 and Executive Power
  A Brief Aside on Monopoly

Chapter 4: Counteroffers, Discounting, and the Consequences of Delay
  Understanding Delay: The Discount Factor
  The Power of a Counteroffer
  Application: Negotiating Starting Salary
  The Power to Reject or the Power to Counteroffer?
  Conclusion

Chapter 5: Outside Options, Risk, and the Value of Being Unique
  Outside Options
  Application: Negotiating a Raise
  Application: Unemployment Benefits
  Park Place Is Worthless: Bargaining over McDonald’s Monopoly Pieces
  Application: The De Beers Diamond Monopoly
  Who Receives the Additional Profits?
  Application: Google, Apple, and a $9 Billion Anti-Trust Lawsuit
  Application: Star Free Agent Athletes
  Application: Microsoft’s Misguided Evaluation System
  Benefits with Asymmetric Employers
  Application: The Cardinal Sin of Buying a New Car
  Hedging against Risk
  Application: Warren Buffett’s Billion Dollar Challenge
  Application: Deal or No Deal
  Conclusion

Chapter 6: Making Threats Credible
  Tying Hands
  Application: Tripwires, West Berlin, and the Cold War
  Bargaining by Proxy: Tough Union Negotiators and Aggressive Attorneys

Chapter 7: Bargaining with Uncertainty
  The Risk-Return Tradeoff and Extremely Simple Uncertainty
  Application: Negotiating with a Car Dealership Redux
  Application: Driving Home a Dealership Vehicle
  Labor Strikes and Screening
  Application: The 2013 CBS/Time Warner Cable Blackout
  Application: The October 2013 United States Government Shutdown
  Application: Out-of-Court Settlements
  Application: Negotiating Peace during War
  Negotiating over Used Cars and the Market for Lemons
  Costly Signaling and the Value of a Useless College Education
  Conclusion
  Appendix: Uncertainty with Continuous Type Spaces and the Risk-Return Tradeoff

Chapter 8: Commitment Problems
  Thinking Like a Criminal: Walter White on Breaking Bad
  Negotiating with the Police
  Commitment Problems in Newly Independent Countries
  Post Civil War Commitment Problems
  Application: Yelp, Angie’s List, and eBay’s Reputation System
  The Kidnapper’s Dilemma
  Conclusion

Chapter 9: Alternating Offers Bargaining
  Bargaining with Two Counteroffers
  Bargaining with Three Counteroffers
  Bargaining with N Counteroffers
  Conclusion

Chapter 10: Rubinstein Bargaining
  Infinite Horizon Bargaining
  The First Mover’s Advantage
  How the Rich Get Richer

Chapter 11: Understanding Bargaining

 

Chapter 1: Who Wins?

Tomorrow, Albert will walk into his boss’s office and attempt to negotiate a raise. Albert recently received a job offer at another firm for $20 per hour. As such, he plans to leave if his boss stands firm at any smaller amount. Meanwhile, the boss greatly values Albert’s contribution to the firm. In fact, if it were necessary, the boss would be willing to pay up to $50 an hour to keep Albert around. This $50 reflects how much additional profit Albert brings to the company. Thinking holistically, Albert and his boss should reach some agreement to keep him with the firm. It would be silly for his boss to insist on a wage below $20—such a lowball proposal would guarantee Albert’s departure from the company. Likewise, it would be silly of Albert to demand more than $50—Albert’s boss is unwilling to pay such high amounts to keep him around. However, any amount between $20 and $50 leaves both parties better off—Albert would receive more than he would earn at the other company, while his boss would pay him less than the amount of profit Albert adds to the firm’s bottom line. Visually, the problem looks like this:

Again, Albert finds any wage less than $20 unacceptable, while his employer finds any amount greater than $50 unacceptable. The amounts between $20 and $50 constitute a bargaining range—the set of settlements mutually preferable to bargaining breakdown. The $30 difference between the minimum Albert needs and the maximum his employer will pay is called the surplus that an agreement creates. While the parties should reach an agreement within the bargaining range, it is entirely unclear which division they will ultimately settle on. Can the boss extract the entire surplus by forcing Albert to accept only $20? Can Albert extract the entire surplus by demanding his boss offer $50? Will the parties settle halfway in between at $35? Or will the division tilt more favorably to Albert at $40? The number of possible settlements is very large if the parties can negotiate over cents and not just whole dollars. Furthermore, the number is infinite if they can bargain over fractions of a cent as well. The differences are not trivial either. Given that they will settle, the parties have diametrically opposed preferences regarding the ultimate offer—Albert would like to push the wage as close to $50 as possible, while his boss would like to keep it as close to $20 as possible. In the end, the parties will eventually reach some agreement. Bargaining power determines whether that amount is closer to $20 or closer to $50. This book searches for sources of bargaining power and explains why Albert wins when he wins and why he loses when he loses.
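To make the arithmetic concrete, here is a minimal sketch (my own illustration, not the author’s; the function name is mine and only the $20 and $50 figures come from the example) that computes the bargaining range and surplus for Albert’s raise negotiation:

```python
def bargaining_range(worker_min, employer_max):
    """Return the range of mutually acceptable wages and the surplus, or None.

    worker_min:   the smallest wage Albert will accept (his outside offer).
    employer_max: the most the boss will pay (Albert's value to the firm).
    """
    if employer_max <= worker_min:
        return None  # no wage makes both sides better off than walking away
    return (worker_min, employer_max), employer_max - worker_min

# Albert's situation: he leaves for anything under $20/hour, and his boss
# would pay up to $50/hour to keep him.
print(bargaining_range(20, 50))  # ((20, 50), 30): a $30 surplus is up for grabs
```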

 

Why Bargaining Matters

To be sure, Albert’s dilemma is not unique. Many other bargaining situations share the same common structure. Here are a few:

Buying a used car. After receiving his raise, Albert decides to upgrade his clunky vehicle. He finds Barbara’s gently used Honda Civic through Craigslist. Barbara is willing to sell the car for at least $4500. Albert is willing to buy the car for no more than $5000. The two should agree to some sale price between $4500 and $5000, but it is again unclear how much money will ultimately change hands. And given the $500 range of prices, the difference is once again far from trivial.

Purchasing any good. This same logic applies to the purchase of any good that lacks a fixed price. Suppose the seller of the good needs at least $x for it, while Albert will not pay more than $y. If y is less than x, the parties cannot strike a workable deal. But if x is less than y, any sale price between $x and $y is mutually agreeable. The question still remains whether the price will be closer to $x (where the buyer wants it) or closer to $y (where the seller wants it).

Purchasing any service. The bargaining object need not be physical property. For example, imagine that Albert’s home has a clogged drainage system. He calls a plumber, who is willing to clean the system for no less than $300. Albert knows another plumbing company would perform the task for $500. Albert should certainly hire the original plumber for the service, but where will the price fall?

Sports contracts. Perhaps Albert is not working a 9 to 5 job at a firm but rather is an all-star basketball point guard. At the end of the season, he will become a free agent. His team values his services at $15 million per year. Albert expects to receive offers of only $12 million from other teams as a free agent—those teams do not find his skill set as attractive as his current team does. Given the alternatives, Albert ought to re-sign, but will the contract be for an amount closer to $12 million or $15 million?

Monopoly. Albert and Barbara are the only two players left in a game of Monopoly. She offers to trade a few of her properties for some of his. What kind of deals would they both agree to? And how would this change if there were an additional player left?

McDonald’s Monopoly. In McDonald’s Monopoly, players can win $1 million if they obtain both the Park Place and Boardwalk pieces from drink and French fry containers. While Park Place pieces are plentiful, the manufacturers intentionally print only one copy of Boardwalk. Suppose one day at lunch Albert finds that Boardwalk piece. Someone sitting next to him has Park Place. Alone, these pieces are worthless. But how much must Albert pay the stranger to buy the Park Place piece and win the million dollar prize?

Legislation. A terrorist attack rattles the country. President Albert and opposition leaders agree that current counterterrorist measures are inadequate and that the nation should increase its tax rate to fund intelligence gathering efforts. However, they disagree on the size of the new agency. Albert wants to increase income taxes by a full percent to create a large organization, while the opposition only wants to increase them by half a percent for a smaller intelligence community. How high is the tax rate ultimately set?

Kidnapping and ransom. A group of thugs kidnap Albert’s daughter. The thugs place absolutely no value on her life. Albert, of course, is willing to pay any feasible price for her safe return. As with the previous situations, a large bargaining range exists. How much will Albert pay in ransom? And how does working outside the framework of the law affect bargaining patterns?

Out-of-court settlements. Going to trial over a legal matter is needlessly wasteful. Court cases can drag on, costing both parties absurd amounts in legal fees. If the parties can agree on what the likely outcome of a trial would be, they could implement that same settlement out of court and pocket the money. But given that many settlements are mutually better than a trial, which will they ultimately agree on?

War and peace. War is costly—it destroys property, kills people, and traumatizes the survivors. It also eventually produces some sort of outcome on the ground. Thus, potentially warring factions would be better off implementing a peaceful solution that closely matches whatever the eventual outcome of war would be. This explains why most countries do not fight most other countries most of the time. But who benefits more from saving the costs of conflict? And note that, as with kidnapping and ransom payments, war operates outside the realm of well-enforced law. Indeed, legendary Prussian general Carl von Clausewitz once noted that war is politics by other means. Does this sabotage the incentive to negotiate?

As the examples demonstrate, bargaining is everywhere. Given that better bargaining tactics can potentially save a consumer thousands of dollars (or produce thousands of dollars extra for a producer), it would be good to know what exactly constitutes good bargaining tactics.

 

However, the standard method of learning—listening to friends’ and colleagues’ anecdotes—leaves much to be desired. When people complete such bargaining tasks, they often regale others with tales of their own negotiating prowess, which ultimately caused the other side to cave in to more favorable terms. Yet it is difficult to tell whether such strategies have any merit or whether the stories are just ad hoc anecdotes. Consequently, this book aims to investigate negotiations from a broad perspective to discover how agents can increase their bargaining power.

 

What Is Bargaining Power?

To spoil the results that follow, bargaining power comes from five sources: (1) control over proposals, (2) patience, (3) the attractiveness of alternatives should bargaining break down, (4) knowledge of the opposition’s preferences, and (5) the credibility of one’s threats and promises.

First, proposal power gives individuals the ability to control which agreements the parties settle upon. Perhaps surprisingly, this leads the proposer to receive the lion’s share of the gains from bargaining. In most cases, people readily understand the value of proposal power; it is why you go back and forth with offers at a flea market or on a used car lot. Still, many legislative settings explicitly allow only one side to introduce legislation. These rules ensure that the party in control of the government reaps most of the benefits.

Second, patience permits actors to delay agreement until a later date, when the terms of an agreement may be more favorable. In turn, this forces other parties to be more generous in the present, knowing that their patient bargaining opponent might delay acceptance otherwise. Interestingly and unfortunately, these effects allow the rich to get richer and the poor to get poorer.

Third, the attractiveness of outright bargaining failure determines the minimal amount parties must receive to accept an offer. In bargaining theory, such an alternative is called an outside option. If a party’s outside options are abysmal, the opposition can successfully demand more in negotiations, knowing that the first party might have to accept due to a lack of better choices. Alternatively, if a party has stronger outside options, it can leverage those opportunities and force the opposition to give up more to stop the first party from walking away from the table.

Fourth, knowing how much the other side is willing to give up allows a negotiator to properly calibrate his offers. In contrast, when the negotiator is in the dark, he may demand too much. This causes the other side to reject an agreement, leading to unnecessary delay or outright bargaining failure. To use a cliché, knowledge is power in negotiations.

Fifth, credibility plays two related roles. Actors would love to threaten their opponents with terrible outcomes should they not behave a particular way—after all, if bargaining breakdown meant nuclear annihilation, the opponent would be desperate to do everything in its power to ensure that the first side remained happy. However, not all threats are credible. A used car dealer, for example, does not have nuclear codes and therefore cannot adequately leverage atomic destruction against you. In turn, one way to increase bargaining power is to develop the means and the desire to credibly follow through on your threats of punishment. In addition, a commitment to abide by the terms of an agreement can convince the other side to agree to terms that would be unacceptable in the absence of such a commitment. Rather than being purely about money, many bargains take the form of “If you do x for me today, I will do y for you tomorrow.” For such an agreement to be mutually worthwhile, both parties must benefit if x and y exchange hands. But notice the timing of the deal—x must be done now, while y must be done in the future. If the party doing y would not want to follow through after receiving x, then the other party would never do x on the basis that it will never receive y. Without credible commitments to follow through, bargaining may therefore break down even though the parties in principle would prefer doing each other a favor.

Overall, the lesson is clear. Bargaining power is not about being clever at the bargaining table—it is about carefully maneuvering the chess pieces before bargaining begins. Table talk should be a mere declaration of checkmate.

 

Why Game Theory Is Awesome

The previous section provided some insight as to why those five advantages translate to greater shares under a negotiated agreement. However, at present, they appear to be unsubstantiated speculation. The rest of this book develops models of bargaining, from which we can derive these sources of bargaining power. But to model the bargaining process, we must turn to game theory for assistance.

Why use game theory to develop an understanding of bargaining leverage? The above examples provide convincing evidence that bargaining behavior is strategic—that is, the tactics that Albert adopts affect him and Barbara, and the tactics that Barbara adopts affect her and Albert. In other words, such behavior is strategically interdependent. As a result, when Albert formulates his strategy, he cannot think only about his own abilities and desires—he must also consider how his opponent’s abilities and desires affect him.

The logic of strategic interdependence can quickly become convoluted. Without imposing structure on the situation, casual analysis might miss out on some critical facets of the process. At that point, the analysis would be no better than a friend’s anecdotes about his bargaining experiences. Fortunately, game theory is the perfect solution. Game theory developed as an academic field in the 1940s and 1950s as a branch of mathematics and economics. Social scientists from all disciplines have since adopted the methodology to identify causal relationships in strategic interactions. Indeed, mathematical logic makes causation explicit; the equation y = 3x + 2 directly informs an individual that a one unit change in x results in a three unit change in y. Game theory allows us to convert human interaction into that sort of equation, which then allows us to draw clear inferences.

Another compelling reason to use game theory is that it allows us to think clearly about situations that are not familiar to us. Bargaining is rife with knowledge asymmetry. Human resources departments at large companies hire people simply to negotiate lower salaries for their employees; you may only work through them once when you initially accept the job. A car salesman might go through eight customers a day; you might negotiate with a car salesman eight times in your life. A police officer might pull over a speeding driver a couple times per shift; if you are lucky, you may only encounter this problem a couple times over your driving career. The HR salary negotiator, the car salesman, and the police officer have likely never used a pad and paper to work through the logic of bargaining. But they do not need to—they can learn the ropes on the fly, and they will have plenty of opportunities to teach themselves with firsthand experience. You, on the other hand, will likely never accumulate that type of knowledge. But game theory gives you an alternative. Working through the strategic situation might protect you from a speeding ticket or save you a few hundred dollars when you purchase a car.

Moreover, using game theory forces us to consider the other side’s incentives. Too often, we get lost in ourselves—we think about our own wants, needs, and strategies at the expense of thinking about the other side’s wants, needs, and strategies. Understanding bargaining requires a big picture approach. Game theoretical models require a broader perspective, which will make us think carefully about how bargainers interact with one another.

 

The Limitations of Game Theory

For all these benefits, using game theory comes with a couple of costs. First, our conclusions are purely a result of our assumptions. Along these lines, our struggle to formulate causal relationships is a three-step process:

1) Make assumptions.
2) Use game theory.
3) Draw conclusions.

The assumptions represent the essential features of the bargaining situation at hand. For the most part, we will need to make assumptions about the preferences of the actors, the maneuvers they can make during the bargaining process, and their knowledge of the situation at those times. Game theory provides no black magic. Instead, it merely allows us to convert those assumptions into mathematical expressions. From there, as long as the math is correct, the inferences we make about bargaining power are logically consistent with the assumptions. That last statement is worth emphasizing: the inferences are logically consistent with the assumptions. Put bluntly, our conclusions are only as valuable as the assumptions made. Thus, if the assumptions are nonsensical, we should not be surprised if the results are equally bizarre. On the other hand, if the assumptions are good, then we can rest assured that the inferences are useful as well.

Game theory has a second drawback: replacing the imprecision of the English language with mathematical formulas can make the end result appear like a foreign language. Some academics spend years in graduate school to learn it; understandably, most people have neither the time nor the desire to do the same. To resolve the tradeoff, the remaining chapters use the simplest models possible and jargon-free analysis. Where a game theory textbook might discuss subgame perfect equilibria, the following chapters replace that jargon with discussions of optimal strategies and credible threats. This allows for additional precision without complete loss of clarity. And while models abstract away from the real world, their simplicity allows us to see exactly where bargaining power originates. We can then take these lessons and apply them back to substantive problems.

In addition, the book contains multiple real-world applications of the theories wherever possible. These illustrations help explain why particular facets of bargaining are important and will help readers relate these models to their everyday lives. Still, it is important to realize that no single theory can explain every facet of any historical event. Rather, individual theories only partially help us understand why things happened the way they happened. As such, when this book uses bargaining theory to explain why Microsoft’s evaluation system failed to retain valuable engineers, do not conclude that it was the only reason. Instead, consider the lesson as one of possibly many takeaways the historical example provides.

While the book emphasizes simple theory and real-world illustrations, some technical details are interesting and difficult to avoid. Wherever relevant, I have included them in appendices at the end of the corresponding chapter. They are not for the faint of heart. Fortunately, readers less interested in technical knowledge can skip these without sacrificing the fundamentals of bargaining.

For further accessibility, future chapters assume no prior knowledge of game theory. Of course, any experience will prove useful, and I recommend Game Theory 101: The Complete Textbook to anyone interested in brushing up before heading off on this adventure.

 

Outline of the Book

The remaining chapters broadly tackle five different subjects. The first portion introduces the most basic model of bargaining: the ultimatum game. Chapter 2 provides a formal example of the workhorse model. In it, a proposer makes a take-it-or-leave-it offer to a receiver, who then accepts or rejects it. This setup gives the proposer a disproportionately large share of the bargaining power, thereby giving him the vast majority of the surplus. I then analyze the model’s implications regarding fairness and compare the results to experimental evidence.

The second portion relaxes the rigid structure of the ultimatum game to see how actors behave in a more realistic bargaining environment. Chapter 3 tweaks the ultimatum game to allow proposers to make extremely fine-grained offers; this results in the proposer stealing everything to be gained from bargaining. The chapter then compares this result to the Hastert Rule, an institution within the United States House of Representatives that guarantees the minority party will almost never get its way, even if a majority of legislators overall would support a bill. Chapter 4 explores the utility of a counteroffer. When actors can reject offers and make counterproposals, they can force concessions from the other party up front. Thus, the ultimatum game’s unfair result is merely an artifact of being unable to propose counteroffers.

The third portion uses the elements from the first two portions to show how actors can gain bargaining leverage. To reiterate, most leverage does not come from carefully maneuvering chess pieces at the bargaining table, but rather from properly setting up the pieces before arriving. Chapters 5 and 6 cover a variety of tactics: (1) tying your hands to reject low offers, (2) shopping around to increase competition and drive down prices, and (3) bringing something to the agreement that no alternative bargaining partner possibly could. Readers interested in the applied side will find these chapters most interesting, though a careful examination of the more technical chapters is necessary to fully appreciate the details.

The fourth portion explains where bargaining can go wrong. Chapter 7 begins with the role of information. Knowing how much the other side is willing to give up is critical to appropriately tailoring one’s offer to the other side. Guessing incorrectly leads to bargaining failure and wasted resources. But playing it safe potentially means an actor could have demanded more and received it. This puts a proposer in a damned-if-you-do, damned-if-you-don’t situation. Put differently, what you don’t know will hurt you and sometimes causes bargaining to fail. Chapter 8 then discusses commitment problems. As previewed earlier, many bargains take the form of “do x for me today, and I will do y for you tomorrow.” Despite how such a deal could be potentially beneficial to both parties, negotiations may fail if the person in question cannot credibly commit to doing y once the other person has done x for him. Enforceable contracts solve the dilemma but unfortunately are not always available.

The fifth and final portion builds on the counteroffers model from the second section, allowing the parties to potentially continue negotiations until the end of time. This is called Rubinstein bargaining, and it is the canonical model of the field because it makes fairly realistic assumptions. Although technically challenging, Rubinstein bargaining shows that back and forth offers yield a fair solution if the actors are patient. But if the actors are both impatient or one is substantially more patient than the other, the result can appear unjust.

Throughout, I will avoid making normative judgments about bargaining outcomes. Many models will show that bargaining is inherently stacked against those who are in the most need of protection. Do not interpret this as advocacy for such social injustice. Rather, if anything, these chapters should serve as warning signs. Part of being vulnerable is not knowing your vulnerabilities. By the end of this book, you should be able to identify potential shortcomings. Only with that knowledge can you begin to counterstrategize accordingly.

 

Chapter 2: The Ultimatum Game

Studying the sources of bargaining power requires a baseline model. (Baseline is the keyword—the game presented here is the baseline of richer, more realistic bargaining environments.) The simplest form of bargaining is an ultimatum, or a format in which one party makes a take-it-or-leave-it offer to the other party. That second party then accepts or rejects the ultimatum. To make the problem more concrete, suppose Albert wishes to purchase a used car from Barbara. Albert is willing to spend up to $5000 on the vehicle while Barbara needs at least $4500 to be willing to sell it. According to the ultimatum protocol, Albert makes a take-it-or-leave-it offer to Barbara, who then accepts or rejects it. Accepting initiates the trade at that price. Rejecting permanently terminates negotiations without anything changing hands. Throughout the modeling process, drawing game trees will help; they illustrate the flow of the interaction in an intuitive manner. The game tree below represents Albert and Barbara’s situation:

Because we will be working with these trees frequently, taking a moment to understand this one will prove useful. Consider the top half:  

 

“Albert” appears above the diverging lines, signaling that Albert takes this particular action. The arc indicates the actions available to him. On the left side of the arc, $0 represents the minimum offer size he can make to Barbara. (In other words, he cannot demand Barbara pay to give him the car. Of course, even if he could, Barbara would simply laugh at such a proposal.) On the right side of the arc, $5000 signifies the maximum amount he can offer Barbara. (It goes without saying that he could conceivably offer more, but $5000 is the maximum he is willing to pay for the car. Thus, it would never make sense for him to buy the car for a price greater than that.) Consequently, Albert chooses a value for x from $0 to $5000. Barbara’s action is more straightforward:

Rather than a complicated range of choices, Barbara has a simple response to each possible offer from Albert: accept or reject. Accepting completes the trade, with Albert paying $x to Barbara in exchange for ownership of the car. In this case, Albert’s payoff—which comes first since he is the first player to move—is $5000 minus $x. In other words, he receives his personal value for the car minus the price he paid for it. Barbara’s payoff is $x, or the amount of money she receives from Albert. Rejecting yields no exchange of the good. Consequently, Albert receives nothing. Barbara, meanwhile, retains ownership of the vehicle, which she values at $4500. Hence, she receives that amount. For the moment, assume that both actors only want to maximize their

 

own personal economic welfare. Thus, the numbers in the game tree represent their actual preferences, with greater numbers representing more preferred outcomes. How should Albert decide which offer to make? There are two perspectives on this. First, Albert could pick a number that seems right without any foresight. Alternatively, Albert could think about how Barbara would respond to all of his possible offers and then use that information to construct an optimal strategy. Unsurprisingly, game theory indicates that the second method is the better way to think. When setting your alarm clock for tomorrow, for example, you do not ignore the next day. Instead, you work backward from the time of your first appointment in the morning. Thus, you might set your alarm for 7 a.m. if you must be at work at 8:30 and it takes you half an hour to shower, half an hour to eat, and half an hour to commute. In sum, the things you want to do in the future determine your actions in the present. The same logic applies in strategic interactions. However, with more than one relevant actor, individuals need to think not only about their own potential actions and desires but also about others’ potential actions and desires. Consequently, Albert needs to consider how Barbara’s desires affect her future actions and how that in turn affects his actions now. In other words, Albert must start at the end and work backward to construct an optimal strategy. Game theorists call such reasoning backward induction. With that in mind, consider all of Barbara’s possible responses, starting with her response to Albert’s offer of $5000 for the car:

Since this is Barbara’s decision, the game tree has replaced Albert’s payoffs with question marks because they do not directly factor into Barbara’s welfare. If Barbara accepts Albert’s proposal, she receives $5000. If Barbara rejects, she retains ownership of the car she values at $4500. Since $5000 beats $4500, she accepts Albert’s offer and sells the vehicle for $5000. Now consider Albert’s offer of $4999:  

 

  This proposal is slightly less advantageous than the $5000 offer, but Barbara still finds it acceptable. If she accepts, she receives $4999. If she rejects, she still keeps the $4500 that reflects her personal value for the vehicle. Since $4999 beats $4500, she accepts in this case as well. Next, suppose Albert offered $4998:  

Though the offer is slightly less attractive, Barbara still receives more from accepting than rejecting. For the same reasons as before, she thus accepts. Rather than draw 500 separate decision nodes for Barbara, we can generalize the results. As long as x is strictly greater than $4500, Barbara accepts—this is because any offer of that size is worth more to her than keeping the car. What if Albert’s offer is strictly less than $4500? Intuitively, Barbara rejects. For example, suppose Albert proposed $4499:

In this case, Albert is offering one dollar less than how much Barbara values the car. Thus, rejecting leaves her in a preferable position.

 

Clearly, this same logic extends to any offer smaller than $4499; if $4499 is insufficient to entice Barbara, anything less only becomes more unacceptable. As such, Barbara rejects all values for x less than $4500. Only the $4500 value remains:  

Here, Barbara is indifferent. Regardless of whether she accepts or rejects, she still receives a value of $4500. Consequently, accepting and rejecting are both rational strategies for her. For now, however, suppose Barbara certainly rejects an offer of $4500. (The appendix at the end of the chapter will look at what happens if Barbara takes other actions.) This completes the rundown of Barbara’s optimal responses. Now consider Albert’s decision. If he offers less than $4501, Barbara rejects. Albert therefore receives a payoff of $0; no money exchanges hands and Barbara keeps the car. Alternatively, Albert could propose an amount greater than $4500 and up to $5000. This time, Barbara accepts the offer. Albert receives $5000 (his personal value for the vehicle) minus $x, where $x is the amount he pays to Barbara. Note that Albert’s payoff is decreasing in x this time around. For example, Albert profits by $50 if he offers $4950. (That is, he receives $5000 in value from owning the vehicle but pays $4950 to purchase it, for a net gain of $50.) Thus, if he would prefer to purchase the vehicle, he has a single optimal proposal: $4501. Any greater offer still successfully purchases the car but at an unnecessarily high price. The last question to answer is whether Albert would prefer to buy the car for $4501 or offer less than that and induce Barbara to reject. If Albert offers $4501, Barbara accepts, the car exchanges hands, and Albert earns $5000 – $4501 = $499. If Albert offers any smaller amount, Barbara rejects, the car stays with her, and Albert receives $0. Obviously, $499 is better than $0. Consequently, Albert optimally offers $4501, and Barbara sells the car.

The outcome is striking: Albert makes out like a bandit while Barbara barely receives any benefit from the transaction. Recall that Albert values the car at $5000 while Barbara values it at $4500. A $500 trade surplus exists if Albert and Barbara complete the trade. But Barbara receives only $1 of that surplus—Albert pays $4501 for a car she values at $4500. Meanwhile, Albert receives $499 of the surplus—$5000 for the car minus the $4501 paid to Barbara. So while Barbara marginally benefits from the swap, Albert finds the final agreement substantially more attractive. As we will see over the next couple chapters, this is the result of the ultimatum bargaining structure. Barbara’s only decision is to accept or reject an offer; she thus suffers because she cannot dictate the terms of the settlement.
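For readers who like to verify the logic computationally, here is a minimal backward-induction sketch of the discrete game above (my own illustration, not code from the book). It assumes, as the chapter does, whole-dollar offers, a $5000 valuation for Albert, a $4500 valuation for Barbara, and a Barbara who rejects when indifferent:

```python
ALBERT_VALUE = 5000   # Albert's value for the car
BARBARA_VALUE = 4500  # Barbara's value for keeping the car

def barbara_accepts(offer):
    # Barbara accepts only offers strictly better than keeping the car
    # (she rejects when indifferent, as assumed in the text).
    return offer > BARBARA_VALUE

def albert_payoff(offer):
    # If Barbara accepts, Albert nets his value minus the price; otherwise $0.
    return ALBERT_VALUE - offer if barbara_accepts(offer) else 0

# Work backward: given Barbara's responses, pick Albert's best whole-dollar offer.
best_offer = max(range(ALBERT_VALUE + 1), key=albert_payoff)
print(best_offer, albert_payoff(best_offer), best_offer - BARBARA_VALUE)
# 4501 499 1 -> Albert offers $4501, keeps $499 of the surplus; Barbara keeps $1.
```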

 

Application: Why Renters Get Screwed out of Security Deposits

While it may seem weird that someone cannot influence the terms of a bargain, such situations are commonplace for individuals renting a place to live. A security deposit is an amount of money (usually equal to one month’s rent) that a renter must pay before he or she moves into an apartment or a house. The security deposit ensures that the property owner can pay for whatever damage the renter leaves behind. The remainder, in theory, must go back to the renter at the end of the lease.

However, anyone who has ever rented knows that security deposits rarely work in this manner. Instead, the property owner normally takes a substantial share of the deposit regardless of how pristine the property is at the time of the move out. The renter’s only recourse is to take the property owner to small claims court, which is usually not worth the time and effort. As a result, the security deposit is normally a check that the renter should never expect to see again.

Why does this happen? As it turns out, the security deposit screw-over is a simple application of the ultimatum game. The property owner acts as the proposer, offering a split of the security deposit. The renter can accept or reject. Accepting returns the sum of money that the property owner proposed. Rejecting takes the issue to small claims court.

To show why this eventually turns into an ultimatum game, suppose that the security deposit was for $1000 and the renter caused $200 in damage. Thus, if the property owner used the security deposit as intended, he would return $800. The small claims court will recognize that $800 is the fair outcome based on receipts for repairs. Nevertheless, the renter would like to avoid wasting time going to court. Indeed, she values that time at $100. In turn, if she rejects and goes to court, she will only effectively receive $700—that is, $800 for the fair amount minus $100 for the time wasted. The property owner also values his time at $100 and thus only effectively receives $100 if he goes to court—that is, $200 for his fair share minus $100 for the time. Here is the game tree:

 

Taking a step back, the fair outcome would seem to give the landlord $200 and the renter $800. But note that the property owner is the only one who can make an offer here. While the renter might suggest various other amounts, in the end the property owner writes a check for any value he wishes. As we saw in the previous ultimatum game, this means that the property owner can obtain more than his “fair” share. The renter truly receives a raw deal. To understand why, consider the renter’s final decision:

The landlord has sent some amount of the security deposit back. It is up to the renter to decide whether to take the issue to court. If she does, she will receive $800 from the court but suffer $100 in wasted time. As such, she is indifferent between accepting and rejecting an offer of $700. Meanwhile, she strictly prefers accepting any amount greater than $700. As with the previous section, suppose that the renter will reject when indifferent. How should the landlord decide what to write the check for? First, consider any amount greater than $701. The renter would not go to court if offered that amount. That leaves the landlord with the remainder of the money. Since the presumed amount is greater than $701, the leftover

 

amount is less than $299. However, consider an offer of $701 instead. $701 remains more than what the renter can make in court, so she still accepts. This leaves the landlord with $299, which is more than he would keep if he wrote a check for a larger amount. As such, if the landlord wishes to avoid court, he should offer $701. The last thing to check is whether the landlord actually prefers staying out of court. Recall that the renter will go to court if she receives an offer less than $701. After accounting for the lost time, a court judgment leaves the landlord with $100. But this is less than the $299 he could keep by writing a check for $701 instead. Consequently, we know exactly what the landlord should do: write a check for $701 and avoid forcing the renter to go to court.

Overall, the model predicts two things. First, the parties should avoid going to court—the time wasted in the process incentivizes the parties to reach a mutually preferable solution. Second, however, it also predicts that the setup heavily favors the landlord. Keep in mind that the renter only caused $200 in damage to the property. Yet the landlord keeps $299. In essence, he leverages the fact that the renter’s only recourse is to go to court. Unfortunately, that option requires a time expense for the renter, which the landlord can ultimately extract through the bargaining process.

This reasoning also helps explain why college students are especially vulnerable to losing security deposits. They face two problems in particular. First, college students tend to pack three, four, or five people into a single property. They collectively split the cost of the security deposit. But that means each individual has less incentive to go to court since each person has less money at stake than if a sole renter leased the entire property for himself. Second, college students often do not have a permanent residence in the city where they rent. Consequently, going to court is more time-consuming for a student than for a local resident since a student might have to trek back to the college town to fight the landlord. Both of these factors drive up the effective cost of going to court, and larger court costs mean that the property owner can extract a larger amount from his tenants. As a result, landlords can extract more from a college student’s security deposit than from an ordinary tenant’s.

The ultimatum game also helps explain why many courts allow plaintiffs to sue for punitive damages. As the name implies, punitive damages punish defendants for engaging in predatory behaviors by rewarding plaintiffs with money above and beyond their fair share if the initial offers were absurd. In effect, this compensates renters for their time and effort to go to court. As a result, the renter’s perceived cost of challenging diminishes, which in turn forces the landlord to increase his offer to avoid a hearing.

Of course, the ultimatum game’s outcome is not always what we see in practice. Some landlords actively want to return a fair amount to their renters and are thus unwilling to leverage the cost of going to court to their advantage. And sometimes shady landlords overestimate how much they can steal from their renters and end up in court despite those costs. Nevertheless, this model is useful for explaining why renters often feel that their landlords are ripping them off. Meanwhile, as we go forward, we can develop richer, more realistic models that help us understand these other facets of bargaining.
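As a quick check on the numbers, here is a small sketch (my own, using only the chapter’s assumptions: a $1000 deposit, $200 in damage, $100 in court costs for each side, whole-dollar checks, and a renter who rejects when indifferent) that finds the landlord’s optimal check:

```python
DEPOSIT = 1000     # security deposit
DAMAGE = 200       # legitimate damage caused by the renter
COURT_COST = 100   # value of the time each side loses in small claims court

renter_court_value = (DEPOSIT - DAMAGE) - COURT_COST   # $700: award minus lost time
landlord_court_value = DAMAGE - COURT_COST             # $100: fair share minus lost time

def landlord_keeps(check):
    # The renter goes to court unless the check strictly beats her court value.
    if check > renter_court_value:
        return DEPOSIT - check       # deal struck: landlord keeps the rest
    return landlord_court_value      # court: landlord effectively nets $100

best_check = max(range(DEPOSIT + 1), key=landlord_keeps)
print(best_check, landlord_keeps(best_check))  # 701 299
```

Raising COURT_COST (as for the out-of-town college students discussed above) lowers renter_court_value, and rerunning the sketch shows the landlord’s optimal check shrinking accordingly—the same comparative static the text describes.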

 

Interpreting the Ultimatum Game

The ultimatum game is one of game theory’s most misunderstood models. As such, it is worth briefly discussing some of the many criticisms and their shortcomings.

To begin, many people shout “that’s not fair!” when they see the outcome of the ultimatum game. And they are right—the outcome of the game is not even remotely fair. As detailed previously, the interaction between Albert and Barbara had a $500 surplus, the difference between Barbara’s value of the car ($4500) and Albert’s ($5000). In a world with distributive justice, they ought to receive an equal split of that surplus, with $250 going to each party. Yet Barbara winds up receiving $4501 for a car she values at $4500, allowing her to internalize only $1 in profit. In contrast, Albert pays $4501 for a car that he values at $5000, giving him $499 in benefits. Albert’s portion of the surplus is 499 times larger! Likewise, the landlord-renter interaction had a $200 surplus from not going to court. The landlord kept $199 of that surplus—$299 from his share of the deal minus the $100 he expected to receive by going to court. Meanwhile, the renter only kept $1 of the surplus—$701 from the returned deposit minus the $700 she could obtain by going to court. Consequently, the landlord’s share of the surplus was a ridiculous 199 times bigger.

While indeed massively unfair, these outcomes should not be a criticism of the model. Indeed, one of the most important things the ultimatum game teaches us is that bargaining is not always fair. Some people will win more and others will win less. It is up to the players to figure out how to alter the interaction in their favor. To wit, recall that Barbara had minimal control over the structure of the interaction. If negotiations take the form of a single ultimatum offer, Barbara can only say yes or no; she has no direct control over the price point. Albert has direct control, however, and he therefore selects his most favorable price that Barbara still prefers to accept. The ultimatum game teaches us that control over the offers is fundamental to obtaining a better share of the surplus and hints that Barbara might be better off if she were capable of proposing counteroffers to Albert. This is something we will pick up on in later chapters.

The second common criticism of the ultimatum game is that the players do not care about fairness. Note that when Barbara chooses whether to accept or reject an offer, we assumed that she only aimed to maximize her own

 

economic welfare. As a result, she accepted Albert’s minimalist generosity. This criticism has solid foundations in experimental economics. In addition to being one of the most misunderstood models in game theory, the ultimatum game is also one of the most frequently tested models inside of laboratories. Normally, one participant is given a small amount, perhaps $10, and told that he must offer a share of it to a second participant. If she accepts the offer, they keep the split; if she rejects, the experimenter takes back the money. If the participants only cared about maximizing economic welfare, the proposer would keep just about all of the cash, while the receiver of the offer would accept very small amounts. In practice, however, receivers often reject amounts that are very small. Perhaps anticipating this, proposers often inflate their offers to much higher levels.

While our ultimatum game did not predict this behavior, the experimental results should not be particularly surprising. In these lab experiments, the players clearly have some sort of valuation of fairness. Despite this, we kept fairness out of the original model of the ultimatum game. If assumptions are not accurate, we should not expect the predictions to be either. Nevertheless, the ultimatum game provides a fundamental insight that carries over to a situation where players care about fairness as well. Note that when Albert picked his offer, he chose the minimum amount necessary to induce Barbara to accept. This is the fundamental result of the ultimatum game—that proposers do not give anything more than necessary to induce compliance. And this fundamental result holds true regardless of whether the receiver of an offer has preferences for fairness.

Moreover, there are a couple of theoretical issues with applying these experimental results to real life bargaining scenarios. First, the stakes of these games are normally very low. A receiver might wish to reject a $1 share of a $10 pool out of fairness, but she would have a much harder time rejecting a $100 share from a $1000 pool even though both divisions give the same proportional split of the surplus. And, indeed, this seems to be the case in experiments with higher stakes. (Of course, not much data exist on higher stakes divisions—it is hard for economists to scrounge up the money necessary to conduct such expensive experiments.)

In addition, the framing of these laboratory experiments does not match actual economic interactions. Two players splitting $10 that an experimenter gifts them has a fundamentally different framing than two actors seeking to

 

engage in a mutually beneficial sale. It is understandable that players in a lab forgo some money for the sake of fairness. However, it would be odd for someone to walk away from a transaction simply because he believed the other side was receiving a better share of the revenue.

Of course, there are many good reasons why individuals might walk away in such a real life scenario. Perhaps by doing so, they believe the other side will come back and offer a more generous proposal. Maybe they have a second bargaining partner that they believe will provide a better offer. Or maybe they have incentives to protect a strong reputation for toughness. These are all things that we will discuss in later chapters. However, it remains exceptionally strange for someone to permanently walk away from a profitable transaction simply because the other side is reaping more of the gains.

The lab setting is also unique in that the players know exactly how much surplus exists because the experimenter explicitly gives them some amount of money to divide. This is rarely the case in real life negotiations. For example, when you go to buy a car, the dealer can only guess as to how much you actually value the vehicle. As a result, the dealer would have a hard time calculating whether you are receiving an equal split of the surplus. This uncertainty can sometimes be very beneficial to the receiver and is also a point of discussion in later chapters.

Finally, for better or for worse, valuations of fairness have little impact on massive business negotiations between corporations. Corporations with the largest profits are more likely to survive than corporations with smaller profits. With that in mind, suppose the CEO of a company rejected a merger because, while profitable for his company, it would have been even more profitable for the other company. Shareholders would rightly fire that CEO—in the survival of the fittest among businesses, such an action actively promotes the company’s own demise.

All told, criticisms of the ultimatum game tend to be criticisms of its rather strong assumptions. As such, over the course of this book, we will be altering various assumptions and seeing how those changes affect the results. Indeed, this is a good general lesson—rather than simply object to assumptions, you should change the assumptions and then see exactly how they change the results.

 

Appendix: Other Solutions to the Ultimatum Game

In solving for Albert’s optimal offer, we assumed that Barbara would certainly reject if she was indifferent between accepting Albert’s offer and rejecting it. However, indifference means indifference. Thus, Barbara could just as easily (and still rationally) accept Albert’s offer when indifferent. Even stranger, she could flip a coin when Albert makes her indifferent; she accepts the offer on heads and rejects the offer on tails. This remains rational, as Barbara receives the same payoff regardless of her decision when Albert makes her indifferent. This appendix covers those missing cases. Readers interested in the game theoretical mechanisms of bargaining will find it useful. Others may wish to skip it.

First, suppose Barbara accepts if Albert offers her $4500. Then Albert faces the following decision problem. If he offers at least $4500, Barbara accepts. In turn, Albert receives $5000 – $x, where $x is the value of the proposal. As before, Albert’s welfare is decreasing in the offer size. Thus, his optimal acceptable offer equals $4500 since any additional amount unnecessarily gives more money to Barbara. He receives a net gain of $500 (or $5000 minus $4500) for taking this action. Alternatively, Albert could offer less than $4500. Barbara rejects in this case. Albert then receives nothing. But nothing is obviously worse than offering $4500 and profiting by $500. Therefore, if Barbara were to accept when indifferent, Albert optimally offers $4500 and Barbara accepts.

Second, suppose Barbara decides whether to accept or reject in a random manner if Albert offers her $4500. Note that this randomness need not be a coin flip; Barbara could accept 10% of the time and reject 90% of the time, or she could accept 87% of the time and reject 13%, and so forth. Thus, rather than working through each of these cases individually, let p be the probability Barbara accepts and 1 – p be the probability she rejects. Now consider Albert’s decision. As always, he could offer $4501 and guarantee Barbara’s acceptance. He earns $499 as a result. Offering any more unnecessarily gives Barbara more money, so those proposals cannot be optimal. Offering less than $4500 cannot be optimal either since this induces Barbara to reject and leaves Albert with nothing. Consequently, the question is whether Albert prefers offering $4501 and having Barbara accept with certainty or offering $4500 and having Barbara sometimes accept and sometimes reject.

 

To decipher which is optimal, note that Albert receives $500 in profit whenever Barbara accepts and $0 whenever she rejects. Mathematically, this equals the following:

Expected Payoff = $500(p) + $0(1 – p)

Note that we multiply $500 by p since p is the probability that Barbara accepts. Likewise, we multiply $0 by 1 – p since 1 – p is the probability that Barbara rejects. Of course, the $0 cancels out the 1 – p when multiplied together, so Albert’s expected payoff quickly reduces to $500p. Alternatively, Albert could offer $4501 and receive $499 with certainty. Thus, Albert must optimally offer $4500 if:

$500p > $499
p > 499/500

Thus, if Barbara accepts when indifferent with probability greater than 499/500 (or 99.8% of the time), Albert must optimally offer $4500. The percentage is extremely high because by offering $4500 Albert is risking having the entire transaction fall through over the matter of $1. As a result, he must be very sure that Barbara will accept when offered $4500 to make proposing that amount worthwhile rather than offer $4501 and receive a guaranteed $499 in profit. By analogous argument, if Barbara accepts when indifferent with probability less than 499/500, Albert must optimally offer $4501. If Barbara accepts when indifferent with probability exactly equal to 499/500, things get strange. Albert is indifferent between offering $4500 and $4501 here. Thus, he can optimally randomize between offering either amount or choose to offer one of those amounts with certainty.

Before moving on, a few notes are in order. First, the above analysis assumed that Albert has risk neutral preferences. In practice, Albert could be risk-averse. Specifically, he might prefer offering $4501 and receiving $499 with certainty over offering $4500 and receiving $500 in profit 99.8% of the time and no profit the remaining 0.2% of the time. In other words, he may have a preference for playing it safe. If this is the case, then similar results would follow, except Barbara would need to accept at an even higher rate when indifferent. In contrast, if Albert received some perverse pleasure from

 

making risky choices, Barbara would need to accept at a lower rate when indifferent. In either case, Albert would have a point of indifference that yields the strange (but still optimal) outcome.

Second, note that not all of these trades yield the same benefit. For example, if Barbara rejects with certainty when offered $4500, Albert instead proposes $4501 and Barbara accepts. But if Barbara accepts with certainty when offered $4500, Albert goes with that amount. Barbara earns an additional dollar in the first case, and Albert loses that dollar. Put directly, exercising the credible threat to reject the offer of $4500 gives Barbara more money, though that amount is largely irrelevant in the grand scheme of things. Regardless, the ultimatum game paints a clear picture: if only one party controls the trade price, then that price setter will enjoy a substantial majority of the benefits.
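The threshold above is easy to verify numerically. The following sketch (my own illustration, not the author’s) compares the sure $499 from offering $4501 with the gamble on $500 from offering $4500, using exact fractions to avoid rounding at the boundary:

```python
from fractions import Fraction

SURE_PROFIT = 499    # offer $4501: Barbara accepts for certain
RISKY_PROFIT = 500   # offer $4500: Barbara accepts only with probability p

def expected_risky(p):
    # Expected payoff = $500(p) + $0(1 - p) = $500p
    return RISKY_PROFIT * p

threshold = Fraction(SURE_PROFIT, RISKY_PROFIT)  # p* = 499/500
for p in (Fraction(9, 10), threshold, Fraction(999, 1000)):
    if expected_risky(p) > SURE_PROFIT:
        choice = "offer $4500"
    elif expected_risky(p) < SURE_PROFIT:
        choice = "offer $4501"
    else:
        choice = "indifferent between $4500 and $4501"
    print(f"p = {p}: expected profit {float(expected_risky(p))} -> {choice}")
# p = 9/10: expected profit 450.0 -> offer $4501
# p = 499/500: expected profit 499.0 -> indifferent between $4500 and $4501
# p = 999/1000: expected profit 499.5 -> offer $4500
```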

 

Chapter 3: Proposal Power and Continuous Bargaining Spaces

Previously, Albert and Barbara could only bargain over discrete increments. For example, in the original used car situation, Albert could only offer whole dollar amounts like $4500, $4501, and $4502. This was a practical restriction. Perhaps Albert was not carrying any change with him when he met Barbara to negotiate over the car’s price. But even if he was, Albert and Barbara would encounter a similar practical restriction on another level: Albert would then only be able to offer whole cent amounts like $4500.00, $4500.01, and $4500.02. Writing a check will not solve this problem either, as a bank would likely refuse to cash a check written for four thousand five hundred dollars and one-half cent.

Not all bargaining problems have this restriction. Although one cannot deposit half a cent into a bank, those fractions can matter over the course of a long-term contract. If Albert were negotiating with his boss over a pay raise, he would certainly prefer working for $20.00 and one-half cent per hour to $20 even; the company can carry over a fraction of a cent to the next pay period whenever necessary. Likewise, a supplier would much rather receive $0.49 and one-third of a cent per pound of salt from a distributor than just $0.49. The large quantity of salt smooths over awkward fractions, and any remaining fractions could once again be held over until the next purchasing period.

Consequently, bargaining over discrete prices does not make sense in some instances. As such, a natural question to ask is what happens when the parties can bargain over a continuous amount of money—that is, when they can select any fraction of a value to offer the other side. This chapter answers that question. Broadly, we find that the proposer captures even more of the surplus—in fact, the receiver accepts the offer but is no better off than before the negotiations began. The results here further underscore how important proposal power is to obtaining greater shares of the gains from bargaining. We will then see how such proposal power affects legislative political maneuvering.

 

The Continuous Bargaining Space Ultimatum Game

Imagine that once again Albert and Barbara are bargaining. To generalize and streamline the game from the last chapter, suppose that Albert must make an offer between 0 and 1 to Barbara. An offer of 0 means that he will give Barbara 0% of the surplus, while an offer of 1 means that he will give her 100%. Any value in between represents an analogous percentage split. If Barbara accepts, they divide the surplus according to the proposal. If she rejects, both receive 0. Below is the now familiar game tree:

By representing the interaction with offers between 0 and 1, the tree draws a clear connection between all sorts of bargaining situations. In the previous chapter, Albert and Barbara were bargaining over $500 of surplus in selling a car. But the bargaining situation just as easily could have been $10 in surplus for babysitting or $1,000,000 in surplus for a major manufacturing agreement between two large corporations. Standardizing the values between 0 and 1 allows us to represent all of these cases simultaneously. And, like before, rejection means that no deal transpires and therefore the parties share no surplus. Unlike before, however, we impose no limitations on the possible divisions in the interval. That is, Albert can choose 0, 1, or any fraction in between, no matter how small. Now to solve the game. To begin, consider Barbara’s response to any offer greater than 0:  

 

Barbara earns some positive amount for accepting and nothing for rejecting. Since something is better than nothing, Barbara accepts all positive offer sizes. This is true no matter how microscopic the offer—even accepting .0000001 beats receiving 0. This leaves Albert offering 0 as the last remaining case:

As usual, offering nothing leaves Barbara indifferent between accepting and rejecting. Thus, accepting with certainty, rejecting with certainty, and acting randomly are all optimal for her. To find Albert's optimal offer size, we will need to consider each of these cases individually.

First, however, note that offering any positive amount can never be optimal for Albert. For an offer size to be optimal for him, Albert cannot switch to any other offer size and expect to do better. So suppose x > 0 is an optimal amount. Now compare that x to x/2. Recall that Barbara accepts all positive offers. The value x is positive, so she accepts it. But x/2 is also positive (half a positive amount is still a positive amount), so she accepts that as well. Consider Albert's welfare under both cases. If he offers x, he receives 1 – x. If he offers x/2, he instead receives 1 – x/2. If x is truly optimal, then Albert cannot earn more by offering x/2. But that is not the case:

1 – x/2 > 1 – x
-x/2 > -x
x > x/2
1 > 1/2

This holds. So offering x/2 is better than offering x. But this contradicts the assumption that x was optimal in the first place. Therefore, no positive amount is optimal for Albert.

In the discrete version of the ultimatum game, Albert offered the smallest positive amount possible that induced Barbara to accept. Why does that logic fail here? It is because no smallest possible amount exists. Can offering .01 be optimal? No, because Albert could instead offer .005, still induce Barbara to accept, and earn .995 rather than .99. But can offering .005 be optimal? No, because Albert can once again halve the proposal. Barbara still accepts .0025, and Albert profits by .9975 instead of .995. Is offering .0025 optimal? For the same reasons, the answer is no. This logic repeats infinitely and ensures that Albert finds no positive amount optimal.

Consequently, if an optimal offer size exists for Albert, it must be 0. Under what conditions is this true? Suppose Barbara accepts with certainty when indifferent. Then Albert receives 1 if he offers 0. Albert cannot possibly receive more, so this is Albert's optimal strategy. Note that this outcome is extremely poor for Barbara—she receives the same payoff as her rejection outcome. In other words, Barbara's welfare is the same regardless of whether the parties engage in the trade or not.

Unlike the discrete version of the ultimatum game, which had multiple optimal strategies for Albert depending on Barbara's actions when indifferent, this is the only circumstance that gives Albert an optimal strategy. To understand why, suppose instead that Barbara rejected with certainty when indifferent. Clearly, offering nothing is not optimal for Albert—Barbara will reject and leave him with nothing. But is offering any positive amount x optimal? No. Like before, Barbara accepts under this circumstance since x > 0, leaving Albert with 1 – x. However, Albert could halve the offer to x/2. Barbara must still accept under these circumstances, as x/2 > 0. This leaves Albert with 1 – x/2, which is greater than the 1 – x he received before. As such, no positive offer can be optimal for Albert.

This result makes the ultimatum game even more unsettling. When the parties could bargain using only discrete amounts, Barbara at least received a small portion of the surplus. However, as this section just showed, that small amount is simply an artifact of the discrete dollar divisions.
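Here is a small numerical sketch of the halving argument, for readers who like to watch the numbers shrink; the starting offer of .01 is purely illustrative.

```python
# No positive offer is optimal: whatever positive amount Albert offers, halving
# it still gets accepted and leaves him with strictly more of the surplus.
x = 0.01
for _ in range(5):
    print(f"offer {x:.6f}: Albert keeps {1 - x:.6f}; offering {x / 2:.6f} keeps {1 - x / 2:.6f}")
    x /= 2   # Barbara accepts any positive amount, so the improvement never ends
```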

 

Once the parties can fully bargain over a continuum of possible amounts, Barbara's share of the surplus drops to zero. Again, the major lesson here is that the importance of proposal power cannot be overstated. If an actor has all the proposal power, he will receive all (or just about all) of the surplus. If an actor completely lacks proposal power, he will be no better off (or only slightly better off) than if bargaining could not take place.

That being so, one might wonder why actors would ever deny themselves proposal power. After all, bargaining does not always have to be a take-it-or-leave-it proposition. Why doesn't Barbara simply ignore offers she does not like and make counterproposals instead? Our results from the ultimatum game clearly tell us that she should, and we will investigate what happens in that case in the next chapter. However, there are many situations that prevent actors from making counteroffers in this manner. One reason is because the original proposer already controls the money to be divided. We saw an example of this in the last chapter with the landlord/renter interaction. Another reason is because the law expressly forbids such counteroffers. We will explore the results of such rules in the next few sections.

 

Application: Agenda Setting, the Hastert Rule, and Why the Majority Does Not Rule in Congress

The inability to make meaningful counteroffers is a fact of life in the United States House of Representatives and many other parliamentary bodies. Voting rules in the House are straightforward—if a majority supports a bill and the U.S. Senate also supports it, it goes to the President for a signature. However, the rules governing whether a bill goes up for a vote are complicated. In general, a majority of the majority party must support a bill for it to receive a full vote on the floor. This is known as the Hastert Rule, named after former Speaker of the House Dennis Hastert, who was in power when the rule became well-known.

The Hastert Rule's gatekeeping effect is subtle but powerful. To see why, consider the following example. For simplicity, imagine that the House has only five members: three Democrats and two Republicans. Thus, the Democrats are the majority party. Further, going back to the legislation example from Chapter 1, suppose a terrorist attack rocks the United States. The House recognizes a need to increase income taxes to pay for homeland security. No such tax exists at present, and the congressmen unanimously agree that some increase is preferable to nothing. However, substantial disagreement exists on the optimal level of taxation. Two Democrats believe that the increase should be very small—one believes taxes should increase by 0.03% while the other believes it should increase by 0.05%. The third Democrat supports a modest increase to 0.1%. Meanwhile, the respective Republicans support a tax rate of 0.2% and 0.25%. Let D1, D2, and D3 represent the Democrats and R1 and R2 represent the Republicans. If we represent their ideal levels of taxation on a number line, their preferences look like this:

If the House catered to the majority opinion of its legislators, it would set the tax rate at 0.1%. To see why, consider any other amount. If that amount is less than 0.1%, then D3, R1, and R2 would vote to increase it. If that amount is greater than 0.1%, then D1, D2, and D3 would vote to decrease it.
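A quick sketch of the vote counting may help. It assumes, purely for illustration, that each legislator simply prefers tax rates closer to his ideal point (recall that preferences need not be symmetric in this way).

```python
# Ideal tax rates for the five hypothetical House members.
ideal = {"D1": 0.03, "D2": 0.05, "D3": 0.10, "R1": 0.20, "R2": 0.25}

def supporters_of_change(current, proposal):
    """Members who strictly prefer the proposal to the current rate."""
    return [m for m, x in ideal.items() if abs(proposal - x) < abs(current - x)]

print(supporters_of_change(0.05, 0.10))   # ['D3', 'R1', 'R2']: a majority wants more
print(supporters_of_change(0.20, 0.10))   # ['D1', 'D2', 'D3']: a majority wants less
print(supporters_of_change(0.10, 0.11))   # ['R1', 'R2']: only two want to move off 0.1%
```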

 

The only tax rate a majority would not prefer to change is when the tax rate falls exactly on the median representative's ideal policy. Nonetheless, the median representative lacks the power to choose which bill goes to the floor. Instead, that ability generally falls on the median within the majority party, or D2 in this case. Because the status quo tax rate is 0.0%, note that if D2 proposes a tax increase of 0.05%, he will have the support of at least four representatives: himself, D3, R1, and R2. (Whether D1 would support the bill depends on whether he prefers a tax rate of 0.0% or 0.05%. Although 0.05% is closer to D1's ideal point of 0.03%, he may deem each additional hundredth of a percent greater than 0.03% to be substantially more painful than each hundredth of a percent fewer than 0.03%.) As such, D2 can impose his will on the House.

If the outcome seems bizarre, that is because it is. Once again, a majority of representatives—D3, R1, and R2—would support increasing the taxation up to 0.1%. However, these three lack the power to propose such revisions. Unable to do anything other than vote up or down on proposals sent to them, they cannot extract any more of the surplus.

It is worth noting that the Hastert Rule does not always hold, though its exceptions are rare and usually noteworthy for a different reason. For example, consider the U.S. sovereign debt crisis in 2013. Republicans controlled the House of Representatives at the time, and the majority of the party wanted to limit the United States' ability to increase its national debt, which was the status quo. Democrats and a small portion of the Republicans in the House wanted to increase the limit to avert a government shutdown. This second group held an overall majority of the seats. According to the Hastert Rule, the majority would not have its way, as the majority of the majority party preferred maintaining the status quo. However, House Republican leaders eventually caved and allowed a compromise bill to go through. Democrats and a minority of the Republicans passed the legislation.

Why the Republican leadership permitted a violation of the Hastert Rule is unclear, but a good guess is that they did not want the public to view the party as obstructionist. Ordinary, day-to-day legislation receives scant media attention. The debt crisis, on the other hand, created a firestorm. Democrats controlled the Senate at the time, meaning the two chambers would have to reach some sort of consensus to allow a bill to go to President Barack Obama

 

for his signature. Ultimately, the Republican leadership backed down in the game of chicken rather than face a public backlash.  

 

Application: The Federalist No. 78 and Executive Power

The Federalist Papers were a series of editorials written anonymously by some of the United States' Founding Fathers. They argued that the states should ratify the newly written Constitution. At the time, the U.S. was governed by the Articles of Confederation, a weak document that granted the federal government surprisingly few powers. The Constitution—the document that currently governs the U.S.—sought to reverse this and create a strong, centralized government.

However, broad federal powers concerned American citizens. Living in the shadow of the British king, Americans worried about excessively powerful individuals. Judges especially concerned them. Unlike powerful elected officials like the president, a Supreme Court justice would have lifetime tenure—just like a king. As a result, some citizens were reluctant to ratify the Constitution.

The Federalist No. 78 tackles this precise issue. (Though anonymously penned at the time, we now know Alexander Hamilton authored the article.) Despite predating modern bargaining theory by almost two centuries, No. 78 argues that actors with no proposal power and only the ability to accept or reject have comparatively little bargaining strength. Hamilton writes:

The judiciary, from the nature of its functions, will always be the least dangerous to the political rights of the Constitution…. The Executive not only dispenses the honors, but holds the sword of the community. The legislature not only commands the purse, but prescribes the rules by which the duties and rights of every citizen are to be regulated. The judiciary, on the contrary, has no influence over either the sword or the purse; no direction either of the strength or of the wealth of the society; and can take no active resolution...

To summarize, justices are receivers, not proposers. And as we know, proposers extract the entire surplus out of receivers. So while receivers can ensure that bargains leave them no worse off, they cannot access the additional benefits that a proposer can. Consequently, Hamilton argues that Americans should not be afraid of judicial power. Rather, they should be more concerned with Congress and the President because these actors hold significantly greater sway over policies.

 

Surprisingly, we can draw a similar parallel with executive power. The president and Supreme Court justices are ordinarily very different. Presidents have substantially more visibility than Supreme Court justices, and they can use the "bully pulpit" to set the public agenda—and, therefore, the legislative one. Presidents also have the authority as commander-in-chief to authorize military action. And unlike Supreme Court rulings, all legislation must eventually receive a presidential signature or face an uphill battle gathering a supermajority in Congress.

However, presidents are still stuck on the receiving end of legislation—they can only accept or reject. As we know, this gives Congress a substantial advantage in dictating the terms of legislation. So while we commonly associate most enacted legislation with the president who oversaw its passing, we might be better served understanding why Congress chose to send a particular version of the bill to the president.

This also holds true in lower-level state politics, between governors and their state legislatures. The 2012 Republican primary election serves as an important illustration. Mitt Romney was the overwhelming favorite to win the nomination. He eventually succeeded but had to slog through tough opposition during the primary season. One frequent condemnation of Romney by conservatives was that he was the governor of Massachusetts when the state passed its 2006 healthcare reform, often dubbed "Romneycare." Critics claimed that the provisions were remarkably similar to Obamacare and that Romney was a liberal in disguise.

Nevertheless, to understand the legislation, we must look at the bigger bargaining picture. Massachusetts tends to be one of the most liberal states in the country. And, sure enough, Romney faced a Democrat-controlled state legislature in 2006. Thus, while the health care reform often bears the Romney name, it was ultimately a compromise issued by a Democratic legislative body. As a result, much of that criticism was unfair because it failed to properly appreciate Romney's bargaining constraints.

 

A Brief Aside on Monopoly

The next chapter deals with complex, technically challenging bargaining environments. As such, it is worth doing one last fun application before entering a period of difficult material. Think about a game of Monopoly with only two players left in the game. If the players are experts, what can we say about any trades made? Surprisingly, quite a bit: if players only care about winning, trades have no impact on the outcome of the match.

Victory is uncertain at just about any point in a game. A few lucky rolls of the die, for example, could turn a losing position into a winning one. With only two players remaining (Albert and Barbara), we can represent the probability Albert wins as p, where 0 ≤ p ≤ 1. (This is the same thing as saying that Albert wins some percent of the time between 0% and 100%.) Because there are only two players left and someone must win, the probability Barbara wins is simply 1 – p. What defines the exact value of p? Board position, money, property, and get-out-of-jail-free cards. More property and more money imply a greater chance of winning.

Now consider any trade proposal. Let x represent the state of the game if the players complete the trade. Then the new probability of Albert winning can be represented as f(x), where f represents a function mapping the board state into a probability of winning. That is, the more stuff Albert has, the larger the output for f(x). And because only one player can win, Barbara's probability of victory equals 1 – f(x). With those preliminaries out of the way, consider the game tree:

 

  As with any other ultimatum game, we start at the bottom and work our way up. Consider Barbara’s accept or reject decision:  

If Barbara accepts, she wins with probability 1 – f(x). If she rejects, she wins with probability 1 – p. As such, she accepts if f(x) < p, rejects if f(x) > p, and is indifferent when f(x) = p.

Now consider Albert's decision. For agreement to be possible, he must offer an amount tempting enough to Barbara such that f(x) ≤ p. However, if f(x) < p, Albert ends up in a worse position than if he did not make a trade at all! Thus, he would never offer such an x. This leaves f(x) = p as the only case. But here, it does not matter whether Barbara accepts or rejects—each player's probability of winning is identical to the start of the situation.

Why are only purposeless agreements possible? When the game comes down to two players, everything good for Albert is bad for Barbara and vice versa. So if an agreement is good for Albert, it is bad for Barbara. Thus, Barbara should reject. But if an agreement is good for Barbara, it is bad for Albert. In turn, Albert should never make the offer in the first place. As a result, the players can never optimally agree on a trade that impacts the endgame.

Before moving to the next chapter, a few notes are in order. First, in our discussion of Monopoly here, we assumed that both players were experts at the game and unable to be fooled. In practice, some players are more skilled than others. However, the next time someone offers you a trade in Monopoly, ask yourself if you are really smarter than your opponent. If the answer is no, the proposal is likely a trick, and you should reject.

Second, players only cared about the probability of winning in this example. Perhaps they also care about saving time—Albert might be willing to lower his win percentage by a few points to make sure the game lasts ten more minutes and not ten more hours. A mutual desire to end the game more quickly can open up a time surplus. Based on that, the players

 

could potentially reach an agreement. Finally, this result breaks down when the game has more than two players. Here, it is easy to think of trades that benefit both parties. For example, if Albert is holding a property that Barbara needs to complete a monopoly and Barbara is holding a property that Albert needs to complete a monopoly, then swapping those properties is a good idea. The real losers are the rest of the players in the game—by completing the trade, the win percentages jump for Albert and Barbara, but must drop for the remaining players as a consequence. But, of course, that is exactly what Albert and Barbara want.
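If it helps, the whole argument reduces to two inequalities, checked here for a few illustrative values of p (Albert's winning chances before the trade) and f(x) (his chances after it).

```python
def trade_possible(p, fx):
    barbara_accepts = fx <= p     # she needs 1 - f(x) >= 1 - p
    albert_proposes = fx >= p     # he needs f(x) >= p
    return barbara_accepts and albert_proposes

print(trade_possible(0.6, 0.7))   # False: good for Albert, so Barbara rejects
print(trade_possible(0.6, 0.5))   # False: bad for Albert, so he never offers it
print(trade_possible(0.6, 0.6))   # True, but the trade leaves the odds unchanged
```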

 

Chapter 4: Counteroffers, Discounting, and the Consequences of Delay

Suppose I gave you the choice between a check for $100 today and a check for $100 next year. Which would you pick? Barring some strange circumstances—perhaps a man is standing next to you and will kill you if and only if you are holding a check for $100—you would take the $100 immediately. A number of reasons justify that decision. For one, $100 next year is not worth as much as $100 today; inflation destroys the spending power of future money, while a cash advance today allows you to invest and grow your assets. Likewise, if you really wanted to buy a game theory textbook that you have had your eye on, $100 today helps you reach your goal faster. You might also have practical concerns. Perhaps you and I will lose touch a year from now, and I will be unable to deliver as promised. Worse, one or both of us might die in the interim. Or a meteor could destroy the entire world!

Regardless of your exact reasoning, time is money. Thus, if I changed my offer to $100 today versus $100 next month, you would still select today—the difference is less significant than before but nevertheless present. Likewise, $100 today beats $100 tomorrow. And $100 right now is preferable to $100 two minutes from now, though at this point the difference becomes close to trivial.

In the ultimatum games explored so far, agreements either happened immediately or not at all. In practice, some middle ground exists. Barbara might reject Albert's offer initially but then counter with a different proposal. However, Barbara's initial rejection causes some (perhaps minor) delay in bargaining. Since time is money, a model with counteroffers must simulate the costliness of delay.

This chapter delves into the mechanics of delay and its impact on bargaining. We begin with a quick introduction to discount factors. The second section explores a model allowing for Barbara to make a single counteroffer to Albert if she rejects the initial proposal; this minor tweak allows Barbara to steal a substantial share of the surplus. Finally, the remaining section tweaks the counteroffer so that Albert makes a second

 

proposal if Barbara rejects. Here, Albert takes the entire good just as before. This shows that Barbara’s newfound bargaining power does not come from her ability to reject Albert’s divisions but rather her ability to propose her own.  

 

Understanding Delay: The Discount Factor

Fortunately, modeling the cost of delay is straightforward. Meet δ, the lower case Greek letter for delta (which looks like Δ when capitalized). Let 0 < δ < 1. Given those constraints, multiplying the value of the good by δ for each unit of delay makes it depreciate over time. For example, imagine Albert and Barbara bargained over $500 of surplus. If they reach an agreement immediately, there would be $500 of surplus to go around. However, if a unit of time passes, the surplus size from today's perspective decreases to $500δ; since δ is less than 1, $500δ must be less than $500. Then, if another unit of time passes, the surplus from today's perspective decreases again by a factor of δ. Thus, it shrinks to $500δ². After a third unit of time passes, the surplus drops to $500δ³. And so forth.

Using the discount factor in this manner gives bargaining a "melting ice cream" property. Every moment that passes makes the agreement less attractive. Given enough time, the bargain becomes pointless. However, because δ > 0, δ raised to any power is also greater than zero. In turn, it is not clear to either party when bargaining becomes worthless. Similarly, the longer the ice cream sits out, the less of it is still frozen. Eventually, the ice cream will spoil, but it is never quite clear when exactly that moment will occur.

Also, note that the amount of time that passes between each phase of negotiations partially determines the size of δ. Imagine that negotiations could only take place in ten-year intervals. Then the value of δ is very small, as ten years of bargaining benefits disappear with each rejected offer. In contrast, suppose the time between offers was a matter of seconds. In this case, δ would be very close to 1—making a deal ten seconds from now is only marginally worse than making that same deal immediately.
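A tiny numerical sketch shows the melting in action; the discount factors below are arbitrary illustrations.

```python
# Present value of $500 of surplus after 0, 1, 2, and 3 periods of delay.
for delta in (0.95, 0.75, 0.30):
    values = [round(500 * delta ** t, 2) for t in range(4)]
    print(delta, values)   # patient bargainers (delta near 1) lose little to delay
```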

 

The Power of a Counteroffer

With the discount factor introduced, we can now model an interaction allowing for a counteroffer. As before, bargaining begins with Albert making an offer x between 0 and 1 to Barbara. Barbara accepts or rejects. Accepting implements Albert's offer and ends the interaction. Rejecting leads to the second stage of bargaining. Here, Barbara (not Albert) makes the proposal y between 0 and 1. Albert now accepts or rejects. Rejecting ends the interaction with no trade, so both receive a payoff of 0. Accepting grants Albert y and Barbara the remainder. However, due to the delay, each player's payoff for the counteroffer stage is multiplied by δ. The following game tree represents the aforementioned interaction:

As before, to solve for his optimal strategy at the beginning, Albert must understand how he and Barbara will behave at the end. Thus, consider how Albert responds to the counteroffer:  

If y is positive, Albert's only optimal action is to accept; something is better than nothing. But if y equals zero, Albert is indifferent between

 

accepting and rejecting. For the reasons discussed in the previous chapter, suppose that Albert accepts. Then Albert optimally accepts any offer Barbara makes. Now consider Barbara's offer:

Regardless of the value she selects for y, Albert accepts and she receives δ(1 – y). As such, her payoff is decreasing in y. In turn, to maximize her own payoff, she offers 0. This completes the optimal actions for the counteroffer phase. Albert receives 0 and Barbara earns δ. We can now move to the first phase of the game. Moreover, we can substitute the payoffs that the players will earn if they reach the counteroffer stage into the payoffs for rejecting. Making that swap and simplifying the game tree yields the following:  

 

  Now consider Barbara’s accept or reject decision:  

In the ultimatum game, Barbara received 0 if she rejected Albert's offer. Here, rejecting looks much more attractive since she knows she can follow up with a counteroffer and receive δ of the good. Thus, if Albert offers less than δ, Barbara optimally rejects. On the other hand, if Albert offers more than δ, Barbara must accept. When Albert sets x exactly equal to δ, Barbara is indifferent between accepting and rejecting. As before, this implies that Barbara could optimally accept with certainty, reject with certainty, or randomly decide whether to accept or reject. Once again, and for the same reasons as before, we will assume she accepts with certainty.

Now consider Albert's proposal. Note that setting x to a value greater than δ can never be optimal for Albert. Barbara must accept such proposals since they generate a higher payoff than rejecting and receiving δ. In that case, Albert receives the remainder, or 1 – x. But consider the outcome if Albert offers the midpoint between his current value for x and δ, or (x + δ)/2. Because x is currently greater than δ, the midpoint between x and δ is also greater than δ. Thus, Barbara accepts (x + δ)/2. Albert receives the remainder, or 1 – (x + δ)/2. As such, offering x is not optimal if:

1 – (x + δ)/2 > 1 – x
-(x + δ)/2 > -x
x > (x + δ)/2
2x > x + δ
x > δ

Recall that x > δ holds. So any value for x greater than δ cannot be optimal for Albert. Less technically, the explanation is the same as the ultimatum game from last chapter, except δ is Barbara's value for rejection instead of 0.

 

For example, suppose δ equaled .5. Then Barbara must accept any offer greater than .5; she cannot receive more than .5 if she rejects and advances to her counteroffer. To run through the logic, is .75 Albert's optimal proposal? No, as he could cut the offer to .625 (the midpoint between .5 and .75) and still induce Barbara to accept. Then is .625 optimal? Again, no—Albert could cut the offer to .5625 (the midpoint between .5 and .625) and still ensure Barbara's compliance. But this logic continues forever. So offering greater than δ can never be optimal for Albert.

However, if Albert proposes exactly δ, he cannot undercut the offer any further. He receives the remainder, or 1 – δ. The last step is to check whether he prefers making this optimal acceptable offer to offering something less and forcing Barbara to reject. But inducing rejection clearly is not optimal—Albert receives 0 in this case. Thus, when the players are strategizing optimally, Albert offers δ up front, and Barbara accepts. Albert receives the remainder, or 1 – δ.

So what is the power of a counteroffer? Recall that in a one-shot ultimatum, Albert could demand and take everything from Barbara. Here, he is not so lucky. If Albert offers nothing, Barbara can credibly threaten to reject Albert's offer and make a counterproposal. Furthermore, Albert must be careful to induce Barbara to accept; otherwise he will ultimately receive nothing later on. Internalizing this, Albert knows he must be more magnanimous when he makes his first offer to avoid a bad fate for himself. Consequently, he concedes δ to Barbara.

Note that the value of δ determines who receives the better end of the bargain. If the actors are exceptionally patient and can exchange offers in rapid succession, δ is close to 1. In this case, Barbara receives virtually all of the good. Why? When δ is large, rejecting up front costs Barbara very little. As such, she is more willing to rebuff low offers from Albert, which in turn forces Albert to give her a larger share. In contrast, if the actors are extremely impatient or cannot exchange offers quickly, δ is close to 0. Here, Albert keeps most of the good for himself, similar to the ultimatum game. Why? Although Barbara can reject Albert's offer and take everything later on, small discount factors imply that she cares minimally about the good at that point. Consequently, Barbara's vulnerability leads her to accept smaller offers up front. Albert sees this vulnerability and exploits it.
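The backward induction fits in a few lines of illustrative code; the particular values of δ are arbitrary.

```python
def counteroffer_game(delta):
    # Stage 2: Barbara proposes, Albert accepts anything, so she keeps the whole
    # good; discounting makes that continuation worth delta to her and 0 to Albert.
    barbara_continuation = delta
    # Stage 1: Albert offers exactly that continuation value, and Barbara accepts.
    x = barbara_continuation
    return 1 - x, x        # (Albert's share, Barbara's share)

for delta in (0.95, 0.5, 0.1):
    print(delta, counteroffer_game(delta))   # patient Barbara captures almost everything
```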

 

Application: Negotiating Starting Salary

The takeaway from the previous model is simple: counteroffers are imperative for the receiver of an initial offer to obtain a good deal. Yet people often ignore the principle of counteroffers at their own peril. This is most evident in salary negotiation at the start of a new job.

To see the problem, consider the following common scenario. A hiring manager named Albert has identified that Barbara is the best candidate for his company's open position. He informs her of the good news. Albert then offers her some (low) starting salary. Not knowing what to do, Barbara hastily accepts.

Barbara's predicament is understandable. She is excited that she received the offer, and she is worried that being aggressive might cause the company to rescind it. So she timidly accepts without thinking much about the strategic constraints of the situation. Companies love this. When new hires act like counteroffers are impossible, employers keep the lion's share of the surplus. In other words, they hire skilled employees at a substantially lower rate than what they would otherwise be willing to pay. Unfortunately, this means that Barbara left money on the table, and she will likely be kicking herself for years to come.

There are two important factors at play here. First, Barbara should not be overly concerned that Albert will take the offer away from her. Companies can spend thousands of dollars on a job search. They want to find the best candidate available. And by virtue of offering the job to Barbara, Albert has shown his hand: the company values her potential work more than any of the other candidates. Albert will not suddenly go to the next candidate on the list (assuming they have identified another hirable candidate) just because Barbara asked for a little more money. Of course, there are limits here. If Albert is hiring Barbara as a cook and she demands a yearly salary of $1,000,000, Albert would rightly update his belief that Barbara is completely crazy. But no reasonable employer would immediately drop potential hires just because they try to increase their wages.

Second, issuing a counteroffer is important here because the first salary negotiation is the most important salary negotiation at any job. This is for two similar reasons. First, all future negotiations for raises will use the current salary as a baseline. A 10% raise on a $50,000 salary is not as good as a 10% raise on a $55,000 salary. Likewise, a 3% automatic cost of living adjustment will punish you for starting at a lower salary. As a result,

 

fumbling the initial salary negotiation can make it difficult to ever catch up to where you otherwise would be. The lesson is simple: if the first offer is subpar, separate yourself from those who ignore the possibility of a counteroffer and ask for your fair share.  
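A rough sketch of the compounding effect, using hypothetical starting salaries and a flat 3% annual adjustment, shows how the initial gap persists.

```python
def salary_after(start, years, adjustment=0.03):
    return start * (1 + adjustment) ** years

for start in (50_000, 55_000):
    print(start, round(salary_after(start, 10)), round(salary_after(start, 20)))
# The $5,000 head start grows in absolute terms with every cost of living raise.
```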

 

The Power to Reject or the Power to Counteroffer?

The previous model showed that Barbara can receive a substantial share of the bargaining good if she can reject Albert's offer and make an offer of her own. But where does her bargaining power come from? Is it due to the existence of a second period of bargaining, which did not exist in the original ultimatum game? Or is it because Barbara—not Albert—makes the counteroffer? And if it is a mixture of the two, which accounts for more of the bargaining power?

Fortunately, we can address these questions directly by making a minor modification to the counteroffer game. This time, Albert makes both offers—the initial offer up front and the second in case of initial rejection. Once we find the optimal strategies for both players in this tweaked bargaining game, we can compare the outcomes to the original ultimatum game and the counteroffer game. Ultimately, we will see that the division of the good in this game matches the original ultimatum game. In other words, the ability to delay agreement does not give Barbara any extra bargaining power. However, the ability to make proposals does. This fits with the previous chapter's claim that proposal power is a major form of bargaining power.

The game is as it seems. Albert offers some division of the good between 0 and 1 to Barbara. Barbara accepts or rejects. Accepting imposes that division. Rejecting forces Albert to offer another division between 0 and 1. Accepting again imposes that division, while rejecting leaves both parties with nothing. Here is the game tree:

 

Solving for the players’ optimal strategies is easy at this point. In fact, we do not need to do any work to figure out what the players do in the second stage of the game. Compare the second stage of this game to the second stage of the original counteroffer game. The only difference is in the labels. Where Albert once was, Barbara now is; where Barbara once was, Albert now is. Consequently, the optimal actions are the same as before except they are exchanged between Albert and Barbara. So Barbara accepts any offer given to her—she receives nothing for rejecting—and Albert demands everything, knowing that Barbara has no better alternative. Thus, after factoring in the discounting of the future, Albert obtains δ and Barbara receives nothing if they reach the second stage of bargaining. Carrying those payoffs into the first stage of the game yields the following reduced game tree:  

As always, Albert must know Barbara’s optimal actions before he can choose his own. Notice that Barbara’s strategic positioning is terrible:  

Since rejecting gives Barbara nothing, she optimally accepts any offer. This includes an offer of 0 for the same reasons as before.

 

In turn, Albert’s decision problem is as follows. He knows Barbara accepts his offer no matter what. He receives the remainder, or 1 – x. Note that Albert’s payoff decreases in the offer size. As such, to maximize his own payoff, he must minimize x as much as possible. And since he does not need to worry about Barbara rejecting because he offered too little, he still offers her nothing. All told, Albert and Barbara reach an agreement in the first period. He receives 1 and she receives 0. The result is the same as when Barbara could not stall. Why does Barbara’s option to reject fail to yield any bargaining power? The problem is that Barbara lacks an endgame, which renders her threats to reject Albert’s offer incredible. Note that Barbara’s power to hurt Albert is real—if she were to reject his offer in the first stage, Albert would only receive δ in the second stage. Consequently, Barbara can cause exactly 1 – δ in pain to Albert (in the form of lost surplus). As such, if rejecting yielded Barbara any benefits, Albert would have to make some sort of concession to her to stop from losing that amount. Unfortunately for Barbara, she lacks the bargaining power in the second stage to make her threat relevant in the first stage. Even if she rejects Albert’s first offer, she can only accept or reject Albert’s second offer. But as the ultimatum game showed, the accept/reject decision has absolutely no bargaining power whatsoever. In turn, Barbara claws for any scraps she can receive in the first stage. This renders any threat to reject incredible. Knowing this, Albert can successfully deny Barbara any share of the bargaining good. The lesson here—as it has been many times before—is that proposal power matters. The of ability to just sayforces “no” Albert does not get Barbara However, the threat a counteroffer to play nice and very give far. her some of the good, lest she reject and settle on terms less favorable for Albert.  

 

Conclusion

This chapter showed that the ultimatum game's inequitable split of surplus was an artifact of the single-offer structure. Once we allow Albert's bargaining partner to have some say in the negotiations—which is a more realistic assumption in most bargaining scenarios—Barbara can capture some of the surplus for herself.

From here, there are two ways we can continue our analysis. First, we could continue relaxing the arbitrary cutoff of bargaining. Why stop at two total offers? Why not have three? Or four? Or potentially infinitely many? These are legitimate questions. Eventually, we will get to the answers. For now, the better marginal use of our time is to learn about all sorts of other bargaining kinks. Thus, the next few chapters will discuss bargaining leverage, credible threats, uncertainty, and commitment problems. We will return to back-and-forth bargaining afterward.

 

Chapter 5: Outside Options, Risk, and the Value of Being Unique

The previous chapters gave us a basic background of the core mechanics of bargaining theory. To summarize, proposal power gives an individual a greater share of the bargain. No proposal power, on the other hand, can lead an individual to receive none of the surplus. However, other sources of bargaining power exist. The next two chapters detail some of the most important ones. This chapter begins by exploring outside options, the alternatives an individual has at his disposal. Holding better outside options leads to better deals in bargaining—and, by extension, leads to worse deals for those not holding them. In the extreme, great outside options can lead an individual to obtain the entire surplus.

Before exploring these models, a methodological note is in order. From here until the final chapters, we will mainly use ultimatum games to illustrate various points. This might be cause for concern given that the previous chapters have shown that the ultimatum game leads to an extremely uneven distribution of the surplus. Counteroffers, meanwhile, are more realistic and also allow the parties to divide the surplus in a more intuitive way. While these are valid points, the ultimatum game is more functional than the other bargaining protocols. The general goal of the remaining chapters is to show that x is a source of bargaining power, where x is the subject at hand. The choices are as follows. First, the remaining chapters could use alternating offers bargaining to show that x provides bargaining power. Second, the remaining chapters could use the ultimatum game to show that x provides bargaining power. Both options produce the same results. However, the ultimatum game involves far less mathematical stress. Given that both routes generate identical outcomes, we might as well choose the path of least resistance. In this case, that means picking the ultimatum game. With the justification for the ultimatum game completed, we can move forward on exploring bargaining power.

 

Outside Options

In the standard ultimatum game, both parties receive nothing if they fail to reach an agreement. However, it is possible that one party has an alternative that delivers him or her some benefit if negotiations end. In the used car example, perhaps Albert has found a second vehicle that is worth purchasing if he fails to secure a good deal from Barbara. And in job wage negotiations, perhaps the employee has accumulated a large amount of retirement savings. If so, the boss must offer a higher wage because such an employee needs the extra incentive to continue working.

To analyze how better outside options affect bargaining, consider the simple ultimatum game from before. Albert offers an amount between 0 and 1. Barbara accepts or rejects it. Accepting implements the division Albert proposed. Rejecting still gives Albert 0 but now gives a payoff of v to Barbara, where v represents the value of Barbara's outside option. Here is the game tree:

Let’s isolate Barbara’s accept or reject decision:  

 

Her choice is straightforward. If x ≥ v, Barbara is willing to accept. But if x < v, Barbara must reject.

Now consider Albert's decision. If v ≤ 0—that is, if Barbara's outside option causes her direct pain or is worthless—Albert can demand all of the benefits of the trade and still induce Barbara to accept. This was the result of the standard ultimatum game. However, if 0 < v < 1, Albert must offer Barbara v. Offering anything less induces Barbara to reject, which leaves Albert with 0. This is worse for him than offering v and receiving 1 – v. Meanwhile, offering any more is a needless concession. As such, Barbara obtains v. Note that this is strictly more than the 0 Barbara received when her value for rejection (as in the standard ultimatum game) was 0. If v = 1, Albert can have Barbara accept, but that requires offering her the entire surplus, at which point Albert receives nothing. Either way, Barbara ends up well off while Albert struggles. Finally, if v > 1, Barbara will reject no matter what Albert offers her since she can always do better outside of the relationship with Albert.

The takeaway is twofold. First, Barbara's payoff increases as the value of her outside option increases. Second, Albert's payoff decreases as the value of Barbara's outside option increases. The first part should not be particularly surprising—it basically says "good things are good" for Barbara. The second part, however, is a little more convoluted and requires a clear understanding of strategic interaction to fully appreciate it.

To elucidate this point, imagine that Barbara is a professional boxer and Albert is a promoter. Barbara is at the tail end of a long and successful career. Fortunately, she has a good financial planner and is fiscally set for life. She also has a couple of children and a husband whom she feels she has neglected over her career because of her training and traveling. As such, she is considering an early retirement. Nevertheless, Albert knows Barbara is a guaranteed hit with the fans and will bring in millions of dollars of revenue if she fights again.

Barbara is in the position of power here. She has no need to continue her career. As a result, to lure her back for one more fight, Albert would have to offer her the vast majority of that large sum of money she would attract. This is great for Barbara but hurts Albert's bottom line.

In contrast, imagine that Barbara wasted all of her money over her career and is now broke. At this point, Barbara needs to fight to continue to

 

provide for her family. But this is her downfall. Because Barbara’s alternative to fighting is not nearly as attractive, Albert can keep more of that extra revenue for himself. Note that Barbara’s financial planning is completely unrelated to Albert. It was Barbara’s ongoing choices throughout her life that determined her economic welfare at the end of her career. Yet this has indirect consequences for Albert, as he can capitalize on her weakness but must be more magnanimous in the presence of her strength.  
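The full solution fits in a short sketch; the values of v below are arbitrary illustrations.

```python
def ultimatum_with_outside_option(v):
    """Albert's optimal proposal when Barbara's outside option is worth v."""
    if v > 1:
        return "no deal: Barbara always walks away"
    offer = max(v, 0)            # match the outside option (offer 0 if v <= 0)
    return {"offer to Barbara": offer, "Albert keeps": 1 - offer}

for v in (-0.2, 0.0, 0.4, 1.0, 1.5):
    print(v, ultimatum_with_outside_option(v))
```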

 

Application: Negotiating a Raise

Companies have little reason to care about an individual's personal reasons for wanting a raise. After all, a constant goal of any manager is to keep costs as low as possible. Consequently, if a company issued raises to anyone who felt like they "deserved" one, it would have a hard time staying in business. Of course, that does not mean that you can never obtain a raise. Rather, your goal is to convince your employer that it is in the employer's best interest to pay you more. To do this, job consultants often advise their clients to emphasize two things: their value to the company and their value outside of the company. Outside options explain why.

The importance of an individual's value to the company should be obvious. If your company is paying you $40 per hour, then it must be receiving at least $40 in benefits per hour from you—otherwise, you will soon be seeing a pink slip. Practically speaking, though, managers cannot keep perfect tabs on all of the value that an employee brings to the company. Thus, reminding your employers how useful you are to their bottom line is a good way to begin negotiations.

Many people stop at this point, and that is a mistake. Perhaps you are currently being paid $40 per hour but bring $80 per hour in benefits to the company. If your manager observes that you are working for that wage, he might see no need to increase it—after all, your employer wants to keep as much of the surplus as possible for the company. Paying you any more would only lower the company's profits.

This is why it helps to point out that you have outside options. If you are really worth $80 per hour to the company, you must have skills that are marketable to other companies. Those other companies would correspondingly be happy to offer you a larger wage as a result. But that does not mean you need to fully pursue those other employment opportunities. Rather, the possibility that you could pursue those other opportunities encourages your current employer to pay you higher wages precisely to avoid your departure from the company.

That said, negotiating in such a manner requires a bit of a balancing act. Companies prefer workers who are loyal to the brand and who are not capricious. This is why you begin the conversation by framing your continued employment as a benefit to the company, and one that you are happy to provide. Moreover, you should not flaunt the fact that you are using the possibility of leaving as

 

leverage. Instead, you can say that you enjoy working for the company and would like to stay but that you need a wage commensurate with the value you bring to the table. Of course, before any of these negotiating strategies can succeed, you need to actually make yourself valuable to the company. This is the bulk of the work, and it often will not be easy. However, at the end of the day, crafty bargaining strategies cannot radically alter the size of your paychecks. Rather, as previewed in Chapter 1, all you should be doing when you sit down at the bargaining table is declaring checkmate. The trick is maneuvering yourself into that winning position before you even arrive.  

 

Application: Unemployment Benefits

Many countries offer unemployment benefits to their citizens. In some ways, these benefits act as a nationwide insurance policy for the employed—people who randomly lose their job have a safety net that they can fall back on. While unemployment benefits affect a country's economy in a number of different ways, we will focus on just one: outside options.

Suppose you were recently laid off and are now in search of employment. You find a job that you think is suitable and ace the interview. You and your potential employer must now decide on a wage. Consider how negotiations will go if you are desperate for a job and not receiving any unemployment benefits. In the language of our model, your outside option is worse. Yes, you will eventually find another job later on, but the pain you will feel in between will encourage you to accept smaller offers.

In contrast, imagine that you have another three months of unemployment benefits. While you would still want a job—unemployment does not pay as much as actual work does—you would be much more willing to reject lower offers. This puts you in a better bargaining position to extract more of the surplus from your employer. Consequently, a common argument in support of unemployment benefits is that they can lead to higher wages.

 

Park Place Is Worthless: Bargaining over McDonald's Monopoly Pieces

Every year, McDonald's places Monopoly game pieces on select menu items. These pieces represent title deeds for standard Monopoly properties. Anyone who collects all of the properties from one color group wins a prize. For example, anyone lucky enough to obtain a Park Place and a Boardwalk wins one million dollars. Of course, the producers rig the game: they make Park Place pieces abundant, but they release only one Boardwalk. (And, from 1995 to 2000, it appears the game truly was rigged. Police arrested the chief of security from the firm that produces the pieces. Evidently, he was stealing the rare pieces and distributing them to friends.)

Nevertheless, anyone holding a Boardwalk piece must find a Park Place to receive any money. So how much is each piece worth in isolation? It goes without saying that Boardwalk is more valuable—perhaps a lot more valuable—than Park Place due to the discrepancy in availability. But it also appears that Park Place has some value since the ultimate winner must pair Boardwalk with a Park Place piece eventually.

However, Park Place is actually worth nothing. Not close to nothing, but absolutely, positively nothing. Although counterintuitive at first, the proof is simple. And despite perhaps tens of thousands of Park Place pieces floating around, the property would remain worthless even if only two existed.

To see why, imagine Albert and Barbara both peeled off a Park Place and Charlie held the one and only Boardwalk. Charlie needs to buy Park Place from one of them to win the prize. Naturally, he wants to buy it from the individual selling it for the lowest price. Can the ultimate sale price be greater than $0? No. To see why, suppose it were. Further, without loss of generality, suppose Albert is the seller. Under such a deal, Charlie would be giving $x to Albert for his Park Place. But consider Barbara's position. Without another Boardwalk in existence to match her Park Place, her piece holds no value. Thus, she would want to offer her piece for slightly less than Albert and induce Charlie to buy from her. Yet, Albert has identical incentives. If Barbara underbid him and Charlie bought from her, he would wind up with nothing. Consequently, he would want to underbid Barbara. But this process of underbidding would continue on forever until they reach a price that no one can underbid: $0.

As such, Charlie emerges as the big winner. Despite only having half of

 

the million dollar puzzle, his Boardwalk is worth the full million on its own. Albert and Barbara go home penniless. What is going wrong for the Park Place owners? Supply simply outstrips demand. Any person with a Park Place but no Boardwalk leaves with nothing, which ultimately drives down the price of Park Place to nothing as well.

Before moving on to a more serious application of this property of competitive bidding, a couple of notes are in order. First, Albert and Barbara have a lot to gain by colluding. If only one Park Place piece existed, Charlie could not play Albert and Barbara against each other and force them to keep underbidding. If Albert was the sole possessor of both Park Place pieces, he would likely engage Charlie in back-and-forth bargaining, which would give Albert a sizeable share of the $1 million proceeds. As such, Albert and Barbara would benefit from teaming up and agreeing to split the sale price, thereby preventing Charlie from working the underbidding process. Alternatively, Albert has strong incentives to find the other Park Place owners and gobble up their pieces; if he can be the monopolist of Park Place, he will reap major rewards.

Second, full underbidding requires the two Park Place holders to be simultaneously competing to sell the pieces. In practice, Charlie might have to track down Park Place providers one at a time. As a result, he may throw a few dollars at the first one he finds just to save himself some trouble. Does Park Place hold any value in this case? Still, the answer is no. Park Place in and of itself remains worthless. Instead, the Park Place seller is essentially extracting a finder's fee. His availability provides some value that the Boardwalk owner covets, and so the Boardwalk owner pays a price for the convenience, not the Park Place piece.
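A toy sketch of the underbidding spiral; the starting price and the size of each price cut are arbitrary.

```python
def underbid(price, cut=1_000):
    """Two interchangeable Park Place sellers take turns shaving the price."""
    while price > 0:
        price = max(price - cut, 0)   # the seller about to lose the sale undercuts
    return price

print(underbid(10_000))   # 0: with a worthless piece in reserve, nobody stops cutting
```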

 

Application: The De Beers Diamond Monopoly

While bargaining over McDonald's Monopoly pieces seems trivial—very few people will ever collect a Boardwalk piece after all—the underlying lesson has broad applications. At its core, McDonald's Monopoly creates a matching problem. A single person holds an item that is worthless by itself. Multiple people hold a different item that is also worthless by itself. Combining the items together creates value but only for those involved in the transaction. Thus, all but one of the people clutching the second item will receive nothing. This causes them to underbid each other, ensuring that the person holding the unique item receives all of the value.

Diamonds provide a similar case study. The gemstones are remarkably expensive. Naturally, it seems they must be rare as well. After all, holding all other things equal, basic supply and demand states that an item in low supply will have higher prices. However, this is not the case. In fact, diamonds are quite common. But access to diamonds is not.

Consider the plight of a diamond mine investor in the late 1800s. For most of the 19th Century, diamonds truly were rare, as miners had yet to find the world's biggest deposits. That all changed in 1870, when multiple British investors found mother lode after mother lode in South Africa. But this created an inherent problem. At the time, diamonds held value because they were a scarce commodity; much as gold today, diamonds were a safety-net alternative to cash reserves. If all the investors flooded the market with the newfound South African diamonds, the market would collapse. Much as the two Park Place holders have incentive to underbid one another to sell to the lone Boardwalk holder, the investors would want to underbid one another to sell their stashes, ultimately driving out all of the potential profits.

Where many investors saw panic, the enterprising Cecil Rhodes saw an opportunity. He purchased the various independently owned mines in South Africa and founded the De Beers diamond company. Rhodes' strategic vision led to substantial profits. By the time he was done, De Beers had almost complete control of worldwide diamond production—almost as if he went around the world buying every Park Place Monopoly piece in existence. Rather than continue the free flow of gemstones into the market, De Beers cut back. Market forces correspondingly drove up the price of diamonds, leaving De Beers with a hefty profit.

Unfortunately for De Beers, this business plan has struggled lately.

 

Remember, it is the low availability of diamonds that keeps the prices high. Despite Rhodes' vision of a diamond monopoly, De Beers has lost its footing in recent years—the company once held 90% of the market but its share has since halved and it now faces additional competition from artificial diamond manufacturers. The company's value has correspondingly plummeted.

 

Who Receives the Additional Profits?

The value of scarcity affects the job market as well. Suppose Albert lives in a city with two artisanal cupcake bakeries. Both need to hire a cupcake baker and can draw from the pool of common folk by paying the local minimum wage of $10 per hour. However, Albert recently completed Game Theory 101's correspondence course in artisanal cupcake design. His stunning candied creations will draw in $20 of additional profit per hour for whichever shop hires him as compared to the minimum wage employee. As such, he is worth $30 per hour to both shops.

Clearly, Albert's ingenious designs will ensure that one of the stores will hire him. But how much of the extra profits will he keep for himself? Intuitively, Albert might expect that he will only receive a portion, with the remaining share enriching the store. Nevertheless, Albert is in for a stunning surprise: he will receive all of the additional profits.

Before delving into the reason fully, note the commonalities between Albert the artisanal cupcake baker and Albert the Boardwalk holder. Both have a commodity that multiple competitors want to obtain. For the Monopoly player, it is a Boardwalk game piece; for the baker, it is the supreme talents that only a Game Theory 101 correspondence course in cupcake decoration can deliver. Both versions of Albert can combine their offerings with another to increase value. Boardwalk and Park Place win a million dollars; the baker and a bake shop generate $20 more profit per hour than if Albert sat at home. And Albert's would-be partners have competition in both cases. Thousands and thousands of Park Place pieces exist to complement Boardwalk, while the rival cupcake bakeries are both fine fits for Albert. Just as the Park Place holders keep underbidding each other, the cupcake shops will ultimately keep upping the wage offer to Albert until Albert receives the entire additional revenue he creates.

To see the bidding war in action, suppose Albert ultimately signed a contract that paid him under $30 per hour. The firm that signs him would be happy with the deal; they would receive $30 minus his wage per hour in additional profits. But consider the other firm's outcome. It does not hire Albert and thus receives no additional revenue. Yet, it could slightly overbid the competing company's winning offer. Albert would sign the agreement since it would pay him more, and the second shop would now profit from his services. Nevertheless, note that we can iterate this process repeatedly. As long as

 

Albert's final wage is below $30, the losing firm can increase its offer, hire Albert, and profit. Consequently, we should not expect Albert to sign for less than $30. The only reasonable wage he can earn is exactly $30; at that point, neither shop has incentive to raise its offer to a greater amount because doing so would result in a net loss.

The conclusions here are twofold. First, it pays to be unique. If a cupcake cook cannot distinguish himself from other cupcake cooks, he loses all of his bargaining power—after all, the companies could just hire one of the many other cupcake cooks available if one particular cook plays hardball at the bargaining table. Albert, meanwhile, brings unique profitability. This leads the firms to want to compete for his employment, which allows him to drive up his wage. Second, and more importantly, Game Theory 101's correspondence course in artisanal cupcake design is invaluable.
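To make the iteration concrete, the short sketch below simulates the bidding war under the chapter's numbers: Albert generates $30 per hour for either shop, and whichever shop is currently losing raises the standing offer whenever doing so still leaves it a profit. The $0.50 bid increment and the $10 starting wage are illustrative assumptions, not part of the original example.

```python
# A minimal sketch of the wage bidding war between the two cupcake shops.
# Assumptions (illustrative, not from the text): bids rise in $0.50 steps
# starting from the $10 minimum wage.

ALBERT_VALUE = 30.0   # additional revenue per hour Albert generates for either shop
STEP = 0.50           # size of each counterbid (assumed)

def bidding_war(start=10.0, value=ALBERT_VALUE, step=STEP):
    wage = start
    winner = 0  # shop currently holding the best offer
    while True:
        loser = 1 - winner
        # The losing shop outbids only if the new wage still leaves it a profit.
        if value - (wage + step) > 0:
            wage += step
            winner = loser
        else:
            return wage, winner

final_wage, winning_shop = bidding_war()
print(f"Bidding stops at ${final_wage:.2f} per hour (shop {winning_shop} wins).")
# The wage ends within one bid increment of $30, so essentially all of the
# additional revenue goes to Albert, as the text argues.
```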

 

Application: Google, Apple, and a $9 Billion Anti-Trust Lawsuit

Like Park Place owners, the shops in the cupcake example have a tremendous incentive to collude during the negotiations stage. Indeed, without any sort of manipulation, the bidding process means that the shops ultimately receive no additional profits even though Albert is a star cupcake designer—all $30 per hour of additional revenue goes straight into Albert's pocket.

Under such conditions, one store owner might approach the other and offer the following proposition: if the other owner allows the first to hire Albert for $20 an hour, the first owner will let the second owner hire the next superstar cupcake designer to come along for $20 an hour as well. They will then go back and forth and not compete with each other. Absent any other considerations, as long as the second store owner believes that the interaction will continue well into the future, he should agree. After all, engaging in a bidding war will ultimately drive out all profitability from hiring superstar cupcake designers. In contrast, agreeing to the deal will allow him to hire the second designer for only $20 per hour, allowing him to profit $10 per hour. Obviously, $10 more per hour is better than $0 per hour. Moreover, the store owners would continue to profit from this deal if they hired all future designers at the artificially low rate.

While the cupcake example might seem silly, something remarkable allegedly occurred in the early 2000s between Apple and Google. (Adobe and Intel were also involved to a lesser extent.) The tech firms made a "no-hire" agreement, meaning they promised not to attempt to lure away each other's employees. Doing so, of course, meant that an Apple employee could not leverage a job offer from Google to receive a raise from Apple and vice versa. Such an agreement was extremely attractive to both companies because they could cut labor costs in the process.

With so much money at stake, the companies vigorously enforced the agreement. Indeed, claimants in the lawsuit showed an alleged email exchange between Steve Jobs (Apple's CEO at the time) and executives at Google. A Google recruiter had attempted to steal some Apple engineers. Jobs, livid, sent an email ultimatum to Google: "If you hire a single one of these people, that means war." Although Google might have won in the short term by hiring those engineers, a bidding war would mean a long-term loss. Google executives fired the recruiter to maintain the peace.

Of course, colluding to artificially suppress wages is highly illegal and

 

violates antitrust laws. Consequently, a group of employees filed suit in federal court, with potential damages going as high as $9 billion. The parties reached a settlement in April 2014. This was not the first time something like this happened, either. Just a year earlier, Intuit, Lucasfilm, and Pixar faced similar allegations and settled out of court. Interestingly, Steve Jobs ran both Apple and Pixar during the periods the lawsuits covered.  
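The attractiveness of such an arrangement can be illustrated with a quick back-of-the-envelope comparison using the cupcake numbers from above. The sketch below is a simplification under stated assumptions: one new $30-per-hour superstar arrives each period, the shops alternate hires at a colluded $20 wage, and open competition drives all profit away. The discount factor and horizon are invented for the example; none of these modeling choices come from the text.

```python
# Rough comparison of colluding versus competing over repeated superstar hires.
# Assumptions (illustrative): one superstar worth $30/hour appears each period,
# the colluded wage is $20/hour, open bidding hands the full $30 to the worker,
# and a shop discounts future periods by delta per period.

VALUE = 30.0
COLLUDED_WAGE = 20.0
DELTA = 0.9      # per-period discount factor (assumed)
PERIODS = 200    # long horizon standing in for "well into the future"

def collude_payoff(periods=PERIODS, delta=DELTA):
    """A shop hires every other superstar at the colluded wage."""
    return sum((VALUE - COLLUDED_WAGE) * delta**t
               for t in range(periods) if t % 2 == 0)

def compete_payoff():
    """Bidding wars give the entire surplus to the workers, so profits are zero."""
    return 0.0

print(f"Collusion:   {collude_payoff():.2f}")
print(f"Competition: {compete_payoff():.2f}")
# As long as the shop cares enough about the future, the stream of $10/hour
# margins dwarfs the zero profit that open bidding delivers.
```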

 

Application: Star Free Agent Athletes

People sometimes lament the fact that superstar athletes regularly sign contracts for tens or even hundreds of millions of dollars. However, the outbidding logic gives a reasonable explanation as to why this happens. There are two halves to the argument.

First, the basic point about outbidding remains. When a player hits free agency, teams will start competing to sign him. Superstar players, by their nature, will help their teams win games and will increase fan interest, both of which help organizations earn more money. The lure of this increased revenue forces teams to outbid each other repeatedly in order to sign the player. The competition ultimately benefits the player because the highest-value contract must equal the amount of additional revenue he can bring to a team.

The secondary explanation is that teams have a lot of extra money to spend on such superstar players. "Replacement" level players are abundant in sports leagues. These are the athletes who drift around from team to team over the years and do not have a major impact. As the term "replacement" indicates, teams do not value these players highly since they can always find another player of relatively equal skill without much effort. Teams pay such players comparatively small sums of money because they are not notably better than their competitors on the free agent market and thus lack bargaining leverage. These savings leave teams with more money in their coffers to compete for the superstar players, which ultimately means the superstar contracts go higher than they would otherwise.

This also helps explain why many sports leagues adopt maximum individual player salaries. With maximum player salaries, teams are handcuffed; although they would bid up to the player's true value if they could, the salary ceiling restricts them to a smaller amount. Under such conditions, the player can expect multiple teams to offer him identical amounts. The team he signs with will gain some of the surplus.

Salary caps work in a similar way. When the league limits a team's overall budget, each dollar it spends carries an artificially high opportunity cost—after all, a dollar spent on one player is a precious resource that cannot be spent on another player. Consequently, if teams were to bid $10 million for a particular player, that amount may functionally feel like $15 or $20 million instead. The teams will still bid up to the value of the player, but the overall contract will be smaller than without the budget constraint. Again, this allows the triumphant team to gain some of the surplus despite the incentives to outbid

 

one another. Overall, such constraints benefit teams by reducing competition. Of course, players’ unions often despise maximum salaries and salary caps for that reason—at the end of the day, the surplus the team gains from these rules comes directly out of the players’ bank accounts.  

 

Application: Microsoft's Misguided Evaluation System

In 2006, Microsoft began using "stack rankings" to evaluate employee performance. Stack rankings required managers to evaluate employees on a 1 to 5 scale, with higher values representing better work. Critically, the process also forced the rankings to resemble a bell curve, meaning that managers had to assign a lot of employees a 3, fewer employees a 2 or 4, and yet fewer a 1 or 5. Moreover, the system was rigid—even if a manager believed he was working with a group of superstars, he would have to give out the same number of 5s as a manager of a group full of average employees.

Unsurprisingly, this system created a number of perverse incentives. First, because people who received a 1 would often receive a pink slip, the truly abysmal employees disappeared over time. Thus, a group of mostly productive workers would eventually become a group of only productive workers. Yet the stack rankings would force some of those people to receive a 1 and risk losing employment—even though they were perfectly productive.

Second, the system incentivized highly productive employees to actively avoid collaborating with their equally productive peers. To see why, imagine you were your division's superstar employee and consistently received 5s in evaluations. Such stellar marks would lead to sizable year-end bonuses. In a sensible world, you could transfer to a different division, collaborate with another superstar, and greatly raise the company's revenue. Yet, by joining a division with another superstar, you would be reducing your chances of receiving a 5 and thus shrinking the expected value of your year-end bonus. Consequently, you would want to stick to the guaranteed payday in your current division, even though it would be to the company's detriment.

Finally, and most important in applying outside options, the system prevented many employees from earning market wages. Superstars at Microsoft need high salaries to keep Apple, Google, and other companies from luring them away; superstars, after all, bring in lots of extra money to their employers. But imagine that you found yourself on a team of superstars. With promotions and bonuses contingent on earning a ranking of 5, high-quality employees could easily find themselves earning less than their equitable share. As a result, rival companies had the ability and desire to steal these employees away, ultimately to the detriment of Microsoft's bottom line.

The lesson here is simple: evaluation systems need to adequately reflect an employee's value on the open market. Failing to do so actively encourages

 

employees to pursue their outside options even when both parties are better off when the employee stays at his current job. Stack rankings might be a solution to cutting through bureaucracy, but they come at the cost of forcing valuable employees to leave the company.

 

Benefits with Asymmetric Employers

Going back to the cupcake example, we have thus far assumed that the two potential employers valued the employee at the same rate. Of course, this is often not true. For example, the two cupcake stores might cater to two different groups of customers; perhaps one specializes in high-end artisanal cupcakes while the other specializes in run-of-the-mill cupcakes meant for the undiscerning palate. Accordingly, the first would be willing to pay Albert significantly more than the second. How does this affect the bargaining environment?

Suppose that the high-end store would be willing to pay Albert up to $40 per hour while the low-end shop would be willing to pay up to $30. Despite the asymmetry, Albert can still force the opposing parties to bid up his wage. To see why, imagine that Albert was planning on signing a deal for less than $30. Would the losing company let that happen? Of course not—it could pay Albert slightly more and reap the benefits of his craft. However, the bidding has a stopping point: $30. Once a bid is at $30 or beyond, Albert cannot force the low-end shop to offer him a better deal. As such, we would expect him to sign for a value close to $30 with the high-end shop.

Overall, competition allows Albert to negotiate a better wage than he could otherwise. Still, unlike the symmetric case, Albert does not wind up with all of the profits in his pocket—the high-end shop values him at $40 per hour and thus earns roughly $10 per hour in extra revenue not ultimately paid to Albert.
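The same style of bidding simulation as before shows where the process stops when the shops value Albert differently. The $40 and $30 valuations come from the text; the bid increment and starting wage are again illustrative assumptions.

```python
# Bidding war with asymmetric employers: the high-end shop values Albert at
# $40/hour, the low-end shop at $30/hour. Bids rise in $0.50 steps (assumed).

VALUES = [40.0, 30.0]  # shop 0 is high-end, shop 1 is low-end
STEP = 0.50

def asymmetric_bidding(start=10.0):
    wage, winner = start, 0
    while True:
        loser = 1 - winner
        # The losing shop raises the bid only while Albert is still profitable for it.
        if VALUES[loser] - (wage + STEP) > 0:
            wage += STEP
            winner = loser
        else:
            return wage, winner

wage, winner = asymmetric_bidding()
print(f"Albert signs with shop {winner} for about ${wage:.2f} per hour.")
# The low-end shop drops out once the wage reaches its $30 valuation, so the
# high-end shop wins and keeps roughly $10/hour of surplus, as the section describes.
```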

 

Application: The Cardinal Sin of Buying a New Car

The logic of outside options seems obvious. And yet, people routinely ignore outside options at their own peril. For example, suppose you had to purchase a new car tomorrow. How would you do it? If you are like the average person, you might look around online to gauge the new models available, their quality ratings, and their average prices. Then you would go to the local dealership, find the cars that interested you from online, take a few test drives, and select the one you liked the best. Following that, you and the dealer would haggle over the price, just as Albert and Barbara have done incessantly throughout this book.

This is an extremely bad plan. The theme of this chapter is that individuals benefit when others compete for their goods and services. Thus, it should go without saying that people generally ought not to intentionally limit their options when they engage in negotiations. And yet that is exactly what most people do when purchasing a car.

Even if you know exactly what you want, this does not make sense. There are likely a number of car dealers within a reasonable distance of your home. Each of them wants you to purchase your car from their dealership, not one of the others. Consequently, each dealer has incentive to underbid the others and win your business. Of course, most people overlook the opportunity to manipulate the dealerships in this manner. Instead, they select the closest dealership and try to lower that dealership's price as much as possible. Haggling is indeed useful here—as Chapter 4 showed, making counteroffers allows you to capture some of the surplus from trade—but it is not as useful as putting the dealerships in competition with one another and then haggling afterward.

To game the system, figure out the types of cars you might want to buy and the nearby dealerships that sell them; Google Local or a similar website can do most of the dirty work. Then find their phone numbers or email addresses. (The phone is better—it ensures that someone will respond to you in a prompt manner. But if you are less confident on the phone, email is a good fallback.) Tell them the car or cars you are interested in purchasing and ask them to quote a price. With any luck, after an hour's worth of work, you will have a bunch of options to choose from.

Although you could stop there and head to the dealership with the lowest price, you can go one step further. At this point, you hold in your hand the

 

best offer. This is now your fallback option. Call up (or email) the losing dealerships. Tell them that you have a lower offer from a different dealership and plan on buying from it but are open to counteroffers. See if they lower their price further. If they don't, cross them off the list. If they do, then you have a new best offer. Now repeat this process with the remaining dealers on the list until no one will go any further. Then buy the car from that dealership. This process may be time-consuming, but it can save you hundreds of dollars and be well worth your investment.

Dealers hate this for the obvious reason—more competition gives more of the trade surplus to you and less of it to them. As such, they might resist your ploy and refuse to give you an offer over the phone or via email. To keep your business, they may instead try to convince you to come by the dealership to negotiate a price. This is a deliberate attempt on their part to eliminate the competition for your business. Negotiating in person might allow you to lower their price for the car, but it will take a significant time investment on your part. Repeating this process at more than one dealership may waste the better portion of your day. Moreover, this ignores the amount of time you will spend when you go back to those dealers using the leverage of a better offer from elsewhere.

Instead, insist that the dealership quote you a price on the phone or via email. Standing firm on this point is not rude—it is business. (If anything, it is rude on their part for baiting you into wasting your time and money.) There is no good reason for them to insist on in-person negotiations. Call them on it. Tell the dealers that if they want your business, they will have to give you a price. If they still refuse, you lose nothing in the process—their resistance indicates that their prices are not competitive, so you will receive a better deal elsewhere anyway.

As you go through this process, be careful that you know exactly what the dealers are offering you. The exact specification of a car might vary from dealership to dealership. For example, one might have power windows, while the other might not. This is especially important if you are purchasing a used car—one dealer might be offering a lower price because the car had been in an accident a few years earlier. Less critical issues—scratches, cracked windows, ripped seat cushions—are still annoying and should affect your decision to buy.

Also, do not expect to negotiate much further once you are in person. If you have invested your time to go to a particular dealership, it signals to their

 

management that they have offered you the lowest price. Knowing that, they do not have much incentive to undercut the price further. After all, if you leave, you will only end up paying a greater price by going elsewhere.

In addition, if you cast a wide net, you will likely be negotiating with dealerships in different cities or even different counties. As a result, the sales taxes on a vehicle might differ from dealership to dealership. You probably overlook a 0.5% city sales tax when buying groceries, but that same tax adds $100 to the purchase of a $20,000 car. Be clear that you want to know the final price from each of the dealers and that you will not pay more than what you ultimately negotiate over the phone or via email.

As a final note, keep in mind that this same strategy applies to other purchases as well—hotel rooms and certain car repairs come to mind. However, the time it takes to call a bunch of companies and negotiate a price is non-negligible. When buying a car, the process is worthwhile given how expensive vehicles are. On the other hand, saving five dollars on auto repair might not be worth a couple hours' effort. Practical problems are also a hindrance—moving your car from mechanic to mechanic might require an expensive tow truck. Beware of the tradeoff between time and savings.
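As a toy illustration of the rebidding loop described above, the sketch below imagines dealers who each have a hidden floor price and will beat your current best quote by a small amount whenever they can. The floor prices, the opening quotes, and the $100 undercut are all invented for the example and are not claims about how any real dealership behaves.

```python
# Toy simulation of the "collect quotes, then rebid" strategy for car buying.
# Each dealer has a hidden floor (assumed numbers) and will undercut your
# current best offer by $100 whenever doing so stays above its floor.

DEALER_FLOORS = {"Dealer A": 24_300, "Dealer B": 23_900, "Dealer C": 24_600}
INITIAL_QUOTES = {"Dealer A": 25_500, "Dealer B": 25_200, "Dealer C": 25_800}
UNDERCUT = 100

def rebid(floors, quotes):
    best_dealer = min(quotes, key=quotes.get)
    best_price = quotes[best_dealer]
    improved = True
    while improved:
        improved = False
        for dealer, floor in floors.items():
            if dealer == best_dealer:
                continue
            counter = best_price - UNDERCUT
            if counter >= floor:  # a dealer will only go as low as its floor
                best_dealer, best_price = dealer, counter
                improved = True
    return best_dealer, best_price

dealer, price = rebid(DEALER_FLOORS, INITIAL_QUOTES)
print(f"Best final offer: ${price:,} from {dealer}")
# Repeated rounds of "I have a better offer elsewhere" push the price down to
# roughly the second-lowest floor, well below any dealer's opening quote.
```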

 

Hedging against Risk

Up until this point, all outside options were sure bets. But in real life, some outside options involve risk. For example, if Albert and Barbara were negotiating over Barbara's wage, she might not be certain whether the next job she takes will pay her $25 per hour or $30 per hour. If she were sure she would make $25 per hour, she would accept nothing less than that from Albert; if she were sure she would make $30 per hour, she would accept nothing less than that $30 wage from Albert. Absent that certainty, however, both actors must consider Barbara's tolerance for risk.

Imagine Barbara expects to receive $25 from the competitor 50% of the time and $30 the remaining 50% of the time. One might then expect she would need at least $27.50 from Albert to stay at her job—after all, $27.50 is the average amount Barbara expects to receive if she leaves her job. But Barbara might have a different preference for risk. For example, she might need to make at least $26 an hour to continue living in her current apartment. As such, it is perfectly reasonable and rational for her to accept a guaranteed offer of $26 from Albert rather than a 50/50 shot between $25 and $30—even though she would expect to receive more on average by pursuing a different job. We refer to such preferences as risk-aversion; that is, Barbara is averse to the risk of moving to a different employer. She is thus willing to pay a risk premium of $1.50 to take the guaranteed $26 rather than the expected $27.50.

This risk-aversion benefits Albert. If Barbara only wanted to maximize her expected wage, he would have to offer her $27.50 an hour. But if Barbara is indifferent between a guaranteed $26 and the 50/50 chance of $25 and $30, Albert only needs to offer her $26 to induce her to accept. That extra $1.50 remains with him. Similarly, Barbara may be even more risk-averse and prefer a guaranteed $25.50 over the gamble. This puts Albert in an even better bargaining position, as he can offer her that smaller amount and keep more of the surplus for himself. The lesson is simple: the willingness to take risks leads to a stronger bargaining position.
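The $26 figure in the example is simply a statement about Barbara's preferences, but the same quantities can be computed from any concave utility function. The sketch below uses an exponential (CARA) utility with a risk-aversion coefficient of 0.65, an illustrative calibration chosen only because it roughly reproduces the $26 certainty equivalent; the book does not specify Barbara's actual utility function.

```python
# Certainty equivalent and risk premium for Barbara's 50/50 gamble between
# $25/hour and $30/hour. The exponential (CARA) utility and the coefficient
# a = 0.65 are assumptions chosen to roughly match the text's $26 figure.
import math

A = 0.65  # coefficient of absolute risk aversion (assumed)

def utility(w):
    return 1 - math.exp(-A * w)

def certainty_equivalent(lottery):
    eu = sum(p * utility(w) for p, w in lottery)
    return -math.log(1 - eu) / A  # invert the CARA utility

lottery = [(0.5, 25.0), (0.5, 30.0)]
ev = sum(p * w for p, w in lottery)
ce = certainty_equivalent(lottery)

print(f"Expected value:       ${ev:.2f}")
print(f"Certainty equivalent: ${ce:.2f}")
print(f"Risk premium:         ${ev - ce:.2f}")
# Albert only has to beat the certainty equivalent (about $26), not the
# expected value ($27.50), so Barbara's risk aversion works in his favor.
```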

 

Application: Warren Buffett's Billion Dollar Challenge

March Madness is a 64-team, 63-game tournament held every year in March and April to crown a national college basketball champion. Many businesses and websites run gambling pools in which entrants must guess who will win each game. The winner is the person who picks most accurately. In 2014, Quicken Loans took the bracket contest to a whole new level: it offered $1 billion to anyone capable of creating a perfect bracket. While the prize seems outlandish, it is virtually impossible to guess every game correctly because the tournament consists of so many games. To wit, depending on an entrant's ability to assess the relative strengths of any given team, the odds of building a perfect bracket are around 1 in 128 billion to 1 in 9 quintillion.

Nevertheless, companies fear such enormous risk since one lucky bracket could bankrupt the entire enterprise. Thus, Quicken Loans insured the contest through Warren Buffett, an American multi-billionaire and one of the richest men in the world. In doing so, Quicken Loans paid a fixed amount to Buffett regardless of the outcome of the contest. If someone won, Buffett would have to pay the full billion dollars from his own pocket. The deal made Buffett happy because he could leverage Quicken Loans' risk-aversion against the company; Quicken Loans was satisfied since it could write off the payment to Buffett as an advertising expense.

Even so, Buffett held an ace up his sleeve. In the incredibly unlikely scenario that someone guessed the first 60 games correctly, Buffett planned to buy out that person. Indeed, there are strong incentives to bargain here. With two semifinal games and one final game remaining, the entrant would have had roughly a 1 in 8 chance of winning the billion dollars. And while a billion is an insane number, comparatively paltry sums like $100 million, $50 million, or even $10 million provide financial security for a lifetime. As a result, Buffett and the remaining competitor would have likely struck a deal. The entrant would have accepted less than the expected value of the contest to enjoy the security of guaranteed financial freedom. Buffett would have been happy with the outcome too, as he would be decreasing his expected losses. Thus, bargaining meant that the contest was never really worth a billion dollars—it was just about putting the entrants in position to bargain with Buffett.
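The potential buyout can be framed as a simple bargaining range. Under the rough assumptions that the surviving entrant has a 1-in-8 chance at the full $1 billion, that Buffett is risk neutral, and that the entrant's risk aversion is captured by a square-root utility (an illustrative choice, not anything from the text), any buyout price between the entrant's certainty equivalent and Buffett's expected loss leaves both sides better off.

```python
# Rough bargaining range for a Buffett buyout with three games left.
# Assumptions (illustrative): a 1-in-8 chance at the prize, a risk-neutral
# Buffett, and a square-root utility for the entrant.
import math

PRIZE = 1_000_000_000
P_WIN = 1 / 8

expected_value = P_WIN * PRIZE                    # also Buffett's expected loss
expected_utility = P_WIN * math.sqrt(PRIZE)
certainty_equivalent = expected_utility ** 2      # invert u(w) = sqrt(w)

print(f"Buffett's expected loss:        ${expected_value:,.0f}")
print(f"Entrant's certainty equivalent: ${certainty_equivalent:,.0f}")
# Any buyout between roughly $16 million and $125 million beats the gamble for
# this entrant and beats doing nothing for Buffett, so a deal is easy to strike.
```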

 

Application: Deal or No Deal

Deal or No Deal is a game show in which a contestant selects one of 26 briefcases, each containing a different amount of money ranging from $0.01 to $1 million. After choosing his own case, the contestant then picks some number of the cases that do not belong to him. The host reveals the amount of money inside each, giving the player some additional information about the value of his original case.

This is where the game gets interesting. A producer from the show known as "the banker" sits above the studio in a dark room. The banker's task is to buy the briefcase from the contestant, effectively ending the game. He holds no additional information about the contents of the briefcase that the player does not know. Thus, the banker offers the contestant some amount of money. The contestant hears this offer and chooses deal or no deal. Deal ends the game, giving the player the amount of money the banker offered. No deal means the contestant must open additional briefcases and reveal more information. This process then repeats until the player accepts a deal or ultimately rejects all deals and takes whatever amount of money was in his original briefcase.

The producers of the show are playing a long game—they can absorb a player winning a million dollars every now and then. Consequently, the banker primarily tries to minimize the expected amount of money the producers give away. The contestant's choice is more complicated. Of course, contestants want to win money. But the game also entails a substantial amount of risk. For example, the contestant may come down to a situation in which two cases remain: one with $1 and one with $1 million. The expected value of his briefcase is therefore $500,000.50. However, for the reasons outlined previously, that individual might be willing to sacrifice some of that expected amount for a guaranteed payoff. Indeed, the banker might offer $400,000 and the contestant might accept. Both parties win: the banker saves about $100,000 in expectation, and the contestant goes home happy with financial security.

Of course, on the show, contestants often rejected initial offers. In other words, bargaining sometimes failed. One reason bargaining failure can occur is that the banker faces uncertainty about how risk-averse the contestant is and uses smaller offers up front to screen out the more desperate types. We will revisit this learning process in a couple of chapters.

 

 

 

Conclusion

In sum, outside options are critical to driving concessions at the bargaining table. After all, good competing offers force the other side to increase its offers and decrease its demands to satisfy your needs. You, in turn, receive a greater share of the bargain. Of course, not all outside options are created equal. Intentionally cutting yourself off from outside options (as in the car example) effectively renders them useless. On the other hand, making yourself unique forces various potential bargaining partners to outbid each other, leaving the entire surplus in your capable hands. Still, not all outside options are risk-free, and fear of the unknown results in worse deals.

Throughout this chapter, we looked at cases where outside options were inherently credible. Sometimes, however, your bargaining alternatives are not as easy to exercise. The next chapter develops methods to manipulate the system and stay on top.

 

Chapter 6: Making Threats Credible

Think back to the original ultimatum game. Barbara was unable to secure a larger share of the surplus because she could not credibly threaten to reject puny offers from Albert. Thus, even if she boldly stated, "I will only accept offers that give me at least half of the surplus," Albert recognized that she would still accept smaller shares if she only wanted to maximize her economic welfare. After all, those amounts would leave her better off than if she had rejected the offer.

Nevertheless, the desire to convince Albert that she would in fact reject such smaller offers is obvious. Indeed, if Albert believed Barbara, he would increase his offer to her. This would make Barbara happier at the expense of Albert's welfare. However, because Barbara has no incentive to follow through on her threats, Albert can safely ignore her words.

If a lack of incentive to follow through is the problem, then the solution for Barbara is to develop mechanisms that force her to reject smaller offers. That, in turn, would make her threat credible and force Albert to give her more of the wealth. This chapter explains how she might accomplish that task.

 

Tying Hands

While the previous chapter advised that limiting one's options is generally counterproductive, this section goes over a notable exception to the rule. Surprisingly, making some actions impossible (or, at a minimum, unbearably painful) can sometimes make threats believable, which in turn improves an individual's bargaining position.

For intuition, consider the following anecdote. Though variants of it likely date back to mankind's first armies, Thomas Schelling, winner of the 2005 Nobel Prize in economics for his work in game theory, often receives credit for the following tale.

Two countries, the Kingdom of Albert and Barbaraland, are embroiled in a war. A small island sits between them. Two bridges go to the island, one from each country. Both sides would prefer taking control of the island. However, it is not particularly valuable. As such, both would prefer retreating to engaging in a costly battle over it.

At the beginning of the conflict, the Kingdom of Albert's military scrambled across the bridge and took up defensive positions around the island, knowing that they could not reach and destroy Barbaraland's bridge in time. Then, to the shock of King Albert's civilians, the generals set demolition charges on their own bridge. The subsequent inferno burned it down.

Enraged at the seemingly wasteful maneuver, civilians stormed the capital and demanded the King's resignation. After many tense hours, Albert finally emerged to address the masses. "I have good news," he proclaimed. "Barbaraland has conceded control of the island. We have won!"

But King Albert's words failed to calm the crowd. Angry murmurs swirled until one vocal protester yelled, "Yeah? So what? You destroyed our bridge!"

Albert let out a heavy sigh. "Yes," he said. "That's the point. We have the island because I destroyed the bridge."

This only fanned the flames. "That makes no sense!" yelled the vocal protester.

Again, Albert let out a sigh. "Have none of you taken game theory?" he asked rhetorically, as the protesters rushed the doorway. Two hours later, after a hasty and historically inaccurate show trial, King Albert was executed for general incompetence.

Despite Albert's terminal outcome, his decision was in the country's best interest. Before tying the noose, the lynch mob should have considered the

 

counterfactual case in which the Kingdom of Albert’s troops did not burn down their own bridge. To do this, a game tree becomes necessary. The interaction has three simple steps. First, Albert decides whether to burn down his own bridge. Second, Barbara chooses whether to have her troops invade the island. Finally, Albert decides whether to retreat. But note that this last option is only available if he did not burn down the bridge initially; without the bridge, Albert’s forces have no choice but to fight. Here is the tree:  

As before, larger numbers reflect more preferred outcomes for each player. Comparing the outcomes in which Barbara stays (and hence does not attack Albert's forces), the Kingdom of Albert is indeed better off not burning the bridge than burning it. After all, all else equal, having the bridge is better than not having the bridge. However, the all else equal qualifier is misleading. Albert cannot strategize in a vacuum. Instead, he must think strategically and consider how his actions affect Barbara's decisions. To analyze this properly, both players should start at the bottom of the tree and work their way upward. Unlike past interactions, this game has two major branches, diverging where Albert decides whether to burn the bridge. Albert and Barbara will therefore have to fully consider both branches from the bottom up. So consider the side in which Albert burns the bridge:

 

Barbara earns 0 if she stays and maintains the status quo. If she attacks, she earns -1, as invading the island prompts Albert to fight. But recall that the island is not particularly valuable, and therefore Barbara prefers not engaging in a costly conflict over it. As such, Barbara's troops optimally stay at home in Barbaraland, just as occurred in the anecdote at the beginning of this section. Now consider the bottom of the other branch, where Albert decides whether to fight or retreat once Barbara has attacked the island:

The roles are reversed on this side of the game tree. If Barbara attacks, it is up to Albert to initiate the fight or give up control of the island. Once more, the island is not worth a major battle. Thus, Albert would retreat and take a payoff of 0 rather than fight and receive -1. Barbara can use this knowledge to make a better-informed decision after Albert does not burn his bridge:  

 

  If Barbara stays, she receives her default payoff of 0. If she attacks, she prompts Albert to retreat. This gives her control over the island and a payoff of 1. Now for the final step. Albert knows how the game will play out if he burns the bridge or if he does not burn the bridge. Thus, he merely needs to compare the two outcomes and see which is better:  

If Albert burns the bridge, he can credibly threaten to fight. In turn, Barbara stays put and he earns 1. If Albert does not burn the bridge, Barbara will attack, which induces Albert to retreat. He ultimately receives 0 as a result. As such, Albert ought to burn the bridge. Why does Albert benefit from burning down the bridge? The credibility of threats matters. Albert would like to convince Barbara that his threat to engage in a battle over the island is credible since this would deter her from invading. But as long as the bridge remains standing, Barbara knows his threat is inherently incredible because Albert would evacuate if challenged. In contrast, if Albert demolishes the bridge, his hands are tied—his troops have no choice other than to fight and thus will go to battle if Barbara’s army invades. Thus, in the second case, Albert’s threat to defend the island is credible, which in turn successfully deters Barbara. The only way he can keep control of the island is to tie his hands and make it impossible to leave.  
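The backward induction can be checked mechanically. The sketch below encodes the game tree using the payoffs described in the text (1 for Albert keeping the island peacefully after burning, 0 for the status quo or a retreat, -1 for fighting; 1 for Barbara taking the island, 0 for staying home, -1 for fighting). The payoff of 2 for Albert when he keeps both the bridge and the island is an assumed number, consistent with the text's remark that having the bridge is better than not having it.

```python
# Backward induction for the bridge-burning game. Terminal payoffs are
# (albert, barbara) tuples; the 2 in the (keep bridge, stay) outcome is an
# assumption, since the text gives no exact number for that case.

def solve():
    # Branch 1: Albert burned the bridge, so he must fight if attacked.
    barbara_burn = max([("stay", (1, 0)), ("attack", (-1, -1))],
                       key=lambda x: x[1][1])           # Barbara picks her best payoff

    # Branch 2: bridge intact, so Albert chooses fight or retreat after an attack.
    albert_after_attack = max([("fight", (-1, -1)), ("retreat", (0, 1))],
                              key=lambda x: x[1][0])    # Albert picks his best payoff
    barbara_no_burn = max([("stay", (2, 0)), ("attack", albert_after_attack[1])],
                          key=lambda x: x[1][1])

    # Top of the tree: Albert compares the two branches.
    albert_choice = max([("burn", barbara_burn[1]), ("keep bridge", barbara_no_burn[1])],
                        key=lambda x: x[1][0])
    return barbara_burn, barbara_no_burn, albert_choice

burn_branch, no_burn_branch, top = solve()
print("If Albert burns:    Barbara", burn_branch[0], "-> payoffs", burn_branch[1])
print("If Albert keeps it: Barbara", no_burn_branch[0], "-> payoffs", no_burn_branch[1])
print("Albert's best opening move:", top[0], "-> payoffs", top[1])
# Output matches the text: burning deters Barbara and earns Albert 1,
# while keeping the bridge invites an attack and leaves him with 0.
```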

 

Application: Tripwires, West Berlin, and the Cold War

The Cold War featured an ongoing coercive bargaining relationship between the United States and the Soviet Union, with each side vying for geopolitical supremacy. However, at the same time, both parties desperately wanted to avoid a war that could destroy the entire planet. Thus, the cold warriors' moves and countermoves needed to strike a balance between extending reach and avoiding gunfire.

West Berlin was a particularly delicate space on the world chess board. It was a city all alone, a part of West Germany but surrounded on all sides by hostile East Germans and their communist comrades in Moscow. This geographic positioning left it especially vulnerable to invasion. The United States, of course, wanted to keep West Berlin in the Western bloc but needed a way to deter Soviet aggression.

In hostile interstate diplomacy, countries normally gain an advantage by building power and stationing it at the front lines. Here, this would mean sending large waves of American and other allied soldiers into West Berlin to shield the city from an Eastern assault. However, this would come at the cost of a military operation with no foreseeable end in sight.

Instead, leaders in Washington developed a simpler but still effective strategy. Rather than station hundreds of thousands of troops in West Berlin, the United States opted for fewer than 10,000. These units were incapable of properly repelling an attack; if Moscow wanted to retake the city, the American troops could do nothing to stop the communists. Yet the tactic worked—the Soviet Union never marched on West Berlin.

Why? Obviously, there are many different explanations for this observation. However, the tripwire was a critical component. One way to maintain the peace was to convince Moscow that any battle for Berlin would not end there. Sure, the Eastern forces could have routed the Western troops. Yes, East Germany could have reunified Berlin. But at what cost? A few thousand Americans would die in the process. This would have caused enough domestic political upheaval in the United States that Western forces would have likely retaliated, perhaps in Germany or perhaps elsewhere.

The logic of Thomas Schelling's bridge burning should come into focus now. Working backward, Moscow could reason that a strike would prompt retaliation, but only if the United States had soldiers stationed there to act as a tripwire. In turn, the United States could better secure West Berlin by putting a minimal force on the ground there. Although these deployments seemed

 

risky because they would make a major war inevitable in the event of invasion, Washington’s limited options made America’s retaliatory threat credible. This forced the Soviet Union’s compliance, thus justifying the seemingly crazy decision to station troops in Berlin in the first place.  

 

Bargaining by Proxy: Tough Union Negotiators and Aggressive Attorneys

The bridge burning and Berlin examples show the power of credible commitments. Unfortunately, making commitments credible is not always easy. Recall the problem an individual faces if he wants to pretend that he values fairness in an ultimatum game. If he truly does, then he will reject low offers when push comes to shove. But if he does not, then he will accept low offers. Obviously, the receiver benefits from valuing fairness in this case. However, someone who does not value fairness cannot magically compel a rival to make more generous offers and is thus stuck accepting lowball amounts.

One solution to the problem is to hire an agent to take over the bargaining process for the individual. Ordinarily, using an agent is risky. Agents are costly and often have divergent interests. Thus, a principal must worry that his agent will do something different from what he hired the agent to do. In an ideal world, bargainers would therefore seemingly want to avoid agents. But this principal-agent problem comes in handy during the bargaining process.

Consider a union negotiator with a reputation for extracting extremely pro-labor deals out of employers during strikes. Perhaps the company is willing to give up to an additional $12 per hour while union members will settle for a $2 per hour raise. In a model with offers and counteroffers, we would expect the parties to agree to a division between those two amounts. That changes with the special negotiator in the picture. Now the company must worry that he will reject lower offers so as to protect his reputation for toughness; after all, if the negotiator accepts an awful deal for the labor union, other unions will no longer wish to hire him. Suppose the minimum amount he can accept while protecting his reputation is $6 per hour. This time, we would expect the parties to reach an agreement between $6 and $12 per hour, which is a substantially better range than before. And, indeed, this should be enough to cover the cost of hiring the negotiator in the first place. (Perhaps the union and negotiator had their own bargaining game to determine his wages!)

Interestingly, note that the "special" negotiator in this example is not all that special. He needs no particular brainpower, and he does not craft a clever bargaining strategy to extract concessions out of the company. Instead, his bargaining power comes purely from his reputation and his desire to keep it. In turn, his threat to reject low offers becomes credible, even though the union would

 

accept some of these offers. As a result, the company concedes a higher wage to appease the negotiator, which indirectly benefits the union.
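As a rough numerical check on this argument, suppose (as an added assumption beyond the text) that the parties end up splitting whatever bargaining range remains evenly. Hiring the negotiator shifts the range from [$2, $12] to [$6, $12], and the expected raise shifts with it.

```python
# Illustration of how a reputation-bound negotiator shifts the bargaining range.
# The $2/$12 bounds and the $6 reputation floor come from the text; assuming the
# parties split the remaining range evenly is an added simplification.

def midpoint(low, high):
    return (low + high) / 2

without_agent = midpoint(2, 12)   # union negotiates for itself
with_agent = midpoint(6, 12)      # negotiator credibly rejects anything below $6

print(f"Expected raise without the negotiator: ${without_agent:.2f}/hour")
print(f"Expected raise with the negotiator:    ${with_agent:.2f}/hour")
print(f"Gain from hiring him:                  ${with_agent - without_agent:.2f}/hour")
# Under this even-split assumption, the $2/hour gain is what the union can
# afford to pay the negotiator and still come out ahead.
```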

 

Chapter 7: Bargaining with Uncertainty

Thus far, the actors have known everything about each other as they bargained, including how much the other party would receive if negotiations ended without a deal. But this is a strong assumption. For example, a company may not know whether an employee expects to receive a high wage or a low wage if he seeks employment elsewhere. This puts the company in a dilemma—should it offer its employee a high wage to ensure that he will stay with the company, or should it offer a smaller amount, knowing that it might lose the employee but that it will save money if he stays? Ultimately, the firm must develop some sort of bargaining strategy to manage the risk. This chapter explores the optimal path.

 

The Risk-Return Tradeoff and Extremely Simple Uncertainty

Back when Albert and Barbara bargained over her car, we assumed that Albert was the only potential buyer of the vehicle. But can Albert really be sure Barbara does not have an alternative buyer in case negotiations fail? And if so, how does Albert mitigate the risk of a deal falling through?

Recall from before that Albert values the vehicle at $5000 and Barbara values the vehicle at $4500. Again, with simplicity in mind, imagine Albert believes that he is the only buyer with probability p, where p falls somewhere between 0 and 1. Thus, with probability 1 – p, Barbara has another buyer. To further specify Albert's problem, suppose Barbara will receive $4600 from the other buyer if that buyer exists and that the outside offer is rigid—that is, she cannot renegotiate it after Albert has made an offer to her. What is Albert's optimal proposal?

Before going further, note that if Albert knew whether Barbara had an alternative buyer, he would always successfully purchase the car. Specifically, without the other buyer, the bargaining game is identical to all of the ordinary bargaining games we have analyzed previously. Thus, Albert offers $4500 and Barbara accepts. If Barbara has an alternative buyer, then the car essentially becomes worth $4600 to her. Consequently, rather than offer $4500, Albert can bump up his offer by $100 and still induce Barbara to accept it.

Nevertheless, Albert's uncertainty can lead to bargaining failure. Indeed, Albert's optimal offer depends heavily on whether he believes Barbara has an alternative buyer. If Albert is skeptical that the other buyer exists, he will offer Barbara her reservation value of $4500 and hope that she can only sell to him. Sometimes, his plan will backfire—if Barbara actually did have another buyer, she would reject Albert's proposal and seek the alternative. But, in expectation, the low offer is worth the gamble. On the other hand, if he believes Barbara is very likely to have another buyer, he will match Barbara's hypothetical outside offer. Here, the gamble is no longer attractive because it will backfire too often.

Albert's dilemma is more generally known as the risk-return tradeoff. When choosing an offer size, lower values increase the risk of being rejected. Yet, when the lower offers work, the buyer receives the good at a lower price and thus earns a greater return on his investment. Conversely, offering larger amounts decreases the risk but makes the buyer pay a higher price. Hence the tradeoff. To see this in action, consider the game tree below:

 

 

The structure is similar to before with one wrinkle. After Albert makes his offer, the game shifts to Barbara without the alternative buyer with probability p and to Barbara with the alternative buyer with probability 1 – p. Albert does not know which way the game will go when he makes his decision, but Barbara does. This matches the previous description of the interaction. As always, Albert must start at the end and work backward to find his optimal offer. Consider Barbara's decision if she lacks the alternative buyer:

Since her value for rejection equals $4500, she is willing to accept any amount of at least that value. Now consider Barbara’s decision if she has the alternative buyer:  

 

This time, her rejection value equals $4600. Thus, she must receive at least $4600 to accept. With Barbara's decisions out of the way, consider Albert's proposal.

Offering anything less than $4500 cannot be optimal because it guarantees Barbara's rejection. In contrast, he could offer $4600 and guarantee the purchase. This would give him a profit of $5000 minus $4600, or $400, which is better than no profit at all. Consequently, offering less than $4500 cannot be optimal.

On the other end of the spectrum, offering any more than $4600 cannot be optimal either. Such an amount ensures that Barbara will accept. However, a slightly smaller amount (but still above $4600) still guarantees Barbara's acceptance but saves Albert some amount of money. So any amount greater than $4600 cannot be optimal.

What about any amount between $4500 and $4600, not including $4500 and $4600 exactly? Note that any offer in this range implies that Barbara accepts if and only if she lacks the other buyer. Yet this means Albert could shrink his offer to a slightly smaller amount (but still above $4500) and continue to have Barbara accept only if she has no outside option. This saves Albert some money on the sale, thereby ensuring that any amount between $4500 and $4600 is not optimal.

This last point might prove a little tricky, so consider the following example. Can offering $4550 be optimal? Without the other buyer, Barbara accepts because she makes $50 profit on the transaction; with the other buyer, Barbara rejects because she could make $50 more by going to the other guy. Now consider an offer of $4549 instead. Once again, Barbara without the other buyer accepts because she makes $49 in profit by doing so. Barbara with the other buyer still rejects, since the other guy is offering more. So Albert is equally likely to make the purchase as before—he just does so at a lower price. Therefore, $4550 cannot be optimal. But this logic applies to any offer between $4500 and $4600, thus ruling out all of those possibilities.

 

This leaves exactly two possible offers: $4500 and $4600. At this point, Albert's final remaining step is to calculate his expected payoff for each choice and select the one that produces the greater profit. How can he do this? If Albert is risk neutral, the solution is as simple as calculating how much he will earn in each case and weighing that possibility by the likelihood that it occurs.

If he proposes $4600, this process is trivial. Such an offer guarantees that Barbara will accept, netting him the remaining value of $400. On the other hand, offering $4500 leaves open two possibilities. With probability p, Barbara does not have another buyer and thus accepts the offer. Albert earns $500 in this case. With probability 1 – p, Barbara has another offer and thus rejects the offer. Albert earns $0 in this case. Consequently, his expected payoff equals (p)($500) + (1 – p)($0), which is the weighted average of these possibilities.

Now that we have calculated a value for both cases, we can find when Albert prefers offering the $4500 sales price:

(p)($500) + (1 – p)($0) > $400
(p)($500) > $400
p > $400/$500
p > 4/5

So if Albert believes the chances that Barbara can only make a deal with him are greater than 80%—in other words, he is almost certain that she is vulnerable—he goes aggressive, offers her only $4500, and risks having the deal collapse. If Albert believes those chances are less than 80%—in other words, Barbara is in a strong bargaining position—he plays it safe, offers her $4600, and guarantees the sale. And when the probability is exactly equal to 4/5, he is indifferent between the two.

Although simplistic, this first glimpse of uncertainty reveals a powerful result: Albert's lack of information causes him harm. He has two choices. First, he can make a safe offer, but this causes him to lose out on stealing more from Barbara when she lacks the outside option. Second, he can make the risky offer, but this sometimes causes him to lose out on the car entirely. Neither alternative is particularly attractive. As such, Albert cannot win here; he can only lose less.

On the other hand, Barbara can sometimes emerge victorious. Suppose p

 

is less than 4/5, meaning Albert believes that Barbara is somewhat likely to have a second buyer. Now consider what happens in the case where she lacks that buyer. Albert does not know this and still offers her $4600. Without the uncertainty, Albert would just offer her $4500. Thus, in this particular instance, she successfully tricks Albert into offering her more than she would have otherwise received. As such, a party can gain a strategic advantage in bargaining by making the other side believe he or she has outside options even if no such options exist.

One might wonder why Albert doesn't start by offering $4500, wait to see if Barbara rejects, and then offer her $4600 if she does. Albert would obviously benefit here if such a tactic worked because he could always buy the car from Barbara at the absolute minimum and never miss out on a sale. But this fails to work in practice because rejection is not costly to Barbara. Put differently, suppose Albert would offer $4600 if Barbara rejected the first time. Now consider how Barbara should react if offered $4500 initially when she lacks an alternative buyer. Should she accept Albert's offer? Clearly not—she could reject the first offer, knowing that a second offer worth $4600 will follow. Surprisingly, this leaves Albert in a worse position than if he had offered $4500 and committed to immediately walk away if Barbara rejected.
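The cutoff can be checked directly. The sketch below computes Albert's expected profit from each of the two candidate offers across a handful of beliefs; all of the dollar figures come from the text, and the grid of p values is just for illustration.

```python
# Albert's risk-return tradeoff: compare the safe $4600 offer with the risky
# $4500 offer as his belief p (the chance Barbara has no other buyer) varies.

ALBERT_VALUE = 5000
SAFE_OFFER = 4600      # accepted by both types of Barbara
RISKY_OFFER = 4500     # accepted only if Barbara lacks the other buyer

def expected_profit(offer, p):
    if offer >= SAFE_OFFER:
        return ALBERT_VALUE - offer        # accepted for sure
    return p * (ALBERT_VALUE - offer)      # accepted only with probability p

for p in [0.5, 0.7, 0.8, 0.9, 1.0]:
    risky = expected_profit(RISKY_OFFER, p)
    safe = expected_profit(SAFE_OFFER, p)
    better = "risky $4500" if risky > safe else "safe $4600"
    print(f"p = {p:.1f}:  risky = ${risky:6.0f}, safe = ${safe:4.0f}  ->  offer {better}")
# The risky offer wins exactly when p * 500 > 400, i.e. when p > 4/5, matching
# the algebra in the text (at p = 0.8 the two offers tie).
```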

 

Application: Negotiating with a Car Dealership Redux

The theme from the previous section is that what you don't know will hurt you. Thus, the primary solution to the problem is to learn about the other side. In particular, if you are a buyer, you want to know the seller's minimum price. And if you are a seller, you want to know the buyer's maximum budget.

Unfortunately, you cannot know everything in most bargaining situations. However, this type of information is readily available when purchasing a new car. Dealers receive their vehicles from manufacturers at the invoice price. Ignoring overhead and employee wages, knowing an invoice price essentially gives you the minimum price a dealer can give you without losing money on a deal. Luckily for you, these prices are readily available online; Edmunds and TrueCar are a couple of invaluable resources.

The invoice price puts you in the negotiating driver's seat: you know their minimum price, but they do not know your maximum price. Before talking to any dealership, look up the invoice price for the vehicle you want. Then call or email potential dealerships to have them bid down one another. If the lowest offer you receive is still substantially greater than the invoice price, you can likely negotiate the price even lower before you go to purchase the car.

 

Application: Driving Home a Dealership Vehicle

Continuing with the car theme, let's investigate a tactic that dealerships use when you try to purchase a new car. After going through a few options, dealers often encourage prospective buyers to drive their favorite vehicle home and see if it suits them. Taken to the extreme, the dealer may even encourage the prospective buyer to take it home overnight to mull over the decision. This seems harmless enough. So suppose you are in this situation. Assuming that you have no practical reason to do this (like checking to see whether it fits in your garage), should you do it?

To answer this question, you first need to understand the dealer's motivation for offering you the option. After all, the dealer risks having someone drive away the car and never come back. Consequently, the dealer must have something going for himself here. And, indeed, he does. The theory is that taking the car home has a psychological effect on most customers. Seeing it in their garage or driveway reframes the situation so that the buyer believes the car is already his and will therefore do more to keep it. For the dealer, "do more" here means spend more money on the car. Thus, the dealer sees sending the car home with the customer as a moneymaking tactic. This is good for the dealer and bad for the customer. In turn, it appears as though you should politely decline the dealer's suggestion.

But after having read the last paragraph, you now know that the offer to take home the car is a mind trick. With that knowledge, you should be able to resist paying any more for the car simply because it sat in front of your house for a few minutes. Given that you are immune, does it become okay to accept the dealer's offer?

Your decision might seem irrelevant here when thinking of your options in a vacuum, but thinking strategically tells a different story. The car dealer is unsure whether taking the car home makes you willing to pay a greater price to purchase the vehicle. If he suspects that it is likely, the dealer may act tougher during the bargaining process, stand firm against lower offers from you, and propose higher prices instead. In turn, you may end up paying more for the car even though taking it home had no psychological effect on you. All told, agreeing to the dealer's ruse will never help but certainly will hurt you in some instances. As such, you should politely decline the offer unless you have a pressing need to do so.

 

Labor Strikes and Screening

While rejection leads to inefficient outcomes, sometimes a rejection today can lead to agreement tomorrow. Consider a bargaining problem between a labor union and management. If management believes that employees are willing to accept a small wage or that the union has little money saved to initiate a protracted labor dispute, its optimal proposal given the risk-return tradeoff is to go aggressive. Since it fully expects the union to cave in, saving a lot of money per employee is worth the risk of occasionally facing a stronger organization and suffering rejection.

But suppose the union actually is strong. What happens then? On one hand, it seems like a waste of energy to force all the employees to find work elsewhere when the company could keep them by offering a higher wage. On the other hand, if management always increased its offer immediately, then weaker unions would pretend to be stronger, reject initial offers, and accept the higher amounts. As such, it might seem that management is in a no-win situation.

Fortunately, delay between offers can serve as a viable screening mechanism. It is true that the more vulnerable types of unions would not accept initial offers if they knew that management would make a larger offer immediately thereafter. However, bargaining normally takes place in a more deliberate manner; a rejected offer usually leads to both sides spending some time apart before returning to the table. This potential delay convinces weaker types to accept up front. Yes, they could reject the initial offer now and earn more later. But, in the meantime, they will go without a deal, without employment, and without any money coming in. Strong types—by their very nature—can endure these hardships. Weak types cannot. Thus, labor strikes are nothing more than an attempt by unions to credibly convince management of their strength. And overall, the bargaining process merely sorts the weak from the strong.

Just like one-shot bargaining, inefficiency is still a real problem here. On the bright side, bargaining is not as inefficient as before. Whereas previously rejection led to no deal ever, now it only means delay in agreement.

 

Application: The 2013 CBS/Time Warner Cable Blackout

The logic of uncertainty and information transmission also helps explain the breakdown in bargaining between CBS and Time Warner Cable in August 2013. In the United States, broadcast networks like CBS, ABC, NBC, and Fox transmit over the air, meaning anyone with a television antenna can view their programs for free. However, a sizable chunk of U.S. customers receives the broadcast signals through their cable provider. Cable companies pay a retransmission fee to the network for this right. Prior to August 2013, Time Warner Cable paid less than $1 per customer per month to CBS for retransmission. (This section is unfortunately short on specifics because negotiations took place in private. What is known comes from leaks to the media.) CBS, the most watched network in the United States at the time, sought to increase this amount to $2. Time Warner Cable balked at the price, and it blacked out CBS and CBS-owned channels for an entire month.

Why did bargaining fail? Based on the actions the parties took over the month-long blackout, it appears the companies had different beliefs about how customers would react to the spat. CBS began purchasing commercials telling Time Warner Cable customers to call their provider and demand that CBS return to their screens. Time Warner Cable was being too stingy, the ads claimed. After all, ESPN receives $6 per customer. Isn't CBS, America's most watched network, worth $2? Failing that, CBS urged customers to switch cable providers or move to satellite. The message was clear: CBS wanted customers to believe that Time Warner Cable was at fault while CBS was the good guy, merely wanting to get its shows back into its customers' living rooms.

Needless to say, Time Warner Cable thought differently. Instead of blocking CBS with a black screen (as the blackout term would imply), Time Warner Cable aired a slide explaining CBS's disappearance. The company argued that while CBS wanted to charge the $2 to Time Warner Cable, that amount would eventually fall on the customers; thus, this was a stand against higher cable prices. Time Warner Cable offered its customers alternatives as well. CBS freely streamed its shows on its website; customers could stay current by hopping online. Alternatively, they could purchase a $15 antenna and watch CBS as normal; that antenna would more than pay for itself over the course of a year if Time Warner Cable caved to CBS's demands, which would force the customers to pay that additional dollar each month to see CBS via the cable feed. This last point also pushed back on CBS's

 

comparison to ESPN—while the $6 customers pay allows them to watch ESPN, the $2 to CBS is nothing more than a convenience fee to not have to use an antenna.
In any case, the parties reached an agreement a month into the blackout. Over that time, each side's perceptions of how customers would react were put to the test. With each side's position established, they could see how customers were reacting and could better understand how solid their bargaining positions were. And with the uncertainty gone, the sides could come to terms just as we would expect them to in a standard model of negotiations without any bargaining frictions.

 

Application: The October 2013 United States Government Shutdown
For sixteen days in October 2013, the United States Federal Government shut down for business. Hundreds of thousands of federal workers went on furlough. National parks shut their gates. Even the World War II Memorial was closed to the public.
While several theories explain why the shutdown occurred, uncertainty appears to be an important component. In short, the U.S. Congress could not reach a budget agreement due to the Republicans' desire to defund the Affordable Care Act, otherwise known as Obamacare. Democrats, meanwhile, wanted to keep Obamacare running and refused to cave in prior to the deadline to pass a budget. Sixteen days later, the Republicans gave up the fight and passed legislation that included funding for Obamacare.
What made the whole debacle all the more interesting is that few gained anything from the shutdown. Democrats most wanted to keep funding for Obamacare and least wanted a shutdown. Moderate Republicans—those with enough voting power to pass legislation—most wanted to defund Obamacare and least wanted a shutdown. Thus, the shutdown represents bargaining failure. Without bargaining frictions, we would expect the parties to reach a negotiated agreement that would leave both better off than had a shutdown occurred. Yet the United States government closed for sixteen days anyway.
The parties' behavior during the shutdown reveals why. The public relations machines on both sides went into overdrive, trying to convince the public that they were in the right. Democrats appeared to believe that the public would blame the Republicans. Republicans appeared to believe the public would blame the Democrats. Blame is important here; if the shutdown damaged one party's reputation enough, it could have led to a shift in power following the 2014 election. But given the disagreement, it is easy to understand why a shutdown would occur. Democrats refused to be pushed to the brink because they thought they would win relative to the Republicans if the country fell off the cliff. Republicans also refused to be pushed to the brink because they thought they would win relative to the Democrats if the country fell off the cliff. So both sides willingly jumped.
It quickly became clear which theory was correct. A few days after the shutdown began, the first polls came out. While the shutdown negatively affected both sides' approval ratings, the Republicans took the brunt of the blow. Poll after poll confirmed that the initial numbers were not a statistical

 

fluke and that further debate on the issue was unlikely to significantly alter voters’ opinions. With the uncertainty gone, moderate Republicans agreed to the Democrats’ demands. President Obama signed the deal just before the United States ran out of money to pay its debts, at which point approval ratings would have undoubtedly plummeted even further. The whole ordeal baffled many commentators, who noted that the deal signed sixteen days after the shutdown began could have just as easily been signed at the start. And while this is technically true—the world did not fundamentally change over those sixteen days—it is inaccurate to say that the shutdown had no effect. Indeed, the inefficient process exposed information. And as we have seen, information revelation is an extremely important factor in reaching agreements.  

 

Application: Out-of-Court Settlements
If you watch your average television legal drama, cases often last forever and end in a climactic verdict, with a shocking revelation that the accused is guilty or not guilty. In reality, however, legal proceedings are far less dramatic. Indeed, very few cases actually reach a verdict. Most end well before that stage. Bargaining theory can tell us why.
The critical insight is that going to court is costly. While we saw this on the small scale in Chapter 2 with the $100 cost of going to trial over security deposits, the same is true on the macro scale. In fact, serious trials can cost millions of dollars because lawyers and legal teams require a hefty fee. As a result, both the plaintiff and the defendant have incentive to reach an out-of-court settlement and save on those costs of conflict.
Of course, the possibility of a surplus does not always mean parties will successfully divide it. As this chapter has discussed thoroughly, one reason bargaining fails is due to uncertainty over the value of opposing outside options. In the case of courts, bargaining breakdown can occur when one side overestimates its ability to win at trial. This is easy to see in the extreme. Suppose the plaintiff believes he would prevail with certainty, while the defendant accurately understands that the actual chances are a 50/50 coin flip. As the parties bargain pre-trial, the plaintiff would demand a significant sum to settle out of court—after all, he believes that he would win the case for sure. The defendant would ignore these demands, knowing that he could achieve a better outcome in expectation by facing a trial.
But just because bargaining fails before a hearing does not mean it cannot succeed later. Indeed, litigants often settle after a trial has started but before the judge or jury has rendered its verdict. Given that trials begin as the result of overconfidence in the probability of victory, the trial itself can resolve the cause of bargaining failure. This is because hearing the evidence in court transmits information to the overly confident party, which in turn leads him to lessen his demands. Once the parties' beliefs converge enough, they can reach a mutually preferable settlement and save on the costs of their attorneys' fees.
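A small sketch makes the overconfidence problem easy to see. The damages figure and the per-side trial cost below are hypothetical numbers chosen purely for illustration; the logic, not the dollar amounts, is the point.

# Illustrative sketch (hypothetical figures): how overconfidence erases the settlement range.
def settlement_range(p_plaintiff, p_defendant, damages=1_000_000, trial_cost=100_000):
    """Each side's expected value of going to trial, net of its own legal fees."""
    plaintiff_min = p_plaintiff * damages - trial_cost   # least the plaintiff will accept
    defendant_max = p_defendant * damages + trial_cost   # most the defendant will pay
    return plaintiff_min, defendant_max

# Pre-trial: the plaintiff is sure he wins, while the defendant knows it is a coin flip.
print(settlement_range(1.0, 0.5))   # (900000.0, 600000.0): no overlap, so the trial begins
# After the evidence pulls the plaintiff's belief down to the truth:
print(settlement_range(0.5, 0.5))   # (400000.0, 600000.0): a settlement range reopens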

 

Application: Negotiating Peace during War
Broadly, all wars fall into one of two categories: absolute and limited conflicts. In absolute wars, countries fight each other until one side completely defeats the other side militarily. World War I and World War II are classic examples. In limited wars, countries fight for a while before reaching a negotiated settlement. The Korean War, which ended in a military stalemate and a split of the Korean Peninsula into two countries, is an example.
Which has occurred more frequently? You might be tempted to say absolute wars, especially since the United States' most recent involvements in Afghanistan, Iraq, and Libya have all ended with the U.S. and its coalitions removing the opposing government from power. Yet, as it turns out, roughly two-thirds of interstate wars end in negotiated settlement. Limited wars are thus almost twice as common as absolute wars.
Why? Uncertainty once again provides the answer. Just about everyone hates war—fighting consumes precious resources and costs countless individuals their lives. Thus, calculating the expected outcome of a war and implementing it before any fighting takes place leaves everyone better off; no one dies, and both sides receive exactly what they would expect if they fought instead. This explains why most countries are not fighting most other countries most of the time. War is the exception, not the rule.
That said, wars can erupt when the parties disagree over the relative likelihood they will prevail in conflict. For instance, imagine France thought it would surely beat Germany in a war. Then it would need to receive the lion's share of a peaceful settlement to not want to fight. But imagine Germany believed it would certainly prevail. Then Germany would require a very large share of the settlement to maintain the peace. Unfortunately, there is only so much land, money, and influence to go around. These overly optimistic beliefs prevent the parties from reaching a bargain, leaving them to settle their differences on the battlefield.
But if wars start because of overly optimistic beliefs, they should end when the sides learn the truth about the actual military balance of power; at that point, there is no reason they cannot negotiate a treaty and avoid the continued costs of war. Fortunately, this is exactly what fighting does—it reveals information about each side's relative chances of winning. In the example, France thought it would certainly emerge victorious. But imagine that Germany wins in a rout every time its troops take the battlefield. This

 

will inevitably cause French leaders to reconsider their chances of winning. Once France learns that Germany is very likely to win the war, it will be willing to give the concessions necessary to convince Germany to end the fighting.
So what is war good for? Strangely, it is good at ending war by credibly transmitting information to both parties.

 

Negotiating over Used Cars and the Market for Lemons
Information plays such a critical role in bargaining that the 2001 Nobel Prize in economics honored a couple of scholars in this field. The next two sections develop a couple of these classic models, beginning with George Akerlof's market for lemons.
Used cars provide the perfect example of a market for lemons—hence the name—so we continue with Albert and Barbara's dilemma. This time, the actors face a new source of uncertainty. Whether Barbara has another seller is not in doubt. However, exactly what Albert is purchasing is unclear. Originally, Albert knew he was buying a decently valuable vehicle personally worth $5000 to him. But this is rarely the case in the realm of used cars. Albert needs to wonder whether the car is in good condition, is a "lemon" that frequently breaks down, or is somewhere in between. Moreover, he needs to incorporate the risk of receiving a bad car in the purchase price he proposes.
To model Albert's dilemma, consider a simple take-it-or-leave-it bargaining game. Albert offers some price x for the car, and Barbara accepts or rejects the deal. Bargaining then ends regardless. Uncertainty appears because Barbara's car could be a valuable "peach" worth $4500 to her and $5000 to him, an average car worth $2000 to her and $2500 to him, or a lemon worth nothing to Albert and a paltry $100 to Barbara. Barbara, being the owner of the car, knows which type it is. Albert, however, can only guess that each type is equally likely.
Before solving for Albert's optimal offer, note that trades would occur if Albert knew the true condition of the vehicle. In the peach case, any offer between $4500 and $5000 is mutually satisfactory. Similarly, in the average car case, any offer between $2000 and $2500 works for both parties. Bargaining only fails when the car is a lemon; Albert is not willing to pay anything for the car, while Barbara needs at least $100 to part with it.
Yet, despite how trades theoretically work two-thirds of the time, incomplete information ensures that Albert only makes offers destined to fail. At a minimum, Albert should realize that all non-peach owners wish to sell at a particular price if a peach owner wishes to sell at that price. This is because the peach owner must receive at least $4500 to sell, which greatly exceeds Barbara's reservation value in the other cases. In addition, if an average car's owner wishes to sell, so does the lemon owner. Like before, this is because the average car's owner needs at least $2000 to wish to sell,

 

which is far greater than the lemon owner's $100 floor.
But if this is all Albert can infer, he is in deep trouble. Imagine he naively believes that Barbara will always sell, no matter how much he offers and regardless of the condition of her car. Then Albert should not propose more than $2500. To see why, consider his expected value given that Barbara will always sell. One third of the time, he will receive the $5000 peach; one third of the time, he will receive the average $2500 vehicle; and one third of the time, he will receive the worthless lemon. As an equation:

(1/3)($5000) + (1/3)($2500) + (1/3)($0)
= $5000/3 + $2500/3 + $0
= $2500

So, at most, Albert is only willing to offer $2500 for the car. It might then seem as though Albert will successfully purchase the average vehicle since Barbara needs at least $2000 to sell a car in that condition. But consider that offer from Barbara's perspective if she owns a peach. She values her pristine vehicle at $4500. The paltry $2500 is insufficient to induce her acceptance. Thus, Barbara will never sell a peach.
If Albert works through this same logic, he then must determine his optimal offer knowing that Barbara will never sell a peach under these conditions. So consider Albert's revised expected value for the car. Ruling out the sale of a peach, half of these transactions involve the average vehicle worth $2500 to him, and half the transactions involve the worthless lemon. Working through the math yields Albert's updated expected value of the car:

(1/2)($2500) + (1/2)($0) = $1250

Thus, at most, Albert should offer $1250. However, trouble is brewing once again with Barbara. Suppose she owns an average vehicle. She values it at $2000. But Albert will not offer any more than $1250 for it. Consequently, when they go to bargain, she will reject.
Albert must rethink his calculation one last time. The only type of car that Barbara is willing to sell given Albert's pricing problem is the worthless vehicle. But Albert has no interest in purchasing it and would therefore

 

ensure that no transaction takes place.
Lack of information once again kills the transaction. If Albert knew the quality of Barbara's vehicle, they would make a transaction if she owned a peach or average car. But without information on vehicle quality, no sale takes place. Thus, unlike the previous situation with uncertainty, Albert's lack of information never benefits Barbara.
Fortunately, enterprising businessmen have devised a couple of methods to avoid a complete breakdown in the used car market. The first is used car dealerships. Although used car salesmen have among the worst reputations in the world, they have incentive to sell vehicles at fair prices given the actual quality of the car. When Barbara is selling only one vehicle, she has no incentive to protect her reputation. As such, if Albert asks whether the car is a peach or a lemon, she has all the reason in the world to tell Albert that it is a peach regardless of its true condition. In contrast, a used car dealer must protect the reputation of his or her business to some degree. If buyers purchased lemon vehicles at peach prices, they would grow very upset. As word spreads, the dealership would eventually lose all business to more trustworthy competitors. Consequently, the used car dealer cannot cheat the buyer as much as Barbara could in isolation.
Of course, this same logic applies to all sorts of products, including services like plumbing and construction. Modern technology allows for buyers to keep better track of sellers' reputations regardless of the business. Websites like Angie's List, for example, tell users whether a certain provider is any good. In turn, buyers can make purchases for prices commensurate with the quality of the products for sale.
The second way society has resolved the used car problem is through CARFAX. CARFAX is a company that keeps detailed records on the history of used vehicles. If Barbara owned a peach, she could show Albert the CARFAX report. Updating his belief, Albert might then be willing to increase the proposed price because he would think the probability the car is a lemon is fairly low.
CARFAX presents a second dilemma for Albert: should Albert ever purchase a car without a CARFAX report? Definitely not. Peach owners clearly want to show Albert the report to ensure that Albert offers a fair price for the vehicle. So, in the absence of a CARFAX, should Albert believe the car is average? No. Suppose the vehicle is average. Barbara knows she would

 

have shown Albert the CARFAX had she owned a peach. Thus, if she does not give it to him, Albert will offer a smaller price because he must hedge against purchasing a lemon. Alternatively, Barbara could just show Albert the CARFAX to demonstrate that she owns an average vehicle. Although this reveals that she does not own a peach, not revealing the CARFAX would have indirectly given Albert the same information. As such, she loses nothing by giving him the report.
In fact, the only person who has something to gain by not surrendering the CARFAX is the lemon owner. After all, giving Albert a lemon CARFAX reveals the worst possible information. But because everyone else is willing to give Albert the CARFAX, not revealing the CARFAX still tells Albert that Barbara owns a lemon. Put simply, the lack of information is often all the information you need to know.
The next section applies this concept to negotiating wages when an employer is uncertain of a potential employee's level of competency.
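Before turning to that, readers who want to see the unraveling argument mechanically can trace it with a short Python sketch. The dollar values are the ones from the example above; the loop itself is only an illustration of the reasoning, not part of the formal model.

# Sketch of the unraveling argument, using the chapter's dollar values.
# Each entry is (Barbara's value as the seller, Albert's value as the buyer).
cars = {"peach": (4500, 5000), "average": (2000, 2500), "lemon": (100, 0)}

possible_types = set(cars)   # types Albert still thinks might be for sale
while possible_types:
    # The most Albert will pay is his expected value across the remaining types,
    # since each type started out equally likely.
    max_offer = sum(cars[t][1] for t in possible_types) / len(possible_types)
    # Any owner whose own valuation exceeds that price refuses to sell.
    still_willing = {t for t in possible_types if cars[t][0] <= max_offer}
    if still_willing == possible_types:
        break
    possible_types = still_willing

print(possible_types)   # set(): the market unravels completely and no sale takes place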

 

Costly Signaling and the Value of a Useless College Education
Michael Spence wrote the second classic model of uncertainty covered in this chapter. He was interested in explaining how incurring costs that provide no direct gains can lead to indirect benefits. More precisely, Spence's model showed that going to college can rationally lead to higher wages from employers even if college does not make those individuals any more intelligent.
To see why, imagine Albert enters the workforce and is trying to decide between two firms. As in the similar situation from two chapters ago when Albert was an artisanal cupcake designer, the two firms must submit bids to Albert; Albert will accept the larger offer. Unlike before, however, the firms face uncertainty. Before, the firms knew how productive Albert was. This time, they do not. Albert could either be a highly productive worker or a less productive worker. The firms speculate that less productive workers make up five-sixths of the population, leaving only one-sixth of the population as highly productive.
Guessing correctly matters, too. Because highly productive workers bring the firm additional revenue, both firms are willing to spend up to $50 per hour to hire such an individual. On the other hand, the firms are only willing to spend up to $20 per hour to hire a less productive worker.
Without any additional information, how much should the firms offer Albert? If a firm hires Albert, they expect to receive $20 in value per hour from him five-sixths of the time (if he is unproductive) and $50 per hour from him the remaining one-sixth of the time. Consequently, the firms' expected valuation for Albert equals:

($20)(5/6) + ($50)(1/6)
= $100/6 + $50/6
= $150/6
= $25

As such, without knowing anything else about Albert, the highest wage they are willing to offer him is $25 per hour. Fortunately for Albert, he can rest assured that both will offer him that $25 wage. This is the same logic as in the artisanal cupcake design model that lacked uncertainty. If one firm offers anything less than $25, the rival firm can outbid the first and sign

Albert. But then the first firm has incentive to outbid the rival since they

 

expect him to be worth more than the current wage. The outbidding process escalates until both have bid their maximum valuation for Albert. Compare this result to how the firms would behave had they known whether Albert was a highly productive worker or not. Again, the firms’ optimal offers come from the similar model from two chapters ago. If both know that Albert is highly productive, then they both ultimately offer him $50 an hour. But if both know that he is not productive, they both bid $20 an hour. As such, Albert benefits from the firms’ uncertainty when he is unproductive—rather than signing for $20 per hour, he receives $25. In contrast, Albert suffers greatly when he is highly productive—rather than signing for $50 per hour, he receives $25 just like the unproductive type. This is because the firms have to hedge against the risk that Albert is unproductive, which is substantially more likely. Consequently, highly productive versions of Albert would greatly benefit if they could convey their competency to the firms. But simply declaring “I am a productive worker” is insufficient. If the firms responded to such a declaration by increasing their offers, the unproductive types would lie to receive the benefit as well. In turn, the firms must ignore any such declarations, knowing that unproductive workers want to mimic a highly productive worker’s behavior. To make the communication credible, the highly productive workers need to do something that the unproductive workers either cannot or will not duplicate. A college degree might satisfy this requirement. Although the sticker price of a college education is the same for highly productive and unproductive workers alike, the amount of effort necessary to successfully graduate plausibly differs. A highly productive individual—being inherently bright—might cruise through his courses. But an unproductive individual might labor through classes, pulling all-nighter after all-nighter just to keep up. As a result, such an unproductive worker would not bother earning a college degree even if it meant higher wages. This is where Michael Spence’s education model comes into play. Rather than skipping straight to the firms’ bidding stage, suppose the interaction began with Albert deciding whether to obtain a degree or not. The highly productive and unproductive types have different opportunity costs for attending college. If Albert is highly productive, he finds education to be relatively inexpensive; he translates the sum of his efforts to be the equivalent of making $10 per hour at one of the firms. In other words, Albert would be

 

willing to attend college if it bumped up his ultimate wage by at least $10 per hour. On the other hand, the unproductive Albert would put forth that much effort only if he could foresee earning $40 per hour more. With that in mind, consider Albert’s decision whether to attend college or not. Note that regardless of his actual level of productivity, Albert receives a wage of $50 in the best case scenario. (This requires the firms to be certain that he is highly productive.) In his worst case scenario, Albert receives a wage of $20. (This requires the firms to be certain that Albert is unproductive.) But these facts imply that college is not worth the investment if Albert is unproductive. To understand why, note that Albert could write “I AM AN UNPRODUCTIVE WORKER ” on the top of his résumé, and he will still receive a $20 wage. Going to college may fool the firms into offering Albert $50 instead, but his net pay per hour drops to only $10 after subtracting out his equivalent $40 per hour cost of college. Consequently, the unproductive version of Albert would not go to college under any circumstances. Now consider Albert’s decision if he is highly productive. He knows that the unproductive workers will not have a college degree. His choices are either to mimic their behavior or separate himself by obtaining a degree. If he opts out of college, the firms learn nothing about whether Albert is productive or not and will therefore offer the same $25 wage as when college was not an option for anyone. But if he goes to college, the firms know he must be a highly productive worker and will both offer $50 per hour. After subtracting out the highly productive Albert’s $10 cost of college, his net wage equals $40 per hour, a substantial upgrade over the $25 he would have earned otherwise. Spence’s education model highlights the usefulness of costly signaling. The phrase “talk is cheap” comes to mind. Unproductive versions of Albert cannot convince employers that they are highly productive by merely claiming to be. Thus, declarations like “I am highly productive” fall on deaf ears. Rather, for the employers to believe Albert, he must take actions that unproductive versions find prohibitively costly. This signals that Albert is highly productive in a way that unproductive types cannot adequately fabricate. (Incidentally, this is the same lesson résumé coaches give. Do not say “I am a great leader” but rather add something verifiable to the résumé that demonstrates the leadership qualities. Again, the goal here is to put

 

something on the résumé that less qualified people cannot do without risking the consequences of being caught in a lie.)
Before moving on, two notes are in order regarding Spence's education model and college in general. First, keep in mind that the model treats colleges very myopically. By assumption, colleges do not increase the intelligence or ability level of their students. In practice, this is not the case. Firms pay college graduates larger wages because of the increased competence and usefulness of the major. This is part of the reason why science, technology, engineering, and mathematics majors earn higher wages than other majors. But this is not a black mark on the model. Rather, the model demonstrates that a college education influences employers in more ways than just the obvious one.
Either way, Spence's education model explains why college degrees are not as valuable as they used to be. The cliché "American dream" is that a person can go to college, earn a degree, and be guaranteed high-paying employment for the rest of his or her life. This might have been true in 1960, when less than 10% of Americans had a degree. But fast forward to 2010, and that figure tripled to almost 30%. Spence's education model shows that part of the value of a college degree is that it signals greater competence than an ordinary worker. Yet when such a large percentage of the workforce has a degree, a simple B.A. or B.S. no longer differentiates the strong from the weak. Consequently, degrees no longer promise a life of comfort.
Fortunately, that last paragraph comes with a number of important caveats. First, degree holders remain higher wage earners than those with only a high school diploma. Moreover, even during the height of the Great Recession, the unemployment rate for college graduates was substantially lower than the unemployment rate for high school graduates. As such, a college degree acts as an employment insurance policy—not perfect, but often worth the cost to hedge against risk.
Second, not all college degrees are created equal. A degree from Harvard, for example, signals much more competence to employers than a degree from the University of Phoenix. Employers understand that admission to Harvard alone is a considerable achievement and pay larger amounts because they expect greater competency. And while a degree from the University of Phoenix tells an employer that its holder is more competent than someone with a high school diploma, it does not signal that he or she is substantially more competent. Consequently, wise students carefully examine a school's

 

return on investment before enrolling. Lastly, given the proliferation of college degrees, the master’s degree has become the default prerequisite an individual obtains to signal a high level of competency.  
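The separation argument above boils down to two incentive checks, which the following sketch verifies using the wages and effort costs from the example. The code is merely illustrative; the $25 no-degree wage for the productive type and the $20 floor for the unproductive type follow the comparisons made in the text.

# Incentive checks for the education example, using the text's wages and effort costs.
WAGE_WITH_DEGREE = 50            # firms bid $50 once convinced Albert is highly productive
WAGE_NO_DEGREE_POOLED = 25       # the uninformed firms' expected-value bid
WAGE_KNOWN_UNPRODUCTIVE = 20     # the floor an admittedly unproductive worker still receives
COLLEGE_COST = {"productive": 10, "unproductive": 40}   # per-hour equivalent of the effort

def attends_college(worker_type):
    payoff_with_degree = WAGE_WITH_DEGREE - COLLEGE_COST[worker_type]
    payoff_without = WAGE_NO_DEGREE_POOLED if worker_type == "productive" else WAGE_KNOWN_UNPRODUCTIVE
    return payoff_with_degree > payoff_without

print(attends_college("productive"))    # True: 50 - 10 = 40 beats the $25 pooled wage
print(attends_college("unproductive"))  # False: 50 - 40 = 10 falls short of even $20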

 

Conclusion
This chapter investigated the role of asymmetric uncertainty in bargaining outcomes. Two central themes stand out.
First, unlike when parties are equally informed, this kind of incomplete information can cause the parties to fail to reach negotiated agreements even when both would truly prefer some deal to outright failure. The mechanism that leads to bargaining breakdown is the same as what drives poker players to bluff. Bargainers in weaker positions want to bluff their strength and hide the fact that they are desperate to reach an agreement. Consequently, screening out the weak from the strong requires offering an amount that only the weak type is willing to accept.
Second, what you do not know will hurt you. If you are uncertain whether the opposing party is strong or weak, neither of your available strategies is ideal. Either you offer enough to convince both types to accept or you offer enough to convince only the weaker type to agree to the deal. In the first case, you are paying more to the weak type than you needed to. In the second case, you are forgoing the gains from a successful deal with the stronger type. In other words, you are damned if you do and damned if you don't. You therefore have to choose the better of the two terrible options.

 

Appendix: Uncertainty with Continuous Type Spaces and the Risk-Return Tradeoff
This chapter's models have used discrete type spaces. In other words, Barbara had some quality with probability p and had some other quality with probability 1 – p. While this makes the math considerably easier, an arguably more natural assumption is that Barbara comes from a continuum of types. For example, the smallest wage Barbara is willing to work for is x, where $10 ≤ x ≤ $20. Barbara knows this value, but other actors can only guess where Barbara's minimum acceptable wage lies.
The downside of modeling types in this manner is that one cannot narrow the possible range of optimal offers to two choices and then check which one maximizes the proposer's payoff. Instead, the proposer must calculate his optimal offer using differential calculus. This requires significantly greater mathematical knowledge, which is why the main chapter avoided continuous types.
The upside is tremendous. Recall that for optimal offers to exist, receivers must accept with certainty when indifferent between accepting and rejecting. With a continuous type space, however, the probability the receiver is indifferent in practice is 0. (This is because the probability of drawing any single type from a continuous probability space is 0.) Thus, indifference does not come into play in any meaningful way.
Now for the continuous type model. The ultimatum is the same as always. Albert offers x to Barbara, which Barbara accepts or rejects. Accepting divides the good according to Albert's proposal: x for Barbara and 1 – x for Albert. Rejecting gives a payoff of 0 to Albert and a payoff of b to Barbara. Before, there were only two types of Barbara. This time, Barbara's reservation value can fall anywhere on the interval between 0 and 1. Put differently, there are infinitely many types of Barbara. To keep things simple, suppose b is uniformly distributed on the 0 to 1 interval, meaning all values of b between 0 and 1 are equally likely.
Solving for each type of Barbara's optimal move is easy. Just as before, she receives x for accepting and b for rejecting. Therefore, she accepts if x > b.
Albert's optimal strategy requires deeper thinking. Previously, Albert reasoned that only one of two offers could be optimal for him. He could calculate his payoff for both, compare the two, and then select the offer size that

yields him the greater payoff. Here, Albert cannot make the same simplifying

 

shortcut, as he must deal with infinitely many types of Barbara instead of just two.
Nevertheless, Albert can calculate his expected payoff for any offer size x. There are only two components to his calculation. First, he receives 1 – x multiplied by the probability that Barbara accepts. Second, he receives 0 multiplied by the probability that Barbara rejects. As a formula:

(1 – x)Pr[accept] + (0)Pr[reject]
= (1 – x)Pr[accept]

Since the 0 cancels out the probability of rejection, Albert's payoff reduces to simply his share of the bargain times the probability Barbara accepts. The tricky part here is that Barbara's probability of accepting depends on the offer size. Fortunately, the uniform distribution permits an explicit probability. In fact, the probability Barbara accepts is equal to x. For example, if Albert offers 0, Barbara is certain to reject. But if he offers .25, a quarter of the time Barbara's value for b will fall below .25 and she will accept. Making that substitution yields the following:

(1 – x)Pr[accept]
Pr[accept] = x
(1 – x)x
= x – x²

At this point, Albert's best strategy is a straightforward optimization problem that calculus can solve. The algorithm has three steps: (1) take the first derivative, (2) set it equal to 0, and (3) solve for x. The corresponding x value is the choice that maximizes Albert's payoff. Letting f(x) represent Albert's payoff function, those steps are as follows:

f(x) = x – x²
f'(x) = (x – x²)'
f'(x) = 1 – 2x
1 – 2x = 0
2x = 1

x = 1/2

 

Thus, Albert optimally offers x = 1/2. Types of Barbara with values b less than 1/2 accept and types with values greater than 1/2 reject.
What about the type with a b value equal to exactly 1/2? As it turns out, Barbara's decision is irrelevant to Albert's proposal. Those familiar with basic probability theory will recall that the probability of drawing a particular value from a continuous probability distribution is the integral of the probability density function from that value to that value. But the integral from any value to itself equals 0. Consequently, the probability that Albert faces Barbara when b = 1/2 is 0. In turn, the action that Barbara takes if her b value equaled 1/2 is irrelevant.
The fact that there is a zero percent chance that Barbara values the good at 1/2 has a second implication. Before, we would often have to rely on the assumption that a receiver would accept when indifferent between accepting and rejecting. However, this is an artifact of non-continuous type spaces. Once we adopt continuous type spaces, everyone takes actions that are uniquely optimal, and we do not have to worry about the awkward cases of indifference.
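For readers who would rather check the calculus numerically, a brute-force grid search gives the same answer. The sketch below is only a sanity check on the derivation above.

# Numerical sanity check: Albert's expected payoff is x(1 - x), and a grid search
# over offers between 0 and 1 confirms that the maximum sits at x = 1/2.
best_offer = max((i / 1000 for i in range(1001)), key=lambda x: x * (1 - x))
print(best_offer)                      # 0.5
print(best_offer * (1 - best_offer))   # 0.25, Albert's expected payoff at the optimum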

 

Chapter 8: Commitment Problems
Throughout this book so far, we have assumed that accepted deals become binding. For example, if Albert agrees to work for $30 per hour, then the company gives him a check for $1200 (minus applicable taxes) after his forty-hour week ends. There is good reason to assume this. Most countries have laws to ensure enforcement of contracts. If Albert's employer were to withhold wages, for instance, he could seek a legal remedy. The threat of civil court, fines, and damages deters the company from breaking the agreement.
Unfortunately, some bargaining situations lack a third party to uphold agreements. This chapter investigates such scenarios. Ultimately, despite the existence of mutually preferable alternatives and no uncertainty, we will see inefficient outcomes due to one actor's inability to credibly commit to taking cooperative actions in the future. On the bright side, bargains can sometimes work without the enforcer if the threat of punishment in the future can convince parties to cooperate in the present.

 

Thinking like a Criminal: Walter White on Breaking Bad
Breaking Bad is a television show about a high school chemistry teacher named Walter White. After being diagnosed with cancer, Walt becomes a crystal meth cook to create an inheritance for his family. Although the show has many recurring themes, one pertinent topic for this book is how Walt adapts to business transactions in the criminal underworld. In the beginning, Walt was accustomed to taking partners at their word. He slowly learned that he should do the opposite.
We can generalize Walt's situation with the following interaction. Walt begins by choosing whether to invest in a partnership. This might include purchasing the supplies necessary to cook crystal meth, spending the time actually creating the product, and bearing the risk of being caught in the interim. If Walt does not invest, the interaction ends; if he does, his shady business partner decides whether to compensate Walt or not. The game then ends regardless.
To simplify this as much as possible, let's replace monetary payoffs with a simple ordering of preferences. Walt most prefers making the investment and having his partner compensate him. He least prefers making the investment just to be screwed over. Not investing is his middle outcome. Consequently, Walt wants to make the investment if and only if he believes the other party will properly compensate him.
Meanwhile, Walt's shady business partner most prefers having Walt invest and then not compensating him; that maximizes the partner's monetary payoff. The second best outcome is for the partner to compensate Walt; this still results in a profit. The partner's worst outcome is for Walt to not invest because the partner receives no money in that case.
Using 3 to represent each individual's best outcome, 2 to represent the middle outcome, and 1 to represent the worst outcome, here is the game tree:

 

  Before solving for how Walt and his shady business partner should act, first note that the parties would make plenty of money if they could create a legally enforceable business contract. Specifically, compare the outcome in which Walt quits (essentially meaning no business transaction takes place) to the outcome in which they equitably split the cash:  

Both parties prefer the outcome in which the shady business partner shares the revenue. As with the standard bargaining models, this is necessary for the parties to even think about negotiating with one another; otherwise, one or both would immediately quit. However, the issue here is a matter of timing. Working backward, consider how the shady business partner would act if Walt invested:  

 

  If the shady business partner keeps all the money, he reaches a better outcome for himself than if he pays off Walt. This is intuitive. The business partner is shady after all and wants to keep as much money as he can. Walt, still largely the innocent high school chemistry teacher, cannot credibly threaten any retaliation. Consequently, the business partner has no reason not to take the cash. (This changes in later seasons when Walt develops a ruthless reputation for partners who cross him.) Now let’s take that information and apply it to Walt’s decision:  

Because Walt prefers quitting rather than being screwed over, he should opt not to invest. However, this leads to an unfortunate outcome—Walt and the shady business partner make no money even though they would both profit if the partner could credibly commit to playing fair. Hence, we call this a commitment problem; one party's inability to credibly commit causes both to be worse off.
Walt's lack of strategic foresight cost him greatly in his early extralegal adventures—he would consistently trust shady individuals and expect the best of them. After being burned way too many times, he eventually wised up and engaged in business transactions that were self-enforcing and lacked

incentives to cheat. (Initially, his solution was to just be conservative and

 

avoid entering agreements whenever possible. Eventually, it evolved into the threat and use of physical violence to force partners to uphold their end of the deal.)
It is worth emphasizing that bargaining fails here because of a lack of credible enforcement of a negotiated solution. If Walt's business operation were legal, he could simply go to his attorneys to draw up a contract. Then, if the shady guy violated the agreement, Walt could seek legal remedy in court. However, going to court to resolve contractual disputes is not an option for a meth cook, leading to all sorts of inefficient outcomes. And although Walt's forays are entirely fictional, commitment problems occur in many other situations. The next sections look at a few.
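As an aside, the backward induction in this section is simple enough to script. The sketch below uses the ordinal payoffs from the text (3 for each player's best outcome, 1 for the worst) and is only an illustration of the reasoning.

# Backward induction for Walt's investment game, with payoffs listed as (Walt, partner).
payoffs = {
    ("invest", "compensate"): (3, 2),
    ("invest", "cheat"):      (1, 3),
    ("quit",   None):         (2, 1),
}

# Step 1: if Walt invests, the shady partner picks whichever action maximizes his own payoff.
partner_choice = max(("compensate", "cheat"), key=lambda a: payoffs[("invest", a)][1])
# Step 2: Walt compares quitting against investing, anticipating the partner's choice.
walt_choice = "invest" if payoffs[("invest", partner_choice)][0] > payoffs[("quit", None)][0] else "quit"

print(partner_choice, walt_choice)   # cheat quit: Walt anticipates being cheated and never invests

Both players therefore end up short of the (3, 2) outcome they could have reached under a binding contract, which is precisely the commitment problem described above.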

 

Negotiating with the Police
Readers familiar with my previous books or YouTube series will recall my run-in with the police in Texas. This is a great story that also conveniently reinforces the lesson from the Breaking Bad situation above, so I will recount it now.
Before moving to Rochester, New York, to pursue a PhD in political science, I lived in San Diego, California. So when it came time to say goodbye to the sun and sand, I packed all of my possessions into my compact Honda Civic and covered the backseat with a sheet to hide everything from the prying eyes of a thief. My journey would take me across the entire country, with a detour through Texas to catch a baseball game in Arlington.
However, trouble struck in El Paso, which is only a few miles away from the New Mexico/Texas border and also situated on the international border between the United States and Mexico. Driving on one of the two eastbound lanes of the highway, I found a slow-moving highway patrol vehicle in front of me in the right lane. I carefully put on my turn signal, glided into the left lane, and passed the patrol car, careful not to exceed the speed limit in the process. My plan was to spend another few moments in the left lane before moving back to the right so as not to cut off the highway patrol. In the meantime, another car sped from behind, moved to the right lane, cut in front of the highway patrol, zoomed past me, and then cut off my car.
The highway patrol car's lights went on. "Good," I thought. The other guy was speeding, executing dangerous passes, and doing it all blatantly in front of the highway patrol.
Except the highway patrol wanted to pull me over.
After a minute of conversation, the reason became clear. "We noticed you were from California," the patrolman told me, referring to my license plate. "We also saw something suspicious in the back of your car as you passed us," he continued, referring to the sheet. "You may not know this, but El Paso is in the middle of a major drug war. I'd like you to step out of your vehicle so I can search it."
"I am just a grad student moving across the country," I protested. "The blanket was meant to hide everything from thieves. Everything I own is back there."
The patrolman seemed uninterested in the explanation. "I still need you to

step out of your vehicle.”

 

I refused. "No. You do not have reasonable cause for a search."
This was true, and the patrolman knew it. So he started to negotiate with me. "Look, you can either let me conduct a quick search of your vehicle, or we can wait here for a while until a K-9 unit arrives to sniff around. It is hot out here. It would be better for both of us if you let me do the search so we can be on our way."
I had taken a single quarter of game theory at the time. While I found it interesting, I was not yet convinced that it was particularly useful. This fateful day on the side of the road in El Paso changed my perspective. As the patrolman began to bargain with me, the game tree appeared in my head:

The officer's efforts to negotiate were reasonable—a quick search is mutually preferable to waiting for a K-9 search. However, as the commitment problem in Breaking Bad illustrated, the officer faced a credibility problem. Once I allowed the officer to conduct a search, nothing bound him to his promise for speed. Instead, he could choose whatever type of search he wished. Since extensive searches produce the best results, he would choose that. In turn, I was not truly deciding between a K-9 unit and a quick search—I was choosing between a K-9 and an extensive search.
Thus, despite the temptation to negotiate a better solution, I waited for the K-9. The unit eventually arrived; the dog sniffed around the vehicle and then gave up. The officers sent me on my way.
Once again, the lack of third-party enforcement doomed our attempts to bargain. As mentioned above, that third-party enforcement is usually a country's legal system. Thus, given that I was dealing with a police officer, it would seem that we should easily have negotiated a solution.

 

While we certainly could have reached a credible agreement, the time and resources simply would not have been worth the effort. Indeed, I would have needed to call a lawyer to supervise the whole affair, which would have been even more time-consuming than waiting for the K-9 unit and certainly more costly. Because these enforceable alternatives were even more costly than suffering out in the sun for a half hour, we remained stuck in the commitment problem.

 

Commitment Problems in Newly Independent Countries
The same commitment problem logic applies to civil order in newly independent countries led by oppressive regimes. Consider the plight of an ethnic minority living in such a country. The leader of the country promises to uphold equality for all citizens and insists that civil war will only be detrimental to everyone. The minority has a choice: rebel now or wait and see. The option to rebel is fleeting, however. Once the leader has consolidated power and created an effective police force, resistance will be futile. At this point, the minority has no choice but to tolerate whatever new terms the leader wishes to impose. If the minority waits, the leader consolidates power and decides whether to oppress or be nice to the minority.
Payoffs are as follows. The minority primarily wants peace. However, rebelling and paying the costs of civil war is preferable to ultimately suffering through oppression. For the leader, war is the worst outcome. But because he is evil, he would rather oppress the citizens than play nice.
With those actions and payoffs in mind, here is the game tree:

As should be evident, this game is structurally identical to the previous two cases. Consequently, the leader will oppress if he gets the chance, inducing the minority to engage in rebellion as a preventive measure. The leader’s inability to credibly commit to playing nice is ultimately detrimental to everyone, as both parties prefer the peaceful outcome. Of course, from a practical perspective, not all leaders are this abusive. Some are, however, and this can poison peace for everyone else. Indeed, minorities might suspect that a large percentage of the time the leader is not

evil but instead prefers maintaining equality. Yet waiting is still a risky

 

strategy—by rebelling, the minority can at least ensure a minimal amount of safety. In contrast, waiting is a gamble. As a result, war may occur even though both parties would rather not fight.
How can majorities and minorities avoid this unnecessary conflict? One solution is to invite international peacekeepers into the country. Peacekeepers can help by monitoring compliance with preexisting agreements. If someone attempts to take advantage of shifting power and cheat, the peacekeepers can alert international authorities about the problem. Other countries might then sanction the violator or stage a military intervention. The threat of such an action forces the parties to comply with the terms of the original agreement.

 

Post Civil War Commitment Problems Not all civil wars begin due to commitment problems, but many face a troubling commitment problem toward the end of the war. After the rival parties fight for a while, they often attempt to negotiate a peaceful resolution to the conflict. This often requires the losing side to put down its arms and reintegrate with the country—a country is not a country, after all, if it continually has two militaries with different agendas. Yet this leaves the losing side in a vulnerable position. The only thing that drives the rival to the bargaining table is the coercive power of weapons and the threat of continued war. Absent that strength, the winner has little incentive to uphold its promises. Thus, suppose you are a rebel group, and you realize you are very unlikely to win the war. The government approaches you to settle. If you do, you must then hope that the other side abides by the treaty and (more importantly for your sake) forgives you for starting the rebellion. What should you do? Consider the game tree:  

Of course, this is the same tree as before. If the rebels give up, the government has strong incentives to kill the former rebel leaders. After all, it serves as a deterrent to future potential rebels and guarantees that the current rebel leaders can never take up arms again. Moreover, the former rebel followers cannot do much to stop it because they have already surrendered their weapons. Anticipating this, rebel leaders want to keep the war going, hoping for a miracle that will allow them to win the war. The outcome is

inefficient since both parties would prefer the outcome in which the

 

government upholds the peace treaty. While this commitment problem is bad enough, the interaction is even more perverse than it seems. We know from above that the rebel group should continue fighting here. So suppose it does. Moreover, suppose that the miracle comes through, theunlikely rebels are now and in the seat.peace. The government believes it isand very to win, so driver’s it sues for Consider think about the long-term effects:  

We are right back where we started, except the government and rebels have switched places. For the same reasons as before, the rebels have little reason to uphold a peace treaty once the government has laid down its arms. Anticipating this, the government continues the fight and hopes for a miracle of its own. All told, these perverse effects commonly sabotage peace efforts between civil war combatants. As a result, civil wars last substantially longer than interstate wars; countries maintain their arms after fighting wars, so they do not have this same incentive to cheat on peace agreements. In the rare instance that civil war combatants do reach a treaty, it is often because a third party is willing to act as an external enforcer. Unfortunately, few countries are willing to play a major role in peacekeeping operations once a major war has started, leaving most civil wars to drag on indefinitely.  

 

Application: Yelp, Angie's List, and eBay's Reputation System
Fortunately, technology has provided one way to overcome some commitment problems. Imagine that you purchased a Game Theory 101 textbook online. The vendor has your money. Now she must decide whether to actually ship your book. There is incentive to not follow through—after all, she could keep the book in her inventory and pocket your money. Maybe she never even had the book in the first place.
Similarly, suppose you found a contractor to add a loft to your house. You pay her a large sum of money up front. She does most of the work but then ignores your phone calls about adding the finishing touches you originally agreed on. Again, her incentive to cheat is obvious—she already has your money. Any time spent on your finishing touches is time she cannot spend obtaining new business from a different client.
Despite these obstacles to cooperation, one solution is to extend the time horizon of the interaction. That is, you could potentially resolve the problem if you could convince your business partners that a good outcome for you today could result in more business for them in the future; a bad outcome, on the other hand, would result in less business. Indeed, this is one way that Walter White resolved his problem earlier by repeatedly working through the same distributor; the distributor had incentive to pay him each time so that he would keep producing more meth to sell.
However, extending potential business in this manner is not easy. To wit, if you only wanted a single loft in your house, you would be hard-pressed to reward a job well done with further business. Your potential solution is to lie about your desire for additional lofts, but those lies may not be entirely convincing—there are only so many lofts you can add to a single house.
Luckily, the Internet has found a solution. Yelp is a website that allows visitors to search for restaurants (and other stores) nearby. But the search function is of secondary importance. More critically, Yelp allows its users to write reviews of the local businesses. Thus, if you are looking for the tastiest shawarma around, a five-star average might point you to one place over another. As mentioned in the last chapter, Angie's List is a website with a similar concept except that it specializes in service companies and contractors. So imagine you wanted to find a contractor to build the loft who would live up to the commitment. Then you could browse through the reviews, discard the

names of contractors who put in the minimal effort, and rest assured that you

 

will get the loft you want.
eBay's reputation system works in a similar way. Many vendors sell similar products for similar prices. But some vendors are unknown quantities, and you might not want to risk having your money disappear. The reputation system resolves this potential for bargaining breakdown; as before, it allows you to find which businesses have been trustworthy to customers. And, in the worst case scenario, it gives you a means to punish bad behavior.
Of course, knowledge of the system is critical for the system to work. For example, imagine you went to an auto mechanic for the first time. Because auto mechanics have specialized skills, you as a customer are in a vulnerable position. And while you might have picked that mechanic based on strong reviews, he may still give you a raw deal if he does not suspect you will punish him for it. Thus, a good rule of thumb is to state explicitly that you found the mechanic on Yelp or a similar website. This shifts bargaining power in your favor—if he treats you poorly, he knows you have the know-how to punish him with a bad review. So while he might otherwise be tempted to swindle you for a few extra dollars, the possibility he might lose a great deal more business based on your bad review could deter unethical behavior. Indeed, in the extreme, people occasionally claim that they buy Yelp shirts just to trick businesses into giving them better service even though those people do not actually use the website.
In addition, businesses have a major temptation to abuse the ratings systems on their own. For a completely hypothetical example, consider the plight of an author of a book on bargaining. He would like to sell as many copies of the book as possible to earn larger royalty checks. He knows that potential buyers of a book often look to reviews to justify the purchase. Consequently, he may wish to hire shady companies that will use fake accounts to write fake five-star reviews.
Despite this incentive, customers have a couple of reasons to not panic. First, any reputable website would view such behavior as a violation of its terms of service. Business owners thus might not want to risk losing the ability to make sales on that website just to increase their ratings a few points. Second, some websites restrict the pool of reviewers to users who purchased from the particular vendor. Similarly, Amazon marks reviews from verified buyers with special flair.
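The underlying arithmetic of "more business in the future" is worth a quick sketch. The per-period profit from dealing honestly, the one-time gain from cheating, and the discount factor below are all hypothetical numbers; the point is only that honesty becomes self-enforcing once the discounted value of repeat business outweighs the short-term temptation.

# Hypothetical sketch: when does repeat business make honesty self-enforcing?
# Suppose a vendor earns 2 per period from an honest, ongoing relationship, or a
# one-time 3 by cheating and losing the customer forever. Delta is the discount
# factor from Chapter 4 applied to future periods.
def honesty_pays(delta, honest_per_period=2.0, one_time_cheat=3.0):
    value_of_repeat_business = honest_per_period / (1 - delta)   # discounted stream
    return value_of_repeat_business >= one_time_cheat

print(honesty_pays(0.2))   # False: a vendor who barely values the future still cheats
print(honesty_pays(0.5))   # True: enough future business deters cheating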

 

 

The Kidnapper's Dilemma
While commitment problems normally lead to bad outcomes for everyone, sometimes the potential for a commitment problem can be beneficial. To wit, this section explains that the number of kidnappings that occur each year is fewer than it would be if commitment problems did not exist.
Consider a simple kidnapping scenario. Our friendly neighborhood criminal Albert begins by choosing whether to kidnap a victim. If he does, Barbara chooses whether to pay a ransom to release her daughter. Regardless of whether Barbara pays the ransom, Albert must choose whether to release the daughter or murder her.
Preferences are as follows. Albert primarily wants to receive the ransom money. However, he is also completely evil and does not want to get caught. Thus, he most prefers receiving the ransom and murdering the victim; releasing her increases the chances of the authorities tracking him down. (This is a critical assumption—if Albert has even the slightest hint of morality and would prefer to release the victim after receiving the ransom, the results that follow will no longer hold.) His second best outcome is to receive the ransom and let the daughter go. Because kidnapping is needlessly risky if it does not net a cash payoff, canceling the whole project before it starts is his middle outcome. Albert's second worst outcome is to kidnap Barbara's child and then kill her without the ransom. Finally, his worst outcome is to kidnap the child and then let her go, again reflecting how he wants to avoid getting caught.
Meanwhile, Barbara most prefers avoiding the mess entirely. Absent that, she would like to get her kid back. Thus, her second best outcome is the safe return of her daughter without paying the ransom, while her third best outcome is the safe return with the ransom paid. Her worst outcomes are if Albert murders her daughter, with murder and ransom paid being the worst possible outcome.
With those sensible preferences in mind, consider the game tree:

 

  This is a complicated interaction, but we can still solve for each player’s optimal strategy by working backward. Consider Albert’s two release or kill decisions:  

Note that regardless of whether Barbara pays the ransom, Albert’s best plan is to kill the victim. The reason is simple. As outlined earlier, whether Albert has received a payment is irrelevant to the fact that he is less likely to get caught if he murders the victim. Consequently, if Barbara pays the ransom, Albert kills her daughter; Albert obtains his best possible outcome. Meanwhile, if Barbara refuses to pay, Albert still kills the victim; Albert is not happy here, but he cannot do better given Barbara’s decision. Now use that information to backtrack to Barbara’s choice:  

 

  If Barbara pays, Albert kills her daughter. If she refuses to give the ransom, Albert still kills her daughter. Neither alternative is appealing to Barbara, but not paying is the best of a bad situation—at least the villainous Albert will not profit from his misdeeds. So Barbara should not pay. Both parties end up in the second worst outcome possible. Before moving to Albert’s decision whether to kidnap, note the commitment problem that exists here. Compare the realized outcome versus the outcome in which Barbara pays Albert and he releases the victim:  

This has the classic markings of a commitment problem. The actual outcome sees both parties receive a payoff of 2. However, both parties are better off if Barbara pays the ransom and Albert releases her daughter; Albert grows richer and Barbara saves her loved one. Unfortunately, the actors cannot reach the mutually preferable outcome because Albert would cheat on the deal and kill the victim once he received his money.

Nevertheless, the commitment problem ultimately benefits Barbara. To see why, consider Albert's initial move given the existence of the commitment problem:

If Albert quits the kidnapping enterprise, he receives his middle outcome of 3. If he kidnaps the child, Barbara does not pay, and Albert ultimately kills her. This gives him a payoff of 2. Since crime does not pay in this case (literally), Albert has no reason to break the law and will simply retire. Note that this is Barbara’s best outcome—her child avoids all trauma and death. However, suppose that Albert and Barbara could somehow create an enforceable contract that would guarantee the child’s safe return if Barbara paid the ransom. Consider Albert’s decision in this alternate universe:  

 

Suddenly, kidnapping becomes optimal! Before, Albert avoided kidnapping anyone because it would not be profitable. Given credible commitment, however, Barbara will pay off Albert, which in turn induces Albert to kidnap her child. Consequently, Barbara benefits from the commitment problem—it prevents Albert from receiving any money, causing him to stay out of the enterprise and allowing Barbara to obtain her best outcome. At least commitment problems can occasionally lead to some good.
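For readers who like to verify backward induction by machine, here is a minimal sketch in Python. The ordinal payoffs (5 for each player's best outcome down to 1 for the worst) are illustrative stand-ins consistent with the rankings described above, not numbers taken from the book's game tree, and the function name is my own.

```python
# Backward induction on the kidnapping game described above.
# Ordinal payoffs (5 = best, 1 = worst) are illustrative stand-ins for the
# rankings in the text, as (Albert, Barbara) pairs.
PAYOFFS = {
    ("no kidnap",):                  (3, 5),
    ("kidnap", "pay", "kill"):       (5, 1),
    ("kidnap", "pay", "release"):    (4, 3),
    ("kidnap", "no pay", "kill"):    (2, 2),
    ("kidnap", "no pay", "release"): (1, 4),
}

def solve(commitment=False):
    """Work backward through the tree. If commitment=True, Albert is bound
    to release the victim whenever Barbara pays the ransom."""
    # Step 1: Albert's final release-or-kill choice, for each ransom decision.
    final = {}
    for ransom in ("pay", "no pay"):
        if commitment and ransom == "pay":
            choice = "release"  # an enforceable contract binds Albert
        else:
            choice = max(("kill", "release"),
                         key=lambda c: PAYOFFS[("kidnap", ransom, c)][0])
        final[ransom] = choice

    # Step 2: Barbara pays only if doing so beats refusing, given Albert's choice.
    ransom = max(("pay", "no pay"),
                 key=lambda r: PAYOFFS[("kidnap", r, final[r])][1])

    # Step 3: Albert kidnaps only if the resulting payoff beats quitting (3).
    if PAYOFFS[("kidnap", ransom, final[ransom])][0] > PAYOFFS[("no kidnap",)][0]:
        return ("kidnap", ransom, final[ransom])
    return ("no kidnap",)

print(solve(commitment=False))  # ('no kidnap',): the commitment problem deters Albert
print(solve(commitment=True))   # ('kidnap', 'pay', 'release'): kidnapping now pays
```

Running it reproduces the two results from the text: without commitment Albert stays home, and with an enforceable contract the kidnapping, and the ransom payment, go through.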

 

Conclusion

This chapter relaxed the assumption that negotiated deals are inherently binding. While a culture of contracts forces parties to follow through on agreements, removing that guarantee can lead to disastrous inefficiency. Indeed, the negotiating parties can recognize that a bargained settlement would be mutually profitable for everyone involved and yet not actually sign such an agreement. The critical barrier is that one party would prefer to pull out of the deal at a later date, which causes the bargaining opponent to suffer greatly. Anticipating that the deal will eventually fall through under worse conditions, the vulnerable party ensures that bargaining fails up front.
Of course, not all of these situations are doomed. Even without the assistance of a government, police force, and court system to enforce contractual agreements, parties may sometimes negotiate mutually acceptable solutions to their bargaining problems. The key is structuring agreements so that neither party would want to defect over time. There are two main ways to accomplish this. The first is to structure agreements so that long-term benefits from cooperation exceed the short-term bonus for defection, as the sketch below illustrates. The second is to hire a third party who will punish violations of the agreement. In either case, negotiating partners no longer need to worry that the other will attempt to abuse the agreement, because neither side has an incentive to do so.
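As a rough illustration of the first approach, the following Python sketch compares the one-time bonus from defecting against the discounted stream of future cooperation benefits that defection would forfeit. The specific numbers, and the assumption that a single defection ends the relationship for good, are mine rather than the book's.

```python
# A minimal sketch of the "long-term benefits must exceed the short-term bonus" idea.
# Assumptions (for illustration only): each period of continued cooperation is worth
# `gain`, cheating yields a one-time `bonus` but ends the relationship, and future
# periods are discounted by delta.

def cooperation_is_stable(gain, bonus, delta):
    """True if the discounted value of all future cooperation, delta*gain/(1 - delta),
    exceeds the one-time bonus from defecting today."""
    future_value = delta * gain / (1 - delta)
    return future_value > bonus

print(cooperation_is_stable(gain=1.0, bonus=5.0, delta=0.9))   # True: 9.0 > 5.0
print(cooperation_is_stable(gain=1.0, bonus=5.0, delta=0.5))   # False: 1.0 < 5.0
```

The more patient the parties are (the higher δ), the easier it is to design a self-enforcing agreement.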

 

Chapter 9: Alternating Offers Bargaining

Recall Chapter 4's model with a single counteroffer. We saw that the receiver of the first offer could leverage the threat of a counteroffer to obtain a better deal immediately. However, that model cut off negotiations after a single counteroffer. If the point of adding a counteroffer is to make the model more realistic, why stop there? If Albert rejects Barbara's counteroffer, why can't Albert then make a second counteroffer? And why not allow Barbara to make a third counteroffer if she rejects Albert's counteroffer? Why limit the total number of counteroffers at all? Why not just let Albert and Barbara make alternating offers to one another until they reach an agreement?
This chapter builds on that logic. To keep things simple, we start small and build incrementally. The first section explores a model that allows for two possible counteroffers. The second section allows for three. At that point, we will see a pattern forming that will allow us to analyze what happens for any number of counteroffers. With that as a baseline, we can then solve a model that allows for infinitely many counteroffers. Perhaps surprisingly, if both actors are extremely patient, the result is fair: each takes half of the surplus. We will then see that patience is a form of bargaining power: a more patient individual will obtain a better share. This is the "tragedy of bargaining" mentioned earlier—the more someone needs a deal, the worse the deal they receive.

 

Bargaining with Two Counteroffers

We begin with the simplest extension. Consider the bargaining game with one counteroffer from Chapter 4. Previously, if Albert accepted Barbara's counteroffer, his payoff was reduced by a factor of δ (a value between 0 and 1) because of the delay. If he rejected her counteroffer, the interaction ended and both parties earned 0 for that outcome. This time, instead of the interaction ending, Albert makes a second counteroffer, which Barbara accepts or rejects. Rejection here ends the interaction, again giving both parties a payoff of 0. Accepting the agreement leads to an even further delayed consumption of the surplus. In the single counteroffer game, we represented this delay by multiplying everyone's payoff by δ. Here, however, Albert and Barbara have postponed reaching an agreement twice. As such, we multiply their payoff by δ2, one δ for the first counteroffer and a second δ for the second counteroffer.
Drawing an extremely large game tree is overkill—the same logic from previous chapters works here. Consider the end of the game. It is identical to a one-shot ultimatum, just like the baseline example from Chapter 3. Because Barbara receives 0 if she rejects, she is willing to accept any offer. Albert therefore optimally leaves her nothing and keeps the entire surplus for himself; offering any more is a needless concession to Barbara. Consequently, Albert takes everything and Barbara receives nothing.
There are three offers to consider here, so using a chart to keep track of the optimal moves will prove useful. In non-discounted payoffs, the deal reached in the third round is as follows:

Now consider Barbara’s offer in the second round. Looking at the table above, she knows Albert receives the entire surplus in the third round. However, after including the discount factor for the time between rounds two and three, Albert’s value for rejecting in round two is (δ)(1), or simply δ.

Consequently, Barbara must offer him at least δ to induce his acceptance.

 

Offering him any less is a mistake because he will reject. This causes her to receive 0, which is less than the 1 – δ Barbara would receive if she offered δ instead. Offering any more is a needless concession to Albert and is therefore not optimal either. In turn, Barbara offers δ in the second round and Albert accepts. Barbara receives the remainder. Let’s put that information into the chart:  

Finally, consider Albert's offer in the first round. From the table, he knows Barbara receives 1 – δ in the second round if she rejects his offer. After including the discount factor, though, that value drops to δ(1 – δ). As such, he must offer her at least δ(1 – δ) to induce her acceptance. If he does offer her that amount, Albert receives the remainder, or 1 – δ(1 – δ). Is offering her less than δ(1 – δ) better for Albert? No, because Barbara rejects in that case. The table shows he earns δ as a result (and that payoff arrives a round later, so it is worth even less to him today). Even against the undiscounted δ, offering Barbara δ(1 – δ), convincing her to accept, and receiving 1 – δ(1 – δ) is better if:

1 – δ(1 – δ) > δ
1 – δ + δ2 > δ
1 – 2δ + δ2 > 0
(1 – δ)(1 – δ) > 0

Note that (1 – δ) and (1 – δ) are both strictly positive since δ is between 0 and 1. The product of two strictly positive values is also strictly positive. Therefore, the inequality holds. Albert prefers offering Barbara δ(1 – δ) and having her accept to offering her less and forcing her to reject. Finally, offering any more than δ(1 – δ) is not optimal for Albert. Doing so provides her a needless concession, so Albert could improve his payoff by adjusting downward toward δ(1 – δ). Consequently, Albert offers δ(1 – δ) and Barbara accepts. This leaves him with 1 – δ(1 – δ), or 1 – δ + δ2.

 

 

Before moving on, it is worth comparing this outcome to the outcome when there was only a single counteroffer. That time, Albert received 1 – δ and Barbara received δ. By giving Albert the final say with the second counteroffer, Barbara’s payoff decreased by δ2 and Albert’s increased by δ2. This amount is not a coincidence. Albert takes everything in the final round. After applying the two periods of discounting, the present value of that equals δ2, which is the amount Albert recovers by having one more offer. You may also notice a pattern emerging as we increase the number of possible counteroffers. If not, the following section with a third counteroffer should make it obvious.  
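Since the same backward-induction recipe reappears with three and then n counteroffers, a short computational sketch may help keep the bookkeeping straight. The Python below is my own illustration rather than anything from the text: it walks backward from the final round, at each step giving the responder exactly the discounted value of what rejecting would bring.

```python
# Backward induction for an alternating-offers game with a fixed number of rounds.
# Albert proposes in round 1 and in every odd round; the final round is an ultimatum.
# The returned shares are those of the round-1 agreement actually reached.

def finite_bargaining(rounds, delta):
    """Return (Albert's share, Barbara's share) from the round-1 agreement."""
    proposer_share, responder_share = 1.0, 0.0   # last round: the proposer takes everything
    for r in range(rounds - 1, 0, -1):           # walk back toward round 1
        # The responder in round r proposes in round r + 1, so they must be offered
        # the discounted value of what they would earn by rejecting.
        offer = delta * proposer_share
        proposer_share, responder_share = 1.0 - offer, offer
    return proposer_share, responder_share       # round 1, where Albert proposes

delta = 0.5
for total_rounds in (1, 2, 3, 4):
    print(total_rounds, finite_bargaining(total_rounds, delta))
# With delta = 0.5, Albert's share follows the chapter's pattern:
# 1, then 1 - d = 0.5, then 1 - d + d^2 = 0.75, then 1 - d + d^2 - d^3 = 0.625.
```

Each extra round flips which side proposes last, which is exactly why Albert's share alternates between gaining and losing a power of δ.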

 

Bargaining with Three Counteroffers

As always, we start at the end. With three total counteroffers, this means we must start at the fourth overall round of bargaining. In even rounds, Barbara makes offers to Albert. Because the fourth round is the last, Albert receives nothing if he rejects there. He is thus willing to accept all offers. In turn, in the last round, Barbara can keep everything for herself and leave nothing for Albert. Let's note that in the same way as before:

Now we can work on the third round. Here, Albert proposes a division to Barbara. If Barbara rejects, she receives the entire good. However, a period passes in between. Thus, after factoring in the discount, rejection is worth only δ to her. Consequently, she is willing to accept any offer that is at least δ in size. Since Albert's payoff is decreasing in the amount he gives to Barbara, he should select the minimally acceptable amount. That quantity is δ. As such, Barbara receives δ and Albert receives 1 – δ.

Moving to the second stage, Barbara offers a division to Albert. If Albert rejects, he receives 1 – δ, but discounted by δ. Multiplying through, Albert's value for rejection equals δ – δ2. In turn, Barbara must propose at least that much to induce Albert to accept. Because offering any more only hurts her, she should give δ – δ2 to Albert and leave 1 – δ + δ2 for herself.

 

The first round features Albert delivering an offer to Barbara. Barbara receives 1 – δ + δ2 if she rejects, discounted by δ. Thus, her present value for rejecting equals only δ – δ2 + δ3. Consequently, Albert offers that amount to her, as any more is an unnecessary concession. She accepts. He receives the remainder, or 1 – δ + δ2 – δ3.

This is the solution to the interaction. Once more, it is worth comparing this outcome to the outcome when there were only two counteroffers. That time, Albert received 1 – δ + δ2 and Barbara received δ – δ2. By giving Barbara the final say in the fourth round, Albert's payoff decreased by δ3 and Barbara's increased by δ3. Again, this amount is not a coincidence. Adding the final stage allowed Barbara to recover 1 discounted three times, or δ3, which is the amount her payoff increased by.
Indeed, in each version we have seen, giving a person one last offer has increased that person's payoff. But if having one last offer is always beneficial, then why would either side be willing to cut off negotiations at an arbitrary point? The next section investigates the obvious next step: when there are an arbitrarily large number of offers.

 

Bargaining with n Counteroffers

Now we will look at a model with n total offers, where we imagine n to be some very large number. If we continued with the standard way of solving for optimal strategies—starting at the end and working our way backward—we would have to spend an absurd amount of time to go through all n rounds. Fortunately, we can use a shortcut to work around the problem.
With just three counteroffers, an obvious pattern emerges. Let t represent a generic period. Each additional odd round adds δt-1 to Albert's payoff, while each additional even round subtracts δt-1 from his payoff. Barbara receives the remainder.
For example, consider the ultimatum game, in which the total number of periods equals 1. According to the rule, Albert receives δt-1 as his payoff. Substituting t = 1 makes the exponent zero. Any value to the power zero equals 1, which was his payoff in the ultimatum game.
With one counteroffer, we have two total periods. Albert carries his payoff of 1 over from the ultimatum game but then subtracts δt-1 from his payoff. Substituting t = 2 yields δ. So Albert's payoff equals 1 – δ. Sure enough, this was his payoff from the corresponding game in Chapter 4.
With two counteroffers, Albert carries over the 1 – δ from the previous game. Now he adds δt-1 to his payoff. Substituting t = 3, his payoff increases by δ2, to 1 – δ + δ2 in total. Note this was his payoff when we derived the result step-by-step two sections ago.
Finally, with three counteroffers, Albert carries over the 1 – δ + δ2 and now subtracts δt-1. Substituting t = 4, his payoff decreases by δ3. And, indeed, the previous section showed he receives 1 – δ + δ2 – δ3 for that game.
This process could repeat an arbitrarily large number of times, but the pattern holds. Fortunately, this pattern is well known in mathematics as a geometric series. Even better, the partial sums of this geometric series converge to a finite number as t approaches infinity. And best of all, that number is easily calculated. In fact, the string of payoffs converges to 1/(1 + δ).
This may seem like black magic at first, so running through an example might prove useful. If δ = .5, then the formula says the infinite series equals 1/(1 + .5), or 2/3. Does this work in practice? Consider a series of four offers. From before,

Albert's share equals 1 – δ + δ2 – δ3. Substituting .5 leads to the following reduction:

1 – .5 + .5² – .5³
= 1 – .5 + .25 – .125
= .625

After just four iterations, Albert's share is very close to 2/3. Adding a fifth round moves it even closer. Recall that he adds δ4 to his payoff in this round. As such, his share becomes:

1 – .5 + .25 – .125 + .0625
= .6875

And, indeed, his share is closer to 2/3. Note that after four rounds this payoff was less than 2/3; this time, it is above 2/3. This is because we alternate between adding to and subtracting from his payoff. The important thing is that his payoff draws closer and closer to 2/3. Now to add the sixth period:

1 – .5 + .25 – .125 + .0625 – .03125
= .65625

Once again, Albert's share is closer to 2/3.
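For anyone who wants to see the convergence without grinding through the arithmetic, here is a quick check in Python (my own illustration, not from the text) that prints the partial sums and compares them with the limit 1/(1 + δ).

```python
# Partial sums of Albert's share 1 - d + d^2 - d^3 + ... for d = 0.5,
# compared against the limit 1/(1 + d) = 2/3.

delta = 0.5
share, sign = 0.0, 1
for k in range(12):
    share += sign * delta**k     # add d^k with an alternating sign
    sign = -sign
    print(k + 1, round(share, 5))

print("limit:", 1 / (1 + delta))
```

The printed values reproduce the .625, .6875, and .65625 steps above and then settle down toward 2/3.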

 

Conclusion

In this chapter, we have observed that adding more counteroffers benefits the last actor to make such a counteroffer. We thus would naturally suspect that actors would always want to allow for the possibility of negotiations to continue. After all, why would one party want to sacrifice another offer when its presence would yield a stronger bargaining position? Unfortunately, we cannot fully explore these implications here. A model with n counteroffers still has an arbitrary end point to the game. We will relax that restriction to allow for potentially infinite counteroffers in the next chapter, which introduces game theory's canonical model of bargaining.

 

Chapter 10: Rubinstein Bargaining

The previous chapter showed that as the number of possible counteroffers grows arbitrarily large, the agreements become increasingly equitable. However, even if the bargaining protocol allows for 96 trillion counteroffers, such a model still has an arbitrary end to negotiations. What if bargaining lacked a cutoff entirely? Or what if participants could never foresee the exact end of bargaining?
To that end, this chapter delves into Rubinstein bargaining, named for economist Ariel Rubinstein. Rubinstein was the first to show that an infinite horizon bargaining game has a relatively simple solution despite the seemingly endless complexity of the environment. And, moreover, the result is identical to that of the counteroffer games as the number of potential counteroffers becomes increasingly large.
Before analyzing this model, a warning is necessary: Rubinstein bargaining gets very technical, very quickly. I have included it in this book because it is economics' canonical model of bargaining. No book on the subject would be complete without it. However, the logic and mathematical formulae might prove difficult for someone who has not practiced their algebra in a long time.

 

Infinite Horizon Bargaining

Rubinstein's setup is straightforward. In all odd periods, Albert proposes a division of the good to Barbara. Barbara then accepts or rejects it. Accepting implements Albert's offer and ends the interaction. Rejecting transitions the game into the subsequent even period. In the even periods, the roles are reversed. This time, Barbara proposes a division of the good to Albert, and Albert accepts or rejects it. Accepting implements Barbara's offer and ends the interaction, while rejecting transitions the game into the subsequent odd period. If Albert and Barbara ever come to terms, they discount their shares of the division according to the number of periods that have passed. If they bargain forever but fail to reach an agreement, they receive 0.
Before solving for the players' optimal strategies, a few comments are in order. First, Rubinstein bargaining has a very intuitive feel. At a flea market, you might hear a buyer and a seller take turns proposing prices over a good. The buyer might suggest $5; the seller could counter with $15. The buyer might come back with $8, while the seller could reconsider and offer $11. Then the buyer might accept. Similarly, in Rubinstein bargaining, the parties keep exchanging offers until one of them agrees to a division, which could conceivably take a very long time.
Second, one potential criticism of this model is that, in practice, people cannot bargain forever. Eventually, Albert and Barbara will have something better to do with their lives. And, if nothing else, one of them will die before they can exchange a trillion offers. However, a better interpretation of the infinite horizon setup is not that the interaction will truly go on forever but rather that at no point can either player be sure that they will have the last offer. The previous models of bargaining have all contained a certain end to the offers. But this is strange. Why should bargaining end after two periods? Or 87? Or ten million? Fortunately, the discount factor models some external probability that bargaining breaks down for reasons unknown—perhaps because a meteor destroys North America, someone crashes into the car they are bargaining over, or Albert's son calls and needs to be picked up from soccer practice. In any case, the infinite horizon does not truly require that the parties keep

negotiating until the end of time, just that no one knows that they will have the final offer.
Finally, finding the optimal strategies in the infinite horizon setup will not be easy. Before, optimal strategies in the present depended on optimal strategies in the future. The solution involved solving for those future actions and then working backward to the present actions. The infinite horizon setup provides a new challenge: it lacks a definitive end to work backward from. For example, Albert cannot start by considering what happens during round 1,000,000 of bargaining because his optimal proposal depends on what would occur should the parties reach round 1,000,001. But Albert cannot start working backward from round 1,000,001, as that round depends on round 1,000,002. And so forth. As a result, the logic required to find the optimal strategies is significantly more complex than anything previous.
That said, Rubinstein's model has become so influential that anyone with a serious interest in bargaining theory ought to understand its solution. Others may (understandably) want to skip this part. For those individuals, the takeaway points are as follows: despite the infinitely many potential periods, the game has exactly one optimal plan of action for both players. Bargaining ends in the first period when Barbara accepts Albert's offer of δ/(1 + δ). Thus, Albert receives 1/(1 + δ) and Barbara receives δ/(1 + δ). In other words, the outcome is exactly the same as when the finite number of counteroffers grew arbitrarily long, as we saw at the end of Chapter 9.
For those still interested in understanding why those strategies are optimal, the following analysis makes a simplifying assumption. Rather than search for strategies that involve taking different actions at different times—for example, Albert offers some amount for his first few proposals before switching to a different amount at a later time—we instead search for stationary strategies. A stationary strategy is just a plan of action that remains the same in every period. This is a sensible simplification. The structure of the interaction during period three is no different than the structure of the interaction at period one. As such, there is no compelling reason to believe that Albert's proposal should be any different. Moreover, one of Rubinstein's most interesting results is that the only optimal strategies in an infinite horizon setup are stationary. Proving this requires even more involved logic, so we will stick to just showing that the stationary strategies are indeed optimal.
Let's now find those strategies. Recall that the major problem of solving

an infinite game is that no end point exists. Consequently, to solve for his optimal strategy, Albert must think more holistically about the bargaining environment.
Consider Albert's decision at some generic odd period. For the moment, Albert does not know how much he needs to offer Barbara to induce her acceptance. To take that into account, let VB be Barbara's continuation value for rejecting Albert's offer in this odd period and moving the negotiations into the following even period. Her continuation value is simply how much she expects to receive if she rejects the offer. Again, for now, Albert does not know exactly what VB equals. However, he can still conclude two things about it.
First, Albert can always offer enough to induce Barbara to accept. To see this, note that VB must be at least 0 but no more than 1. This is because there is only one unit of surplus for the players to divide. It is thus impossible for Barbara to reject and ultimately receive more than 1. Meanwhile, the value is bounded below by 0 because it is impossible for Barbara to receive a negative value.
We can conclude a second important point as well: Albert always prefers inducing Barbara to accept an offer in the odd period to inducing Barbara to reject. To see this, note that if Barbara's continuation value equals VB, Albert's continuation value cannot be greater than 1 – VB; this is because there is only 1 unit of surplus to go around. After factoring in the discount, he can only receive δ(1 – VB) at most by inducing rejection. In turn, Albert has two options. First, he can offer Barbara δVB, induce her to accept, and receive 1 – δVB. (Offering more than δVB cannot be optimal for Albert, as that extra amount still induces Barbara's acceptance but is an unnecessary concession.) Alternatively, he could offer Barbara some amount less than δVB, force Barbara to reject, and receive at most δ(1 – VB). Albert prefers offering δVB if:

1 – δVB > δ(1 – VB)
1 – δVB > δ – δVB
1 > δ

This is true. Therefore, Albert prefers offering δVB and inducing Barbara's acceptance.
Let's recap. We started not knowing whether the parties would make a deal. Solving for Albert's optimal strategy showed that in any odd period—including the first period—Barbara accepts Albert's optimal offer. However, we do not know the value of VB. And because Albert offers δVB, we do not know Albert's offer strategy until we pin down VB. And we still know nothing about Barbara's offer strategy.
Fortunately, all these problems solve themselves. Consider Barbara's offer strategy in the even period before Albert offers δVB. Barbara knows that Albert receives 1 – δVB if he rejects her offer. After accounting for the discount factor, Albert therefore rejects any offer less than δ(1 – δVB) and accepts anything at least as large. In turn, Barbara can offer an amount smaller than δ(1 – δVB), induce Albert to reject, and receive δVB as her payoff. Alternatively, she could offer δ(1 – δVB), induce Albert to accept, and receive 1 – δ(1 – δVB). (As before, offering any amount greater than δ(1 – δVB) is a needless concession and therefore is not optimal for Barbara.) She prefers offering the optimal acceptable amount if:

1 – δ(1 – δVB) > δVB
1 – δ + δ2VB > δVB
1 – δ > δVB – δ2VB
1 – δ > δVB(1 – δ)
1 > δVB

VB is no greater than 1 and δ is less than 1, so this holds. Consequently, Barbara optimally offers δ(1 – δVB) and Albert accepts. Barbara receives the remainder, or 1 – δ(1 – δVB).
Note that Barbara's strategy requires knowing the value of VB to implement. But the process of solving for Barbara's optimal offer also yields her continuation value. Recall that Barbara's continuation value is the amount she receives if she rejects an offer in an odd period. The last few paragraphs showed that if Barbara enters an even period, she induces Albert to accept her offer and receives 1 – δ(1 – δVB). But this is exactly what we were missing: Barbara receives 1 – δ(1 – δVB) if she rejects an offer in the odd period because she earns 1 – δ(1 – δVB) in the subsequent even period.
At first glance, this realization may seem irrelevant. After all, 1 – δ(1 – δVB) contains the missing critical value of VB. But since 1 – δ(1 – δVB) is equal to VB, we can set up an equation and solve for VB:

VB = 1 – δ(1 – δVB)
VB = 1 – δ + δ2VB
VB – δ2VB = 1 – δ
VB(1 – δ2) = 1 – δ
VB(1 + δ)(1 – δ) = 1 – δ
VB(1 + δ) = 1
VB = 1/(1 + δ)

Thus, Barbara's continuation value equals 1/(1 + δ). Recalling that Albert optimally offers δVB in all odd periods, substituting for VB shows that Albert's optimal offer equals δ/(1 + δ).
We still need to solve for Barbara's optimal proposal strategy. Recall that she offers δ(1 – δVB) in all even periods. Since we now know the value of VB, we can substitute it into that expression and simplify:

δ(1 – δVB), where VB = 1/(1 + δ)
δ[1 – δ/(1 + δ)]
δ – δ2/(1 + δ)
δ(1 + δ)/(1 + δ) – δ2/(1 + δ)
(δ + δ2)/(1 + δ) – δ2/(1 + δ)
δ/(1 + δ) + δ2/(1 + δ) – δ2/(1 + δ)
δ/(1 + δ)

So Barbara's optimal offer strategy is the same as Albert's. This makes sense. After all, if Barbara rejects Albert's offer in the first period, the infinite horizon game essentially resets and places Barbara as the first player. Consequently, her optimal offer ought to be the same as Albert's optimal offer. The proof demonstrated exactly that.
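For readers who would rather check the algebra numerically than by hand, the short Python sketch below (my own illustration, not part of the text) solves the fixed-point equation VB = 1 – δ(1 – δVB) by simple iteration and compares the answer with the closed forms just derived.

```python
# Solve V_B = 1 - delta*(1 - delta*V_B) by fixed-point iteration and compare the
# result with the closed-form expressions derived in the text.

def continuation_value(delta, iterations=200):
    v = 0.5                              # any starting guess in [0, 1] works
    for _ in range(iterations):
        v = 1 - delta * (1 - delta * v)  # apply the fixed-point equation repeatedly
    return v

delta = 0.8
v_b = continuation_value(delta)
print("V_B           :", round(v_b, 6), "vs 1/(1 + d) =", round(1 / (1 + delta), 6))
print("Albert's offer:", round(delta * v_b, 6), "vs d/(1 + d) =", round(delta / (1 + delta), 6))
```

The iteration converges because each pass shrinks the error by a factor of δ2, so even a crude starting guess lands on 1/(1 + δ) quickly.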

 

The First Mover's Advantage

Before moving forward, it is worth spending a moment to understand the outcome of the infinite horizon bargaining model. In every period, the proposer offers δ/(1 + δ), and the other side rejects offers less than δ/(1 + δ) and accepts anything at least that large. Thus, in practice, Barbara accepts Albert's first offer. Despite the potential for the interaction to last forever, bargaining surprisingly exhibits a no delay property—the sides come to terms at the first possible moment. Moreover, the shares that each party ultimately receives—1/(1 + δ) for the first proposer and δ/(1 + δ) for the other player—match what the players would receive as a finite length game grows arbitrarily long.
Notice, though, that the divisions are unequal. Again, Barbara receives δ/(1 + δ) while Albert keeps the remainder, or 1/(1 + δ). Since δ is less than 1, Albert's share is larger than Barbara's. This is consistent with the ultimatum game's result that proposal power is a form of bargaining power. But the distributive outcome in the infinite horizon interaction is not nearly as extreme as in the ultimatum game. Before, when Albert made a single ultimatum to Barbara, Albert extracted the entire good. But here, Barbara receives closer to a fair share. This is because the significance of the first move diminishes as the potential length of the game increases.
But how equal that share is depends largely on the value of the discount factor. When δ equals 0, the players do not care at all about the future. In essence, the game might as well be an ultimatum, as everything that happens afterward is completely worthless from their perspective. But this means that Barbara's share of δ/(1 + δ) is worth 0—just like in the ultimatum game. However, the more patient the players are, the closer δ/(1 + δ) moves to 1/2. This is consistent with the counteroffer game's lesson that patience is a form of bargaining power.
Finally, note that as δ approaches 1, both players' payoffs do in fact converge to 1/2. Put differently, as the players become infinitely patient, the surplus is evenly distributed. This makes sense. If Barbara feels virtually no hardship from delaying agreement for a period, Albert's first-proposal advantage becomes unimportant since Barbara can just reject his first offer at very little cost. Of course, because Albert is also extremely patient, he can do

the same thing when Barbara proposes a division. All told, the first offer advantage becomes irrelevant, and the payoffs correspondingly become equal.
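A quick numerical sweep (my own illustration) makes the pattern easy to see; the shares below come straight from the 1/(1 + δ) and δ/(1 + δ) formulas above.

```python
# How the first mover's advantage shrinks as the players grow more patient.
# Proposer's share: 1/(1 + d); responder's share: d/(1 + d).

for delta in (0.0, 0.5, 0.9, 0.99):
    albert = 1 / (1 + delta)
    barbara = delta / (1 + delta)
    print(f"delta = {delta:4}: Albert {albert:.3f}, Barbara {barbara:.3f}")
# delta = 0 reproduces the ultimatum split (1, 0);
# as delta approaches 1, both shares approach 1/2.
```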

 

 

 

How the Rich Get Richer

The idea that patience is power has been a central theme of this book. However, the games studied thus far have modeled patience in a rudimentary way. Indeed, both sides have been equally patient throughout. For example, in the infinite horizon game, both players utilized the same discount factor δ. Yet believing that each player has a different level of patience is entirely legitimate.
For instance, suppose the two actors were businesses bargaining over the surplus gained from using each other as a supplier and a distributor. The supplier could be a nationwide firm but the distributor might be a small business. In this case, reaching an agreement is less important for the supplier. After all, being a nationwide company, the supplier's welfare is distributed over a large number of business transactions. As such, this single transaction means relatively little to the supplier since it can sustain itself in the absence of the agreement. On the other hand, the distributor might rely on this single transaction. Failure to reach an agreement might mean no profits, which could cause the company to default on loans or go out of business. As mentioned in the introduction, a "fair" resolution would give more of the benefits from trade to the small business because it relies more heavily on the profits to stay afloat. However, bargaining leads to a cutthroat outcome. The nationwide company turns the small business's desperation into weakness. Ultimately, the nationwide company receives a disproportionately large share of the benefits from agreement.
To understand the differences in patience, the following model makes a simple modification to the previous infinite horizon game. Rather than having a common discount factor δ, there are now two: δA and δB, where the subscript denotes the corresponding actor. For now, we will leave these general. Later, we will compare how the results change as one discount factor decreases.
Fortunately, modeling separate discount factors does not alter the method of finding the players' optimal strategies. Again, no definitive end exists, so the actors must instead consider the bargaining situation more holistically. Consider Albert's offer strategy in any odd period. Barbara's continuation value remains VB. Although Albert does not know the exact value of VB, it

still must be between 0 and 1. Consequently, Barbara is willing to accept any offer at least as large as δBVB. Anything greater than δBVB  is a needless

 

concession to Barbara, so Albert's optimal offer is δBVB.
Now consider Barbara's offer to Albert in the previous round. If Albert rejects, he will offer Barbara δBVB in the next round. He will receive the remainder, or 1 – δBVB. But because of the delay, he is willing to accept any offer at least as large as δA(1 – δBVB). Like before, offering any more than that is a needless concession, so Barbara's optimal offer is δA(1 – δBVB). She receives the remainder, or 1 – δA(1 – δBVB).
As in the previous section, we can use this value to solve for VB, the continuation value Barbara receives for rejecting an offer from Albert. Remember that VB is simply the amount Barbara receives if she rejects an offer. But if she rejects an offer from the previous stage, she proposes δA(1 – δBVB) and receives 1 – δA(1 – δBVB). So VB must be equal to 1 – δA(1 – δBVB). Working through the algebra yields the solution to VB:

VB = 1 – δA(1 – δBVB)
VB = 1 – δA + δAδBVB
VB – δAδBVB = 1 – δA
VB(1 – δAδB) = 1 – δA
VB = (1 – δA)/(1 – δAδB)

Remember that this is Barbara's continuation value, not how much she earns. Recall that Albert offers her δBVB in the first round, and she accepts. Therefore, she receives δB(1 – δA)/(1 – δAδB). Albert takes the remainder, or 1 – δB(1 – δA)/(1 – δAδB), which simplifies to (1 – δB)/(1 – δAδB).
As before, if the discount factors are very close to each other, Albert takes slightly more of the surplus. For example, substituting .9 for both δA and δB gives Albert (1 – .9)/(1 – .81), or 1/1.9. This leaves Barbara with .9/1.9. The additional .1/1.9 Albert receives is his first mover's benefit.
Now suppose Barbara is more patient than Albert. Let δA = .8 and δB = .9. Here, Albert receives (1 – .9)/(1 – .72), or .1/.28. Thus, Albert takes just over a third of the surplus. Barbara earns the remainder, or .18/.28. As such, Barbara comes out ahead this time.

More generally, note that Albert's payoff increases as his discount factor increases and Barbara's discount factor decreases. In other words, Albert benefits from his own patience and preys on Barbara's impatience. Meanwhile, Barbara's outcomes run opposite—she performs better as she is more patient and receives a smaller share of the surplus as Albert becomes more patient.
As alluded to previously, this is sometimes called the rich-get-richer property of bargaining or the tragedy of bargaining. One way to interpret patience is how much an actor does not need to make a deal; the less the need, the higher the discount factor. Like before, imagine Albert's firm needed to make a deal with Barbara immediately to stay in business, while the deal mattered very little to Barbara. If the world were fair, Albert would receive a greater share of the surplus, as his business needs the greater profit margin to stay afloat. However, the opposite occurs. Barbara owns the richer of the two companies, and she can leverage the fact that she does not need the deal to obtain a greater share of the surplus. Consequently, the rich get richer. On the bright side, note that the poorer do not get poorer in this context. Albert benefits from the relationship as well—it is just that the amount he gets richer by is considerably smaller than the amount Barbara gains.
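A brief numerical check (my own, using the formulas derived above) reproduces the two examples from this section.

```python
# Shares under asymmetric patience, from the formulas derived above:
# Albert (first proposer) gets (1 - dB)/(1 - dA*dB);
# Barbara gets dB*(1 - dA)/(1 - dA*dB).

def shares(delta_a, delta_b):
    albert = (1 - delta_b) / (1 - delta_a * delta_b)
    barbara = delta_b * (1 - delta_a) / (1 - delta_a * delta_b)
    return albert, barbara

print(shares(0.9, 0.9))   # equal patience: about (0.526, 0.474), Albert's first-mover edge
print(shares(0.8, 0.9))   # Barbara more patient: about (0.357, 0.643), she comes out ahead
```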

 

Chapter 11: Understanding Bargaining

Understanding bargaining is not easy. Few of us bargain frequently enough to make strong inferences about negotiations in general, leaving us with fleeting anecdotal recommendations as our best guidance. For most situations, that unreliable information simply is not good enough.
This book investigated an alternative way to learn about bargaining: abstraction. By modeling negotiations between two parties, we could observe the underlying mechanics of negotiation and draw inferences that apply to a broad class of situations. In this manner, our understanding is not based on persuasive rhetoric but rather cold, hard logic.
Of course, at some point, we must step away from the comfort of models and begin applying the lessons to our own lives. In that regard, I leave you with the six fundamental lessons of the origins of bargaining power.

Proposal Power. From our very first model, it became clear that the ability to make demands is critical to constructing a profitable deal. While perhaps surprising at first, the logic makes sense. If all you can do is say yes or no, your bargaining partner only needs to give you the bare minimum necessary to satisfy your demands. Unfortunately, this leaves you only slightly (or not at all) better off than if bargaining were impossible. Fortunately, obtaining proposal power is easy in real life. Simply stick up for yourself, be heard, and do not be afraid to make some demands.

Patience. Impatience is weakness in bargaining—the more you need a deal, the worse the deal will ultimately be. This is because the other side can recognize how desperate you are to reach an agreement and withhold most of the benefits from you. However, without any better options, you still must accept the lowball proposal. Again, though, the solution is simple—be proactive and engage in negotiations before you absolutely have to. And if you are desperate, do everything you can to hide that fact.
Moreover, do not forget "the tragedy of bargaining"—that is, those who

need a deal the most receive the smallest share of the benefits. Undoubtedly, this property of bargaining creates wealth inequality. In recent years, political groups have accordingly fought for legal protection for such individuals.

 

While you alone cannot determine such political policies, you can control your own finances to some degree. Financial advisors stress that people need to save a larger portion of their income to save for retirement and protect themselves from unforeseen large expenses. Bargaining theory suggests the same; a better financial position at the start of negotiations leads to even better financial positions after negotiations end.

Outside Options. You are only as good as your next best alternative. If you can only negotiate with the person in front of you, you are not going to get a good deal. On the flip side, leveraging multiple potential partners against each other allows you to send more money your way, especially if you are the only person they can negotiate with. In other words, competition for their business is bad for you but competition for your business is good for you. And under no circumstances should you intentionally limit yourself to a single car dealership.

Credible Threats. We would all like to threaten our rivals into submission and successfully demand that they give us the entire surplus. Unfortunately, most of these threats will only fall on deaf ears—if you do not have incentive to follow through on your threat, your words are worthless. However, your words will have an audience if you can tie your hands and force yourself to take a firmer stand at the bargaining table. The process may limit your future options, but sometimes relinquishing the initiative is the only way to come out on top.

Information. The more you know, the better off you are. We rarely know the other person's bottom line in bargaining, yet that information is critical to driving them down and increasing your share of the surplus. Gathering this information is rarely easy because it often exists only in the brains of the bargainers. Nevertheless, you can sometimes gain an advantage over businesses that sell standardized items—much of that pricing information is freely available online. Use that information to your advantage, tailor your offer accordingly, and profit.
On the other hand, if you know something that the other side does not, be

careful in how you release that information. Sometimes revealing yourself will be helpful to everyone. Sometimes, it will only harm you. And

 

sometimes you will have to pay a steep price for the other side to believe you. Understand that you might have to spend some money to make some money.

Commitment. Despite the existence of deals that are better than breakdown for both parties, bargaining may fail if the actors cannot credibly commit to uphold an agreement over time. The critical lesson of commitment problems is to not fall victim. Again, talk is cheap—words only matter if your opponents will follow through on their promises. If you are concerned you might be trapping yourself in a corner, stop for a moment and think about the situation from the other side's perspective. If they have no incentive to commit to their claims, you should be careful going forward.

Let the bargaining begin!
