Optimal Matching, or How to arrange stable marriages?

Posted in Game Theory on August 16, 2010 by dray

In my previous post, I talked about the optimal strategy for maximizing your probability of finding the perfect match. Now I’ll move on to the problem of how we might construct a social mechanism to find the best matches for people. First we need to decide what “best match” means. I’ll assume that the most efficient set of matches, from a social-optimal point of view, is a stable matching: one in which no individual has an incentive to leave their current pairing for a better match.

Let’s set up a very simple version of the problem: there are N (say 100) men and N women (we could assume unequal numbers, and the argument would still hold), and each man ranks the women from 1 up to N, while each woman ranks the men the same way. We could also allow partial lists: for example, a man might rank only 50 of the women and mark the rest as “No Match”, i.e. he would rather go unmatched than be paired with any of them.

A matching is unstable if there are a man and a woman who are not married to each other but prefer each other to their actual mates, and therefore have an incentive to break their current matches. For example, suppose Brad is paired with Jenn and Mike is paired with Angelina, but Brad prefers Angelina to Jenn and Angelina prefers Brad to Mike; then Brad and Angelina have an incentive to leave their current partners and run off with each other. A stable matching is one that contains no such pair.

Now, is it always possible to find a stable matching given everyone’s preference rankings? The answer is, surprisingly, “Yes”. The mechanism for achieving it is simple, and was first proposed by the mathematicians David Gale and Lloyd Shapley. The iterative matching algorithm (the “deferred acceptance” procedure) proceeds as follows:

  1. In the first stage, each man proposes to his favorite woman. Each woman who receives more than one proposal rejects everyone except her favorite among the men who have proposed to her. However, she does not accept the proposal, but keeps him on a string to allow for the possibility that someone better might come along later.
  2. The men who were rejected now propose to their second choices. Each woman chooses her favorite from the group consisting of the new proposers and the man on her string, if any. She rejects the rest and keeps her (possibly new) favorite in suspense.

If we iterate this process, eventually every woman would have received a proposal. As soon as the last woman gets her proposal, the “courtship” is declared over, and every woman accepts the man on her string.
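
To make the deferred-acceptance procedure concrete, here is a minimal sketch in Python (my own illustration, not part of the original post); the names and toy preference lists are made up, and the code assumes equal numbers and complete preference lists on both sides.

```python
def gale_shapley(proposer_prefs, receiver_prefs):
    """Proposer-optimal stable matching via deferred acceptance.

    Both arguments map each person to a list of the other side's members,
    ordered from most to least preferred (complete lists, equal numbers).
    Returns a dict {receiver: proposer}.
    """
    # Rank lookup so a receiver can compare two suitors quickly.
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in receiver_prefs.items()}
    next_idx = {p: 0 for p in proposer_prefs}   # next entry on p's list to try
    on_string = {}                              # receiver -> suitor kept "on a string"
    free = list(proposer_prefs)                 # proposers not currently held

    while free:
        p = free.pop()
        r = proposer_prefs[p][next_idx[p]]      # p's favourite not yet proposed to
        next_idx[p] += 1
        held = on_string.get(r)
        if held is None:
            on_string[r] = p                    # first proposal: keep him on a string
        elif rank[r][p] < rank[r][held]:        # she prefers the newcomer
            on_string[r] = p
            free.append(held)                   # the old suitor is rejected
        else:
            free.append(p)                      # she keeps her current suitor
    return on_string

# Toy example with made-up preferences:
men = {"Brad": ["Angelina", "Jenn"], "Mike": ["Angelina", "Jenn"]}
women = {"Angelina": ["Brad", "Mike"], "Jenn": ["Brad", "Mike"]}
print(gale_shapley(men, women))    # {'Angelina': 'Brad', 'Jenn': 'Mike'}
# Swapping the two arguments (women propose) gives the woman-optimal stable matching.
```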

The resulting match from this process is provably stable. Suppose Brad and Jenn are not married to each other, but Brad prefers Jenn to his own wife. Since men propose in order of preference, Brad must have proposed to Jenn at some stage before proposing to his eventual wife, and been rejected in favour of someone Jenn liked better. And since a woman only ever trades up, the man Jenn ends up with is someone she likes at least as much. Thus Jenn prefers her husband to Brad, and the pair creates no instability.

It turns out, however, that there can be several stable matchings, and the (male-proposal) Gale-Shapley algorithm finds just one of them. We could equally have constructed the same algorithm with the women proposing, and the stable matching generated when women propose is, in general, different from the one generated when men propose.

In fact, when men propose, the matching is male-optimal: every man is paired with the highest-ranked woman he could feasibly be matched with in any stable matching. This matching is not optimal for the women. When women propose, the resulting matching is female-optimal. Interestingly, in general we cannot find a matching that is simultaneously male- and female-optimal.

Real-world match-making, of course, is a lot more complicated, but formulating it as this simple game has found applications that people care about. One example is the US medical-residency matching program. Students submit preference rankings over hospitals, and the residency programs, in turn, have preferences over the students. Students apply to the programs, and the hospitals hold on to matches, in iterative stages; since each program has several slots, a hospital can hold multiple applicants at once. A version of the algorithm above, developed by Al Roth – a Harvard Economics professor – is currently used for the US residency match.

Match-making algorithms find use in a broad range of economic activities. In fact, markets themselves can be viewed as a large-scale match-making service, matching buyers with sellers (often through double auctions). Here the goal is to design an efficient exchange mechanism so that goods end up with the traders who value them the most. In this setting, however, the matching algorithms described above no longer apply directly, because money enters the picture: side payments let the parties compensate one another. That’s why the algorithm above does not produce stable matches in the presence of dowries, where the parties can offer money to induce someone to propose to, or accept, a non-favorite choice.

Another issue that arises in designing such mechanisms is whether the system can be “gamed”. Going back to our example of stable marriages, can Ms. Machiavelli manipulate her reported preferences in order to secure a higher-ranked match for herself? Can a residency applicant game the system by manipulating his reported ranking? That’s a question I’ll keep for a later post.

Optimal stopping, or When to stop searching for your perfect match?

Posted in Behavioural Economics, Probability and Statistics on April 9, 2010 by dray

This problem inspired me to write again after a long hiatus. It deals with finding the perfect match. Suppose you’re dating in order to find a spouse; equivalently, from a matching point of view, suppose you’re a company interviewing candidates for a position. There are N possible candidates, and you would like to pick the best one among them. You cannot tell who the best candidate is a priori; you would only know after you’ve seen all of them.

The choice here has to be sequential. When you see a candidate, you must say yes or no. If you say yes, you stop searching. If you say no, however, you cannot get that candidate back, even if she turns out to be the best one you see.

So there is an inherent dilemma: if you choose someone too early, they may not be the best match out there, and it would have been better to explore more. On the other hand, the best could have been the “one that got away”. The more you check out new candidates, the greater the likelihood that the best one slipped away.

So the question is, how many people should you date before finding the one to marry? Or how many people should you interview before hiring? If you are willing to see up to N people, when is the optimal time to stop exploring and settle for someone?

Here is a simple example to get started on this: suppose you’re willing to date 100 people before marrying one. They are all ranked from 100 to 1 (worst to best) but you have no knowledge beforehand about the overall ranking of your current date. Furthermore, suppose the dates arrive in random order, irrespective of quality. Can you find a strategy for choosing in order to pick the best one (rank 1) with at least 25% probability?

Here is a simple strategy: don’t pick anyone in the first 50 dates; then pick the first date who is better than all the ones you have seen so far. With probability roughly 1/4, the second-best candidate falls in the first 50 and the best falls in the last 50. Whenever that happens, this strategy is guaranteed to succeed: the first person in the last 50 who beats everyone seen so far must be the best one, since only the best beats the second best. Thus the strategy picks the best with at least 25% probability.

You can prove mathematically that the optimal length of the exploration phase is about N/e (e ≈ 2.718), or roughly the first 37 candidates when N = 100. So the optimal strategy is to use the first 37% for exploration, and then pick the first candidate who comes along who is better than everyone you have seen so far.

How effective is this optimal strategy? It turns out that the probability of finding the best match is about 1/e (roughly 37%) regardless of N, which is the best you can do for this problem. Compare this to choosing the first candidate who comes along: the probability of picking the best one is just 1/N (1% in our example)!
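
If you don’t trust the math, a quick Monte Carlo sketch (my own, not from the post) reproduces these numbers: it simulates the “skip the first k, then take the first candidate better than everything seen so far” rule for a few values of k.

```python
import random

def success_rate(n=100, k=37, trials=20_000):
    """Chance of ending up with the overall best when we skip the first k
    candidates and then take the first one better than everything seen so far."""
    wins = 0
    for _ in range(trials):
        ranks = list(range(1, n + 1))          # 1 = best candidate
        random.shuffle(ranks)                  # candidates arrive in random order
        best_seen = min(ranks[:k]) if k else n + 1
        chosen = None
        for r in ranks[k:]:
            if r < best_seen:                  # better than everyone so far
                chosen = r
                break
        wins += (chosen == 1)
    return wins / trials

print(success_rate(k=0))    # take the very first candidate: about 1%
print(success_rate(k=50))   # skip half: roughly 35% (the simple argument only guarantees 25%)
print(success_rate(k=37))   # skip ~N/e: close to 1/e, about 37%, the optimum
```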

As students of human behaviour, we might ask, do people actually use this strategy when they need to find a suitable match? As Herbert Simon, the ’78 Economics Nobel Laureate, said “The Rational Man of Economics is a Maximizer – he will settle for nothing but the very best”. So for a Rational Econ, using the first 37% for exploration is the best way of finding the No. 1 match. But most Humans are Satisficers – they choose a threshold of quality, and if a choice passes that threshold, they’ll be happy enough (even if their match is a rank 5 or a 15). Many experimental studies in operations research have found that most people stop searching too soon…

Slumdog Wins at the Oscars!

Posted in Behavioural Economics, Probability and Statistics on February 18, 2009 by dray

Spoiler Alert: With the Oscars merely five days away, critics are hotly debating the merits and demerits of the past year’s favorite films: Slumdog Millionaire, Benjamin Button and The Reader. If, instead, you prefer economic theory’s take on the matter, the answer is already (almost) determined.

 

According to the prediction market Intrade, the probability that Slumdog will win the best picture award at the Oscars is roughly 87% (the bid is 86.5 and the ask is 87.4): http://play.intrade.com/jsp/intrade/trading/t_index.jsp?selConID=322796. In comparison, the second most likely winner, Benjamin Button, has only a 9% probability of winning!

 

Prediction markets “knew” of the Obama victory months before the election results. Following the price trend is interesting, especially to see how the market reacted to the news of Sarah Palin’s nomination: 

http://data.intrade.com/graphing/jsp/closingPricesForm.jsp?contractId=409933

 

Prediction markets are known to be highly efficient at aggregating information. Their prediction of a future outcome is more reliable than a voting system such as a poll, which is susceptible to voter biases. For example, in a poll some people might prefer the unique storyline of Benjamin Button, Slumdog’s sass, or the dramatic elements of The Reader. But this just reflects personal opinion, and is susceptible to biases (their own tastes, what their friends think of it, etc.). In a prediction market, you have to put your money where your mouth is. If you think Slumdog is over-hyped, you would want to exploit this bias and bet on the other films. Strategic bettors try to make money by exploiting such biases and spread their bets on the other films (the payoff for a Frost/Nixon win is about $100 for every $1.30 bet). If there is a sizable number of strategic bettors, this pushes down the price of Slumdog. In this way, inefficiencies in the market are removed, and the price adjusts to reveal accurate information. Since over 92,000 trades took place on Slumdog alone, I have strong reason to suspect that strategic bettors are in the market and have exerted a correcting influence on the price.
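
For readers wondering how a quoted price turns into a probability, here is a tiny sketch (my own; it simply assumes contracts are quoted on a 0-100 scale and a winning contract settles at 100, which is how I read the numbers above).

```python
def implied_probability(bid, ask):
    """Mid-market price of a 0-100 contract, read as a probability."""
    return (bid + ask) / 2 / 100

def payout_per_dollar(price):
    """If the contract settles at 100, each dollar staked at `price` returns this much."""
    return 100 / price

print(implied_probability(86.5, 87.4))   # ~0.87: Slumdog at roughly 87%
print(payout_per_dollar(1.3))            # ~77: about $100 back for every $1.30 bet
```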

 

At the end of the day, the Oscar committee can still pick an unlikely winner, based on their own biases. For me, the suspense is unfortunately over. But if anyone wants to wager on this year’s Oscar winner, let me know – you know what my money will be on!

Libertarian Paternalism

Posted in Behavioural Economics, Political Science on February 1, 2009 by dray

Last week I participated in a round-table seminar in London, UK, on “Policy and the neuroscience of human decision-making”. It was a debate amongst neuroscientists, behavioural economists, politicians, policy-makers, and journalists, on how insights from our understanding of human behaviour can be used to guide and shape public policy.

The discussion centered on the theme of Libertarian Paternalism. The term, as oxymoronic as it sounds, was coined by Richard Thaler and Cass Sunstein, and is the topic of their new book, Nudge. Colin Camerer and colleagues introduced essentially the same idea, under the name Asymmetric Paternalism, around the same time.

Staunch libertarians argue for a minimalist government. A strong case is made for the individual’s ability to make the right choices, and for the market system to correct inefficiencies. Paternalists, on the other hand, argue for a top-down system of regulations to ensure a fair distribution of resources.

Both ideologies, however, come with their share of faults. In order for true libertarianism to work, individuals need perfect rationality, information, and willpower. Pure paternalism, on the other hand, ignores the wisdom of the crowds. Laws and regulations, set by the government, are after all made by fallible human beings who cannot necessarily come up with a “magical set of rules” to maximize welfare in society.

Libertarians value the freedom of the individual to choose according to his or her preferences. But “people do not always know what they want”. Libertarians can embrace paternalism by recognizing that in many domains people lack clear, stable or well-ordered preferences. What they choose is a product of framing effects, reference points and default values, rendering the very meaning of “preferences” unclear. In this light, private and public institutions can be authorized to steer or “nudge” people in directions that promote their welfare. Consider a simple case of personal savings, given in Thaler’s book:

“Some employers provided their employees with a novel option: allocate a portion of their future wage increases to savings. Employees who choose this plan are free to opt out at any time. A large number of employees agreed to try the plan, and very few of them opted out later. The result was a significant increase in savings rates (from 3 to 11%).”

This simple default option worked because people are more rational about future earnings, and this kind of future allocation to savings does not decrease their current take-home pay. Other examples come from healthcare, where providing slight incentives, such as a small amount of money, dramatically increases the number of people willing to sign up for medical check-ups.

The role of the government can be seen as moving away from that of a nanny state to that of a gentle guide: provide defaults and incentives to the general public, while those who have obtained information and given due thought to their decisions retain the freedom to make unusual choices. Some leeway should also be given, even when it has short-term costs to the individual, as a means for people to explore and discover their preferences.

The conference was chaired by Matthew Taylor, the chief executive of the RSA (and erstwhile chief strategist to Tony Blair). He is one of the most articulate people I’ve met, and keeps his own blog. Matthew is heading a new initiative on the Social Brain: the British government’s adoption of scientific approaches to public policy formation.

It was an interesting day of thought-provoking talks, including one by Peter Bossaerts, Professor of Finance and Computation and Neural Systems at Caltech, on financial-market regulations guided by insights from artificial asset-market experiments. The suggestions were mainly counter-intuitive, and they show how regulations can be quite arbitrary and unproductive if they are not tested against empirical evidence first.

Peter Bossaerts also made an interesting comment about Risk-management, as practiced by Investment Firms, that I’ll repeat here: Typically, when a wealthy investor shows up with a few million dollars, the firm provides a questionnaire to gauge their level of risk preference. The firm creates a portfolio according to this estimate. But this doesn’t mean that it’s the right thing for the investor. “This is like watching a child stick his hand into electric outlets and inferring that’s what the child wants, and then filling the room with outlets”.

The evening ended with a public keynote at the Royal Society by Colin Camerer, Professor of Behavioural Finance and Economics at Caltech. The talk, titled “Cognitive Neuroscience and Regulatory Paternalism: more or less?”, is available as an online podcast from the RSA website.

Regret

Posted in Behavioural Finance, Neuroeconomics on January 26, 2009 by dray

In return for your hard work at an Investment Bank, your company decides to reward you with a sizable bonus. You’re given two options:

(A) Take a sure $1 million. Or,

(B) Play a lottery where you get $1 million with 89% probability, $5 million with 10% probability, and nothing ($0) with 1% probability.

Given the emotionless genius that you are, you calculate that the expected value of option B is $390,000 higher than option A, and pick option B without any hesitation whatsoever.
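
For the record, the arithmetic behind that $390,000 figure (a two-line check, nothing more):

```python
ev_B = 0.89 * 1_000_000 + 0.10 * 5_000_000 + 0.01 * 0   # expected value of B: 1,390,000
ev_A = 1_000_000                                          # the sure thing
print(ev_B - ev_A)                                        # 390000.0
```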

Most people, however (emotionless geniuses are still a minority in the population), would pick option A if real money were at stake. Strictly speaking, preferring A here is compatible with ordinary risk aversion; the Allais paradox emerges when the same people, offered a companion choice between an 11% chance of $1 million and a 10% chance of $5 million, take the 10% chance of $5 million. That combined pattern of choices is very puzzling from an economic point of view, as it violates expected utility maximization, one of the central tenets of rational decision-making.

 

Regret avoidance is a plausible explanation for such behaviour: by picking option B, you are exposed to the unlikely (1%) event where you get nothing. That would lead to a deep feeling of regret: “If only I had picked A, I could have walked away with a sure $1 million.” Regret is indeed a strong emotion (more songs have been written about the road not taken than about love), and people will give up expected value in order not to experience it in the future.

This can account for other seemingly irrational behaviour, such as buying lotto tickets or avoiding screening for a health disorder. Even though people are aware that the expected value of a lotto ticket is negative, they keep buying them: if they skipped a week and their lucky numbers came up, they would feel terrible regret (and the lotto company plasters the happy faces of the winners everywhere). In fact, a regular lotto player in Liverpool committed suicide in 1996 after he learned that he had missed out on a £2 million prize: his lucky combination (14, 17, 22, 24, 42, 47) came up, and he had not renewed his ticket for that week!

This is also an issue in healthcare. People avoid going for a check-up, even when they know that the information can only help them, since finding out that they have a disease would lead to the experience of regret afterwards.

So if regret makes us inefficient decision-makers, why did this emotion evolve in humans? Would we be better off if we could somehow remove this emotion? 


No, because regret is a very useful component of learning. Learning through reward and punishment developed quite early on the evolutionary path; rats and birds learn through pleasure and pain as well, but they learn slowly. Regret, on the other hand, is learning after the fact: comparing “what is” with “what might have been” (counterfactual learning). Unlike disappointment, which is felt when a negative outcome occurs that had nothing to do with our decision, regret comes with a strong feeling of responsibility. The counterfactual comparison provides more signal than reward and punishment alone, and learning is much faster.
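
To see why the counterfactual signal speeds learning up, here is a toy sketch (entirely my own construction, not taken from any of the studies described below): two learners repeatedly choose between two slot machines, and one of them also gets to update on the payoff of the arm it did not play.

```python
import random

def bandit_run(counterfactual, rounds=200, seed=0):
    """Fraction of rounds on which the better arm (arm 1) was chosen."""
    rng = random.Random(seed)
    true_means = [0.4, 0.6]           # arm 1 pays off more often (assumed values)
    est = [0.5, 0.5]                  # current value estimates
    lr, eps, correct = 0.1, 0.1, 0
    for _ in range(rounds):
        a = rng.randrange(2) if rng.random() < eps else est.index(max(est))
        payoffs = [1.0 if rng.random() < m else 0.0 for m in true_means]
        est[a] += lr * (payoffs[a] - est[a])        # ordinary reward/punishment learning
        if counterfactual:                          # also learn from "what might have been"
            est[1 - a] += lr * (payoffs[1 - a] - est[1 - a])
        correct += (a == 1)
    return correct / rounds

def average(counterfactual, runs=500):
    return sum(bandit_run(counterfactual, seed=s) for s in range(runs)) / runs

print(average(False))   # reward and punishment only: finds the better arm more slowly
print(average(True))    # with the counterfactual signal: typically noticeably higher
```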

If perfect knowledge of outcomes were available, a genius would be able to compute the long-term value of every action and choose the best one. However, most decisions are made under incomplete information, and knowledge of the better outcome arrives only after the event. So even geniuses need to feel regret in order to learn!


This was demonstrated in a recent neuroeconomics study by Lohrenz and Montague, published in PNAS in May 2007. Twenty subjects played a stock-market game inside an MRI scanner. The initial price of the stock was $100, and players could bet a percentage (0 to 100%) of their earnings in every round; they kept all their earnings at the end. Unbeknownst to the players, the “simulated” price movements were actually taken from famous episodes in real stock markets, like the great crashes of the ’80s and ’90s. Players experienced regret when they bet on the stock and its price went down, or when they didn’t bet and the price went up. These regret signals correlated strongly with neural activity in the ventral caudate (roughly where a horizontal line through your eyes and one through your ears would intersect).

In fact, the activity in the ventral caudate strongly predicted how players learned from these errors and how they acted in later stages of the game. A tragic result, however, was that when the market was booming, as in a bubble, there were no regret signals available: players received only positive reinforcement for their actions, and kept reinvesting their earnings (sometimes 100% of them) back into the stock. When the bubble finally burst, these players lost their earnings (and, of course, received a big regret signal from their brains).

The beneficial role of regret in learning was also shown by Camille et al. in Science in May 2004. Normal subjects and patients with damage to the orbitofrontal cortex played a gambling task. Some gambles offered higher amounts but paid off with lower probability, whereas others had lower payoffs but paid off more consistently. Players learned whether their chosen gamble had won, and occasionally received feedback on how the alternative gamble would have paid off, had they chosen it.

Normal players reported satisfaction and disappointment when they won and lost, but also felt regret (as measured by skin conductance tests) when an alternate gamble paid off more. Through counterfactual learning, they were able to pick the most advantageous gambles.

Patients with damage to the orbitofrontal cortex area show poor decision-making skills in their social and personal lives. In this game, the OFC patients felt satisfaction or disappointment when their gambles did or did not pay off. But they never felt regret when they learned about the outcomes of the other gambles! Moreover, they did not learn to avoid the gambles which had a low probability of paying off. 


So, only omniscient gods, operating under complete information, can do without regret. A genius has to take actions that maximize expected utility, and embrace the possible regret afterwards!

Particles that Think

Posted in Behavioural Finance on January 11, 2009 by dray

A recent article in Nature by Jean-Philippe Bouchaud, titled “Economics Needs a Scientific Revolution”, grabbed my attention. Prof. Bouchaud, who heads research at a hedge fund and teaches physics in Paris, blames the recent (and ongoing) financial crisis on economists’ failure to revise their theories in light of empirical evidence.

He argues for a physicist’s approach to economics: observe and probe (human) nature, and modify our theories to accord with the empirical evidence. He states that “reliance on models based on incorrect axioms has clear and large effects”, and goes on to question the assumptions of rationality and profit maximization (covered in previous posts on this blog).

Another simplifying assumption often used in economics is that prices deviate from their mean according to a Normal (Gaussian) distribution. This was highlighted in a recent bestseller by Nassim Taleb titled “Fooled by Randomness“. A lot of variability in nature is well described by a Normal distribution: the mean height of an adult male is around 170 cm, a height of 185 cm is seen in less than 10% of adults, and you will never find someone who is 1000 cm tall. But the Normal distribution is not so good at describing variability in social phenomena, such as wealth or number of relationships. Although the mean income might be $35,000 with a standard deviation of $25,000, there are many people with incomes of $500,000, and some with $1 billion. That’s like people being 170 cm tall on average and occasionally finding someone who is 10 m or even 1 km tall!

This type of wrong assumption led to the collapse of Long-Term Capital Management, the biggest hedge fund of the 1990s, and required a Fed-organized bailout of about $4 billion (minuscule by today’s numbers). The models created by its team of economists, which included two Nobel Prize winners and was thus considered “too smart to fail“, never predicted the kinds of price fluctuations that were seen in the market; according to those models, such craziness should have occurred only once in the lifetime of two universes! Many social phenomena, including asset prices, are better modeled with the fractal and power-law distributions that physicists use regularly. Models that incorporate power laws predict large price fluctuations far more often, and prescribe more conservative risk-management strategies.
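
To put rough numbers on the fat-tails point, here is a small sketch (my own, with made-up parameters, and admittedly not an apples-to-apples comparison) contrasting how quickly the tails of a Normal and a power-law (Pareto) distribution decay:

```python
import math

def normal_tail(k):
    """P(X > mean + k standard deviations) for a Normal distribution."""
    return 0.5 * math.erfc(k / math.sqrt(2))

def pareto_tail(x, x_min=1.0, alpha=2.0):
    """P(X > x) for a Pareto distribution with scale x_min and exponent alpha (assumed)."""
    return (x_min / x) ** alpha

for k in (3, 5, 10):
    print(k, normal_tail(k), pareto_tail(k))
# Normal: ~1.3e-3, ~2.9e-7, ~7.6e-24 -- ten-sigma moves essentially never happen.
# Pareto: ~0.11,   ~0.04,   ~0.01    -- large deviations remain routine.
```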

But before we all hail Physicists and let them take charge of the Fed, we should realize that modeling economic phenomena is far more complex than modeling many natural phenomena. Imagine how complex nature would be if particles could also think!

Perhaps a promising direction is to marry the complex models of decision-making from robotics and computational neuroscience with notions from game theory, to account for some of the phenomena observed in financial markets. We could then simulate a market of interacting agents equipped with different cognitive abilities, utilities, and emotional biases (such as risk aversion, loss aversion, and regret avoidance). This could account for phenomena that still lack reasonable explanations in economic theory, such as bubbles, herding, and crashes. Hedge funds could then better predict how markets will react to their strategies, and those with less sinister and more paternalistic aspirations could simulate the effects of policies and regulations designed to curb bubbles and collapses.

Finally, a silly joke at the expense of (classical) Economists:

Q. How many economists does it take to change a lightbulb?

A. None. If the lightbulb needed changing, then the market forces should have already changed it!

Beauty Contests and Cognitive Limitations

Posted in Behavioural Economics, Behavioural Finance, Game Theory on January 7, 2009 by dray

John Maynard Keynes, the most brilliant economist after Adam Smith (and a very mysterious character), likened the stock market to a newspaper beauty contest. Keynes was also a highly successful investor and married a beauty queen (perhaps the analogy was inspired by his personal life). To quote Keynes directly:

   “professional investment may be likened to those newspaper contests in which the competitors have to pick out the six prettiest faces from a hundred photographs, the prize being awarded to the competitor whose choice most nearly corresponds to the average preferences of the competitors as a whole; so that each competitor has to pick, not those faces which he himself finds prettiest, but those which he thinks likeliest to catch the fancy of the other competitors, all of whom are looking at the problem from the same point of view.

It is not a case of choosing those which, to the best of one’s judgement, are really the prettiest, nor even those which average opinion genuinely thinks the prettiest. We have reached the third degree where we devote our intelligences to anticipating what average opinion expects average opinion to be. And there are some, I believe, who practice the fourth, fifth and higher degrees.”

The Beauty Contest game, described below, creates a simple abstraction of this phenomenon:

The Beauty-Contest game: every player is asked to pick a (whole) number between 0 and 100 and write it down on paper. The prize goes to the one who picks the number closest to 2/3 of the average of the numbers picked by everyone.

So what number would a game theorist pick? He or she would reason that a naive approach is to pick a number between 0 and 100 at random (say, based on the last two digits of one’s social security number). The average under this naive approach would be 50, so reasonable players ought to pick at most 2/3 of that, i.e. 33. Reasoning one step further, one ought to choose 2/3 of 33, i.e. 22. Carrying this iterated reasoning all the way, the rational answer is 0 (the only Nash equilibrium).
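
The iterated reasoning is easy to mechanize; a tiny sketch (my own):

```python
guess = 50.0              # level-0 players pick at random, so the average is about 50
for level in range(1, 11):
    guess *= 2 / 3        # best reply to players who reason one level less
    print(level, round(guess, 1))
# 1 -> 33.3, 2 -> 22.2, 3 -> 14.8, 4 -> 9.9, ... and the limit of this process is 0,
# the Nash equilibrium.
```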

The Nash equilibrium solution predicts that all players will pick 0 (and share the prize equally). But what happens when this game is played with human subjects? Most people choose numbers between 22 and 33 (they are doing just one or two steps of iterated thinking). The game has been played across a wide cross-section of the population (CEOs, investment bankers, educators, PhDs in engineering), and numbers close to 0 are rarely picked; even reasoning up to 3-4 levels is rare. While these experimental results should not seem too surprising, they contain a prescription for the emotionless genius: do not engage in too many steps of strategic thinking. Clearly, acting according to game theory (and choosing 0) would not win her the prize in the beauty contest game.

In everyday social interactions, people do engage in strategic thinking, but not to arbitrarily high levels. A 0-level thinker (a dud) interprets all actions as random. A 1-level thinker (naive) takes everything at face value: “she gave me a gift because she likes me”. A 2-level thinker reasons “she gave me a gift so that I’d think she likes me”. A 5-level thinker reasons “she gave me a gift so that she could think that I’d think that she’d think that I’d think that she likes me”; but most people do not act with such in-depth reasoning. In fact, thinking beyond 2 or 3 levels is over-mentalizing, ascribing intentions to people that they may not have (strategic people do have a tendency to be more cynical!).

How can we figure out the “strategic level” of a partner or opponent? One can probe them to find out, but that is hard, since a strategic opponent knows that you are trying to figure out their level. An opponent with a higher strategic level will already have figured out your strategy, and may intentionally act naive in order not to reveal it. Thus the actions of a higher-level player remain mysterious: “one cannot figure out the mind of god”.

Some of the lowest numbers in the beauty contest game, roughly 9 to 14 (corresponding to 3 to 4 steps) were picked by Caltech students (who get the highest average scores on standardized quantitative tests among all US universities). So IQ could be a strong determinant of strategic level, although I think this is neither necessary nor sufficient: there are many people with high levels of intelligence who might not be adept at strategic thinking, and vice versa. It would be great to create a verbal version of the Beauty contest game to take away some of the emphasis on quantitative reasoning (although most people should be able to take averages and fractions). An aim of cognitive neuroscience is to identify the regions of the brain that are involved in strategic thinking; then we can have a test for strategic thinking ability: an increased activity in those regions would correlate with more strategic ability. As far as I know, this has not yet been done successfully.

Social interaction involves figuring out dyadic relationships which makes the task even more difficult: “I think that Tom thinks that Jane thinks that Rob would like to buy an iPhone since Tom thinks that Jane thinks that Rob thinks that buying an iPhone would impress Mary”. That is the basis of the Social Brain hypothesis: the reason that humans developed such a big brain was to handle the complex computations that are required for living in a society! 

I’ll discuss my opinions on the consequences of over-mentalizing (or doing too much strategic thinking) in financial decision-making, and its effects on stock markets in a later post. 

Those of you who were misled to this post by the title can see the right video here: http://in.youtube.com/watch?v=WALIARHHLII

Ultimatums and Fairness

Posted in Behavioural Economics on January 3, 2009 by dray

I’ll describe an experiment whose results should seem obvious to most people, but have kept Economists awake at night (android sheep don’t help).

The Ultimatum game: human subjects (usually students taking Economics or Social science courses) are brought into a lab and divided into anonymous pairs. One is assigned the role of a Proposer and the other, a Responder. The Proposer is given x dollars (x is usually 10 to 30, high enough to matter to most students). He or she is asked to make an offer, c dollars, to the Responder. If the Responder accepts, the Proposer keeps x-c and the Responder gets c dollars. If the Responder rejects, both get 0.

If you were the Proposer, and were asked to split 10 dollars, how much would you offer to the Responder?

Although the experiment is simple in design, it is an abstraction of many real-world economic situations, especially for Bargaining and Negotiation. Suppose an item costs 10 dollars for a shopowner, and you offer 20 dollars, although you are willing to pay up to 30 dollars for it. If the shopowner accepts, you both gain 10 dollars. If he rejects, you both get nothing. Or, you are in the job market and estimate that your earning potential is $100,000 a year (you’re a student in a top Economics dept). The employer offers you $80,000 a year although you know that your services would gain him at least $300,000 a year. Will you take it? Most real-world settings allow some haggling, but the Ultimatum game is one-shot: take it or leave it! (repeated versions of the game are more realistic).

How much do people offer in these experiments? Unsurprisingly, most Proposers offer between 40 and 50% of the pie, and most offers below 20% get rejected.

How much will an economist offer? Our Proposer will take the perspective of the Responder. A rational, selfish Responder should accept any amount, even 1 cent, since something is better than nothing. So a rational Proposer will offer the lowest amount possible: 1 cent. Thus our economist friend, unaware of these empirical results, will always leave with 0 dollars (unless he’s playing against another economist).
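
To see why the one-cent offer backfires against real people, here is a toy model (entirely my own; the distribution of rejection thresholds is an assumption, loosely calibrated to the “offers below 20% get rejected” pattern above) showing the Proposer’s expected take at different offers:

```python
import random

random.seed(1)
# Hypothetical Responders: each rejects any offer below a personal threshold
# (expressed as a share of the pie). Normal(0.25, 0.10) thresholds are an assumption, not data.
thresholds = [max(0.0, random.gauss(0.25, 0.10)) for _ in range(10_000)]

def expected_take(offer, pie=10.0):
    """Proposer's expected earnings from a given dollar offer."""
    accept_rate = sum(t <= offer / pie for t in thresholds) / len(thresholds)
    return accept_rate * (pie - offer)

for offer in (0.01, 1, 2, 3, 4, 5):
    print(f"offer ${offer}: expected take ${expected_take(offer):.2f}")
# The one-cent offer is almost always rejected; offers of $3-4 earn far more.
```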

Clearly, the rational analysis fails in this simple case. However, such behaviour makes sense if you repeatedly interact with the same person. By rejecting low offers, the Responder signals that what he or she has to offer is quite valuable; by offering a large share, the Proposer signals that he is generous and worth dealing with in the future. Even though the game is one-shot, players act (subconsciously) to build a reputation for themselves: in effect, by rejecting the low offer, the Responder says “Take your miserable offer and shove it!”

So what should an emotionless genius do to negotiate effectively? Learn the appropriate social “Fairness” norm (through acute observation, probing, trial-and-error), and work around that norm! Although even splits are considered “Fair”, chances are an offer of something slightly less than the fair price would still be accepted. The suggested tip is 18% in California, but the waiter won’t complain if you leave 15% (unless you’re going to visit the restaurant again, in which case you should leave a 20% or greater tip, for better service next time – the cost incurred to earn the reputation of being a generous customer will repay itself through good service in subsequent visits).

Fairness norms vary across cultures. A very extensive cross-cultural study (over 15 different cultures across the world) was conducted by Boyd along with Camerer and Fehr (published in the American Economic Review, May 2001). In European and American cultures, the fair point is usually near the 50-50 split. Surprisingly, the fairness norms in many tribal societies (such as the Quichua of Ecuador, the Machiguenga of Peru and the Hadza of Tanzania) were closer to the rational analysis: mean offers were around 20%, and offers as low as 10% were also accepted!

This raises an anthropological question about how these Fairness norms evolved. Why are unfair offers accepted in these tribal societies? Is it because a history of hardship and desperation makes people accept an unfair deal? Does a “Finder’s Keepers” norm exist here – the Proposer has a right to the money since he was “chosen” to possess it?

Can western corporations exploit these people knowing that bad deals will be accepted by them? In terms of globalization policy, it is worth asking whether we should impose Western cultural norms of fair practice when corporations set up businesses in Asia, South America and Africa…

Human Beings and Emotionless Geniuses

Posted in Behavioural Economics, Game Theory on January 2, 2009 by dray

In terms of a “Power-to-Intellectual Ability” ratio, Economists are near the top of the ranking (arguably a few army generals, presidents and dictators do better, while physicists and philosophers happily occupy the bottom). The pearls of wisdom that fall from their mouths trickle down to the rest of society. Economists dictate macroeconomic policies, set interest rates, and buy and sell shares in banks, all of which ultimately affects everyone in society except the modern-day loincloth-wearing hermits and cavemen (they chose to be free of the shackles of the economy and, in turn, have little or no influence over modern society).

Game theory forms the foundation of economics. It is based on the assumption that all Players (life is just a game, after all) in the economy are perfectly rational and selfish. These players see the consequences of their actions far into the future, weigh their actions by monetary value, and always choose the one that maximizes their wealth. Players anticipate the influence of their actions on other players, who in turn influence them with their own actions. But since every player is perfectly rational and selfish, a set of strategies can be calculated from which no player has an incentive to deviate (a Nash equilibrium).

Game theory is designed for emotionless geniuses. So it came as a big shock to Economists when experiments started suggesting that many human beings deviate from rationality and selfishness (fortunately these experiments started gaining momentum only in the 80’s, when a body of economics had already been developed). Could it be that some of the observed anomalies in the financial markets and ineffectiveness of economic policies were due to “irrational” human beings? This picture of Greenspan reveals the sentiment of many economists who finally conceded that humans left to their own devices in the free market were prone to folly: http://www.nytimes.com/2008/10/24/business/economy/24panel.html.

So what should Game theorists do? Well, there seem to be 3 alternatives:

a) Use all the Game theory developed so far normatively, when emotionless geniuses interact with each other.

b) Build descriptive models that describe and predict how human beings act in different situations. But first we will need to conduct experiments and probe people’s reactions across a range of settings.

c) Create a prescriptive theory to guide emotionless geniuses in their interactions with human beings. This requires learning how normal humans behave; observation and experimentation are key tools in this endeavour. That knowledge can then be used to deal effectively with sub-rational, unselfish, emotional behaviours.

This blog will explore some of the experiments and theories that have emerged from the field of Experimental and Behavioural Economics and recently from Neuropsychological studies, and infer guiding principles for the emotionless genius in his or her struggle for economic equilibrium.