People who care about Game Theory tend to spend a lot of time thinking about the Prisoner’s Dilemma.  It’s a pretty simple game with a couple of neat theoretical properties.

So here’s the basic setup.  There are two players who must both simultaneously decide to play one of two moves.  They can each either choose to “Defect” against the other player or “Cooperate” with him.  The matrix of results looks something like this:

                      Player 1 Cooperates   Player 1 Defects
 Player 2 Cooperates       (3, 3)               (5, 0)
 Player 2 Defects          (0, 5)               (1, 1)

The way to read this table is that, if both players cooperate, they each get 3 points.  If Player 1 defects and Player 2 cooperates, Player 1 gets 5 points and Player 2 gets 0.  It’s symmetric, so the same thing happens in reverse if Player 2 defects into Player 1’s cooperation.  And if both sides defect, they each get 1 consolation point.
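
Throughout this post I'll drop in small Python sketches to make the mechanics concrete.  They're illustrative only: every name and convention in them is my own, not anything standard.  Here's a minimal encoding of the table above, with scores ordered (Player 1, Player 2):

    # Payoff table for the one-shot game, keyed by (Player 1 move, Player 2 move).
    # "C" = Cooperate, "D" = Defect.
    PAYOFFS = {
        ("C", "C"): (3, 3),  # mutual cooperation
        ("D", "C"): (5, 0),  # Player 1 defects into Player 2's cooperation
        ("C", "D"): (0, 5),  # Player 2 defects into Player 1's cooperation
        ("D", "D"): (1, 1),  # mutual defection
    }

    def play(move_1, move_2):
        """Score a single one-shot match."""
        return PAYOFFS[(move_1, move_2)]

    print(play("D", "C"))  # (5, 0)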

The reason why this is called the “Prisoner’s Dilemma” is that it’s typically described as a story: the cops have captured two criminals, and they’re offering each one the chance to turn State’s evidence against his former compatriot.  If they “cooperate” with their partner by staying quiet, they both get off on a minor charge.  If one turns against the other, he goes free and the other guy gets the whole sentence.  And if they both turn, they both get a long but slightly reduced sentence.  In theory, since they’re being held separately, they don’t have the chance to negotiate beforehand or afterward: this is a one-shot deal.  Knowing this, what do you do?

Well, the obvious way to start going about it is to look at it from one player’s perspective.  You’re sitting in jail with a “Player 1” nametag on your orange jumpsuit.  We’re assuming that there’s nothing you can do to make the other player change his mind for now, so it makes sense to assume his action is independent of your own.  So, in the world where he cooperates with you, what should you do?  That means that we’re on the top row of the matrix.  You can get 3 points if you cooperate and 5 if you defect.  5 > 3, so defect is the obvious play, right?  Now, do the same thing, but assume he’s going to defect this time.  You’re on the bottom row, which means you can get 0 points for cooperating or 1 point for defecting.  1 > 0, so defect is again the obvious play.
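
That whole argument is mechanical enough to brute-force.  A quick sketch, repeating the payoff table so it stands alone:

    # Check that "D" strictly dominates "C" for Player 1: it scores higher
    # no matter which move Player 2 picks.
    PAYOFFS = {("C", "C"): (3, 3), ("D", "C"): (5, 0),
               ("C", "D"): (0, 5), ("D", "D"): (1, 1)}

    for their_move in ("C", "D"):
        c_score = PAYOFFS[("C", their_move)][0]
        d_score = PAYOFFS[("D", their_move)][0]
        print(their_move, d_score > c_score)  # True both times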

Now, since the game is symmetric, presumably the other guy is going to run the same logic and defect.  You each get 1 point.  The funny thing about this is that there’s an obviously superior option on the board.  If you two were going to do the symmetric thing anyway, you could have just symmetrically cooperated and come away with 3 points each.  And 3 > 1.  This is compelling because the C-C equilibrium is so unstable (any defection by either party breaks it) compared to the D-D equilibrium (where each party would be individually worse off by switching to cooperation: this is the Nash Equilibrium), and yet it’s the more lucrative outcome overall.

So people have come up with a lot of variants to the game.  A popular one is the Iterated Prisoner’s Dilemma, where you play the game for many rounds against the same opponent.  In this case, the idea is that you can punish a defection with subsequent defections, and that threat can be sufficient to establish long-term cooperation.  It actually turns out that the two-party Iterated Prisoner’s Dilemma game is a special case of the Ultimatum Game[1], which means that each player can theoretically set the outcome for the other one by adopting a particular strategy.  A popular winning strategy is called “Tit for Tat”, which cooperates on the first play and then does whatever the opponent did on the previous turn.  Functionally, this strategy caps the opponent’s lead at no more than 5 points, since they can’t get you to cooperate again without giving back the 5 points they got by defecting in the first place, but your actual score is completely determined by the other player’s strategy.  Interestingly, though, you can’t ever “win” a match playing Tit-for-Tat: since you never defect first, you can never pull ahead of your opponent.  The best you can do is tie.
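
Here’s a sketch of Tit-for-Tat and a simple iterated match.  The history-passing convention is an assumption of mine, chosen to keep the strategies stateless:

    PAYOFFS = {("C", "C"): (3, 3), ("D", "C"): (5, 0),
               ("C", "D"): (0, 5), ("D", "D"): (1, 1)}

    def tit_for_tat(my_history, their_history):
        """Cooperate first, then copy the opponent's previous move."""
        return "C" if not their_history else their_history[-1]

    def always_defect(my_history, their_history):
        return "D"

    def iterated_match(strategy_1, strategy_2, rounds=10):
        """Play an iterated match; return the two total scores."""
        h1, h2, s1, s2 = [], [], 0, 0
        for _ in range(rounds):
            m1, m2 = strategy_1(h1, h2), strategy_2(h2, h1)
            p1, p2 = PAYOFFS[(m1, m2)]
            s1, s2 = s1 + p1, s2 + p2
            h1.append(m1)
            h2.append(m2)
        return s1, s2

    # Tit-for-Tat gives up 5 points in round one and ties every round after.
    print(iterated_match(tit_for_tat, always_defect))  # (9, 14)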

It turns out that a strategy’s success rate in an Iterated Prisoner’s Dilemma tournament (where each entry plays a bunch of rounds against different competitors and is ranked by the total points scored over the event) depends heavily on the strategies that are popular in the environment.  If lots of “Always Defect” bots are entered into the game, then your “Always Cooperate” entry is going to do really badly.  But if there are a lot of “Tit for Tat” players in the pool, “Always Cooperate” scores very well.  Competitive multiplayer gamers, playing games like Starcraft or League of Legends, are familiar with this idea of the “metagame” shaping your strategy.  If there isn’t one dominant strategy and the game is at all interesting, then the key to victory is to play the counter to what’s popular right now.
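
A tournament along those lines is just a round robin over the entry pool.  A sketch with three stock entries, using the same conventions as above:

    from itertools import combinations

    PAYOFFS = {("C", "C"): (3, 3), ("D", "C"): (5, 0),
               ("C", "D"): (0, 5), ("D", "D"): (1, 1)}

    def tit_for_tat(mine, theirs):
        return "C" if not theirs else theirs[-1]

    def always_cooperate(mine, theirs):
        return "C"

    def always_defect(mine, theirs):
        return "D"

    def tournament(entries, rounds=10):
        """Each entry plays every other entry; rank by total points."""
        totals = [0] * len(entries)
        for i, j in combinations(range(len(entries)), 2):
            h_i, h_j = [], []
            for _ in range(rounds):
                m_i, m_j = entries[i](h_i, h_j), entries[j](h_j, h_i)
                p_i, p_j = PAYOFFS[(m_i, m_j)]
                totals[i] += p_i
                totals[j] += p_j
                h_i.append(m_i)
                h_j.append(m_j)
        return totals

    # In this tiny pool the defector still comes out on top; shift the mix
    # toward Tit-for-Tat and the cooperative entries overtake it.
    print(tournament([always_defect, always_cooperate, tit_for_tat]))  # [64, 30, 39]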

So a while back some people came up with the idea of adding genetics to a Prisoner’s Dilemma tournament.  What happens if the successful strategies (by overall points scored) replicate more in their environment, making up more of the next generation?  Then the metagame is shaped directly by the kinds of strategies that were successful in previous rounds.  Winners over time will be not only strategies that compete well against arbitrary opponents, but ones that also do well against opponents optimized for success.  The ideal competitor will be one that scores well against itself, since if it doesn’t, its success in one generation will be limited in the next generation, when more of the opponents are made up of that same previously successful strategy.

Doing this, people found that Tit-for-Tat did pretty well.  If there was only one Tit-for-Tat instance in a pool of defectors, then the Tit-for-Tat would get choked out, because it would come out 5 points behind in every match it played.  But if there were just a couple other cooperators, then Tit-for-Tat could latch onto them and score a whole lot of points, which meant there would be more Tit-for-Tats in the next generation, until eventually all of the defectors got crowded out.  The metagame entirely changes from “Defect all the time” to “Cooperate all the time”.
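
The genetic step can be sketched as fitness-proportional resampling (the scores below are made up for illustration):

    import random

    def next_generation(population, scores):
        """Resample strategy names in proportion to tournament score, so
        successful strategies occupy more slots in the next generation."""
        return random.choices(population, weights=scores, k=len(population))

    pop = ["AllD", "AllD", "TFT", "TFT"]
    print(next_generation(pop, [40, 40, 55, 55]))  # e.g. ['TFT', 'TFT', 'AllD', 'TFT']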

OK, that’s a lot of background.  Now we’re ready to get into my idea.  Allow me to introduce you to the Society Game.

  1. Set up two Prisoner’s Dilemma tournaments, each with 20 participants.  Call each tournament a “society”.  Note that this is not an Iterated Prisoner’s Dilemma tournament.  Each player plays each other player exactly once.  For simplicity, we’ll say that there is no possible communication or memory, so there is no reputation effect or ability to punish for past defection.  This is the well-known one-shot game.
  2. After each tournament, rank the players in order, 1-20, based upon the score they individually earned in the tournament.
  3. Add up all of the points scored in each tournament as a whole.  This is the “society” score for each tournament.
  4. In the second round of the competition, the society scores are compared.  The society with the higher number is deemed the winner.  If there is a tie in this number, skip to step #8.
  5. Set the score for each player in the losing society to 0, regardless of what they might have scored in the first round.
  6. Divide the society score for the losing society into 20 tranches in sharply descending chunks, like a poker tournament payout.[2]  Each player in the winning society gets a tranche based on the place he came in.
  7. If there are ties, add up the points that would have gone to each spot in the tie and then divide equally.  So, if there’s a 3-way tie for 5th, take the rewards for 5th, 6th, and 7th, add them up, and divide by 3.
  8. Randomly fill the 40 slots (from both the societies) with new copies of each participant strategy, but proportionally based on the total fractions of points earned.  Round up to 1 if the fraction is less than 1/40th, to ensure that there is at least one copy of every strategy that scored some points.  Otherwise, do the proportional best-fit rounding.
  9. Play the game again.  (A code sketch of one full generation follows this list.)
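
Here’s a condensed sketch of one full generation under these rules.  The tranche percentages come from footnote [2]; the refill step uses simple rounding with a floor of one copy rather than true best-fit rounding, and the little strategy registry is a stand-in of my own:

    import random
    from itertools import combinations

    PAYOFFS = {("C", "C"): (3, 3), ("D", "C"): (5, 0),
               ("C", "D"): (0, 5), ("D", "D"): (1, 1)}

    # One-shot strategies: a name mapped to a zero-memory move function.
    STRATEGIES = {"AllC": lambda: "C", "AllD": lambda: "D"}

    # Tranche percentages from footnote [2], one per finishing place.
    TRANCHES = ([27.5, 17.5, 11.5, 8.5, 7.25, 5.75, 4.5, 3, 2, 1.5]
                + [1.2] * 5 + [1] * 5)

    def society_round(names):
        """Steps 1-3: one-shot round robin; returns per-player scores."""
        scores = [0] * len(names)
        for i, j in combinations(range(len(names)), 2):
            p_i, p_j = PAYOFFS[(STRATEGIES[names[i]](), STRATEGIES[names[j]]())]
            scores[i] += p_i
            scores[j] += p_j
        return scores

    def payouts(scores, pot):
        """Steps 6-7: pay tranches of the pot by rank, splitting ties evenly."""
        out = [0.0] * len(scores)
        order = sorted(range(len(scores)), key=lambda i: -scores[i])
        place = 0
        while place < len(order):
            tied = [i for i in order if scores[i] == scores[order[place]]]
            share = sum(TRANCHES[place:place + len(tied)]) / len(tied)
            for i in tied:
                out[i] = pot * share / 100
            place += len(tied)
        return out

    def generation(soc_a, soc_b):
        """Steps 1-8: play both societies, settle the pot, refill 40 slots."""
        sides = [(soc_a, society_round(soc_a)), (soc_b, society_round(soc_b))]
        if sum(sides[0][1]) != sum(sides[1][1]):        # step 4: a tie skips to 8
            sides.sort(key=lambda side: -sum(side[1]))  # winner first
            (win, win_sc), (lose, lose_sc) = sides
            prizes = payouts(win_sc, sum(lose_sc))      # steps 6-7
            sides = [(win, [s + p for s, p in zip(win_sc, prizes)]),
                     (lose, [0.0] * len(lose))]         # step 5
        totals = {}                                     # step 8
        for names, points in sides:
            for name, pts in zip(names, points):
                totals[name] = totals.get(name, 0.0) + pts
        grand = sum(totals.values())
        pool = []
        for name, pts in totals.items():
            if pts > 0:                                 # floor of one copy
                pool += [name] * max(1, round(40 * pts / grand))
        while len(pool) < 40:                           # pad any rounding shortfall
            pool.append(random.choices(list(totals), weights=totals.values())[0])
        random.shuffle(pool)
        return pool[:20], pool[20:40]

    soc_a = ["AllC"] * 19 + ["AllD"]       # one defector among cooperators
    soc_b = ["AllC"] * 10 + ["AllD"] * 10
    print(generation(soc_a, soc_b))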

What makes this game so interesting to me is that in the first round, in the Prisoner’s Dilemma part of the game, the obviously dominant strategy is to defect.  That was already true, since this is the one-shot version of the game.  But it’s doubly true now.  Not only do you score more points, but relative ranking within a society is super-important for the second round payoff, and the only way to do better than a rival and move up the ranking list is to defect.

However, for the society as a whole, cooperation is the pure dominant strategy.  In an exact mirror of the standard game, it’s better for the society that you cooperate, regardless of what your opponent does.  The team gets 6 points if you both cooperate, 5 if exactly one of you does, and only 2 if you both defect.  And it’s really important to not lose in the society round, since if your team loses, you lose everything.  Being #1 in the losing society is worse than being #20 in the winning society.  Or, put more poetically, it’s way better to serve in Heaven than rule in Hell.

So there’s a tragedy of the commons sort of action happening here.  Each player really, really wants to defect.  But in defecting, they open up the possibility that they could lose everything.  And the more people defect, the worse it gets.  Even if you extended the game to allow coordination to punish a defector, would it even be worth it?  Hard to say.

The other fascinating feature of this game is that the genetic transmission happens at the individual player level, not at the society level.  If there’s one defect bot in a group of pure cooperators in a victorious society, the defector will pull in the winning share and make up maybe 30% of the next generation, counting his winnings in the first round plus the first-place tranche (27.5%) of the take from the defeated rival team.  But, similarly, if everyone’s a defector, the appearance of just one cooperate bot will make it so that his team always wins.  And being on the winning team ensures that you’ll at least persist to the next generation.  So if all you have are defectors and cooperators, and there’s a big enough pool of societies (more than just the 2 described), you’ll get a whipsaw effect in the relative preponderance of defectors: they make up more and more of the population until they run into a group with enough cooperators, which brings them down.

If you had a team that somehow was entirely made up of pure cooperators, then they’d roll over everybody.  They’d always win (or tie) the society round, evenly split the winnings, and keep rolling.  But like the C-C equilibrium in the standard game, it’s both superior and very unstable.  It’s very unlikely to get there from a random start (since you’d need a winning society to spawn at least 20 cooperators and then randomly shuffle them onto the same society), and if there’s any noise in the genetic transmission (like, say 1% of the time a defector births a cooperator and vice-versa) it’s really hard to maintain.

The pattern this little game produces reminds me a lot of real history.  New conquerors explode on the scene with high Asabiyyah (or solidarity), get rich and powerful, then increasingly grow decadent and divided until some other high Asabiyyah group comes along to knock them over and take their stuff in turn.

OK, that’s cool in and of itself.  But is there any possible better strategy than just “Always Cooperate” or “Always Defect”?  There’s a mixed strategy, of course, where you act like a cooperator some fraction of the time and a defector the rest of the time.  That’s neat because it’s an interesting way to try to sidestep the tragedy of the commons problem.  Instead of being a pure defector, you can just defect 1 time out of 10, on average.  This hurts the team less, but still has the chance to move you up the leaderboard.  And ranking matters: winning your society by a single point is worth an extra 10% of the losing team’s total take (27.5% for first place versus 17.5% for second).[2]  It’s a big deal.
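
In the zero-memory framework from the earlier sketches, a mixed bot is a one-liner, with the defection rate as the knob:

    import random

    def mixed(defect_rate=0.1):
        """A mostly-cooperative one-shot strategy: defect 1 time in 10."""
        return lambda: "D" if random.random() < defect_rate else "C"

    # Drop it into the strategy registry next to the pure bots.
    STRATEGIES = {"AllC": lambda: "C", "AllD": lambda: "D",
                  "Mixed10": mixed(0.1)}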

The other idea I had was a strategy that does the following: cooperate against every player with a lower player ID and defect against every player with a higher ID.  Presumably when joining the society tournament a player would have to be assigned one of the 20 slots.  So use that.
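
To express this, the move functions need to see both slot IDs, a small extension of the one-shot framework assumed above:

    def id_deference(my_id, their_id):
        """Cooperate 'upward' toward lower slot numbers, defect 'downward'."""
        return "C" if their_id < my_id else "D"

    # Slot 7 defers to slot 3; slot 3 defects into slot 7's cooperation.
    print(id_deference(7, 3), id_deference(3, 7))  # C D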

What happens if you do this?  Of course, it depends on the metagame.  If there’s only one of them in a given group, it’ll act like a mixed strategy bot, ranging from a pure cooperator (if he’s ID #20) to a pure defector (if he’s ID #1).  Just like the Tit-for-Tat case, it’s not too interesting as an extreme minority.  But, if two or more of them get together, what happens?  The lower-ID (higher-ranking) one gets a free defection into the other’s cooperation.  This means that the team gets 5 points (minimizing the hit on the commons) and the higher-ranking one climbs the leaderboard.  The more players there are that run this strategy, the more likely it is that one of them gets into the big points at the top, which means that there will probably be more of them in the next generation.

In the steady state, when the metagame is such that most of the competitors are running a strategy like this, the population is more robust to pure defectors.  A defector has to luck into ID #1 to actually win the society tournament.  If he’s #2, he ties with the #1 (they mutually defect against each other, and each gets 5 points from every other match).  And if he’s #3 or lower, he loses to at least the #1 guy.  This is a lot different from the pure cooperator equilibrium, where one defector anywhere immediately takes the top prize and explodes in prevalence.
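
That claim is easy to check numerically.  A self-contained sketch that drops one pure defector into a society of 19 deference bots:

    from itertools import combinations

    PAYOFFS = {("C", "C"): (3, 3), ("D", "C"): (5, 0),
               ("C", "D"): (0, 5), ("D", "D"): (1, 1)}

    def move(slot, other, defector_slot):
        """Everyone runs ID-deference except the one pure defector."""
        if slot == defector_slot:
            return "D"
        return "C" if other < slot else "D"

    def scores_with_defector(defector_slot, n=20):
        scores = {s: 0 for s in range(1, n + 1)}
        for a, b in combinations(range(1, n + 1), 2):
            p_a, p_b = PAYOFFS[(move(a, b, defector_slot),
                                move(b, a, defector_slot))]
            scores[a] += p_a
            scores[b] += p_b
        return scores

    for d in (1, 2, 3):
        s = scores_with_defector(d)
        print(d, s[1], s[d])
    # 1 95 95  -> defector at #1 wins outright
    # 2 91 91  -> defector at #2 ties the #1 deference bot
    # 3 91 87  -> defector at #3 loses to #1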

Here’s the crazy thing: from these simple rules, we’ve basically redeveloped the medieval Great Chain of Being.  There’s no moral content or anything to the arbitrary ID number assigned at entrance to the tournament, just as there isn’t necessarily any compelling reason why guy X should be King instead of guy Y.  But if everyone shows deference accordingly, the end result is way more stable than the egalitarian alternative, where everybody could theoretically make a play for the top spot.


[1] Press, W. H. and Dyson, F. J., “Iterated Prisoner’s Dilemma contains strategies that dominate any evolutionary opponent,” PNAS 109(26), 2012.  http://www.pnas.org/content/109/26/10409.full

[2] Here’s a possible reward distribution, for reference:
1st Place: 27.5%
2nd Place: 17.5%
3rd Place: 11.5%
4th Place: 8.5%
5th Place: 7.25%
6th Place: 5.75%
7th Place: 4.5%
8th Place: 3%
9th Place: 2%
10th Place: 1.5%
11th-15th Place: 1.2%
16th-20th Place: 1%
