Your Partner's Other Partners

Guidelines and strategies for good relationships with your partner’s other partners


“Do unto others 20% better than you would expect them to do unto you, to correct for subjective error.”

–Linus Pauling

One of my favorite books about human cognitive biases is How We Know What Isn’t So: The Fallibility of Human Reason in Everyday Life, by Thomas Gilovich. As the title suggests, it’s essentially a catalog of the various cognitive traps, reasoning errors, and other fallibilities that lead people to “know” things that aren’t true.

The book talks about a lot of reasoning problems that you might expect: confirmation bias, for example, and intuitive problems with randomness. The human brain is so highly optimized for seeking patterns that it will find patterns even in random data, and on top of that, random data do not necessarily “look” the way we expect them to; small stretches of a random sequence don’t mirror the character of the whole, so it’s perfectly normal to flip a coin a bunch of times and see heads come up five or six times in a row…but I digress.

It also discusses people’s tendency to make evaluations based on missing or absent data. Suppose, for example, you believe that people below a certain GPA should not be admitted to college because they would tend to do poorly. So you admit only people with high GPAs, and you notice everyone doing well. You might be tempted to believe smugly that your decisions are good and your impulses correct, but you are missing the very data that would confirm or refute your hypothesis. Because you refuse to let people with low GPAs in, you have no way to know whether their performance would be poor or not.

And it discusses self-fulfilling prophecies, in the context of the Prisoner’s Dilemma.


A quick overview for anyone who may be unfamiliar with the Prisoner’s Dilemma: a Prisoner’s Dilemma problem is any situation where two people can choose to cooperate with one another or to work against one another, such that if they both cooperate they both gain something, if they both work against each other (called “defecting” in the language of game theory) they both lose something, but if one cooperates and one defects, the one who cooperates loses catastrophically and the one who defects gains tremendously.

It got its name from the way it was originally couched: Suppose you and a complete stranger, who you’ve never had any contact with before, get together to commit a crime. You are caught in the middle of committing the crime and whisked away by police to separate cells, with no opportunity to speak to one another. The police interrogate you separately, and tell you that each of you is being given the following choice: you can keep quiet, or you can testify against the other.

If you both keep quiet, there is only circumstantial evidence against you, and you’ll both likely be convicted of lesser charges and go to prison for two years each.

If you both testify against each other, each of you can expect a ten-year prison sentence.

If one of you keeps quiet and the other testifies, the person who keeps quiet will be imprisoned for twenty years, and the person who testifies will get off scot-free. What do you do?

The best possible outcome is for both of you to shut up. However, if you have any doubt whatsoever about your partner’s willingness to shut up, you’d better testify, because if he testifies and you don’t, you’re in big trouble! And since he is using the same reasoning you are, if he doubts you, his best course of action is to testify. Since neither of you can predict the other’s actions, what will probably happen is that both of you will testify (and spend the next decade behind bars) rather than both keeping your mouths shut (in which case you’d both be free in one-fifth the time).
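
If it helps to see the incentives laid out concretely, here is a quick sketch in Python. The prison terms come straight from the scenario above; the way they’re encoded, and the little loop at the end, are just one convenient illustration:

    # Payoffs from the story, as years in prison (lower is better).
    # Keys are (your_move, his_move); values are (your_years, his_years).
    SENTENCES = {
        ("quiet",   "quiet"):   (2, 2),    # you both shut up: lesser charges
        ("quiet",   "testify"): (20, 0),   # you shut up, he testifies
        ("testify", "quiet"):   (0, 20),   # you testify, he shuts up
        ("testify", "testify"): (10, 10),  # you both testify
    }

    # Whatever your partner does, testifying costs you fewer years, which
    # is why mutual defection is the "rational" outcome of a one-shot game.
    for his_move in ("quiet", "testify"):
        your_quiet = SENTENCES[("quiet", his_move)][0]
        your_testify = SENTENCES[("testify", his_move)][0]
        print(f"he goes {his_move}: you quiet={your_quiet}y, you testify={your_testify}y")

Run it and you’ll see that testifying always costs you less, no matter which way he jumps. That is exactly the trap.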

Things change a bit if you play this game iteratively. Suppose that you have money and you wish to buy something from someone else, but because of the nature of what you want to buy (maybe it’s illegal, maybe selling it exposes the other person to risk), you cannot simply hand him the money and have him hand you the goods. So you set up an arrangement: the two of you will never meet or speak, but every week at the same time you will leave a bag under the old stone bridge with money in it, and he will leave a bag behind the mill with the goods in it.

Again, you have a choice. You can leave the money or you can leave an empty bag. He can leave the goods, or he can leave an empty bag. If you leave an empty bag and he leaves the goods, bonanza! That week you got the goods for free. If you leave money and he leaves an empty bag, suck! You lose the money. If you both leave empty bags, well, you still have your money, but what you really wanted was the goods. And so on.

Knowing that you will be repeating this transaction every week changes the situation a bit. Do you always leave the money? Sometimes leave money and sometimes leave an empty bag?
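
In game-theory terms, this weekly exchange is an iterated Prisoner’s Dilemma. For the concretely minded, here is a minimal sketch of the weekly loop in Python; the point values are illustrative, standard-issue numbers from the game-theory literature rather than anything in the story itself:

    # Each week both sides choose to leave a "full" bag (cooperate) or an
    # "empty" one (defect). Illustrative points; higher is better.
    PAYOFF = {
        ("full",  "full"):  (3, 3),  # both trade: everyone gains
        ("full",  "empty"): (0, 5),  # you pay, he stiffs you
        ("empty", "full"):  (5, 0),  # you stiff him: free goods
        ("empty", "empty"): (1, 1),  # nobody trades, nobody gains
    }

    def run_weeks(your_strategy, his_strategy, weeks=52):
        """Play the exchange week after week. A strategy is any function
        that looks at the other side's past moves and picks this week's."""
        your_moves, his_moves = [], []
        your_total = his_total = 0
        for _ in range(weeks):
            yours = your_strategy(his_moves)  # you see only what he has done
            his = his_strategy(your_moves)    # and vice versa
            your_gain, his_gain = PAYOFF[(yours, his)]
            your_total += your_gain
            his_total += his_gain
            your_moves.append(yours)
            his_moves.append(his)
        return your_total, his_total

The interesting question, of course, is what to plug in as a strategy. Hold that thought.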


Gilovich, who is a psychology researcher, has run Prisoner’s Dilemma problems on a large number of volunteers, and writes about the trends he’s noticed. People who naturally tend to be cooperative spot that tendency toward cooperation in others, especially during iterative Prisoner’s Dilemma problems, and quickly adopt an all-cooperation policy. Every week they leave the bag of money, every week the other player leaves the bag of goods, and everyone is happy.

On the other hand, people who tend to defect—to turn against the other player—will force cooperative players to start to defect themselves, out of self-preservation. If the other person leaves you an empty sack two weeks in a row, you’re likely to stop leaving money in your sack, simply because you no longer trust the other person. In other words, his actions have made you begin playing a defecting strategy, even though your desire was to play a cooperative strategy.

This has a lot of bearing on polyamory. Hang on, I’m getting to that.


Gilovich writes that people who start off adopting defecting strategies tend to have a worldview that says other people are basically bad, the world is basically an evil place, others are motivated by selfishness and greed, and people on the whole generally suck. This worldview is then confirmed by the fact that all the partners they play the game with defect, and start leaving empty bags. In other words, he says, a Prisoner’s Dilemma strategy of defecting against the other player is rooted in a self-fulfilling prophecy. You go into the game believing that the other player will defect, so you start out by defecting, and then—surprise!—the other player starts to defect. Even if his impulse was to cooperate. Your actions created the situation you expected, and thus, your view that the world is a hostile place is confirmed.

The connection between a hostile Prisoner’s Dilemma strategy and self-fulfilling prophecies was new to me, and naturally, it instantly created a new connection between Prisoner’s Dilemma problems and polyamory in my head.


I tend to see a lot of people in poly relationships who are very uncomfortable with the idea of meeting a lover’s other lovers. This is one of the most common sources of angst I’ve noticed for people who are polyamorous, especially if they’re fairly new to polyamory.

Meeting a lover’s other lover presents a host of opportunities for cooperation or defection. You can reach out to the other person and try to make that person feel welcome; you can be closed up and defensive; you can even be actively hostile. And, of course, your lover’s lover has the same choices.

Reaching out to someone makes you vulnerable. If two people both reach out to one another, then things will tend to go more smoothly; but if one person reaches out and the other is defensive or hostile, the consequences for the person who reaches out can be pretty dire. A strictly rationalistic approach might suggest that the best strategy is to be defensive, because if you’re defensive, you have nothing to gain but nothing to lose either, whereas if you reach out, you might gain something—but you might lose a great deal, as well.

And there’s no question that your expectations about the other person, and your behavior upon meeting that other person, can easily become a self-fulfilling prophecy.

Suppose you start out, prior to the meeting, by believing that your partner’s new love is a conniving, self-centered bitch (or bastard), determined to undermine your relationship and to take your partner away from you. If you go into your first meeting with this belief, I guarantee it’s going to show. Your partner’s other partner is going to be able to tell that you don’t trust him (or her), that you’re looking for reasons to dislike him (or her). So that person is likely to behave defensively, even if the first impulse might otherwise have been to reach out to you. You look at the defensive reaction, and say “See, look! I told you this person was bad news!”

And that’s before we even get to the issue of confirmation bias, which is a whole ’nother can of worms.


So are we left, then, with the grim conclusion that the only rational way to meet a lover’s new lover is to be defensive, even knowing that this defensiveness is likely to trigger the very thing we believe we’re defending against?

Thankfully, no.

In 1980, the political scientist Robert Axelrod, who studied Prisoner’s Dilemma problems, sponsored a tournament. Entrants were invited to submit computer programs that played iterative Prisoner’s Dilemma games against each other, using whatever algorithms they liked. The goal was simple: maximize one’s profits.

Now, at first blush it might seem that an all-defect strategy is the best way to do this. The problem is, if everyone adopts this strategy, then nobody profits at all. So people submitted a number of strategies, some of them very complex. Start out cooperating, then begin defecting while the other program is still cooperating, and you’ll get the booty for free. Or cooperate much of the time, but randomly defect. Some programs attempted to analyze the other programs, looking for patterns in their moves and then computing a strategy for maximizing profits against those patterns.

In the end, the program that won, consistently, was also one of the simplest. It employs a basic strategy that can be summed up in the words “cooperation” and “forgiveness,” and it goes by the name “Tit for Tat.”

Tit for Tat is an incredibly simple approach: On the first round, cooperate. On each additional round, do whatever your partner did last time. If your partner cooperated, continue to cooperate. If your partner defected, defect, then see what your partner does this round; if your partner’s defection was a one-time deal and he cooperates, go back to cooperation.

This is an amazingly resilient strategy. Against a player who defects all the time, it defects all the time; against a player who always cooperates, it cooperates. Against a player who occasionally defects, it occasionally defects, but it doesn’t hold a grudge; if the other player returns to cooperation, it returns to cooperation. Simple as it is, in iterative Prisoner’s Dilemma problems, it’s virtually unbeatable.
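
Here is what Tit for Tat looks like as code, with a couple of simple opponents to test it against. The strategy itself is exactly as described; the match harness and the 3/5/1/0 point values are the conventional ones from the tournament literature, used here purely for illustration:

    # Tit for Tat: cooperate first, then echo the opponent's last move.
    def tit_for_tat(his_moves):
        if not his_moves:
            return "C"          # first round: cooperate
        return his_moves[-1]    # after that: do what he did last time

    def always_defect(his_moves):
        return "D"

    def always_cooperate(his_moves):
        return "C"

    # Standard iterated-PD points (higher is better): mutual cooperation
    # 3 each, mutual defection 1 each, lone defector 5, lone cooperator 0.
    POINTS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    def play(strategy_a, strategy_b, rounds=100):
        moves_a, moves_b = [], []
        score_a = score_b = 0
        for _ in range(rounds):
            a = strategy_a(moves_b)  # each side sees the other's history
            b = strategy_b(moves_a)
            gain_a, gain_b = POINTS[(a, b)]
            score_a += gain_a
            score_b += gain_b
            moves_a.append(a)
            moves_b.append(b)
        return score_a, score_b

    print(play(tit_for_tat, always_defect))     # (99, 104): it defects right back
    print(play(tit_for_tat, always_cooperate))  # (300, 300): full cooperation

Notice that Tit for Tat never outscores the particular program it’s playing against. It wins tournaments not by beating anyone head-to-head, but by racking up mutual-cooperation points while nastier strategies tear each other apart.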


That’s a good strategy for life, too. I’ve seen, and been involved in, relationships that were wrecked because one partner entered a relationship with another partner from an attitude of suspicion and mistrust, and then created the same suspicion and mistrust in the other partner. Tit for Tat shows that when we start from a spirit of cooperation, we both come out ahead; winning does not need to be done at the other person’s expense, and indeed, in an ongoing relationship, there is no winning at the other person’s expense. Either both players win or both players lose.

Tit for Tat is not naive; if the other person defects, so does Tit for Tat. But Tit for Tat doesn’t hold grudges either; if the other person then begins cooperating, then Tit for Tat does, too. Forgiveness is simply sound strategy.

In a very literal sense, you make the social environment you live in. People take their cues from you. Even in a world of people who adopt a hostile, defecting strategy, it is possible to do well. On your first move, cooperate. Open yourself. Invite this other person into your life. Only if it is not reciprocated—only then do you become defensive, and stay that way only for as long as the other person is defensive.


Of course, relationships are complex, and there are all sorts of potential gotchas. People don’t always have perfect information. We may see hostility where none is intended. We may believe that we are reaching out to someone else, but we’re doing it in a way the other person can’t recognize, because we don’t have the same communication style. Our attempts at reaching out may be clouded by expectation or fear, or misunderstood as attempts at manipulation. Social interactions aren’t as simple as leaving a bag behind the old mill, where either the bag is full or it isn’t; real-world signals are rarely that unambiguous.

The study of Prisoner’s Dilemma problems has sought to find strategies that work in complex environments, too. For example, researchers have asked questions like “What happens if two programs are sometimes fed faulty information, so that they occasionally believe the other program has defected when it has not?”

It turns out that even in complex situations, the simplest strategies tend to work the best. In fact, the programs that were consistently most successful were nice, meaning they never defected before the opponent did; retaliating, meaning they would defect if the opponent did, but only to the extent that the opponent did; forgiving, meaning they returned to cooperation once the opposing program stopped defecting; and non-envious, meaning they did not attempt to score greater gains than the other program.

These, I think, make for good guidelines for poly relationships. When dealing with a partner’s other partner, be nice, don’t be envious, withhold cooperation only to the extent that the other person does, and forgive.

Indeed, when it comes to human interactions, where information is not perfect, the rule of withholding cooperation only to the extent that the other person does quickly falls by the wayside. In situations where two programs are given occasional flawed information, a better strategy than Tit for Tat emerges: Tit for Two Tats. Tit for Two Tats will let the first perceived defection slide, and begin withholding cooperation only if it sees the other side defect twice in a row. Hence the Linus Pauling quote “Do unto others 20% better than you would expect them to do unto you, to correct for subjective error.”
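
Here is a sketch of that, too, with a deliberately crude noise model: assume each player simply misreads the other’s move some small percentage of the time. The error rate and the harness are my own illustration, not taken from any particular study:

    import random

    def tit_for_tat(perceived):
        return perceived[-1] if perceived else "C"

    # Tit for Two Tats: retaliate only after two defections in a row,
    # so a single misread move gets let slide.
    def tit_for_two_tats(perceived):
        return "D" if perceived[-2:] == ["D", "D"] else "C"

    def flip(move):
        return "D" if move == "C" else "C"

    def noisy_match(strategy, rounds=1000, error_rate=0.05, seed=1):
        """Two copies of the same strategy play each other, but each player
        occasionally misreads the other's move. Returns the fraction of all
        moves that were cooperation."""
        rng = random.Random(seed)
        a_sees, b_sees = [], []  # each side's (possibly garbled) view
        cooperations = 0
        for _ in range(rounds):
            a = strategy(a_sees)
            b = strategy(b_sees)
            cooperations += (a == "C") + (b == "C")
            a_sees.append(flip(b) if rng.random() < error_rate else b)
            b_sees.append(flip(a) if rng.random() < error_rate else a)
        return cooperations / (2 * rounds)

    # A single misread sends two Tit for Tat players into long feuds of
    # echoed retaliation; Tit for Two Tats shrugs the misread off.
    print(noisy_match(tit_for_tat))       # well below 1.0
    print(noisy_match(tit_for_two_tats))  # close to 1.0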

Naturally, this strategy starts with the assumption that the other person is not deliberately seeking to take advantage of you. But then, assuming good intent is a cornerstone of good relationships with anyone; if you can’t assume good intent on the part of the other person, you’re not likely ever to have a good relationship. Sometimes, it’s the simplest approaches that work the best.
