Cooperation is essential for successful organizations. But cooperating often requires people to put others’ welfare ahead of their own. In this post, we discuss recent research on cooperation that applies the “Thinking, Fast and Slow” logic of intuition versus deliberation. We explain why people sometimes (but not always) cooperate in situations where it’s not in their self-interest to do so, and show how properly designed policies can build “habits of virtue” that create a culture of cooperation. TL;DR: intuition favors behaviors that are typically optimal, so institutions that make cooperation typically advantageous lead people to adopt cooperation as their intuitive default; this default then “spills over” into settings where cooperating is not actually individually advantageous.

Life is full of opportunities to make personal sacrifices on behalf of others, and we often rise to the occasion. We do favors for co-workers and friends, give money to charity, donate blood, and engage in a host of other cooperative endeavors. Sometimes, these nice deeds are reciprocated (like when we help out a friend, and she helps us with something in return). Other times, however, we pay a cost and get little in return (like when we give money to a homeless person whom we’ll never encounter again).

Although you might not realize it, nowhere is the importance of cooperation more apparent than in the workplace. If your boss is watching you, you’d probably be wise to be a team player and cooperate with your co-workers, since doing so will enhance your reputation and might even earn you a promotion down the road. In other instances, though, you might get no recognition for, say, helping a fellow employee who needs assistance meeting a deadline, or covering for one who calls in sick.

A major aim of just about any organization is to promote cooperative behavior amongst its members: in general, companies (and governmental organizations) perform better when their employees work together rather than single-mindedly pursue their own personal goals. Managers who understand this fact institute policies that incentivize cooperation (e.g. through bonuses, promotions, or public recognition) and disincentivize defection (e.g. through fines, demotions, or public shaming)—the goal being to make it worth employees’ while to cooperate.

But, of course, these policies can only do so much: even with such incentives in place, somebody looking to exploit the system could find plenty of opportunities to free-ride without getting caught, thereby undermining the organization’s success. A key challenge for managers and policy makers, therefore, is to encourage cooperation even in the absence of institutional carrots and sticks.

In a recent paper published in the Proceedings of the National Academy of Sciences, we present a formal mathematical model that explores this relationship: how does incentivized cooperation relate to “pure” cooperation that occurs beyond the reach of incentives?

In this model, virtual agents interact with each other and receive various payoffs based on how they, and those with whom they interact, behave. As in the real world, our agents encounter a variety of situations in which they could pay a cost to cooperate, or could instead defect. In some of these situations, agents are rewarded for cooperating (and punished for defecting), whereas in other situations, agents always get a higher payoff from defecting. In other words, the first case models situations where employees are explicitly incentivized to be team players (e.g., with public recognition or the promise of a promotion); the second case, conversely, models situations where employees can help each other, but won’t get “credit” for doing so.

Unlike classical economic models, we incorporate a more sophisticated take on decision-making from behavioral economics and psychology (recently popularized by Nobel Prize winner Daniel Kahneman). Instead of always carefully reasoning their way through their decisions, our agents sometimes use intuition – a generalized “gut feeling” (or heuristic) about the best way to act that doesn’t depend on the specifics of the situation being faced. These intuitive responses have the advantage of being quick and not requiring much cognitive effort, but the limitation of being insensitive to the situation at hand.

When, on the other hand, agents do choose to think carefully, or “deliberate”, they work out whether it is in their self-interest to cooperate, and choose accordingly. But deliberation comes at a cost: thinking takes time and effort. And it can even damage your social reputation if you come off as a “calculating” kind of person.
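The agents’ decision process can be sketched in a few lines of code. To be clear, this is our own illustrative toy, not the paper’s actual model: the payoff numbers `B` and `C`, the uniform deliberation cost, and the function names are all assumptions invented for this sketch.

```python
import random

# Illustrative toy version of the dual-process agent described above.
# B and C are made-up numbers, not the paper's parameterization.
B, C = 4.0, 1.0   # benefit produced by rewarded cooperation, cost of cooperating

def play_round(intuitive_coop, deliberation_cost_cap, incentivized):
    """One interaction for an agent with:
       - intuitive_coop: the agent's gut default (True = cooperate)
       - deliberation_cost_cap: the most thinking cost the agent will pay
       - incentivized: whether this situation rewards cooperation
    Returns the agent's payoff."""
    d = random.uniform(0, 1.0)          # this round's cost of deliberating
    if d < deliberation_cost_cap:
        # Deliberate: pay d, then pick the self-interested action.
        cooperate = incentivized        # cooperate only when it pays
        cost_of_thinking = d
    else:
        # Go with the gut: fast and free, but situation-insensitive.
        cooperate = intuitive_coop
        cost_of_thinking = 0.0
    # Cooperating always costs C; only incentivized cooperation returns B.
    payoff = -cost_of_thinking
    if cooperate:
        payoff += (B - C) if incentivized else -C
    return payoff
```

With a cap of zero the agent never deliberates and always follows its gut, which is the key trade-off the model analyzes: a fixed default that is cheap but blind to whether this particular situation rewards cooperation.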

We then use game theory to figure out the best strategy for our agents. The answer, crucially, depends on the institutional environment.

First, consider institutions that rarely provide incentives for people to cooperate (i.e., defection is the payoff-maximizing option in most social interactions) – for example, companies that only reward individual achievement and don’t penalize back-stabbers. Under such institutions, the optimal behavior is to develop a selfish, non-cooperative gut response, and to always go with that gut response (i.e., to never stop and consider whether future consequences exist, because they typically don’t). This lack of deliberation means that these agents won’t even cooperate in the (relatively rare) instances in which it could be payoff-maximizing for them to do so.

The results are very different, however, for institutions that do typically provide incentives for people to cooperate. Not only is it optimal to have the opposite heuristic—intuitive cooperation—but it is also sometimes worth deliberating. In other words, the best-performing strategy cooperates by default, but occasionally checks whether it’s in a situation where it can get away with defecting.

Interestingly, the more the institution incentivizes cooperation, the less it’s worth bothering to deliberate, and the more likely agents are to just stick with their cooperative gut response. So strengthening institutional incentives to cooperate doesn’t just make people more likely to cooperate when these incentives are present, but also makes people more likely to intuitively cooperate when these incentives aren’t present and people could get away with defecting.
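A back-of-the-envelope calculation shows why (again using our own toy numbers, not the paper’s). Suppose cooperating costs `C`, a fraction `p` of situations reward it, and each round’s deliberation cost is drawn uniformly from [0, 1]. For an intuitive cooperator, deliberating only changes the outcome in the unrewarded situations, where it saves the cost `C`; weighing that marginal gain, `(1 - p) * C`, against the thinking cost itself gives the best deliberation threshold:

```python
# Toy numbers for illustration: cooperating costs C; deliberation
# cost is drawn uniformly from [0, 1] each round.
C = 1.0

def best_threshold(p):
    """Optimal deliberation threshold for an intuitive cooperator:
    deliberate (and defect if this situation is unrewarded) whenever
    the round's thinking cost falls below the threshold.
    Gain from deliberating = avoiding the cost C in the (1 - p)
    unrewarded situations; it's worth thinking only when the
    thinking cost is below that expected gain."""
    return min(1.0, (1 - p) * C)

for p in (0.5, 0.7, 0.9):
    t = best_threshold(p)
    print(f"incentivized fraction p={p:.1f} -> deliberate when cost < {t:.2f}")
```

As `p` rises, the threshold shrinks toward zero: the intuitive cooperator deliberates less and less, so its cooperative gut response increasingly carries over into the unrewarded situations too.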

This isn’t just theoretical: in a paper recently published in Management Science, Rand and co-author Alex Peysakhovich present experiments that provide empirical evidence of these effects. Participants were given money, and chose whether to keep it for themselves, or give it up to benefit another participant. In the first part of the experiment, they were assigned to one of two institutions: a “strong” institution that created future consequences (people were likely to interact again in the next round with their current partner), and a “weak” institution where participants could get away with being selfish (partners were remixed frequently, and not informed of each other’s behavior with previous partners). In the second part of the experiment, participants had the chance to pay to help anonymous strangers. Participants who were habituated to cooperating by the strong institution were much more cooperative, altruistic, and trustworthy in the second stage compared to participants who got used to being selfish under the weak institution. Furthermore, this change in willingness to help without future reward in stage two was much more pronounced in participants who tended to rely on intuition; deliberative participants were relatively unaffected by their experiences in stage one.

Taken together, these theoretical and experimental results demonstrate the immense role that institutions can play in establishing norms of cooperation. When institutions foster these norms, they don’t just compel people to cooperate when it’s in their self-interest to do so; by shaping heuristics, they lead people to cooperate even when it’s not. In other words, if employees work at an organization where teamwork is encouraged and frequently rewarded, those employees will also be more likely to help colleagues out even when such acts go unnoticed.

Adam Bear is a 3rd-year Ph.D. student in Psychology at Yale. His main research explores the interplay between unconscious, intuitive mental processes and conscious, deliberative processes across a variety of domains, including cooperation, choice, and visual perception. His current work considers not only how the mind makes use of both kinds of cognition, but also why we would evolve to do so in the first place.

David Rand is an assistant professor of Psychology, Economics, and Management at Yale University, and director of Yale's Human Cooperation Laboratory. His research combines theoretical and experimental methods to explain the high levels of cooperation that typify human societies, and to uncover ways to promote cooperation in situations where it is lacking. He has argued that intuitive processes play a key role in supporting cooperation, that social incentives like recognition and reputational benefits are powerful tools for increasing cooperation, and that leniency and forgiveness are smart strategies for success in our accident-prone world.