This sixth activity comes after students are to have listened to a lecture (slides) about trust and exploitation (see also this post).

I asked the students to make more or less the same decision faced by the actors in the formal model discussed in that lecture, though their choice was a bit simpler because only one of the actions involved risk.

The optimal strategy here was not to cooperate at all, since the expected utility of cooperating with any given state was 0.8 (0.4*2 + 0.6*0), while choosing not to cooperate brought 1 point for certain.
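The arithmetic behind that claim can be sketched in a few lines of Python (the variable names are mine, not from the activity; the payoffs are the ones described above):

```python
# First-half payoffs per state: 2 points for mutual cooperation,
# 0 if you cooperate and the other side doesn't, 1 guaranteed if
# you choose not to cooperate at all.
p_reciprocate = 0.4   # probability a given state cooperates back
payoff_mutual = 2     # points for mutual cooperation
payoff_sucker = 0     # points for unreciprocated cooperation
payoff_defect = 1     # sure points for not cooperating

eu_cooperate = p_reciprocate * payoff_mutual + (1 - p_reciprocate) * payoff_sucker
eu_defect = payoff_defect

print(eu_cooperate)  # 0.8
print(eu_defect)     # 1
```

Since 0.8 < 1, declining to cooperate with every state maximizes expected points.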

Because I didn’t want a repeat of the near-revolt that resulted from the last activity, I decided to talk through the optimal strategy at this point. That did them no good in terms of the outcome of the first half of the activity, but it helped ensure that everyone made the best choice possible in the second half. Unsurprisingly, I found that many of them identified the correct strategy here but for the wrong reason. They thought it was unwise to cooperate with any of the states because the probability that they’d cooperate back was less than 0.5. Only a handful were thinking in terms of expected utilities. So I then re-explained what those are, how they’re calculated, and why you’re always better off thinking in those terms than relying on simple heuristics based on raw probabilities (or best-case/worst-case logic, for that matter).

I then used an online number generator to determine which of the five countries, if any, were willing to cooperate with those who had *A* cooperate with them. As it happens, *B* and *C* did, which of course means 40% of the countries cooperated, as we’d expect given a probability of 0.4. So those who behaved optimally did better than those who cooperated with every country. However, some students did better still. A small number chose, for whatever serendipitous reason, to cooperate with *B* and *C* but no other country. These very lucky students earned 7 points.
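The draw and the scoring can be simulated in a few lines, which also makes the point totals above easy to verify (the function and set names here are hypothetical, and the fixed reciprocator set mirrors what happened in class):

```python
# Score a strategy under the first-half payoffs: 2 for mutual
# cooperation, 0 for unreciprocated cooperation, 1 for each state skipped.
STATES = ["A", "B", "C", "D", "E"]

def score(cooperate_with, reciprocators):
    total = 0
    for s in STATES:
        if s in cooperate_with:
            total += 2 if s in reciprocators else 0
        else:
            total += 1
    return total

# In class, exactly B and C reciprocated:
drawn = {"B", "C"}
print(score(set(), drawn))         # 5 -> the ex ante optimal play
print(score(set(STATES), drawn))   # 4 -> cooperating with everyone
print(score({"B", "C"}, drawn))    # 7 -> the lucky students
```

Note that the lucky students beat the optimal strategy only ex post; before the draw, cooperating with nobody still had the highest expected value.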

In the second half, I had them make a very similar decision with just two small (but crucial) details changed. On the second run, I told them that the probability of any given state cooperating with them had dropped to 0.2, but the value of mutual cooperation had increased to 10 points. That changes the optimal strategy from cooperating with none to cooperating with all, since the expected utility from attempting cooperation with any given state is now 2 points (0.2*10 + 0.8*0). As expected, most of them chose to cooperate with all five states this time, as they should.
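The flip in the optimal strategy falls straight out of the same calculation with the new parameters (again, the names are mine):

```python
# Second-half parameters: reciprocation less likely, but mutual
# cooperation worth much more.
p_reciprocate = 0.2
payoff_mutual = 10
payoff_defect = 1

eu_cooperate = p_reciprocate * payoff_mutual + (1 - p_reciprocate) * 0
print(eu_cooperate)  # 2.0

# The expected 2 points from attempting cooperation now beats the
# sure 1 point from defecting, even though the probability of
# reciprocation fell from 0.4 to 0.2.
print(eu_cooperate > payoff_defect)  # True
```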

And I think some of them even grasped how this helps explain what we saw in the lecture they were to have listened to ahead of time: trust is a stronger predictor of whether states cooperate with one another economically when the potential benefits of cooperation are low, but has no effect once the stakes get high, since even states that don’t trust one another much are willing to take a risk when the upside is huge.

Unfortunately, at this point, the internet cut out. So I was unable to draw random numbers for the second part. None of the students could get online using their phones, laptops, or tablets either. So that was nice and embarrassing. Didn’t make me feel like an amateur *at all*. I told them we’d assume one country cooperated back, so those who cooperated with all got 10 points and those with none got 5. (This time around, they realized that it didn’t make much sense to pick and choose.) Next time, I’m bringing my laptop to class and using Stata to generate random numbers instead of the internet.

“Unsurprisingly, I found that many of them identified the correct strategy here but for the wrong reason. They thought the reason that it was unwise to cooperate with any of the states was because the probability that they’d cooperate back was less than 0.5.” But with a utility of two points for mutual cooperation, they did identify the correct threshold for making this the rational choice. There’s no telling if it was for the wrong reason, as they may have just made the calculation implicitly. Maybe if the payoffs were set to 3 points vs. 1 point instead, it would be easier to see whether they correctly identified the crucial factor in making this decision (likelihood times utility rather than just likelihood alone).

I think you might have misunderstood. They chose the option that had an expected utility of 2 points in the second half of the activity, AFTER I reminded them how to calculate expected utilities following the first part of the activity. As for having no way of knowing what their reasoning was – I asked them why they didn’t cooperate in the first part, and some of them told me that it was because the probability was less than 0.5, and that they would only have cooperated if it was greater than 0.5. I have no way of knowing exactly how many students were in that boat, but I know that some were. And past experience suggests that those who explicitly stated that they were using incorrect reasoning likely spoke for a good chunk of the class.

What I meant was that under this setup, there’s no way to tell if they found the correct decision rule for this specific situation (do not cooperate unless your opponent’s likelihood of reciprocating is >0.5, because mutual coop only yields twice the payoff of defection) or if they fell back on a general heuristic (do not cooperate unless it is more likely than not that your opponent will do so as well), because they come down to the same thing. Adjusting the payoff scores would make it possible to discriminate between these reasons and find out more about how your students arrived at their decision.
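The commenter’s point about the two decision rules coinciding can be made concrete. Under the general rule, you cooperate iff p * payoff_mutual > payoff_defect, i.e. iff p > payoff_defect / payoff_mutual; with payoffs of 2 vs. 1 that threshold is exactly 0.5, so it’s indistinguishable from the "cooperate only if p > 0.5" heuristic. Changing the mutual-cooperation payoff to 3, as suggested, pulls them apart (this is my sketch of the suggested modification, not part of the original activity):

```python
def rational_to_cooperate(p, payoff_mutual, payoff_defect=1):
    """Cooperate iff expected utility beats the sure payoff,
    i.e. iff p > payoff_defect / payoff_mutual."""
    return p * payoff_mutual > payoff_defect

# With mutual cooperation worth 2, the EU rule and the p > 0.5
# heuristic agree: both say defect at p = 0.4.
print(rational_to_cooperate(0.4, 2))  # False

# With mutual cooperation worth 3, they diverge: the EU rule says
# cooperate at p = 0.4 (EU = 1.2 > 1); the heuristic still says defect.
print(rational_to_cooperate(0.4, 3))  # True
```

A student who defects in the 3-point version is revealed to be using the probability heuristic rather than the expected-utility calculation.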

But I did exactly that. There were two halves to the activity. In the first, the expected utility of cooperating was 0.8, and most did not cooperate. I asked why, and some said it was because the probability that the other side would cooperate wasn’t high enough. In the second, the probability of the other side cooperating went DOWN, but the expected utility went UP. If they were relying on the simple heuristic, they would again choose not to cooperate. But they overwhelmingly chose to cooperate, after I explained to them that simple heuristics will often bring them to the wrong answer and re-explained expected utility.

In the second part, yes. I was just struck by the way you interpreted the most common response in the first part, because it’s exactly the same one I would’ve given, though based on an actual calculation rather than heuristic. When someone gives an answer like that, they might just have not considered it necessary to point out that the 0.5 threshold only applies to this exact payoff structure, so it might be mistaken for not considering anything beyond the likelihood of reciprocating. Anyway, it’s a minor point.

Fair enough. But fwiw, when I say “many”, I do not mean for that to be interpreted as “all”. In other words, nothing I wrote above is inconsistent with the possibility that a nontrivial number of students arrived at the right answer for the *right* reason. I’m quite sure that a decent number did. All I meant is that a nontrivial number also did so for the wrong reason. And while I’m happy to admit that I don’t know for sure what was going through the mind of any particular student, including the ones who said that they based their decision on the probability, I have given assignments in the past where students are asked questions designed to punish the use of simple heuristics by ensuring that their use will bring an incorrect answer, and a substantial portion of students always gets those questions wrong. I am assuming that this class is not fundamentally different from others I’ve taught in the past. And I’m pretty comfortable making that assumption, because on all the assignments and activities they’ve completed so far, they have, in aggregate, behaved pretty much exactly as previous classes have.