In general, researchers should be honest with their research participants.  However, in some studies researchers are not honest. Sometimes researchers intentionally withhold critical information and sometimes they outright lie.  Deception of participants can be quite controversial, yet researchers continue using deception because it can offer the opportunity to study behavior that would otherwise be difficult to observe.  In this post, I’ll describe both the ethical issues raised by studies using deception and how these issues can be partially mitigated by debriefing.  After that, I’ll describe some of the scientific — as opposed to ethical — issues raised by deception.  Finally, I’ll provide a quick checklist you should use before starting a study using deception.

Ethical concerns about deception

The ethical issues around deception can be illustrated by a recent study conducted by OK Cupid, an online dating website.  When OK Cupid suggests matches between users, it includes an algorithmically generated compatibility score.  Being a dating website, OK Cupid was naturally very curious about how well its matching algorithm was working.  One pattern that OK Cupid noticed was that the higher the compatibility score, the more likely users were to exchange emails (something that OK Cupid sees as a sign of a successful match).  This pattern suggests that the algorithm is working well: higher scores, more emails exchanged.  But that same pattern is also consistent with a self-fulfilling prophecy.  That is, users might believe that the algorithm is working well and therefore exchange more emails, even if the algorithm was not working well.

Therefore, in order to measure how much their matching algorithm was a self-fulfilling prophecy, OK Cupid randomly provided some users with false information about their match score in order to see what would happen.  For example, a pair of users with a 30% match score could have been told that they had a 30% match score, a 60% match score, or a 90% match score.  OK Cupid could then measure how the false match score impacted the probability of email exchange.
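The logic of this design can be sketched in a few lines of code.  This is a hypothetical illustration, not OK Cupid's actual system: the function names, the sample size, and the three score levels are taken only from the example above.  The key point is that the displayed score is drawn at random, independently of the actual score.

```python
import random

# Hypothetical sketch of the experimental design.  The three levels (30, 60,
# 90) come from the example in the text; everything else is invented.
DISPLAYED_SCORES = [30, 60, 90]

def assign_displayed_score(actual_score, rng):
    """Randomly pick which compatibility score a pair is shown.

    Note that the choice deliberately ignores `actual_score`: the displayed
    score is randomized independently of the real one.
    """
    return rng.choice(DISPLAYED_SCORES)

rng = random.Random(42)
pairs = [{"actual": rng.choice(DISPLAYED_SCORES)} for _ in range(10_000)]
for pair in pairs:
    pair["displayed"] = assign_displayed_score(pair["actual"], rng)

# Because displayed scores are assigned at random, any later difference in
# email-exchange rates across displayed scores can be attributed to the
# displayed score itself, not to the pairs' underlying compatibility.
```

The randomization is what makes the comparison causal: without it, pairs shown high scores would also tend to be genuinely more compatible, and the two explanations could not be separated.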

What OK Cupid found (summarized in the table below) is that no matter what the actual level of compatibility of users, displaying higher levels of compatibility increased the probability of a meaningful exchange of emails.  But, it was also the case that the algorithm was doing something for real: for a given displayed compatibility, users were more likely to have a meaningful exchange if they had higher levels of algorithmically estimated compatibility.


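The two patterns described above can be reproduced with simulated data.  The exchange probabilities below are invented purely to mimic the qualitative result in the text (both the actual and the displayed score raise the chance of exchange); they are not OK Cupid's numbers.

```python
import random
from collections import defaultdict

# Simulated data only -- the probabilities are made up to mimic the
# qualitative pattern in the text, not OK Cupid's actual results.
rng = random.Random(0)
SCORES = [30, 60, 90]

def exchange_prob(actual, displayed):
    # Both the actual and the displayed score increase the chance of an
    # email exchange (the two effects described in the text).
    return 0.05 + 0.001 * actual + 0.001 * displayed

# (actual, displayed) -> [number of exchanges, number of pairs]
counts = defaultdict(lambda: [0, 0])
for _ in range(100_000):
    actual = rng.choice(SCORES)
    displayed = rng.choice(SCORES)   # randomized independently of `actual`
    exchanged = rng.random() < exchange_prob(actual, displayed)
    cell = counts[(actual, displayed)]
    cell[0] += exchanged
    cell[1] += 1

# Tabulate exchange rates: reading across a row shows the effect of the
# displayed score; reading down a column shows the effect of the actual score.
for (actual, displayed), (ex, n) in sorted(counts.items()):
    print(f"actual={actual:2d} displayed={displayed:2d} rate={ex / n:.3f}")
```

Holding the actual score fixed and comparing across displayed scores isolates the self-fulfilling-prophecy effect; holding the displayed score fixed and comparing across actual scores isolates the real predictive power of the algorithm.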
So, should OK Cupid have lied to people, possibly sending them on bad dates?  The ethical system established by the Belmont Report emphasizes that there must be a balance between the possible harm that comes to participants and the benefits of the research.  If OK Cupid used these results to improve their algorithm, then users in general would have benefited.  But, some people in the experiment may have been harmed, possibly through wasted time.  Ultimately, I do not think that this case is clear-cut, and the trade-off between harm and benefit must be made on a case-by-case basis.

However, if you decide to use deception in your research, there is something that you can do afterwards that makes the deception a bit less problematic ethically: debriefing.  That is, once the experiment is over you can tell your participants what you have done.  Telling people the truth afterwards is consistent with the idea of respect for persons, and it also offers you a chance to minimize any harm that the deception might have caused.

In the OK Cupid example, they did debrief users once the experiment was over.  Here’s what they wrote:

Dear [nameA]

Because of a diagnostic test, your match percentage with [nameB] was misstated as [%]. It is actually [%]. We wanted to let you know!


Some people have criticized the language in the debriefing.

But, that language seems to have been chosen very carefully.  Christian Rudder, OK Cupid co-founder and data scientist, wrote:

“Because ‘experiment’ has become such an emotionally loaded word, we used the more neutral phrase ‘diagnostic test,’ which we felt had the same meaning.”

As this example highlights, there can be a concern that debriefing can actually do more harm than good.  Again, this must be decided on a case-by-case basis.  And, there are many field experiments where ethical review committees have allowed deception without debriefing.  For example, audit studies to measure discrimination in the labor market routinely create fictitious applicants for jobs and then do not debrief employers.

An additional challenge when doing debriefing in online experiments is timing.  In traditional lab experiments, it is customary to debrief participants immediately after the experiment is complete.  However, in online experiments, immediate debriefing means that your debriefed participants can tell other people about the deception, potentially limiting the effectiveness of the deception on future participants.

Scientific concerns about deception

In addition to the ethical concerns raised by deception, there are also scientific concerns.  These concerns are so great that they caused experimental economists to largely ban research using deception in their field.

There are two main scientific concerns about deception.  First, how do we as researchers really know that all participants are fooled?  If some participants are less fooled than others, the results of the experiment will be difficult to interpret.  Second, does all of this deception hurt our ability to do non-deceptive studies in the future?  That is, if participants are repeatedly lied to during experiments, will they start to doubt everything researchers say to them even when there is no deception?  For example, imagine that you do an experiment involving deception on Amazon Mechanical Turk (or some other online labor market).  Your deception has the potential to impact the behavior of participants in future experiments, even if those experiments are not using deception.

A deception checklist

If you are considering using deception in your research, you should ask yourself:

  1. Can I learn the same thing without deception? There are often clever research designs that will enable you to learn similar things without deception.  If so, try these other research designs.  Deception should be avoided whenever possible.
  2. How much harm can be caused to my participants by my deception?  If the answer is more harm than they could experience in their daily life (e.g., more than minimal risk), then you should probably reconsider.
  3. Are there specific subsets of participants that might be especially harmed by my deception?  If the answer is yes, then you should try to prevent them from participating.
  4. Will my deception harm other researchers? In addition to the effect of your deception on participants, you should also consider the effect of your deception on future researchers.  Would you be upset if someone else did the experiment that you are considering?
  5. Should I debrief participants?  Note that debriefing itself can be harmful if not done properly.  For more on techniques for debriefing see: Mills (1976) “A Procedure for Explaining Experiments Involving Deception.” Personality and Social Psychology Bulletin.
  6. Have I talked to colleagues about this? Researchers in many universities are required to propose their procedures to formal ethical review panels, but these panels don’t exist inside of many companies (e.g., OK Cupid).  In either case, it is helpful to informally describe your proposed research design to colleagues.  If they find your proposed use of deception ethically troubling, then you should probably reconsider.

