On psychologist Simine Vazire’s always-excellent blog, sometimes i’m wrong, there is an excerpt from John Doris’s forthcoming book that reacts to #repligate. Doris makes many important points about how philosophers should respond to this episode in psychology, such as not relying too much on any single study, including any single replication.
However, I want to take issue with one parenthetical remark. Doris writes “(Less cynically: if scientific findings weren’t surprising, why would we need experiments and publications?)”. Although this may just be a throwaway remark for Doris, I think it expresses a somewhat common thought: that experiments get some of their value from surprisingness — i.e. from disconfirming some intuitive belief. Or, to put it in the reverse direction: if people could reliably predict whether an experiment would confirm or disconfirm a commonsense belief, then there would be less reason to run that experiment.
I don’t think that’s the right view about experiments. Duncan Watts’s book Everything is Obvious gives an easy way to see why. As Watts points out, many of our commonsense beliefs appear in tension with one another. Absence makes the heart grow fonder. But out of sight is out of mind. So you might imagine that, if the result of an experiment went one way, it’d be framed as “Counter-intuitive finding! Absence does not make the heart grow fonder!” for publication, but if the result of the same experiment went the other way, it’d be framed as “Counter-intuitive finding! Out of sight is not out of mind!” for publication.
Given that many of our commonsense beliefs appear in tension with one another in this way, whether an experiment is counter-intuitive hardly bears on its value. Instead, as Watts points out, what is valuable about scientific experiments is that they can delineate the scope of such commonsense beliefs. And, I might add, they can estimate the magnitude of the highlighted causal relationships. From a scientific perspective, it is not that interesting to find out whether “out of sight is out of mind” is true or not; it is interesting to find out in which cases it holds, in which cases it doesn’t, and to what extent.
I think this view of why we do experiments coheres with some important methodological lessons that came out of #repligate. There should be less emphasis on the yes/no question of whether p falls above or below 0.05. Instead, there should be more emphasis on effect sizes, which measure the magnitude of a causal relationship, including confidence intervals for those effect sizes. Moreover, there should be more emphasis on “scope”, covering aspects ranging from moderating factors to the conditions under which an effect is expected to replicate. For example, as I’ve mentioned before on The Experimental Philosophy Blog, psychologist Dan Simons suggests that published studies should include an explicit method section on “Limits in scope and generalizability”. Meanwhile, Many Labs 2 and Many Labs 3 are now investigating, respectively, replication across sample and setting, and replication across the timing of experiments.
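To make the effect-size point concrete, here is a minimal illustrative sketch (my own, not from Doris or Watts) of reporting a standardized effect size — Cohen’s d — with an approximate 95% confidence interval, rather than a bare p-below-0.05 verdict. The data and variable names are entirely hypothetical, and the standard error uses the common Hedges–Olkin normal approximation.

```python
# Illustrative sketch: report an effect size with a confidence interval,
# not just a yes/no significance verdict. All data here are made up.
import math
from statistics import mean, stdev

def cohens_d_with_ci(group_a, group_b, z=1.96):
    """Cohen's d with an approximate 95% CI (normal approximation)."""
    n1, n2 = len(group_a), len(group_b)
    m1, m2 = mean(group_a), mean(group_b)
    s1, s2 = stdev(group_a), stdev(group_b)
    # Pooled standard deviation across the two groups.
    pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / pooled
    # Standard error of d (Hedges & Olkin approximation).
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d, (d - z * se, d + z * se)

# Hypothetical ratings of closeness after absence vs. after presence.
absent = [5.1, 4.8, 5.6, 4.9, 5.3, 5.0, 4.7, 5.4]
present = [4.6, 4.9, 4.4, 5.0, 4.5, 4.8, 4.3, 4.7]
d, (lo, hi) = cohens_d_with_ci(absent, present)
print(f"d = {d:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

The interval communicates magnitude and uncertainty at once: a wide interval straddling zero and a narrow interval well above zero would both be “significant” or “non-significant” stories under a pure p-value regime, yet they support very different conclusions about scope and size.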
Why do experiments? The answer has nothing to do with intuitiveness, and everything to do with the magnitude and scope of the causal relationships that matter to some of our aims.
[x-posted at The Experimental Philosophy Blog]