This was originally posted on blogger here.
A new paper by Florian Hutzler has been published online at NeuroImage which claims to show that reverse inference is not as problematic as has been claimed in my previous publications (TICS, 2006; Neuron, 2010). I had previously reviewed this paper for another journal (I signed my review, so this is not a surprise), and I’m happy to see that some of my concerns about the paper were addressed in the version that was published at NeuroImage. However, I still have one major concern about the general framing of the paper.

I would first like to be clear about what I said about reverse inference in my 2006 paper:

“It is crucial to note that this kind of ‘reverse inference’ is not deductively valid, but rather reflects the logical fallacy of affirming the consequent…However, cognitive neuroscience is generally interested in a mechanistic understanding of the neural processes that support cognition rather than the formulation of deductive laws. To this end, reverse inference might be useful in the discovery of interesting new facts about the underlying mechanisms. Indeed, philosophers have argued that this kind of reasoning (termed ‘abductive inference’ by Peirce [8]) is an essential tool for scientific discovery [9].”

Thus, while I did point out the degree to which reverse inference reflects a fallacy under deductive logic, I also pointed out that it could be potentially useful under other forms of reasoning; it’s a bit of a stretch to go from this statement to using the term “reverse inference fallacy,” which has started to pervade peer reviews. This is unfortunate in my view, if only because authors must often think that I am the culprit! (I assure you all that I would never use this phrase in a review.) The potential utility of reverse inference has been further cashed out in the Neurosynth project (Yarkoni et al., 2011).
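The quantitative framing behind this point is just Bayes’ rule: the strength of a reverse inference depends on how selectively a region activates for a process, not merely on how often it activates when the process is engaged. Here is a minimal sketch of that calculation; the numbers are purely hypothetical illustrations, not estimates from any real meta-analysis.

```python
# Hedged sketch of Bayesian reverse inference: P(process | activation).
# All probabilities below are made-up examples for illustration only.

def reverse_inference_posterior(p_act_given_proc, p_act_given_not_proc, prior):
    """Posterior probability of a cognitive process given activation,
    computed via Bayes' rule from activation rates and a prior."""
    p_act = p_act_given_proc * prior + p_act_given_not_proc * (1 - prior)
    return p_act_given_proc * prior / p_act

# Suppose (hypothetically) a region activates in 80% of studies that
# engage the process but also in 30% of studies that do not, and we
# assume a 50% prior that any given study engages the process:
posterior = reverse_inference_posterior(0.8, 0.3, 0.5)
print(round(posterior, 3))  # 0.727
```

Note that if the region activated just as often without the process (say, 0.8 in both cases), the posterior would equal the prior and the reverse inference would carry no information; this is the selectivity that meta-analytic tools like Neurosynth try to estimate.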
I say all of this just to highlight the fact that I have never painted reverse inference as wholly fallacious, but rather have tried to highlight ways in which its limited utility can be quantified (e.g., through meta-analysis) or its power improved (e.g., through the use of machine learning methods).

The Hutzler paper applies reverse inference in a much more restrictive sense than the one in which it has usually been discussed, which he calls “task-specific functional specificity.” The idea is that, given some task, one can compute (e.g., using meta-analysis) the reverse inference conditional on that task (which I had noted but not further explored in my 2006 paper). I have no quibbles with the paper’s analysis, and I think it nicely shows how reverse inference can be useful within a limited domain (in fact, Anthony Wagner and I made this point in 2004 in regard to left prefrontal function). My general concern is that the situation described in the Hutzler paper is fairly different from the one in which most reverse inference is performed. Here is what I said in my initial review of his paper, which still holds for the published version:

“If it is true that reverse inference is helpful within the context of a specific task, then that’s perfectly fine, except that in the wild reverse inference is rarely used within the same task. In fact, it’s almost always used in task domains where one doesn’t know what to expect! See my recent Neuron paper for examples of these kinds of reverse inferences; rarely does one see a reverse inference based on prior data from very similar tasks. Thus, the paper basically makes my point for me by showing that the procedure is only effective in very specific cases, which are outside of the standard way it is used.”

In summary, while I agree with the analysis presented by Hutzler, I hope that readers will go beyond the title (which I think oversells the result) to see that it really shows the success of reverse inference in a very limited domain.