If you polled 100 scientists at your next conference with the single question, “Is there publication bias in your field?” I would predict that nearly 100% of respondents would reply “Yes.” How do they know? Did they need to read a thorough investigation of many journals to come to that conclusion? No, they know because they have all experienced publication bias firsthand.
Until recently, researchers had scant opportunity to publish experiments that didn’t “work” (and most of the time they still can’t, though now at least they can share them online unpublished). Anyone who has tried to publish a result in which the main findings were not “significant,” or who has had a reviewer ask them to collect more subjects in order to lower their p-value (a big no-no), or who has neglected to submit to a conference because the results were null, or who has seen colleagues tweak and re-run experiments that failed to reach significance only to stop when one finally does, knows publication bias exists. They know that if they don’t have a uniformly “positive” result it won’t be taken seriously. The basic reality is this: if you do research in any serious capacity, you have experienced (and probably contributed to) publication bias in your field.
Greg Francis thinks we should be able to single out particular research topics or journals (ones we already know to be biased toward positive results) and confirm that they are biased, using the Test of Excess Significance, a method developed by Ioannidis and Trikalinos (2007). The logic of the test is that of a traditional null-hypothesis test, and I’ll quote from Francis’s latest paper published in PLOS ONE (Francis et al., 2014):
We start by supposing proper data collection and analysis for each experiment along with full reporting of all experimental outcomes related to the theoretical ideas. Such suppositions are similar to the null hypothesis in standard hypothesis testing. We then identify the magnitude of the reported effects and estimate the probability of success for experiments like those reported. Finally, we compute a joint success probability Ptes, across the full set of experiments, which estimates the probability that experiments like the ones reported would produce outcomes at least as successful as those actually reported. … The Ptes value plays a role similar to the P value in standard hypothesis testing, with a small Ptes suggesting that the starting suppositions are not entirely correct and that, instead, there appears to be a problem with data collection, analysis, or publication of relevant findings. In essence, if Ptes is small, then the published findings … appear “too good to be true” (pg. 3).
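The computation described in the quote can be sketched in a few lines. The function names, the two-sample design, and the normal-approximation power formula below are my own illustrative assumptions, not Francis’s actual code; the point is only to show how a joint success probability is formed from estimated powers:

```python
from statistics import NormalDist

N01 = NormalDist()  # standard normal

def approx_power(d, n_per_group, alpha=0.05):
    """Rough power of a two-sided two-sample z-test for standardized effect d."""
    z_crit = N01.inv_cdf(1 - alpha / 2)
    ncp = d * (n_per_group / 2) ** 0.5  # noncentrality under the alternative
    return (1 - N01.cdf(z_crit - ncp)) + N01.cdf(-z_crit - ncp)

def p_tes(studies, alpha=0.05):
    """Joint probability that every experiment in the set is 'successful'
    (significant), given the reported effect sizes -- the Ptes of the quote
    in the special case where all reported experiments succeeded."""
    p = 1.0
    for d, n in studies:
        p *= approx_power(d, n, alpha)
    return p

# A hypothetical 4-experiment paper, all significant, moderate effects:
studies = [(0.5, 20), (0.45, 25), (0.6, 18), (0.5, 22)]
print(p_tes(studies))  # well below Francis's .10 cutoff

# Note how fast the product shrinks: five experiments at true power .4
# would give Ptes = .4**5, about .01, even with fully honest reporting.
```

Because each factor is at most 1, Ptes can only shrink as experiments are added, which is the root of several of the criticisms below.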
So it is a basic null-hypothesis significance test. I personally don’t see the point of this test, since we already know with certainty that the answer to the question, “Is there publication bias in this topic?” is unequivocally “Yes.” So every case the test finds not to be biased is a false negative. But as Daniel Lakens said, “anyone is free to try anything in science,” a sentiment with which I agree wholeheartedly. And I would be a real hypocrite if I thought Francis shouldn’t share his new favorite method, even if it turns out it really doesn’t work very well. But if he is going to continue to apply this test and actually name authors who he thinks are engaging in specific questionable publishing practices, then he should at the very least include a “limitations of this method” section in every paper, wherein he cites his critics. He should also ask the original authors he is investigating for comments, since they are the only ones who know the true state of their publication process. I am surprised that the reviewers and editor of this manuscript did not stop and ask themselves (or Francis), “It can’t be so cut and dried, can it?”
Why the Test for Excess Significance does not work
So on to the fun stuff. There are many reasons why this test cannot achieve its intended goals, and many reasons why we should take Francis’s claims with a grain of salt. The list is arranged not by importance but by the order of his critics in the JMP special issue (excluding Ioannidis and Gelman for space and relevance). I selected the points that I think most clearly highlight the poor validity of this testing procedure. The list gets long, so you can skip to the Conclusion (tl;dr) below for a summary.
Vandekerckhove, Guan, Styrcula, 2013
- Using Monte Carlo simulations, Vandekerckhove and colleagues show that when the test is used to censor studies that seem too good to be true in an environment with 100% publication bias, it censors almost nothing, and the pooled effect size estimates remain as biased as before the correction.
- Francis uses a cutoff of .10 when declaring that a set of studies suffers from systematic bias. Vandekerckhove and colleagues simulate how pooled effect size estimates change if we censor more aggressively by raising the cutoff to .80. This has the counterintuitive effect of increasing the bias in the pooled effect size estimate. In the words of the authors, “Perversely, censoring all but the most consistent-seeming papers … causes greater bias in the effect size estimate” (italics original).
- Bottom line: This test cannot be used to adjust pooled effect size estimates by accounting for publication bias.
- Francis acknowledges that the test can return a significant result even when publication bias is small. Indeed, there is no way to distinguish between different amounts of publication bias by comparing different Ptes values (remember the rules of comparing p-values). Francis nevertheless argues that we should assume any significant Ptes result indicates an important level of publication bias. Repeat after me: statistically significant ≠ practically significant. The fact of the matter is, “the mere presence of publication bias does not imply it is consequential” and by extension “does not warrant fully ignoring the underlying data” (italics original). Francis continues to ignore these facts. [As an aside: if he could quantify the amount of bias in an article, rather than merely declare that bias is present, then maybe the method could be taken seriously.]
- Francis’s critiques themselves suffer from publication bias, invalidating the reported Ptes-values. While Francis believes this is not relevant because he is critiquing unrelated studies, they are related enough to be written up and published together. While the original topics may indeed be unrelated, “The critiques by Francis, by contrast, are by the same author, published in the same year, conducting the same statistical test, to examine the exact same question.” Hardly unrelated, it would seem.
- If Francis can claim that his reported p-values are accurate because the underlying studies are unrelated, then so too can the original authors. Most reports with multiple studies test effects under different conditions or with different moderators. It goes both ways.
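Vandekerckhove and colleagues’ basic result is easy to reproduce in miniature. The sketch below is my own toy version, not their simulation code: every study is run until a significant one is “published” (100% publication bias), each paper bundles four published studies, and papers with Ptes below .10 are censored. The true effect size, sample sizes, and normal-approximation shortcuts are all illustrative assumptions:

```python
import random
from statistics import NormalDist, mean

N01 = NormalDist()
random.seed(1)

TRUE_D, N_PER_GROUP, K = 0.2, 20, 4   # true effect, group size, studies per paper
SE = (2 / N_PER_GROUP) ** 0.5         # approx. standard error of the observed d
Z_CRIT = N01.inv_cdf(0.975)

def published_effect():
    """Run studies until one is significant; file-drawer the rest (100% bias)."""
    while True:
        d_hat = random.gauss(TRUE_D, SE)  # observed standardized effect
        if abs(d_hat) / SE > Z_CRIT:
            return d_hat

def approx_power(d_hat):
    """Estimated power for a replication, taking the observed effect at face value."""
    ncp = abs(d_hat) / SE
    return (1 - N01.cdf(Z_CRIT - ncp)) + N01.cdf(-Z_CRIT - ncp)

def p_tes(paper):
    p = 1.0
    for d_hat in paper:
        p *= approx_power(d_hat)
    return p

papers = [[published_effect() for _ in range(K)] for _ in range(500)]
kept = [paper for paper in papers if p_tes(paper) >= 0.10]  # survive censoring

all_mean = mean(d for paper in papers for d in paper)
kept_mean = mean(d for paper in kept for d in paper)
print(f"true d = {TRUE_D}, pooled before censoring = {all_mean:.2f}, "
      f"after = {kept_mean:.2f}")
```

Both pooled estimates land far above the true effect: censoring by Ptes does not repair the bias, because the papers that survive are exactly the ones whose published effects look most consistently large.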
Johnson, 2013 (pdf hosted with permission of the author)
- Johnson begins by expressing how he feels about being asked to comment on this method: “It is almost as if all parties involved are pretending that p-values reported in the psychological literature have some well-defined meaning and that our goal is to ferret out the few anomalies that have somehow misrepresented a type I error. Nothing, of course, could be farther from the truth. The truth is this: as normally reported, p-values and significance tests provide the consumer of these statistics absolutely no protection against rejecting ‘true’ null hypotheses at less than any specified rate smaller than 1.0. P-values … only provide the experimenter with such a protection … if she behaves in a scientifically principled way” (italics added). So Johnson rejects the premise that the test of excess significance is evaluating a meaningful question at all.
- The test uses a nominal alpha of .10, but Francis’s own simulations show that (when the assumptions are met, under ideal conditions) the actual type I error rate is far, far lower than the nominal level, making the test extremely conservative in practice. This raises questions of interpretability: how do we interpret the alpha level under non-ideal conditions if the nominal alpha is uninformative even in the ideal case? Could we adjust the test so that the nominal and actual alpha match? Probably not.
- The test is not straightforward to implement: one must know the research question of the paper being investigated and which statistics are relevant to that question. Francis’s application to the Topolinski and Sparenberg (2012) article, for example, is fraught with researcher degrees of freedom regarding which test statistics to include in the analysis.
- If researchers report multiple statistical tests based on the same underlying data, the assumption of independence is violated to an unknown degree, and the reported Ptes-values could range from barely altered at best to completely invalidated at worst. Heterogeneity of statistical power across tests that are independent also invalidates the resulting Ptes-values, and the method has no way to account for it.
- There is no way to evaluate his sampling process, which is vital in evaluating any p-value (including Ptes). How did he come to analyze this paper, or this journal, or this research topic? How many did he look at before he decided to look at this particular one? Without this knowledge we cannot assess the validity of his reported Ptes-values.
Morey, 2013

- Bias is a property of a process, not of any individual sample. To see this, Morey asks us to imagine that we ask people to generate “random” sequences of 0s and 1s. We know that humans are biased when they do this, typically alternating between 0 and 1 too often. Say we have the sequence 011101000. This shows 4 alternations, exactly as many as we would expect from a random process (50%, or 4/8). If we know a human generated this sequence, then regardless of the fact that it conforms perfectly to a random sequence, it is still biased. Humans are biased regardless of the sequence they produce, and publication processes are biased regardless of the bias level in the studies they produce. Asking which journals or papers or topics show bias is asking the wrong question. We should ask whether the publication process is biased, the answer to which we already know is “Yes.” We should focus on changing the process, not on singling out papers/topics/journals that we already know come from a biased process.
- The test assumes a fixed sample size (as does almost every p-value), but most researchers run studies sequentially. Most sets of studies are the result of getting a particular result, tweaking the protocol, getting another result, and repeating until satisfied or out of money/time. We know that p-values are not valid when the sample size is not fixed in advance, and this holds for Francis’s Ptes all the same. It is probably not possible to adjust the test to account for the sequential nature of real-world studies, although I would be interested to see a proof.
- The test equates violations of the binomial assumption with the presence of publication bias, which is just silly. Imagine we use the test in a scenario like the one above (sequential testing), where we know the assumption is violated but we also know that all relevant experiments for the paper are published (say, we are the authors). We could then reject the (irrelevant) null hypothesis even though we can be sure the paper suffers from no publication bias. Further, through simulation Morey shows that when true power is .4 or less, “examining experiment sets of 5 or greater will always lead to a significant result [Ptes-value], even when there is no publication bias” (italics original).
- Ptes suffers from all the limitations of p-values, chief among them that different p-values are not comparable and that p is not an effect size (or a measure of evidence at all). Any criticism of p-values and their interpretation (of which there are too many to list) applies to Ptes as well.
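The sequential-testing point above is easy to demonstrate. The sketch below is a generic optional-stopping simulation under my own illustrative assumptions (a one-sample z-test with known variance, peeking every 10 observations up to 100), not anything from Francis’s or Morey’s papers: under a true null, stopping the moment the test comes up “significant” inflates the false-positive rate well past the nominal 5%, and any p-value (Ptes included) computed as if the sample sizes were fixed inherits this problem:

```python
import random
from statistics import NormalDist

random.seed(7)
Z_CRIT = NormalDist().inv_cdf(0.975)  # two-sided alpha = .05

def peeking_rejects(batch=10, max_n=100):
    """Collect null data in batches; test after each batch; stop at first 'hit'."""
    xs = []
    while len(xs) < max_n:
        xs.extend(random.gauss(0, 1) for _ in range(batch))
        n = len(xs)
        z = sum(xs) / n ** 0.5  # one-sample z statistic, sigma known = 1
        if abs(z) > Z_CRIT:
            return True  # "significant" -- a false positive, since the null is true
    return False

trials = 2000
rate = sum(peeking_rejects() for _ in range(trials)) / trials
print(f"false-positive rate with peeking: {rate:.3f} (nominal .05)")
```

With this peeking schedule the long-run rate lands well above .05, even though every individual test uses the nominal 5% criterion.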
Conclusion (tl;dr)

The test of excess significance suffers from many problems: it answers the wrong question about bias, rests on untenable assumptions, performs poorly when used to correct effect size estimates for bias, and yields significant Ptes-values that are hard to interpret. Francis published a rejoinder in which he tries to address these concerns, but I find his rebuttal lacking. Due to space constraints (this is super long already) I won’t list the points in his reply, but I encourage you to read it if you are interested in this method. He disagrees with pretty much every point I’ve listed above, often claiming they address the wrong questions. I contend that he falls into the same trap he warns others about in his rejoinder, namely that “[the significance test can be] inappropriate because the data do not follow the assumptions of the analysis. … As many statisticians have emphasized, scientists need to look at their data and not just blindly apply significance tests.” I completely agree.
Edits: 12/7 correct mistake in Morey summary. 12/8 add links to reviewed commentaries.
Francis, G. (2013). Replication, statistical consistency, and publication bias. Journal of Mathematical Psychology, 57(5), 153-169.
Francis, G. (2013). We should focus on the biases that matter: A reply to commentaries. Journal of Mathematical Psychology, 57(5), 190-195.
Francis, G., Tanzman, J., & Matthews, W. J. (2014). Excess success for psychology articles in the journal Science. PLoS ONE, 9(12), e114255. doi:10.1371/journal.pone.0114255
Gelman, A., & Stern, H. (2006). The difference between “significant” and “not significant” is not itself statistically significant. The American Statistician, 60(4), 328-331.
Ioannidis, J. P., & Trikalinos, T. A. (2007). An exploratory test for an excess of significant findings. Clinical Trials, 4(3), 245-253.
Johnson, V. E. (2013). On biases in assessing replicability, statistical consistency and publication bias. Journal of Mathematical Psychology, 57(5), 177-179.
Morey, R. D. (2013). The consistency test does not–and cannot–deliver what is advertised: A comment on Francis (2013). Journal of Mathematical Psychology, 57(5), 180-183.
Simonsohn, U. (2013). It really just does not follow, comments on Francis (2013). Journal of Mathematical Psychology, 57(5), 174-176.
Vandekerckhove, J., Guan, M., & Styrcula, S. A. (2013). The consistency test may be too weak to be useful: Its systematic application would not improve effect size estimation in meta-analyses. Journal of Mathematical Psychology, 57(5), 170-173.