The Reproducibility Project was finally published this week in *Science*, and an outpouring of media articles followed. Headlines included “More Than 50% Psychology Studies Are Questionable: Study”, “Scientists Replicated 100 Psychology Studies, and Fewer Than Half Got the Same Results”, and “More than half of psychology papers are not reproducible”.

Are these categorical conclusions warranted? If you look at the paper, it makes very clear that the results do not definitively establish effects as true or false:

After this intensive effort to reproduce a sample of published psychological findings, how many of the effects have we established are true? Zero. And how many of the effects have we established are false? Zero. Is this a limitation of the project design? No. It is the reality of doing science, even if it is not appreciated in daily practice. (p. 7)

Very well said. The point of this project was not to determine what proportion of effects are “true”. The point of this project was to see what results are *replicable* in an independent sample. The question arises of what exactly this means. Is an original study replicable if the replication simply matches it in statistical significance and direction? The authors entertain this possibility:

A straightforward method for evaluating replication is to test whether the replication shows a statistically significant effect (P < 0.05) with the same direction as the original study. This dichotomous vote-counting method is intuitively appealing and consistent with common heuristics used to decide whether original studies “worked.” (p. 4)

How did the replications fare? Not particularly well.

Ninety-seven of 100 (97%) effects from original studies were positive results … On the basis of only the average replication power of the 97 original, significant effects [M = 0.92, median (Mdn) = 0.95], we would expect approximately 89 positive results in the replications if all original effects were true and accurately estimated; however, there were just 35 [36.1%; 95% CI = (26.6%, 46.2%)], a significant reduction … (p. 4)

So the replications, being judged on this metric, did (frankly) horribly when compared to the original studies. Only 35 of the studies achieved significance, as opposed to the 89 expected and the 97 total. This gives a success rate of either 36% (35/97) out of all studies, or 39% (35/89) relative to the number of studies expected to achieve significance based on power calculations. Either way, pretty low. These were the numbers that most of the media latched on to.
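The arithmetic behind those numbers is easy to check (a quick sketch; the values are taken from the quoted passage):

```python
# Vote-counting arithmetic from the quoted passage of the RPP paper.
n_orig_sig = 97        # original studies with significant positive results
mean_power = 0.92      # average replication power reported for those effects
n_rep_sig = 35         # replications reaching p < .05 in the same direction

expected_sig = round(n_orig_sig * mean_power)   # ~89 expected if all effects were true
rate_vs_all = n_rep_sig / n_orig_sig            # ~36% of all significant originals
rate_vs_expected = n_rep_sig / expected_sig     # ~39% of the power-based expectation

print(expected_sig, round(rate_vs_all, 3), round(rate_vs_expected, 3))
```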

Does this metric make sense? Arguably not, since the “difference between significant and not significant is not necessarily significant” (Gelman & Stern, 2006). Comparing significance levels across experiments is not valid inference. A non-significant replication result can be entirely consistent with the original effect, and yet count as a failure because it did not achieve significance. There must be a better metric.
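A quick numerical illustration of the Gelman & Stern point, with hypothetical test statistics: one result is “significant”, the other is not, yet the difference between them is nowhere near significant.

```python
from math import sqrt, erf

def p_two_sided(z):
    """Two-sided p-value for a standard-normal test statistic."""
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

z_orig, z_rep = 2.10, 1.55           # hypothetical test statistics
print(p_two_sided(z_orig))           # ~.036 -> "significant"
print(p_two_sided(z_rep))            # ~.121 -> "not significant"

# But the *difference* between the two results is far from significant:
z_diff = (z_orig - z_rep) / sqrt(2)  # assumes equal standard errors
print(p_two_sided(z_diff))           # ~.70
```

So labeling the replication a “failure” on significance grounds alone treats two compatible results as contradictory.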

The authors recognize this, so they also used a metric based on confidence intervals rather than simple significance tests. Namely, does the confidence interval from the replication study include the originally reported effect? They write,

This method addresses the weakness of the first test that a replication in the same direction and a P value of 0.06 may not be significantly different from the original result. However, the method will also indicate that a replication “fails” when the direction of the effect is the same but the replication effect size is significantly smaller than the original effect size … Also, the replication “succeeds” when the result is near zero but not estimated with sufficiently high precision to be distinguished from the original effect size. (p. 4)

So with this metric a replication is considered successful if the replication result’s confidence interval contains the original effect, and fails otherwise. The replication effect can be near zero, but if the CI is wide enough it counts as a non-failure (i.e., a “success”). A replication can also be quite near the original effect but have high precision, thus excluding the original effect and “failing”.

This metric is very indirect, and their use of scare-quotes around “succeeds” is telling. Roughly 47% of confidence intervals in the replications “succeeded” in capturing the original result. The problem with this metric is obvious: Replications with effects near zero but wide CIs get the same credit as replications that were bang on the original effect (or even larger) with narrow CIs. Results that don’t flat out contradict the original effects count as much as strong confirmations? Why should both of these types of results be considered equally successful?
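Two hypothetical replications of an original r = .40 make the problem concrete: a small, noisy replication near zero “succeeds” by this metric, while a large, precise replication close to the original “fails”.

```python
from math import atanh, sqrt, tanh

def ci_contains_original(r_rep, n_rep, r_orig):
    """Does the replication's 95% CI (via the Fisher-z approximation)
    contain the original correlation?"""
    z = atanh(r_rep)
    se = 1 / sqrt(n_rep - 3)
    lo, hi = tanh(z - 1.96 * se), tanh(z + 1.96 * se)
    return lo <= r_orig <= hi

r_orig = 0.40
# Small, noisy replication near zero: the wide CI swallows the original -> "success"
print(ci_contains_original(0.05, 20, r_orig))    # True
# Large, precise replication near the original: the narrow CI excludes it -> "failure"
print(ci_contains_original(0.33, 2000, r_orig))  # False
```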

Based on these two metrics, the headlines are accurate: Over half of the replications “failed”. But these two reproducibility metrics are either invalid (comparing significance levels across experiments) or very vague (confidence interval agreement). They also only offer *binary answers*: A replication either “succeeds” or “fails”, and this binary thinking leads to absurd conclusions in some cases like those mentioned above. Is replicability really so black and white? I will explain below how I think we should measure replicability in a Bayesian way, with a continuous measure that can find reasonable answers with replication effects near zero with wide CIs, effects near the original with tight CIs, effects near zero with tight CIs, replication effects that go in the opposite direction, and anything in between.

## A Bayesian metric of reproducibility

I wanted to look at the results of the reproducibility project through a Bayesian lens. This post should really be titled, “A Bayesian …” or “One Possible Bayesian …” since there is no single Bayesian answer to any question (but those titles aren’t as catchy). It depends on how you specify the problem and what question you ask. When I look at the question of replicability, I want to know whether there is evidence for replication success or for replication failure, and how strong that evidence is. That is, should I interpret the replication results as more consistent with the original reported result or more consistent with a null result, and by how much?

Verhagen and Wagenmakers (2014), and Wagenmakers, Verhagen, and Ly (2015) recently outlined how this could be done for many types of problems. The approach naturally leads to computing a Bayes factor. With Bayes factors, one must explicitly define the hypotheses (models) being compared. In this case one model corresponds to a probability distribution centered around the original finding (i.e. the posterior), and the second model corresponds to the null model (effect = 0). The Bayes factor tells you which model the replication result is more consistent with, and larger Bayes factors indicate a better relative fit. So it’s less about obtaining evidence for the effect *in general* and more about gauging the *relative predictive success* of the original effects. (footnote 1)

If the original results do a good job of predicting replication results, the original effect model will achieve a relatively large Bayes factor. If the replication results are much smaller or in the wrong direction, the null model will achieve a large Bayes factor. If the result is ambiguous, there will be a Bayes factor near 1. Again, the question is which model better predicts the replication result? You don’t want a null model to predict replication results better than your original reported effect.

A key advantage of the Bayes factor approach is that it allows natural grades of evidence for replication success. A replication result can strongly agree with the original effect model, it can strongly agree with a null model, or it can lie somewhere in between. To me, the biggest advantage of the Bayes factor is that it disentangles the two types of results that traditional significance tests struggle with: a result that actually favors the null model vs a result that is simply insensitive. Since the Bayes factor is inherently a comparative metric, it is possible to obtain evidence for the null model over the tested alternative. This addresses the problem I had with the above metrics: Replication results bang on the original effects get big boosts in the Bayes factor, replication results strongly inconsistent with the original effects get big penalties in the Bayes factor, and ambiguous replication results end up with a vague Bayes factor.

Bayes factor methods are often criticized for being subjective, sensitive to the prior, and somewhat arbitrary. Specifying the models is typically hard, and sometimes more arbitrary models are chosen for convenience in a given study. Models can also be specified by theoretical considerations that often appear subjective (because they are). For a replication study, though, the models are hardly arbitrary at all. The null model corresponds to the position of a skeptic of the original results, and the alternative model corresponds to that of a strong theoretical proponent. The models are theoretically motivated and answer *exactly* what I want to know: **Does the replication result fit better with the original effect model or a null model?** Or as Verhagen and Wagenmakers (2014) put it, “Is the effect similar to what was found before, or is it absent?” (p. 1458).

## Replication Bayes factors

In the following, I take the effects reported in figure 3 of the reproducibility project (the pretty red and green scatterplot) and calculate replication Bayes factors for each one. Since the effects have been converted to correlation measures, replication Bayes factors can easily be calculated using the code provided by Wagenmakers, Verhagen, and Ly (2015). The authors of the reproducibility project kindly provide the script for making their figure 3, so all I did was take the part of the script that compiled the 95 converted correlation effect sizes for the original and replication studies. (footnote 2) The replication Bayes factor script takes the correlation coefficients from the original studies as input, calculates the corresponding original effect’s posterior distribution, and then compares the fit of this distribution and the null model to the result of the replication. Bayes factors larger than 1 indicate the original effect model is a better fit, Bayes factors smaller than 1 indicate the null model is a better fit. Large (or really small) Bayes factors indicate strong evidence, and Bayes factors near 1 indicate a largely insensitive result.
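For intuition, here is a rough sketch of that comparison in Python. This is my own simplification using normal approximations on the Fisher-z scale, not the actual script of Wagenmakers, Verhagen, and Ly, and the correlations and sample sizes passed in are hypothetical:

```python
from math import atanh, sqrt, exp, pi

def normal_pdf(x, mean, var):
    return exp(-(x - mean) ** 2 / (2 * var)) / sqrt(2 * pi * var)

def replication_bf_approx(r_orig, n_orig, r_rep, n_rep):
    """Approximate replication Bayes factor (original-effect model vs. null)
    using normal approximations on the Fisher-z scale."""
    z_o, var_o = atanh(r_orig), 1 / (n_orig - 3)  # approx. posterior from the original
    z_r, var_r = atanh(r_rep), 1 / (n_rep - 3)    # replication sampling distribution
    # Marginal likelihood of the replication under the original-effect model:
    m1 = normal_pdf(z_r, z_o, var_o + var_r)
    # Likelihood under the null model (rho = 0):
    m0 = normal_pdf(z_r, 0.0, var_r)
    return m1 / m0  # >1 favors the original effect, <1 favors the null

print(replication_bf_approx(0.40, 40, 0.38, 80))  # close replication -> BF >> 1
print(replication_bf_approx(0.40, 40, 0.02, 80))  # near-zero replication -> BF << 1
```

The exact calculation uses the full posterior for the correlation, so these approximate values will differ somewhat from the ones I report, but the logic is the same.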

The replication Bayes factors are summarized in the figure below. The y-axis is the count of Bayes factors per bin, and the different bins correspond to various strengths of replication success or failure. Results that fall in the bins left of center constitute support for the null over the original result, and vice versa. The outermost bins on the left and right contain the strongest replication failures and successes, respectively. The bins labelled “Moderate” contain the more muted replication successes or failures. The two centermost bins labelled “Insensitive” contain results that are essentially uninformative.
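The binning itself is nothing more than a lookup against the conventional evidence thresholds (a sketch; the two “Insensitive” bins split at BF = 1, and BF > 1 favors the original effect):

```python
def bf_category(bf):
    """Bin a replication Bayes factor into the evidence categories used here.
    BF > 1 favors the original effect; BF < 1 favors the null."""
    if bf >= 100:   return "very strong success"
    if bf >= 10:    return "strong success"
    if bf >= 3:     return "moderate success"
    if bf >= 1:     return "insensitive (leans original)"
    if bf > 1/3:    return "insensitive (leans null)"
    if bf > 1/10:   return "moderate failure"
    if bf > 1/100:  return "strong failure"
    return "very strong failure"

# e.g. a replication with >300,000:1 evidence for the null (BF = 1/300000 in
# the favors-original convention) lands in the left-most bin:
print(bf_category(1 / 300000))  # very strong failure
print(bf_category(50))          # strong success
print(bf_category(2))           # insensitive (leans original)
```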

## So how did we do?

You’ll notice from this crude binning system that there is quite a spread from super strong replication failure to super strong replication success. I’ve committed the sin of binning a continuous outcome, but I think it serves as a nice summary. It’s important to remember that Bayes factors of 2.5 vs 3.5, while in different bins, aren’t categorically different. Bayes factors of 9 vs 11, while in different bins, aren’t categorically different. Bayes factors of 15 and 90, while in the same bin, are quite different. There is no black and white here. These are the categories Bayesians often use to describe grades of Bayes factors, so I use them since they are familiar to many readers. If you have a better idea for displaying this please leave a comment. Check out the “Results” section at the end of this post to see a table which shows the study number, the N in original and replications, the r values of each study, the replication Bayes factor and category I gave it, and the replication p-value for comparison with the Bayes factor. This table shows the really wide spread of the results. There is also code in the “Code” section to reproduce the analyses.

### Strong replication failures and strong successes

Roughly 20% (17 out of 95) of replications resulted in relatively strong replication failures (2 left-most bins), with resultant Bayes factors at least 10:1 in favor of the null. The highest Bayes factor in this category was over 300,000 (study 110, “Perceptual mechanisms that characterize gender differences in decoding women’s sexual intent”). If you were skeptical of these original effects, you’d feel validated in your skepticism after the replications. If you were a proponent of the original effects’ replicability you’ll perhaps want to think twice before writing that next grant based around these studies.

Roughly 25% (23 out of 95) of replications resulted in relatively strong replication successes (2 right-most bins), with resultant Bayes factors at least 10:1 in favor of the original effect. The highest Bayes factor in this category was 1.3×10^32 (or log(BF) ≈ 74; study 113, “Prescribed optimism: Is it right to be wrong about the future?”). If you were a skeptic of the original effects you should update your opinion to reflect the fact that these findings convincingly replicated. If you were a proponent of these effects you’d feel validated in that they appear to be robust.

These two types of results are the most clear-cut: either the null is strongly favored or the original reported effect is strongly favored. Anyone who was indifferent to these effects has their opinion swayed to one side, and proponents/skeptics are left feeling either validated or starting to re-evaluate their position. There was only 1 very strong (BF>100) failure to replicate, but there were quite a few very strong replication successes (16!). There were approximately twice as many strong (10<BF<100) failures to replicate (16) as strong replication successes (7).

### Moderate replication failures and moderate successes

The middle-inner bins are labelled “Moderate”, and contain replication results that aren’t entirely convincing but are still relatively informative (3<BF<10). The Bayes factors in the upper end of this range are somewhat more convincing than the Bayes factors in the lower end of this range.

Roughly 20% (19 out of 95) of replications resulted in moderate failures to replicate (third bin from the left), with resultant Bayes factors between 10:1 and 3:1 in favor of the null. If you were a proponent of these effects you’d feel a little more hesitant, but you likely wouldn’t reconsider your research program over these results. If you were a skeptic of the original effects you’d feel justified in continued skepticism.

Roughly 10% (9 out of 95) of replications resulted in moderate replication successes (third bin from the right), with resultant Bayes factors between 10:1 and 3:1 in favor of the original effect. If you were a big skeptic of the original effects, these replication results likely wouldn’t completely change your mind (perhaps you’d be a tad more open minded). If you were a proponent, you’d feel a bit more confident.

### Many uninformative “failed” replications

The two central bins contain replication results that are insensitive. In general, Bayes factors smaller than 3:1 should be interpreted only as very weak evidence. That is, these results are so weak that they wouldn’t even be convincing to an ideal impartial observer (neither proponent nor skeptic). **These two bins contain 27 replication results**. Approximately 30% of the replication results from the reproducibility project aren’t worth much inferentially!

A few examples:

- Study 2, “Now you see it, now you don’t: repetition blindness for nonwords” BF = 2:1 in favor of null
- Study 12, “When does between-sequence phonological similarity promote irrelevant sound disruption?” BF = 1.1:1 in favor of null
- Study 80, “The effects of an implemental mind-set on attitude strength.” BF = 1.2:1 in favor of original effect
- Study 143, “Creating social connection through inferential reproduction: Loneliness and perceived agency in gadgets, gods, and greyhounds” BF = 2:1 in favor of null

I just picked these out randomly. The types of replication studies in this inconclusive set range from attentional blink (study 2), to brain mapping studies (study 55), to space perception (study 167), to cross national comparisons of personality (study 154).

Should these replications count as “failures” to the same extent as the ones in the left 2 bins? Should studies with a Bayes factor of 2:1 in favor of the original effect count as “failures” as much as studies with 50:1 against? I would argue they should not; they should be called what they are: entirely inconclusive.

Interestingly, study 143 mentioned above was recently called out in this NYT article as a high-profile study that “didn’t hold up”. Actually, we don’t know if it held up! Identifying replications that were inconclusive using this continuous range helps avoid over-interpreting ambiguous results as “failures”.

## Wrap up

To summarize the graphic and the results discussed above, this method identifies roughly as many replications with moderate success or better (BF>3) as the counting significance method (32 vs 35). (footnote 3) These successes can be graded based on their replication Bayes factor as moderate to very strong. **The key insight from using this method is that many replications that “fail” based on the significance count are actually just inconclusive.** It’s one thing to give equal credit to two replication successes that are quite different in strength, but it’s another to call all replication failures equally bad when they show a highly variable range. Calling a replication a failure when it is actually inconclusive has consequences for the original researcher and the perception of the field.

As opposed to the confidence interval metric, a replication effect centered near zero with a wide CI will not count as a replication success with this method; it would likely be either inconclusive or weak evidence in favor of the null. Some replications are indeed moderate to strong failures to replicate (36 or so), but nearly 30% of all replications in the reproducibility project (27 out of 95) were not very informative in choosing between the original effect model and the null model.

So to answer my question as I first posed it, are the categorical conclusions of wide-scale failures to replicate by the media stories warranted? As always, it depends.

- If you count “success” as any Bayes factor that has any evidence in favor of the original effect (BF>1), then there is a 44% success rate (42 out of 95).

- If you count “success” as any Bayes factor with at least moderate evidence in favor of the original effect (BF>3), then there is a 34% success rate (32 out of 95).

- If you count “failure” as any Bayes factor that has at least moderate evidence in favor of the null (BF<1/3), then there is a 38% failure rate (36 out of 95).

- If you only consider the effects sensitive enough to discriminate the null model and the original effect model (BF>3 or BF<1/3) in your total, then there is a roughly 47% success rate (32 out of 68). This number jibes (uncannily) well with the prediction John Ioannidis made 10 years ago (47%).
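Given the full vector of replication Bayes factors, these rates are one-liners to recompute (a sketch with hypothetical values; the real ones are in the Results table):

```python
def replication_rates(bfs):
    """Summarize a list of replication Bayes factors (BF > 1 favors the original)."""
    n = len(bfs)
    successes = [bf for bf in bfs if bf > 3]    # at least moderate evidence for effect
    failures  = [bf for bf in bfs if bf < 1/3]  # at least moderate evidence for null
    sensitive = len(successes) + len(failures)
    return {
        "any_evidence_success":    sum(bf > 1 for bf in bfs) / n,
        "moderate_success":        len(successes) / n,
        "moderate_failure":        len(failures) / n,
        "success_among_sensitive": len(successes) / sensitive if sensitive else None,
    }

# Hypothetical example: 4 clear successes, 3 clear failures, 3 inconclusive
example = [300, 20, 5, 4, 2, 1.5, 0.8, 0.2, 0.05, 0.001]
print(replication_rates(example))
```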

However you judge it, the results aren’t exactly great.

But if we move away from dichotomous judgements of replication success/failure, we see a *slightly* less grim picture. Many studies strongly replicated, many studies strongly failed, but many studies were in between. There is a wide range! Judgements of replicability needn’t be black and white. And with more data the inconclusive results could have gone either way. I would argue that any study with 1/3<BF<3 shouldn’t count as a failure *or* a success, since the evidence simply is not convincing; I think we should hold off judging these inconclusive effects until there is stronger evidence. Saying “we didn’t learn much about this or that effect” is a totally reasonable thing to do. Boo dichotomization!

### Try out this method!

All in all, I think the Bayesian approach to evaluating replication success is advantageous in 3 big ways: It avoids dichotomizing replication outcomes, it gives an indication of the range of the *strength* of replication successes or failures, and it identifies which studies we need to give more attention to (insensitive BFs). The Bayes factor approach used here can sort out whether a replication shows strong evidence in favor of the null model, strong evidence in favor of the original effect model, or evidence that isn’t convincingly in favor of either position. Inconclusive replications should be targeted for future replication, and perhaps we should look into why these studies that purport to have high power (>90%) end up with insensitive results (large variance, design flaws, overly optimistic power calculations, etc.). It turns out that having high power when planning a study is no guarantee that one actually obtains convincingly sensitive data (Dienes, 2014; Wagenmakers et al., 2014).

I should note, the reproducibility project did try to move away from the dichotomous thinking about replicability by correlating the converted effect sizes (r) between original and replication studies. This was a clever idea, and it led to a very pretty graph (figure 3) and some interesting conclusions. That idea is similar in spirit to what I’ve laid out above, but its conclusions can only be drawn from batches of replication results. Replication Bayes factors allow one to compare the original and replication results on an effect by effect basis. This Bayesian method can grade a replication on its relative success or failure even if *your* reproducibility project only has 1 effect in it.

I should also note, this analysis is inherently context dependent. A different group of studies could very well show a different distribution of replication Bayes factors, where each individual study has a different prior distribution (based on the original effect). I don’t know how much these results would generalize to other journals or other fields, but I would be interested to see these replication Bayes factors employed if systematic replication efforts ever do catch on in other fields.

### Acknowledgements and thanks

The authors of the reproducibility project have done us all a great service and I am grateful that they have shared all of their code, data, and scripts. This re-analysis wouldn’t have been possible without their commitment to open science. I am also grateful to EJ Wagenmakers, Josine Verhagen, and Alexander Ly for sharing the code to calculate the replication Bayes factors on the OSF. Many thanks to Chris Engelhardt and Daniel Lakens for some fruitful discussions when I was planning this post. Of course, the usual disclaimer applies and all errors you find should be attributed only to me.

## Notes

footnote 1: Of course, a model that takes publication bias into account could fit better by tempering the original estimate, and thus show relative evidence for the bias-corrected effect vs either of the other models; but that’d be answering a different question than the one I want to ask.

footnote 2: I left out 2 results that I couldn’t get to work with the calculations. Studies 46 and 139 both appear to be fairly strong successes, but I’ve left them out of the reported numbers because I couldn’t calculate a BF.

footnote 3: The cutoff of BF>3 isn’t a hard and fast rule at all. Recall that this is a continuous measure. Bayes factors are typically a little more conservative than significance tests in supporting the alternative hypothesis. If the threshold for success is dropped to BF>2 the number of successes is 35 — an even match with the original estimate.

## Results

This table is organized from smallest replication Bayes factor to largest (i.e., strongest evidence in favor of the null to strongest evidence in favor of the original effect). The Ns were taken from the final columns in the master data sheet, “T_N_O_for_tables” and “T_N_R_for_tables”. Some Ns are not integers because they presumably underwent df correction. There is also the replication p-value for comparison; notice that BFs>3 generally correspond to ps less than .05, BUT there are some cases where they do not agree. If you’d like to see more about the studies you can check out the master data file on the reproducibility project OSF page (linked below).

## R Code

If you want to check/modify/correct my code, here it is. If you find a glaring error please leave a comment below or tweet at me.

## References

Link to the reproducibility project OSF

Link to replication Bayes factors OSF

Dienes, Z. (2014). Using Bayes to get the most out of non-significant results. Frontiers in psychology, 5.

Gelman, A., & Stern, H. (2006). The difference between “significant” and “not significant” is not itself statistically significant. *The American Statistician*, *60*(4), 328-331.

Open Science Collaboration (2015). Estimating the reproducibility of psychological science. Science 28 August 2015: 349 (6251), aac4716 [DOI:10.1126/science.aac4716]

Verhagen, J., & Wagenmakers, E. J. (2014). Bayesian tests to quantify the result of a replication attempt. *Journal of Experimental Psychology: General*,*143*(4), 1457.

Wagenmakers, E. J., Verhagen, A. J., & Ly, A. (2015). How to quantify the evidence for the absence of a correlation. *Behavior Research Methods*.

Wagenmakers, E. J., Verhagen, J., Ly, A., Bakker, M., Lee, M. D., Matzke, D., … & Morey, R. D. (2014). A power fallacy. Behavior research methods, 1-5.

Really interesting work! I’m a philosopher of science, so it’s fascinating to me to see how different statistical analyses of the replication results are getting (somewhat) different overall conclusions.

“If you have a better idea for displaying this please leave a comment.” Density plots? http://www.statmethods.net/graphs/density.html; http://docs.ggplot2.org/0.9.3.1/geom_density.html

Great analysis!

This is exactly what I hoped would happen with the data and code.

I would have been surprised if results would have diverged strongly using Bayes factors.

One thing I do not understand is how one can claim a degree of evidence without knowing what was actually predicted and measured. What scientists using frequentist analyses are supposed to do (I’m not saying that they in fact do so) is re-establish the epistemic link between the statistical hypothesis and the research question in terms of the predicted observational constraint, in order to evaluate credibility of the claim.

So, if I predict it is going to rain tomorrow at 12:00 based on information I have today and I am off by 1-2 hours, this will not be considered strong evidence for the credibility of my weather model. If I made this prediction 1 year ago and I am off by the same amount, that would likely be considered a corroboration of my model of the weather.

How can I distinguish between these two situations using Bayes Factors?

Best,

Fred

Hey Fred, thanks for the comment and thanks for your work on the RPP😀

I think you are asking a really good question. Here’s my take: Bayes factors are interesting and relevant only insofar as we judge them as good representatives of our theories. Since these BFs are based on converted correlation coefficients, they are really only approximate Bayes factors. If there is a lot of information in the model not captured by the standardized effect size, then this approximation may not hold (where r is not an approximately sufficient statistic; complex factorial designs, for example). What I don’t think you’d see is a whole lot of cases where a strong success changes to a strong failure (& vice-versa), but I wouldn’t rule that out if the BF approximation is essentially nil and I ended up comparing irrelevant models. This is really just a rough first pass at it, and exact Bayes factors would likely change many of the particular numerical values. Again, how much they change depends on how good of an approximation these measures are. I’d be happy for someone to re-analyze this to check that! And I’m confident there would be some cases where these BFs are overshooting and cases where they are undershooting.

In other words, the conversion to a correlation limits the models’ predictions to a standardized scale. Some researchers understandably don’t like evaluating model prediction (including BFs) on standardized scales (cohen’s d as well) because experimental design plays heavily in the calculation of the metric. If this conversion neuters the models’ connections to their respective theory then these aren’t so good.

In terms of distinguishing between your two scenarios, let me think about it and get back to you.

One additional method for improving predictions here might be to consider reliability estimates for variables and to correct the effect size for attenuation related to measurement-error attenuation by adapting Spearman’s original formulae:

http://webspace.ship.edu/pgmarr/Geo441/Readings/Spearman%201904%20-%20The%20Proof%20and%20Measurement%20of%20Association%20between%20Two%20Things.pdf

This could lead to finding that two studies found virtually identical findings despite different point estimates.

For example, if Study 1 found an rxy of .35 with reliability of rxx = .9 and an ryy = .9, then the “true” effect corrected for measurement error is rx’y’ = rxy/(rxx * ryy)^.5 = .388. This effect size is equivalent to a replication finding an rxy of .27 with reliability of rxx = .7 and ryy = .7, where rx’y’ correlation = rxy/(rxx * ryy)^.5 = .385.
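In code, the correction is a one-liner (same hypothetical numbers as above):

```python
from math import sqrt

def disattenuate(r_xy, r_xx, r_yy):
    """Spearman's correction: estimate the correlation free of measurement error."""
    return r_xy / sqrt(r_xx * r_yy)

print(disattenuate(0.35, 0.9, 0.9))  # ~0.389
print(disattenuate(0.27, 0.7, 0.7))  # ~0.386
```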

I doubt this would have a huge effect on predictions (and it shouldn’t if measures have high reliability), but it might be worth considering.

Thanks Matthew, this is a really interesting idea.

“and it shouldn’t if measures have high reliability”

I think you would see a bit of variability in the reliability of measures in a dataset as diverse as this, but it would be interesting to look at how this changes the predictions for a given effect.

Remember that the reliability figures are point estimates and the attenuation corrections make assumptions that may not hold (essentially the corrections tend to work well only if reliability is high and from a large sample – and that assumes that the reliability isn’t different in the study that generated the data and the study that generated the reliability estimate).

One is better off comparing estimates on the original (unstandardised) scale, I think.

Reblogged this on Neuroconscience and commented:

Fantastic post by Alexander Etz (@AlxEtz), which uses a Bayes Factor approach to summarise the results of the reproducibility project. Not only a great way to get a handle on those data but also a great introduction to Bayes Factors in general!

Great post (as usual)! I very much like that someone else is fighting back against dichotomiousness (dichotomyness? dichotomousity? The Reign of Dichotomy?)😛 My main beef with the trendy replication movement right now has always been the inevitable misinterpretation of what replications mean or are supposed to mean. It’s great that there *are* replication attempts but at equal measure we need better education about what information they provide. Anyway a few thoughts:

1. Even though it is dichotomous, I think your histogram of BF categories is fine – but you asked, so here is another way to represent those replication BFs. In fact, it reveals some interesting features about BFs that I also discussed in my work-in-progress bootstrapping paper (I will eventually write a new draft. It’s just not high on my list of priorities right now). Here I replotted the BFs from your table as a histogram (open circles) and then added a smoothed histogram (using a kernel density smoother in Matlab). Moreover, I used the logarithm to express the BFs (no other way to represent this – your categorical histogram also does that in a way). The dashed and dotted vertical lines show BFs that would classify as “strong” (BF>10 or <1/10) or “very strong” (BF>100 or <1/100), respectively.

What this shows is that actually most BFs will cluster relatively close to 1 (that is, zero on this plot) even if they provide reasonably strong evidence. Only a handful of results fall way off into excessive values. I discussed that same issue in my manuscript. With BFs (or p-values too, to be honest) you can get some very extreme values, but should you interpret them as being particularly extreme? On repeated measurement (I know this is a very frequentist concept :P) they are very inconsistent. Perhaps one could therefore argue that categorical labels are useful after all – one could say that it doesn’t matter whether a result in your “very strong” category has BF=300,000 or BF=300, because the interpretation would probably be the same. But I am not sure this is right, as this would bring back dichotomous thinking. How likely it is for a replication to cross the category boundary is probably what matters here pragmatically. I don’t know this yet.
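(For readers following along, the log-scale banding under discussion is easy to express in code. A Python sketch; the cut-points at 3, 10, and 100 are one common Jeffreys-style convention, and the labels are illustrative, not canonical:)

```python
import math

def evidence_band(bf):
    """Map a Bayes factor onto a symmetric log10 scale ("weight of
    evidence") and a coarse Jeffreys-style label. Cut-points of
    3, 10, and 100 are one common convention among several."""
    w = math.log10(bf)  # 0 means BF = 1, i.e. no evidence either way
    a = abs(w)
    if a < math.log10(3):
        label = "weak"
    elif a < 1:        # |BF| between 10 and 3
        label = "moderate"
    elif a < 2:        # |BF| between 100 and 10
        label = "strong"
    else:
        label = "decisive"
    side = "original" if w > 0 else "null" if w < 0 else "neither"
    return w, label, side

print(evidence_band(300000))  # far into the "decisive" band
print(evidence_band(1 / 15))  # "strong" evidence favoring the null
```

On the log scale, BF=300,000 and BF=300 differ by about 3 units of weight of evidence, which makes the clustering near zero easy to see.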

2. You argue very compellingly why it is flawed to make inferences on replications based on whether CIs include the original effect or not. But wouldn’t a sensible way be to calculate the conjoint CI of the original and replication data? I guess this works on the assumption that the findings are homogeneous, so conceptual replications or all sorts of minor changes to protocols would arguably not fit. But I assume clever statisticians have already come up with a solution to this too 😛. Either way, a conjoint CI would tell you the combined effect size and its uncertainty, thus allowing you to make an inference about what the effect actually is. If the replication is very precise but the original effect wasn’t, then it will be heavily weighted towards the replication. This seems to be essentially Bayesian to me without being formal about it?
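(The “conjoint CI” being described corresponds to standard fixed-effect, inverse-variance pooling, which only makes sense under the homogeneity assumption flagged above. A minimal Python sketch with invented numbers:)

```python
import math

def combine_fixed_effect(estimates, standard_errors):
    """Fixed-effect (inverse-variance) pooling of effect estimates.
    Each study is weighted by 1/SE^2, so a precise replication
    dominates an imprecise original, and vice versa. Assumes all
    studies estimate the same underlying effect."""
    weights = [1.0 / se ** 2 for se in standard_errors]
    pooled = sum(w * est for w, est in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Imprecise original (0.8 +/- 0.4), precise replication (0.1 +/- 0.1):
# the pooled estimate sits close to the replication.
est, se = combine_fixed_effect([0.8, 0.1], [0.4, 0.1])
print(est, se, (est - 1.96 * se, est + 1.96 * se))
```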

3. Okay, this one is a bit tongue in cheek, but do you realise that you are essentially talking like a frequentist when you calculate the success rates based on the different BF classifications at the end of your post? It’s just as I have long suspected: Bayesians really can’t shake frequentist thinking either 😉

Seriously though, while I think hypothesis testing is important, isn’t it more useful to move to estimation when analysing replications? I can see why original studies want to test specific hypotheses, but once you have accumulated replications surely you must start to be more interested in what the effect size estimate actually is?

Anyway, I’ll be happy to discuss all this when I’m back from Twitterlessness…😉

Sam, we are antichotomous! Thanks for commenting, I’m eager to have you back on twitter. And thanks for sharing this graph.

1. A tricky part when interpreting Bayes factors is that they don’t scale linearly (since they are multiplicative ratios). So while, yes, a BF of 300k is quite a lot larger than 300, it’s hard to really grasp the consequences of changes of that magnitude. But since they are unitless ratios, we can interpret them as such; we can always say that the first is 1000 times stronger evidence. How compelling these numbers are to someone depends on their personal prior for a specific effect or theory. That’s what I tried to allude to by discussing how skeptics and proponents would react to various results (without invoking actual numbers for their pesky personal priors). And the fact that many are (relatively) near 1 is a function of this dataset. No guarantee this would hold in Journal of Vision (for example). Or for other specifications of the models, for that matter.

2. I think combining CIs is tricky here. What is the long-run interpretation? The only reason the second CI was calculated was that we wanted to verify the first one. There may be ways to model (simulate?) this dependency between them, but I don’t know how one would. Maybe some variant of a random-effects model or something.

As you say, the goal is not always to get a combined estimate anyway. If I doubt the validity of the first result due to biased reporting processes, why would I want to combine it with a pre-registered (minimally biased) estimate? Also, some of the original CIs were actually pretty narrow, and some replications had smaller N than the original studies, so there is no guarantee that you get a big weight towards your replication. You could build a Bayesian model of this by giving different probabilities to the different models and then mixing, but I don’t know how a frequentist could do it and still be a frequentist. Remember, probabilities strictly can’t be assigned to models in that framework.

“This seems to be essentially Bayesian to me without being formal about it?” Sure, let’s just roll with random “sensible” heuristics. Not like that kind of thinking got us into this mess😛 Principles, Sam! Principles!

3. Finally, I can see why you might think tabulating replication success and failure might look like I’m a dirty frequentist (joking!!). But I’m conditioning on observables (data) in my tabulation, not hypothetical parameters. These rates are descriptive, not prescriptive.😉

Once you’re confident there is something to estimate, feel free to estimate it.

Good answers, thanks! Regarding the conjoint interval, I still think it would probably tell you something. If, as you say, the interval of the original study is narrower than that of the replication, then surely it should carry more weight? I do get your point about potential bias in the original, though (of course bias could also exist in the replication, but this is a story for another day… 😉). In this case it indeed makes more sense to see how consistent the replication effect(s) are with the original ones.

“Principles, Sam! Principles!” I guess in the end I’ll just always be a pragmatist 😉

“I’m eager to have you back on twitter” Considering that I’m commenting here, I could theoretically be on twitter right now, but I’m trying not to be too over the top in my lawlessness (yes, the attentive reader will have induced correctly that your blog is also banned here, as is mine it would appear). A billion people with no access to Bayesian inference or my inane ramblings. Forget social media – this is the true tragedy here! 😉

‘deduced’ not ‘induced’. Been thinking too much about visual illusions lately…😛

I just realised I was wrong: your blog (and mine) aren’t banned here after all. So all is well, I guess… 😛

Ooops, looks like my link to the graph didn’t work (too long since I used html). Here is a direct link:

great post!

But why the assumption that the relevant comparison is to a ‘null’ model of exactly zero effect? Isn’t the relevant comparison to ‘an effect of the smallest size to have any practical or theoretical consequences’? In some contexts this might be anything different from zero, but more often than not it would need to be of a certain size to have any practical or theoretical significance….

Of course, taking this into account will only make the original studies look even worse.

I agree with you, Dimiter. In some cases the researchers are really interested in comparing to clinically/theoretically uninteresting effect sizes. In most cases this would shrink a given Bayes factor, since the null can now account for a wider range of observables. By how much depends on the context of the specific effect and how large of an effect size is considered relevant, of course.

I don’t have a problem with non-point nulls in general, and if an analyst thinks one is reasonable and theoretically motivated then they should feel free to use it. In practice, for this dataset, it would take a considerable amount of work to implement this on a case-by-case basis, but for a smaller “reproducibility project” (with just a few studies) it wouldn’t be too hard. As I say in the post, “there is no single Bayesian answer to any question”, precisely because one can always find other reasonable ways to formulate the problem. This is one such way.
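(A rough Python illustration of the point about widening the null, using normal approximations throughout and entirely invented numbers – not the post’s actual computation: letting the null cover a small interval of effect sizes typically shrinks the Bayes factor for a modest observed effect, since the null can then account for more of the observable data.)

```python
import math

def normal_pdf(x, mean, var):
    """Density of a normal distribution parameterised by variance."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def bf_against_null(d_rep, se_rep, alt_mean, alt_sd, null_sd=0.0):
    """BF for an alternative N(alt_mean, alt_sd^2) against a null that
    is either a point at zero (null_sd=0) or a narrow interval-like
    N(0, null_sd^2). Marginal likelihoods are normal because a normal
    prior plus normal sampling error convolve to a normal."""
    m_alt = normal_pdf(d_rep, alt_mean, alt_sd ** 2 + se_rep ** 2)
    m_null = normal_pdf(d_rep, 0.0, null_sd ** 2 + se_rep ** 2)
    return m_alt / m_null

# Observed replication effect 0.2 (SE 0.1), alternative centred at 0.5:
print(bf_against_null(0.2, 0.1, 0.5, 0.15, null_sd=0.0))  # point null
print(bf_against_null(0.2, 0.1, 0.5, 0.15, null_sd=0.1))  # wider null, smaller BF
```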

[…] The Bayesian Reproducibility Project […]

I just wanted to say that I loved this post.

The first thing I thought after reading the reproducibility project was “why on Earth are they not employing Bayesian methods for this question?”.

I care little about frequentist approaches to this project. To be frank, it seemed like the authors were attempting to solve a largely frequentist problem using more frequentist methods. As though they were digging around in the frequentist toolbox for answers to their problem, and wound up hammering in a nail with a screwdriver because “close enough”.

Although their estimates were not far off of yours, what I *really* want to see is how parameter estimates have changed in light of new data, not some dichotomous ‘did it work again’ decision. Bayes factors get us closer if nothing else.

Thank you for this post!

Stephen, thank you for the very kind words. I think I just put into words what everyone was thinking, namely: “Surely it can’t be so black and white?” And I’m sure we’ll be seeing many more interpretations of these results in the coming weeks/months/years.

As I said in a reply to Sam above, I think you should start estimating things once you’re confident there is something there to estimate. If someone believes the null is always false and that we should always estimate everything, I understand that. Essentially they are giving the null model a probability of approximately zero. Reasonable enough, even if I wouldn’t do that. (These are personal probabilities, after all.)

If, however, you don’t have complete disregard for the null, then I think the best estimates we could get here are from averaging the different model estimates based on their posterior probability. Jeff Rouder wrote a good piece on that a while back: http://jeffrouder.blogspot.com/2015/03/estimating-effect-sizes-requires-some.html
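(A loose Python sketch of that model-averaging idea – my own simplification of the approach described in that post, with invented names and numbers: the averaged estimate is the alternative’s estimate weighted by the alternative’s posterior probability, with the point null contributing zero.)

```python
def model_averaged_estimate(bf_10, estimate_alt, prior_odds=1.0):
    """Posterior-probability-weighted average of the effect estimate.
    bf_10 is the Bayes factor for the alternative over the null;
    the null model contributes an effect of exactly zero, so strong
    evidence for the null shrinks the averaged estimate toward zero."""
    posterior_odds = prior_odds * bf_10
    p_alt = posterior_odds / (1.0 + posterior_odds)
    return p_alt * estimate_alt  # + (1 - p_alt) * 0 from the null

# Strong evidence for the effect leaves the estimate mostly intact...
print(model_averaged_estimate(9.0, 0.5))
# ...while strong evidence for the null shrinks it heavily.
print(model_averaged_estimate(1 / 9, 0.5))
```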

I’m curious to hear how people would want to go about estimating these effects. Would you try to extract a bias-reduced estimate from the original studies and then do a precision-weighted average, or give extra weight to the pre-registered (low-bias) replication attempts, or disregard the original estimates from studies with strong failures to replicate, etc etc etc? Have you given it much thought? It seems complicated to me, and I don’t know exactly how I would do it, so I’d be really interested to hear what you think.

I’m still not sure about this. Even if you don’t believe in the effect’s existence, couldn’t you still estimate the parameter to see how far away from zero it is? Only if your parameter estimate after replication is sufficiently beyond the null range of no interest would you then infer that the effect was meaningful. I suppose, though, that this isn’t all that far from the non-zero null models you discussed above, so you could do BFs on that too.

I agree that the bias-reduction aspect is a complex one, but it seems important. I would assume that if the posterior of the original study incorporated the bias, a replication result going in the same direction but with a weaker effect size estimate would actually be stronger evidence for the effect’s existence than using the “raw” posterior?

I think looking at how far away parameter estimates are from null values is just a crude way to do a hypothesis test. If you want to do that, then do a test with principles behind it! It’s just as easy to implement a Bayes factor, and BFs actually follow from the laws of probability.

I think incorporating the bias into the posterior of the original study is one way you could do it. Maime Guan and Joachim Vandekerckhove just wrote a paper about mitigating bias in effect sizes through averaging the estimates from different bias-generating models: http://www.cidlab.com/prints/guan2015bayesian.pdf. Still some work to be done on that front, though.

In general, if you correct for bias by having the replication prior localized around smaller effects, you would indeed get more favorable Bayes factors. The replication BF on the raw posterior can be thought of as highly biased against small effects; essentially, this test penalizes you for having biased your original estimates. The question it asks is: is the replication effect roughly as large as before, or null? If you make that “as large as before” smaller, smaller replication effect sizes will indeed fit better.
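(To make the “original posterior as replication prior” test concrete, here is a simplified Python sketch using normal approximations – my own illustration in the spirit of the replication Bayes factor, not the exact computation from the post; all numbers are invented.)

```python
import math

def normal_pdf(x, mean, var):
    """Density of a normal distribution parameterised by variance."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def replication_bf(d_orig, se_orig, d_rep, se_rep):
    """Normal-approximation replication Bayes factor: the alternative's
    prior is the original study's approximate posterior N(d_orig,
    se_orig^2); the null is a point at zero. Returns evidence for
    'the effect is as the original estimated' over the null."""
    # Marginal likelihood of the replication estimate under each model
    m_alt = normal_pdf(d_rep, d_orig, se_orig ** 2 + se_rep ** 2)
    m_null = normal_pdf(d_rep, 0.0, se_rep ** 2)
    return m_alt / m_null

# A replication matching the original favors the original's model...
print(replication_bf(d_orig=0.5, se_orig=0.15, d_rep=0.45, se_rep=0.12))
# ...while a near-zero replication favors the null.
print(replication_bf(d_orig=0.5, se_orig=0.15, d_rep=0.02, se_rep=0.12))
```

Shifting `d_orig` downward (a bias-corrected prior) makes intermediate replication effect sizes fit the alternative better, which is the point above.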

I think it would be definitely worth looking more into this and/or promoting this more. Of course in some cases the bias-corrected posterior will probably look more or less like no effect anyway…😉

Yes, I imagine we’d see many published estimates shrink by quite a lot.

[…] effect, and which turned out to be uninformative. Within this approach, Alex Etz’s conclusions[8] turned out to be quite […]

Great post. In regard to how to display the replication Bayes factors, I recommend the empirical cumulative distribution function, with the “weight of the evidence” (common log of the Bayes factor) as the x axis. Generally speaking, the cumulative distribution is THE way to display raw data. There is no smoothing or averaging. You can immediately see the location, spread, form and range of the distribution and whether it was censored. In the graph I will attempt to attach, made from the BFs in your table, the data were not censored, but I limited the x-axis range to the portion of greatest interest (-3 to 3). I don’t know how to use Matlab’s publish command to get a figure to appear in a comment. Below is the Matlab code that generated my figure.

% Code assumes Dat variable in Matlab's workspace. The Dat variable is a
% vector of the 95 replication BFs in Etz's table
figure
H = cdfplot(log10(Dat));
set(H,'LineWidth',2)
xlabel('Weight of Replication Evidence [log_1_0(BF)]')
ylabel('Cumulative Fraction of Replications')
hold on
plot([0 0],ylim,'r--','LineWidth',2)
title('Cumulative Distribution','FontSize',14)
xlim([-3 3])
plot([log10(1/3) log10(1/3)],ylim,'k:',[-1 -1],ylim,'k-.',[-2 -2],ylim,'k-')
plot([log10(3) log10(3)],ylim,'k:',[1 1],ylim,'k-.',[2 2],ylim,'k-')
text(-2.9,.5,'Decisive')
text(-1.9,.5,'Strong')
text(-.95,.5,'Good')
text(-.3,.5,'Weak')
text(2.1,.5,'Decisive')
text(1.1,.5,'Strong')
text(.5,.5,'Good')
text(.1,.9,'Favors Original','FontSize',14)
text(-1.1,.9,'Favors Null','FontSize',14)

Published with MATLAB® R2014b

I agree this is quite a nice way to represent this. I created the plot here:

Wow, this is a really nice way of looking at it. Thanks Randy!

And thanks for posting it Sam! I don’t have MATLAB. v_v

On this note, one fine day I will perhaps reinstall R😉

[…] some debate about the methodology of the study — see, for example, the excellent post by Alexander Etz who suggests a Bayesian approach instead of classifying each replication attempt into a success vs. […]

When an original study and a replication (designed to match in power) have wildly different confidence interval widths, I’m interested. If this keeps happening, it tells us we’re bad at estimating confidence — almost certainly being over-certain in published results. Am I correct in thinking Bayes factors are not interested in diagnosing this specifically?

For example, if the original had a confident effect, say CI [4.9, 5.1], and the replication says an uninformative [-100, 100], that gives a Bayes factor near 1. A 50 or 1/50 Bayes factor implies the replication gave a good amount of information, but the converse is not true, i.e. 1 doesn’t imply the replication was uninformative, if I understand correctly. I haven’t solved for this, but I believe there’s a locus of (interval width, interval center) that gives Bayes factor = 1.

In evaluating the replication, there are two dimensions: agreement between the studies’ statements, and similarity in strength of statements. (Not orthogonal when stated that way, but there are two dimensions.) The Bayes factor is one-dimensional by design, so it doesn’t claim to distinguish the two independently. Ah, rereading, I see you noted the Bayes factor “disentangles the two types of results that traditional significance tests struggle with: a result that actually favors the null model vs a result that is simply insensitive”. It does distinguish those two cases in a useful way, but it doesn’t distinguish between “simply insensitive” and “says something, but something in between the original and the null model”, correct? Which I’d like to distinguish, for the motivation above.

You know, I quite like the paper’s scatterplot of the originals’ and replications’ effect sizes, I would just want to draw the intervals (in both directions) on there too.

(I’m a bystander in experimental design and in prob/stats, so please forgive blunders, but thanks for your intriguing write-up.)

Tangent,

It’s important to remember what question the Bayes factor is trying to answer: Given these two models, how much better does the replication data fit one model or the other? If the data are so variable that their standard error is 1000 times that of the original experiment (in your example, 50 vs .05), the data fit both models very poorly and the BF will be near 1, as you note. Sensitivity is relative.

You say, “A 50 or 1/50 Bayes factor implies the replication gave a good amount of information, but the converse is not true, i.e. 1 doesn’t imply the replication was uninformative”

My answer is that a BF of 1 always means the same thing. It always means the data were uninformative *with respect to the models being compared*. There are an infinite number of ways to achieve a BF near 1, but they all indicate that the data do not clearly favor one of the models. In technical terms, it means that the probability of the data (i.e., the marginal likelihood) under both models is approximately the same. The question of *why* the data were insensitive cannot be answered by the Bayes factor. That’s an experimental design question, not an inferential question. I tried to address this in the first paragraph of the “Try out this method” section.
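(To put numbers on Tangent’s example – a Python sketch using normal-approximation marginal likelihoods, my own illustration rather than the post’s exact computation: a hugely noisy replication fits a sharp alternative and the null about equally poorly, so the BF lands near 1.)

```python
import math

def normal_pdf(x, mean, var):
    """Density of a normal distribution parameterised by variance."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def bf_alt_vs_null(d_rep, se_rep, d_orig, se_orig):
    """BF for 'effect near the original estimate' vs a point null,
    under normal approximations (an illustrative sketch only)."""
    m_alt = normal_pdf(d_rep, d_orig, se_orig ** 2 + se_rep ** 2)
    m_null = normal_pdf(d_rep, 0.0, se_rep ** 2)
    return m_alt / m_null

# Original: CI roughly [4.9, 5.1] -> d_orig = 5, se_orig ~ 0.05.
# Replication: CI roughly [-100, 100] -> se_rep ~ 50. The replication
# estimate is swamped by noise relative to both models, so the
# marginal likelihoods nearly cancel and the BF sits near 1.
print(bf_alt_vs_null(d_rep=0.0, se_rep=50.0, d_orig=5.0, se_orig=0.05))
```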

You say, “[the bayes factor] doesn’t distinguish between “simply insensitive” and “says something, but something in between the original and the null model””

First, sensitivity is relative to the models in question. Data with a SE of 50 (your insensitive example) are quite sensitive if the models have variability on the order of 5000. When compared to models with variability on the order of .05, they’re insensitive. Bayesian inference does not entail statements of absolutes except these three: Everything is relative, everything is conditional, and all inferences must follow from the laws of probability.

Second, remember, the question is about comparing the relative fit of these two particular models. This does not preclude you from introducing a third intermediate model that you think would predict the data better. In fact, there will almost always be a third model that fits the data better than the two models under consideration. The question is not whether this model exists (it almost certainly does), but whether it was motivated by theory and not crafted in response to the data being tested. You cannot use the data to create a model that then tests the same data, or you’ll use the data twice. Models constructed in response to these replication data would need to be tested on a new batch of data.

[…] The Bayesian Reproducibility Project […]

[…] perspective on the Reproducibility Project: Psychology.” A little less presumptuous than the old blog’s title. Thanks to the RPP authors sharing all of their data, we research parasites were able to […]

[…] of the Reproducibility Project, Alexander Etz produced a great Bayesian reanalysis of the data from that project (possible because it is all open access, via the Open Science […]


[…] confidence interval of the replication. Both arguments can lead to some fairly peculiar results. An early criticism of the initial Reproducibility Project paper suggested a Bayesian approach to testing reproducibility but that had its own […]

[…] 1. Learn about Bayes, because this and this. […]