I was asked by Stephan Lewandowski of the Psychonomic Society to contribute to a discussion of confidence intervals for their Featured Content blog. The purpose of the digital event was to consider the implications of some recent papers published in *Psychonomic Bulletin & Review*, and I gladly took the opportunity to highlight the widespread confusion surrounding interpretations of confidence intervals. And let me tell you, there is *a lot* of confusion.

Here are the posts in the series:

Part 1 (By Lewandowski): The 95% Stepford Interval: Confidently not what it appears to be

Part 2 (By Lewandowski): When you could be sure that the submarine is yellow, it’ll frequentistly appear red, blue, or green

**Part 3 (By Me): Confidence intervals? More like confusion intervals**

Check them out! Lewandowski mainly sticks to the content of the papers in question, but I’m a free-spirited stats blogger and took a somewhat broader focus. I end my post with an appeal to Bayesian statistics, which I think is much more intuitive and seems to answer exactly the kinds of questions people *think* confidence intervals answer.

And remember, try out **JASP** for Bayesian analysis made easy — and it also does most classic stats — for **free**! Much better than SPSS, and it automatically produces APA-formatted tables (this alone is worth the switch)!

Aside: This is not the first time I have written about confidence intervals. See my short series (well, 2 posts) on this blog called “Can confidence intervals save psychology?” part 1 and part 2. I would also like to point out Michael Lee’s excellent commentary on (takedown of?) “The new statistics” (PDF link).


Hello Alex,

Regarding confidence intervals: I’d like to explain to my (undergraduate) marketing students that their usual interpretation of CIs is wrong. Most explanations use reasoning like that in “The Fallacy of Placing Confidence in Confidence Intervals” (Morey, Hoekstra, et al.). They show that applying different confidence procedures to the same dataset leads to absurd results, and that the usual interpretation of CIs is therefore incorrect.

I’d like to use a different method, but is my reasoning sound? I want to show that using the same confidence procedure on different samples (coming from the same population), AND interpreting each interval as “the probability that the true mean is in this specific interval,” also leads to strange results: totally different intervals that are all (wrongly) interpreted as having a (say, 95%) chance of containing the population mean. I also use Cumming’s “Where will the next mean fall?”

The weak point in my reasoning, however, is that the students may retort that they find different intervals because they use different information. Still, the point remains that their intervals exhibit the same logical problems.

What would you say? Is showing that CIs differ a sufficient argument against the usual wrong (Bayesian-ish) interpretation of this frequentist method?
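The classroom demonstration described above can be sketched with a short simulation. This is a minimal sketch under assumed values (a hypothetical normal population with mean 50 and SD 10; the sample size and numbers are invented for illustration): draw many samples, compute a 95% CI from each, and note that the intervals all differ while roughly 95% of them happen to cover the true mean.

```python
import random
import statistics

random.seed(1)
TRUE_MEAN, SD, N, Z = 50.0, 10.0, 30, 1.96  # hypothetical population and design

covered = 0
intervals = []
for _ in range(1000):
    # Draw a fresh sample from the same population each time
    sample = [random.gauss(TRUE_MEAN, SD) for _ in range(N)]
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / N ** 0.5
    lo, hi = m - Z * se, m + Z * se
    intervals.append((lo, hi))
    covered += lo <= TRUE_MEAN <= hi

# Every interval is different, yet coverage is close to the nominal 95%
print(f"coverage: {covered / 1000:.3f}")
```

Any single interval either contains the true mean or it doesn’t; the 95% describes the long-run behavior of the procedure, not the probability for one specific interval.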

Hi Pieter,

Thanks for the interesting question. I don’t know whether the argument you want to make is correct. I don’t think there is any stipulation (in frequentist or Bayesian inference) that one must come to the same conclusion with different data. You would see exactly the same interval conflict if you used Bayesian posteriors.

In fact, if different people use different prior distributions, then Bayesian posteriors can make different probability statements about the parameter even when using the *same* data!
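This point can be illustrated with a conjugate normal-normal update. A minimal sketch under assumed values: the population SD is treated as known for simplicity, and the two priors and the data are invented for illustration.

```python
import random
import statistics

random.seed(2)
SIGMA = 10.0                                    # assume known population SD
data = [random.gauss(50, SIGMA) for _ in range(20)]
n, xbar = len(data), statistics.mean(data)

def credible_interval(prior_mean, prior_sd):
    """95% credible interval from a conjugate normal-normal update."""
    prior_prec = 1 / prior_sd ** 2
    data_prec = n / SIGMA ** 2
    post_prec = prior_prec + data_prec
    post_mean = (prior_prec * prior_mean + data_prec * xbar) / post_prec
    post_sd = post_prec ** -0.5
    return post_mean - 1.96 * post_sd, post_mean + 1.96 * post_sd

# Two analysts, same data, different priors: different probability statements
print(credible_interval(50, 100))  # diffuse prior
print(credible_interval(0, 5))     # strong prior centered elsewhere
```

With a diffuse prior the credible interval sits near the sample mean, while a strong prior centered elsewhere pulls it away: same data, different intervals.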

Thank you. The strange thing is that (in the textbooks we have used until now) frequentism is so deeply “the standard” that not many people think about what exactly these words and figures mean. And some even believe you when you tell them what frequentism really is, or that it’s wrong to use frequentist statistics and then draw Bayesian conclusions (“there is a 95% chance that the population mean falls in this interval”). To make things worse, many textbooks are wrong.

The tools are also aimed at frequentist analysis; we use SPSS. I am glad that JASP is developing into a nice alternative, because command-line programs (like R, Python, or Julia) do not appeal to most of our students. It is still a bit limited, though.