July 2015

7/1

Richard Morey shared a new preprint of a rejoinder on his github today (hit raw if you want the pdf). Someone wrote a commentary on their paper investigating the misinterpretation of confidence intervals, and made some silly arguments. I can’t link the commentary because it isn’t online yet, sorry. I’ll just say, I don’t even need to read the commentary after reading the rejoinder to know that Richard, Jeff, EJ, and Rink just demolished them. The fact that someone wrote a commentary so full of confusion and mistakes is a testament to how confusing confidence intervals really are.

Also- Got invited to review another paper at JMP! So exciting. 🙂 It’s like 40 pages long though… :/ oh well, I’ll give it a thorough review and do my duty as a reviewer. And I’ll sign it!

7/2

Cool paper coming out from Maime Guan and Joachim Vandekerckhove about bayesian methods for correcting effect size estimates plagued by publication bias. They take a model averaging approach to try to shrink estimates back down to reasonable levels. I haven’t really read it yet but it seems neat. Will report back once I’ve finished reading and digesting it. I’m gonna meet these guys soon so I better read up and be prepared. 🙂

7/3

Joe Hilgard wrote a nice piece explaining the paper I referenced yesterday. It’s meant for plain language readers, but it’s still probably a little much for many non-stat folks. I still haven’t had a chance to read the paper myself, but from my quick look this seems to be a pretty nice writeup. Nice job, Joe. 😉

7/4

Oh yeah, Chris Engelhardt #didthemath on that wobbly chairs crap from Psych Science. The BFs are all terribly weak, and some even support the null. I mean, you didn’t need to see the BFs to know that the evidence was weak, but it is nice to have the numbers to back it up. Psych Science, I beg you, please stop publishing shitty science.

7/5

Planning for the California trip. Realized I’ve been having a hugely unproductive month. I guess it’s not the worst thing in the world, but I need to step it up if I want to get anywhere. A friend said he’d lend me his old GRE books, so that’s awesome. I need to kick that studying up a notch. He got into a really good grad program — Yale architecture. Smart dude and a great friend. Spent the weekend at his house celebrating the holiday.

7/6

“…and – oh my God – it’s full of stars!” Just another instance of incomprehensible statistical reporting.

Also- RetractionWatch covered a new paper arguing that some replications may cause more harm than good. The idea is that publication bias affects replications just as much as original research, so if the replications are biased, the weighted estimates will be biased too. No news here. Bias = bad. Replication = good. Biased replications = bad.

7/7

There was some discussion of the RetractionWatch coverage in the comments on their site and in the psych methods fb group. In general I think the reaction was similar to mine: flawed and biased replications are still flawed and biased. Is it any surprise that averaging 2 biased studies results in a biased average? Now, I’m still only going off the write-up, not the paper, so if there is more to the story I haven’t seen it. Tweet me if you think there’s more to it.
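To make the point concrete, here’s a toy simulation (my own sketch with made-up numbers, not the paper’s actual model): both the original study and the replication only get “published” if p < .05, and pooling the two doesn’t fix anything.

```r
# Publication bias as a significance filter: resample until p < .05,
# as if non-significant studies never see the light of day.
set.seed(706)
true_d <- 0.2   # hypothetical true effect
n      <- 30    # per-study sample size

one_published_study <- function() {
  repeat {
    x <- rnorm(n, mean = true_d)
    if (t.test(x)$p.value < .05) return(mean(x))
  }
}

# Pool an "original" and a "replication", both filtered the same way
pooled <- replicate(2000, mean(c(one_published_study(),
                                 one_published_study())))
mean(pooled)  # lands well above the true 0.2
```

Averaging two biased estimates just gives you a biased average, which is the whole point.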

7/8

Neat open access (!!) paper by Ian Krajbich, Björn Bartling, Todd Hare & Ernst Fehr titled, “Rethinking fast and slow based on a critique of reaction-time reverse inference”. The key point is this:

the use of behavioural or biological measures to infer mental function, popularly known as ‘reverse inference’, is problematic because it does not take into account other sources of variability in the data, such as discriminability of the choice options.

Also- Coverage of the wobbly desk paper in the NYT and Yahoo. Crappy and poopy and all the bad words. I wish they’d ask for opinions from folks critical of the findings. Oh wait, Yahoo did and then left them out! Just another reason not to trust media coverage of psych studies. A lot of that blame rests on the scientists too, don’t get me wrong. They overhype their work and the journalists eat it up.

7/9

A new paper in Psych Science making bad inferences.

Also- New updates on PET-PEESE vs PETERS vs whatever we’re on about now. Updates on Will Gervais’s blog and in the fb discussion group. I’m really not interested in these methods, but it is interesting to follow the discussion.

Also- Studying some more calc. I’ve been in and out of this for a while, and I’ve decided to focus a bit more on it. 3-hour car drives promote reading and studying. 🙂

7/10

Another 3 hours in the car today, so more calculus. I’ve just gotten to the part where the book switches from derivatives to integration. Pretty sweet, relearning this stuff. I had forgotten so so so so much. I have a much better appreciation now for how amazing this stuff really is, though. When I first learned it I was just going through the motions, but this book has explained the deeper theory and now I love it.

Also- A weird top 10 list about stats and science came out from Tom Siegfried. Very bizarre. He cites Kruschke, who is against confidence intervals, on how one should be careful about thinking of CIs dichotomously (Kruschke actually thinks they’re completely useless), and then cites Cumming, who literally wrote a book proposing how one could use them in multiple ways (including dichotomously). He also wants to create a journal of statistical shame. Uhhh, how about no.

In the fb discussion group Roger Giner-Sorolla says we should interpret p-values as neo-Fisherians rather than abandon them. What the heck is that? Oh, turns out I actually have a paper in my dropbox about it. Tooooo long to read it all though. It’s almost 1 million pages long, no joke. I asked Roger and he said it’s mainly just ditching dichotomous Neyman-Pearson concepts. Hmm. I’ll have to look into this neo-Fisherian stuff. Sounds a lot like plain old Fisherian inference from before Fisher went rogue and advocated rejection trials and crazy fiducial intervals.

Also- Infinitely annoyed by paywalled articles. I just want to read and learn; why do you want to charge me $45 to read articles, some from 40+ years ago? Incredibly frustrating. [7/31 edit: It even happens for papers 80+ years old!]

7/11

Not exactly stats, but broadly related: the APA scandal. The Hoffman report was released and it contained some damning information. A big reveal was that Zimbardo, while president of the APA, reached out to the government to see if psychologists could aid in psychological warfare. WHAT. What the hell, Zimbardo? And now the APA still has the gall to sell ethics training packages. No thanks, I’ll get my ethics training from somebody who wasn’t involved in torture and cover-ups.

Some discussion in the psych methods fb group about what we should do if we ask editors to pass along requests to authors and are denied. Malte Elson writes,

I asked the editor to relay a request to the authors
a) that they share data with me for the purpose of reviewing the manuscript
b) that they share the data publicly should the manuscript get accepted for publication (or provide reasons why they cannot)

He goes on to say that the editor denied him flat out. Hmm. Quite a dilemma (side note: I’m pretty sure I was taught in grade school that dilemma was spelled dilemna and not dilemma. oh well). Great discussion there, although it does go a bit off topic.

7/12

Had a flight today, so I read an article I recently downloaded by Aaron Ellison (1996), “An introduction to Bayesian inference for ecological research and environmental decision-making”. Old, and not much new, but the arguments are coherent and always relevant. Short article, but I liked the illustration of coming to consensus through data. I always wonder whether people really never knew about Savage-Dickey density ratios, since they are so freaking intuitive. Now, they aren’t always applicable, so I guess it isn’t necessary to bring them up in every intro to bayes. But I think Savage-Dickey is way easier to understand than most explanations of bayesian updating.
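In case it helps anyone, here’s a minimal sketch of the idea with a made-up binomial example (mine, not Ellison’s): when the alternative’s prior is continuous and nests the null value, the bayes factor for the null is just the posterior density at the null point divided by the prior density at that same point.

```r
# Savage-Dickey density ratio for H0: theta = 0.5 vs H1: theta ~ Beta(1, 1)
k <- 7    # successes
n <- 20   # trials

prior_at_null     <- dbeta(0.5, 1, 1)              # prior density at theta = 0.5
posterior_at_null <- dbeta(0.5, 1 + k, 1 + n - k)  # conjugate Beta update

bf01 <- posterior_at_null / prior_at_null
bf01  # > 1 means the data nudged belief toward theta = 0.5
```

That’s the whole trick: if the data push the posterior density up at the null point, the null gains support, and you can watch the updating happen right in front of you.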

Spent the rest of my day wandering around Newport Beach. Soooo pretty and the weather is amazing.

7/13

A lot going on today. A lot of calc progress actually. Almost through the book, but I haven’t exactly kept up with the different problem sets from some of the chapters because I get too eager to keep reading. Time to go back and work through some. I did that for a while in a coffeeshop and it was quite fun actually. I had a peach green tea lemonade. Sounded like it would just be a big blend of flavors but it ended up tasting great and the different flavors really jumped out. Ok I’m a food reviewer now.

Also- Did a little re-reading of Bertsch McGrayne’s book (The Theory That Would Not Die). I don’t think I will do a full read again but it is fun to revisit chapters. She’s a really great writer; I can’t recommend this book highly enough.

Also- There has been a lot of discussion on Malte Elson’s fb post. It seems people are talking past one another. “We should really be given a reason why we aren’t getting access to data.” “Well you just don’t get it, some data can’t be shared for reasons x/y/z!” “Okay, then they can just say that as their justification for not sharing.” So much manufactured drama. Listen to each other, people!

7/14

Some calculus pondering today and a little light reading. Nothing too interesting.

I also got to go up to the UCI campus and meet Michael Lee, Joachim Vandekerckhove, Ravi, Stephen, Colin, Alex, Beth, and I’m sorry if I can’t remember the other names. Really cool people. And the group is pretty darn good at trivia too. Got to do a little tour around the campus and it is really pretty.

7/15

Spent some time on the rooftop deck here writing and working on a blog post. It’s not so good yet, but hopefully it can work out and be okay. It’s about constructing simple priors and why some are good and some are bad. That’s really vague. 😛

7/16-7/21

This block of days was my math psych meeting (conference schedule info here), and a lot of things happened. On the 16th I went to UCI campus and checked into the housing. Then that evening I met up with Joachim and others at the conference hotel’s bar for a few drinks. Cool people, and I ended up seeing them throughout the conference.

On the 17th the conference started and I watched Jonathon Love and EJ Wagenmakers give a masterclass workshop for JASP. I also live-tweeted it, so check that out if you’re interested. Actually, there was a lot of live-tweeting going on (from only a few people) so check out that feed if you want to see some nice stuff from the conference. Then we had a really cool workshop on how academic publishing works. The topic of signing your reviews came up among other things, and opinions were mixed. Some said do it always, some said do it consistently (all or none), some said never. I’m still going to sign them all. 😉 I also got roped into a discussion on twitter by that infamous phrase, “Yes but Bayes’ factors don’t tell you what you *need* to know” (obviously my emphasis). Uhh, it’s not your job to tell me what I want or need to know, buddy.

Two really really cool sets of speakers on the 18th in the terrace room (check the link above). Particularly cool talks by Oravecz, Lee, Cassey (Brown substituted since Cassey couldn’t come), Verhagen, and Ly. Particularly crazy was Shiffrin’s talk. And I mean crazy. He presented a one-stop table for bayesian inference, but it was so so so dense and nearly unreadable from where I sat. The gist: it gives you a predictive distribution for your particular test statistic, and when you get a study result you check it against that predictive distribution to see whether it is consistent with your hypothesis. Wait… this is just like p-values (as EJ pointed out in the discussion).
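To see why that felt so familiar, here’s my own toy reconstruction of the general recipe (definitely not Shiffrin’s actual table, just the flavor of it, with made-up numbers):

```r
# Predictive distribution for a t statistic under a hypothesis that the
# effect size is around 0.5, then a tail-area check of an observed result.
set.seed(718)
n <- 25

predictive_t <- replicate(10000, {
  d <- rnorm(1, mean = 0.5, sd = 0.1)   # draw an effect from the prior
  t.test(rnorm(n, mean = d))$statistic  # t statistic that effect predicts
})

t_obs <- 1.2                 # a pretend observed t statistic
mean(predictive_t <= t_obs)  # how extreme is it under the hypothesis?
```

Compute a tail area for a test statistic and call it consistency checking… that’s a p-value with extra steps, which was exactly EJ’s point.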

Cool talks on the 19th by Brown, Hertwig, and Ahn. Brown tried to figure out whether speed-accuracy trade-offs could be explained by certain personality traits. Odd, but cool. Hertwig was the plenary speaker and he discussed how people seem to account for different aspects of uncertainty. Ahn was surprising; I only went because I thought it might be neat to see a machine learning talk. What I found was a really cool research program that tries to predict whether patients will become heroin or cocaine addicts based on different traits. I don’t know much machine learning but it was cool as hell. I also got to meet with Joachim and chat about what kinds of projects we might work on if we get the chance. He seems open to working on just about any type of project, so that’s awesome.

The 20th had talks from Joachim and Amy Criss. Neat talks from both. Jay Myung gave a really cool talk about using bayesian updating in real time to make experiments more efficient in terms of data collection time and number of trials. Very neat, something I’d never heard of before.
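I won’t pretend to reproduce the actual method here (adaptive design optimization is much fancier), but the core real-time updating idea can be sketched in a few lines: update the posterior after every single trial and stop collecting data once it’s precise enough, instead of running a fixed number of trials. All numbers below are made up.

```r
# Sequential Bayesian updating for an accuracy rate, with a stopping rule
set.seed(720)
true_p <- 0.7   # hypothetical participant accuracy
a <- b <- 1     # Beta(1, 1) prior
trial <- 0

repeat {
  trial <- trial + 1
  y <- rbinom(1, 1, true_p)          # outcome of one trial
  a <- a + y; b <- b + (1 - y)       # conjugate update, trial by trial
  ci <- qbeta(c(.025, .975), a, b)   # current 95% credible interval
  if (diff(ci) < 0.15 || trial >= 500) break  # stop when precise enough
}

c(trials = trial, estimate = a / (a + b))  # usually stops well short of 500
```

Stopping as soon as the posterior is tight enough means you collect only as many trials as you actually need, which is the efficiency win the talk was about.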

On the 20th I also met with EJ about possibly working on a project or two together. Super cool dude. We came up with some good ideas, and I’m excited. One is an experimental project and the other a general bayes project but I won’t spill the beans here. I also met with Dora Matzke at the banquet, which was a ton of fun. She is super cool and does cool work.

The 21st I went home. Long trip, and fun conference. I got to meet everyone I wanted to meet and more. There were so many smart students there, it was humbling to see how they are all so good at cognitive modeling. Luckily I don’t do that, so not feeling threatened. 😉 EJ shared a dropbox folder with me and I started to read a paper by Jeffreys in there on the plane home. This is going to be an enlightening project!

7/22

David Funder wrote a blog post today called “Bargain Basement Bayes”, in which he tries to characterize bayesian inference in his own words. It isn’t all technically correct, but he is pretty clear that correctness is not his goal. I don’t feel like being a nitpicker right now, so I’ll just say that if you want to use bayes then look at a formal definition 😛
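For the record, the formal definition is just Bayes’ rule: posterior = likelihood × prior / marginal, or P(H|D) = P(D|H) P(H) / P(D), where P(D) is the sum of P(D|H) P(H) over all the hypotheses under consideration. Everything past that is bookkeeping.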

Also- Holy shmokes, irony overload on Mayo’s blog. On a post about bad definitions of p-values, Mayo states, “I’m not sure they wouldn’t be as or more confused with the probability of a type I error, but it would be OK so long as they reported the actual type I error, which is the P-value.” Oh boy. P-values are not type-1 error rates. That is all.
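Well, almost all. If anyone wants to see the difference rather than take my word for it, here’s a tiny simulation (my own toy, not from Mayo’s post):

```r
# Under a true null, the long-run type-1 error rate is alpha by design,
# but the p-values themselves bounce all over [0, 1].
set.seed(722)
p_values <- replicate(10000, t.test(rnorm(20))$p.value)

mean(p_values < .05)  # approx .05: that's the type-1 error RATE
hist(p_values)        # approx uniform: a single p-value is a random
                      # quantity, not an error rate
```

The error rate belongs to the procedure; the p-value belongs to the data. Conflating the two is exactly the misinterpretation the post was supposed to be correcting.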

7/23

There’s been some chatting on twitter about the quote I shared from Mayo’s blog. She shared a link to show what she means, but it basically just says: these old guys said so. Not convinced.

Jeff Rouder shared an excellent paper today; unfortunate that so much of it was redacted. He put in a FOIA request for it, so maybe in 2 years we can read the whole thing. Jeff called it a Snarknado. Apt.

Shared some quotes from Oakes’s book. So snarky but I like it.

Also did some real work on the new blog post. Might be able to post tomorrow if I’m lucky.

Also think I’ll be working on a paper with Daniel Lakens. Neat!

7/24

Funny comment on Andrew Gelman’s blog today. This guy just has no idea…

7/25

Posted the blog! Aaand nobody read it because I posted it on Saturday at like 5pm. Bad timing I suppose; oh well, I’ll do an ICYMI later.

7/26

Okay wow. The post got shared so much. I’m grateful to all who shared it and doubly so to those who read it 🙂 Today was the blog’s busiest day ever, more than double the previous record! I guess people are into this whole bayes thing?

And more silly “bf-hacking” talk on twitter. Nope: it was just a positive prediction with negative results, so the null is supported. No surprise, no hacking.

7/27

Started working on a super secret project that has me reading a ton of old foundational articles. YES. Super fun. Here’s a hint, it’s about bayesian stats. 😛

Also, great presentation from Allen Downey about bayes. He has a great slide saying, “Bayesian methods don’t do the same things better. They do different things, which are better.” Just awesome.

7/28

Did some reading for that super secret bayes project. Today it involved reading papers by Jeffreys from 1933 and 1936. I bet you wish I’d tell you what it is, but I’m keeping it a mystery for now.

Also- worked a lot on the paper with Daniel Lakens (5-6 hours? I’m not sure, I get distracted a lot). I think it will be a cool paper! A lot of discussion in terms of likelihoods and probabilities of different numbers of significant studies in a set. I’ll share more soonish.

7/29

Worked most of the day trying to understand and recreate a really old problem from a paper. The problem itself was actually pretty simple once I figured it out, but a lot of notation I didn’t quite understand got me bogged down for a while. I managed to recreate the problem in R, and the answer I get matches the solution from the paper, so that’s good. I needed to come up with a number like .028 and I got .0277, so I’m pretty sure it’s exactly correct and the paper is just rounding. 🙂 It’s for the super secret bayes project so I can’t tell you exactly what it is. But I feel accomplished!

7/30

I posed a question on twitter last night about sample size: can a sample be too large? I woke up to something like 150 notifications (or more). People were arguing about bayes factors. And not just any argument, but some old recurring ones and some new ones: the default prior shouldn’t exist, or it should be mandatory to enter the scaling factor, or people should have to complete an educational wizard before they can use it, or people shouldn’t be able to scale it too small without changing the location, or we should use a different prior for the null like an interval of some sort, or we shouldn’t use bayes factors at all, or why should we ever say we support the null hypothesis in a small sample if we know that we wouldn’t in a large sample (but now you’re conditioning on unknown hypotheticals instead of known quantities like the data, and supporting the null in smaller samples with small effects is exactly the behavior required of a rational measure of evidence), or we should use three or more models in our comparisons, or we should just do estimation, or scientists don’t care about the null at all, but oh wait, some actually do, so they should be able to test it, but if you press them then surely they won’t actually care about rejecting a point null for a teeny effect, but wait, I’ve actually talked to a lot of people who say yes, they would still care no matter how small the effect is, given it is significant.

In short, there were a lot of opinions. And then the coup de grâce: I can’t think of a time when this type of prior would be useful, so “the defaults are clearly not appropriate for almost *any* question anyone has”. That’s not a good line of reasoning for any argument. Your sample is always biased, because you only know what you know and have experienced. Other people can want something you don’t want yourself or can’t think of a reason for. I say this all the time, and I feel like I’ll be saying it for a long time yet: I hear a lot about “what researchers really want” or “what researchers actually want to know” or “here’s the question researchers are really asking”.

You don’t get to decide what other researchers want to ask. That’s not how it works. Don’t tell me what I want to do just because you have your own opinions or goals. We should give researchers the ability to answer the questions they want to ask, and then try to convince them to ask meaningful questions. Statements like those strike me as arrogant in some sense, even when they’re well-meant, and I see it all the time: I want to answer this question, so everybody must deep down want to answer it too.
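For anyone who got lost in the scaling-factor part of that thread, here’s roughly what the fuss looks like in practice. This is my own illustration with made-up numbers, using the BayesFactor R package, not anybody’s example from the argument:

```r
library(BayesFactor)  # install.packages("BayesFactor") if needed

set.seed(730)
x <- rnorm(10000, mean = 0.02)  # tiny true effect, huge sample

t.test(x)$p.value  # the p-value for the same data, for comparison

# Same data under three scales for the Cauchy prior on effect size:
extractBF(ttestBF(x = x, rscale = "medium"))$bf  # the default, r = sqrt(2)/2
extractBF(ttestBF(x = x, rscale = "wide"))$bf    # r = 1
extractBF(ttestBF(x = x, rscale = 0.1))$bf       # a much narrower prior
```

For a near-zero observed effect, the wider the prior on the alternative, the more the bayes factor leans toward the null, which is why people fight so hard over the default scale. And with a huge sample and a teeny effect you can easily get a “significant” p-value alongside a bayes factor favoring the null, which is pretty much where my original question was headed.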

7/31

Today was a busy day of reading old papers for the super secret bayes project. The authors included: Fisher, Good, Wrinch, Jeffreys, Haldane, Lindley, Huzurbazar, and Geisser. They’re all brilliant writers. Ironically, I think Fisher might be the key to answering our question. Fisher hated most Bayesian methods almost as much as he hated Neyman, so it would be awesome if his paper turned out to be the key to this whole project.
