October 2015

10/1

Michael Frank wrote an interesting blog post on “Descriptive vs. optimal Bayesian modeling.” He is reviewing a new paper by Mark Steyvers and colleagues. The idea is pretty straightforward: stop worrying about what is optimal and just use the models to describe the phenomena. I like it.

Allen Downey shared his “learning to love Bayes” slides again. I think they are mostly the same as when I last linked them. Well, I found a few changes, but overall very much the same.

I’m reading a paper by Robert, where he reviews Ly et al.’s exposition of Jeffreys’s Bayes factors. It is dramatically titled “The expected demise of the Bayes factor.” I don’t think I agree with everything he is saying, but he makes some good points about priors on the variance of models and the Savage-Dickey ratio.

Also, what the hell is this question? Of course it is real Bayesian inference.

Randy Gallistel wrote part 2 of his Bayes for Beginners series. He gives a very brief look at parameter estimation. I wonder if his next post will be on hypothesis testing?

Emir Efendic had a funny idea to have audiopapers (like audiobooks). I think that would be hilarious for statistics papers.

In his new post, Sam asks, “Is publication bias actually a good thing?” And I’m pretty sure there is an internet law (Betteridge’s law of headlines) that says any article title ending in a question mark can be answered in the negative. This one follows that law.

10/2

Today I was reading the updated rejoinder by Morey and others on their CI survey paper. I think I can sum it up in two words: “Total annihilation.”

Also working on writing projects. So many to do 🙂

10/3

Primarily working on two items today: the grant proposal and the RPP writeup. Both are coming along very nicely.

There was also this ridiculous thread on the ISCON Facebook group. I am more and more convinced I need to leave this group. But sometimes you just can’t help but poke the bear, you know?

I also had fun finding old Jaynes papers for possible inclusion in the review we are doing for PB&R. He has some provocative titles: “What is the question?”, “The intuitive inadequacy of classical statistics”, and “Monkeys, Kangaroos, and N”.

10/4

Joe Hilgard posted a blog post today about how significance doesn’t necessarily tell you much. He uses the phrase “Significant under duress,” which I think is really catchy.

A very nice person on Twitter shared an Overleaf template that has a lot of nice things in it: making crazy shapes, numbering graphs, aligning equations, setting up multiple columns, etc. Nice resource 🙂

Steven Shaw wrote and shared a neat piece about managing your online presence. I feel like I picked it up slowly as I went along, but this is good stuff. I agree with him that having a good website is key. If people can’t find you online, then they have no way to see what you’re working on these days or whether they might share interests with you.

Also working on projects obviously.

10/5

GRE studying. Scoring better on practice tests, so that’s good. I think it’s all about learning how the test questions work. They have a recognizable structure. They repeat the same types but with different substance.

Came across Alexander Ly and EJ’s new preprint on the arXiv Twitter feed. Pretty cool! They analytically derive posterior distributions for correlations. That guy Alexander is super smart.

EJ also shared a cool new preprint he and Quentin submitted. Bayesian p-curve! I guess it was only a matter of time before the Bayesian analogue made its way out. Quentin is first author on this, so that’s pretty awesome for him. The key idea is to treat the distribution of p-values as a mixture of results from H0 and H1. Then you try to classify the different results as originating from either H0 or H1 and count the H0 “contamination” rate. Of course, you hope that the contamination rate is low. They analyze the results from the 855 t-tests paper and then also a host of social priming studies vs. controls. The social priming results don’t do so hot.
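To make that mixture idea concrete for myself, here is a minimal sketch in Python (my own toy version, not Quentin and EJ’s actual model): p-values from true nulls are Uniform(0, 1), p-values from real effects are assumed to follow a right-skewed Beta(a, 1) density, and the mixture weight is the H0 “contamination” rate. The Beta(a, 1) form and all the specific numbers are my own illustrative assumptions.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

# Toy data: 30 "null" studies (p ~ Uniform) mixed with 70 "real effect" studies.
rng = np.random.default_rng(1)
p = np.concatenate([rng.uniform(0, 1, 30),      # p-values generated under H0
                    rng.beta(0.3, 1.0, 70)])    # p-values under H1 (assumed Beta(a, 1) form)

def neg_log_lik(theta):
    """Negative log-likelihood of the two-component mixture over all p-values."""
    pi0, a = theta
    f0 = 1.0                                    # Uniform(0, 1) density under H0
    f1 = stats.beta.pdf(p, a, 1.0)              # assumed density under H1
    return -np.sum(np.log(pi0 * f0 + (1 - pi0) * f1))

# Estimate the H0 "contamination" rate pi0 and the H1 shape parameter a.
fit = minimize(neg_log_lik, x0=[0.5, 0.5], bounds=[(0.01, 0.99), (0.05, 0.99)])
pi0_hat, a_hat = fit.x

# Posterior probability that each individual study came from H0, then classify
# each study by whichever mixture component is more probable for it.
post_h0 = pi0_hat / (pi0_hat + (1 - pi0_hat) * stats.beta.pdf(p, a_hat, 1.0))
print(f"estimated contamination rate: {pi0_hat:.2f}")
print(f"studies classified as H0: {(post_h0 > 0.5).sum()} of {len(p)}")
```

Their approach is Bayesian, of course; this sketch just shows the mixture-and-classify logic.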

Nick Brown getting some blog press today. Robert Grant blogged about his recent takedown here.

Fabio Rojas wrote something interesting about the usefulness of letters of recommendation. The title is “my deep burning hatred of letters of recommendation” so you can guess what his position is.

10/6

Stephen Heard makes it into the diary with another very interesting post. He claims that all of statistical testing can essentially be boiled down to two questions: How big is the effect? And should we take it seriously? I will leave it to the reader to imagine my Bayesian response. But since it takes the classical framework as ground level, I’ll let it slide 😉 It’s really quite good. I think the question of how to get students interested in statistics is a big deal right now. I hated statistics for a while because all I did in my class was type a bunch of numbers into SPSS and read printouts.

This was goofy. “The primary difference between Bayesian statistics and all other statistics is how annoying and evangelical Bayesians are.” Alright then.

Rogier Kievit shared a new Psych Science article with me. At one point they find a beta of .34 that is significant, so it is interpreted as important. Later they find a beta of the same size that is not significant, so it is treated as unimportant. Annoying.

Cool new arXiv preprint called “The problem with assessing statistical methods”. This brings up something I hadn’t really thought about before. We do all these simulations and use qualitative criteria to compare methods. But that’s no surprise, since we want them to show certain properties that aren’t necessarily quantifiable. Oh my god, I’ve just slipped into social psychology mode. That’s precisely the problem!

“The high ground of scientific objectivity has been seized by the frequentists.” As I said on Twitter: “Lol. Okay Brad.”

10/7

Xenia Schmalz wrote another interesting post about why and how to argue for the null. I am really glad to see this idea catching on! Go Xenia! She tackles the worry that original study authors might diss your replication: “Such claims can be mostly counteracted by providing the raw data and the analysis scripts – and, even better, by pre-registering studies, and in an ideal-case scenario, getting the original authors’ approval of the experimental design and analysis plan before data is collected.” Another way to avoid this problem is to do a sensitivity analysis!

Anne Scheel asks what she thinks is a silly question, but it’s actually a good question: Why do we compute P(D|H1)/P(D|H0), not P(H1|D)/P(H0|D)? My answer is that prior odds are personal. I can’t tell you what to believe. What I can do is tell you how the data should shift your beliefs, either towards the null or towards the alternative. She points out that Richard was making BFs and odds sound binary, when they aren’t necessarily. It’s true! I think I’ll have to make sure to include that in my “probabilities vs odds” post that hopefully I can write before the end of this month (probably not).
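Just to spell out the relation I’m leaning on here, Bayes’ rule in odds form says the Bayes factor is the factor that converts any reader’s prior odds into their posterior odds:

```latex
\underbrace{\frac{P(H_1 \mid D)}{P(H_0 \mid D)}}_{\text{posterior odds}}
\;=\;
\underbrace{\frac{P(D \mid H_1)}{P(D \mid H_0)}}_{\text{Bayes factor}}
\;\times\;
\underbrace{\frac{P(H_1)}{P(H_0)}}_{\text{prior odds}}
```

Reporting the Bayes factor leaves the prior odds, and hence the posterior odds, up to each reader.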

A super nice compliment on twitter from Will Gervais today: “I just realized that @AlxEtz isn’t yet in grad school. Seriously impressed! I assign his blog posts in grad seminars” That made me blush a little. Thanks for the kind words, Will.

Also, took another GRE practice test today. Scored a 166 on the math and 163 on verbal, so I’m feeling pretty confident going into next week’s test.

10/8-10/11

Traveling to Mizzou to visit Jeff and check out the department. Started reading Jaynes’s book more closely. Not much to report.

10/12

Last bits of review for the GRE. Statistics work is going slowly because of this stupid test.

10/13

GRE DAY I’M DONE WOOOOO. Feels good man. Now I need to prep for my Amsterdam trip.

Working on the “how to become a Bayesian” paper. Started by doing a quick writeup of the Lindley paper. It’s rough, but it’s a start.

10/14

Great and interesting post by “strictlystat” about what to expect in the first year of grad school. Lots of practical advice! It seems general enough to apply to most programs out there.

Strictlystat back again with another grad school advice post. This time it’s on dealing with imposter syndrome. Thanks, strictlystat 🙂

Christopher Olah has a cool introductory post about “Visual Information Theory”. Really cool stuff!

10/15

Prepping for the Amsterdam trip. I’m such an easy packer.

A truly horrible example of somebody defining p-values wrong.

[I had some work here that apparently never got saved. The following couple of days are reconstructions but unfortunately some thoughts are lost]

10/16

Made the trip. Long flight, but that’s over now 🙂 On the flight I did a few stats-related things. The first was continuing my reading of Jaynes’s book. It can be a bit dense at some points, but once you take the time to really grasp what Jaynes is saying it feels very intuitive.

Secondly, I am reading a book by Ronald Clark, titled “J.B.S.: The life and work of J.B.S. Haldane.” This book contains a lot of great biographical material. I’m mainly reading it to see if there is anything I can use for my super-secret Bayes project.

10/17

Following up on a lead from the Haldane book, but it seems there isn’t anything of use. Damn.

Also doing some work on the intro to Bayes paper.

Mostly just trying to recover from jetlag so I’m not working too hard today.

10/18

Apparently some people think Frontiers journals are sketchy. Really? I have a paper published there, and I had a perfectly reasonable experience. Some folks replied to that tweet, and the reaction was mixed. Some say it has a bad rep and they don’t want to go there; some say it is no worse than any of the other publishers. Hmmm.

Reading an interesting paper by Lindley, “The Bayesian approach,” with discussions. It’s so funny to read Bayesian papers from that middle era. They were still in the phase of fighting for respect, and some of the arguments against the approach seem kind of silly now. But that’s the benefit of hindsight, I suppose.

10/19

First real day in the office here. Still kind of pooped. Worked to get the super secret bayes paper up and running again. Seems to be in a good place.

Also found this satirical article, “How to ensure your paper is rejected by the statistical reviewer”. Brendon quickly informs me that he has never calculated power in his life. Well, that’s reasonable. He is a Bayesian after all.

Stumbled upon this old blog post by Daniel Lakeland, “Model vs Procedure in Statistics”. Interesting stuff, though hard for me to follow at times.

And a blog post from Thomas Lumley, though I don’t quite see the point for my area of research. Sure, it seems like these corrections don’t do too much to effect sizes on the order of 2-3, but that is not relevant to most of psychology, at least.

10/20

I’ll be giving a talk at the University of Bielefeld on November 5th, about how to think like a Bayesian. Fun! I’ll get to meet JP de Ruiter finally, and this will be my first Bayesian talk. It will also be about 45 mins long, sooo I’ll have to practice.

There’s a post by “Pierre Laplace”, the second part of the test for anti-Bayesian fanaticism. I don’t quite understand where he gets his probabilities from in his Bayesian problem, so I don’t think I understand the example very well. 😦

Realized I need to write the personal statement for the grant proposal within about a week now. Yikes. Okay this can’t be that hard, right? A little bit here, a little bit there, and I’ll be done.

I’m having a blast here in Europe 🙂

10/21

Soooo I’m also making a stop in the UK while I’m out here. I’m going to go to Cambridge to view some old interviews Harold Jeffreys did. I’m pretty sure only a handful of people have ever seen them, since they were filmed in the early eighties, when there was almost no way to efficiently distribute things like this. So I’m going to try to have copies made too, so that the rest of the world can see them!

10/22

Writing today on the super secret Bayes project. I think I’ve given away some of the secrets already but that’s okay. Incidentally, we might have found another interesting tidbit that we may write up as a short comment! Wow, this is fun. New paper ideas cropping up all the time 🙂 It seems Stigler’s law of eponymy is really, really prevalent.

I’ve been invited to write a blog post for the Psychonomic Society covering Richard Morey’s recent work on confidence intervals (the two papers published in PB&R). Cool! It’s set to run November 23rd, so I will have some time to write it once I am back from NL.

10/23

Turns out the Stigler law is not really a thing. Oh well.

Doing some serious writing on the super secret bayes project. Coming along nicely!

Downloading some old papers again. This time from the early volumes of the Royal Statistical Society’s methodology journal. Some old ones from Barnard, Good, Lindley, and Sprott. How many times have I said I love reading old papers?

10/24

It’s the weekend, but I’m actually still doing a little writing for the super secret Bayes project. Not the whole day though! Also shared this fun quote from Good: “[His] paper was largely Bayesian although he didn’t notice it. Everybody is to some extent…especially when using common sense”. I love it.

More of that whole “the null is always false” stuff going around. I decided to try out the new Twitter polling feature for this and ask what people think! Pretty neato, and it started a lot of conversation.

And I found this horribly awful “Concentration boost” where apparently you focus your imagination into an orange at the back of your skull. Here’s their caveat at the end: “When you’re ill, underslept and hungry, it doesn’t work that well (and sometimes it almost doesn’t work at all).” You don’t say?

10/25

Most of today writing the personal statement for the NSF grant.

How did the poll do? 66% say “no,” the null hypothesis is *not* always false! Haha, take that, suckers! Campbell was really onto something when he suggested we take the consensus into account. Wait, this isn’t a consensus. Oh well.

10/26

Long twitter thread with JASPers.

Those reviews I did a few months back got decision letters today. This was pretty funny 🙂

This month I’ve been so bad at doing this Diary. So bad.

10/27

Tomorrow Rogier and I go on an adventure. Let’s hope it goes well!

I was featured in Brendon Brewer’s blog!

10/28

Remember folks, don’t confuse the computational tool for the method!

Today was the day I visited the Jeffreys collection. Rogier had to lug a VCR 20 km around Cambridge during our tour (we picked it up first thing), and I am so incredibly grateful to him. Unfortunately, we ultimately could not get the VHS tape to work properly. Well, it played the video just fine, but the audio appeared to be totally dead. That’s not surprising, since the tape is 20+ years old, but it did make me a little sad. At least there was a full, mostly clean transcript that I could copy.

10/29

And there’s a ton of other awesome stuff here. I almost wish I had booked my flight for late at night so that I could stay and work 3-4 more hours, but I didn’t think that far ahead.

10/30

I did a poll asking whether people understand Bayes factors. I’m planning to write a few blog posts about it. I just do not think people understand that Bayes factors are simply the evidence in the sample.

Also an interesting post from Xenia Schmalz. She is on a roll lately.

10/31

Working on writing a blog post.

Also, I realized that Jeffreys wasn’t writing Bayes factor grades of evidence, but posterior odds grades of evidence! At least, that’s my interpretation. Calling things “decisive” depends on the context of the problem. Calling a result decisive when it doesn’t convince a skeptic (say, about ESP) means it isn’t decisive at all. Perhaps a blog post is in order for this.