8/1
Reading a few papers by Aldrich on a little bit of statistical history. Also reading some old reviews of Jeffreys’s book: Good, Irwin, Wilks, Lindley, Neyman. It’s fun to read reviews of Jeffreys written while the book was current, since most of what I’ve read are ‘looking back’ sorts of reviews of his general work. At first people were not too enthused, since he was going pretty far against the grain. But by the time his third edition came out the reviews were more favorable, since there were a few more bayesians around.
8/2
Worked on a blog post today about the general public misinterpreting statistical significance. I think I’ll schedule it to post in the morning, since that seems to be when most people are up and reading twitter. It’s not super interesting because I think everyone would expect the general public to have just no idea at all.
8/3
Today I found a nice article from Russell Lenth arguing against power calculations that use standardized effect sizes and also against post-hoc power.
Also, reading a few [edit: actually a lot of] posts from Christian Robert’s blog. 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31
Seriously his blog is like tvtropes for me. Once I’m in I can’t get out.
There is a paper from Fraser with Robert, Wasserman, and a few other discussants asking if bayesian methods match the corresponding repeated sampling foils. They don’t. But that’s not their purpose!
8/4
Did my review (finally) for JMP. Am I allowed to say the topic? Oh well, I’ll just say it was about Signal Detection Theory modeling. I’m not an expert in that topic, but it was a fun paper to review. Super long though (45 pages!!!), almost like a book chapter. Then the authors of the last review sent in their revisions, so now I gotta do that re-review soon.
Also: Oh come on! I don’t think I need to explain why this argument is totally asinine. (Hint: A probability of zero doesn’t correspond to zero knowledge or information gain. How could it?)
Reading some more of the ‘Og, as Robert calls it. 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14
Simine Vazire wrote a post asking about the different PR implications for psychology (not the torture stuff). I’m with her: The more unquestioned, overhyped, obviously wrong crap that comes out, the worse off psychology is in the eyes of others.
8/5
More reading on the ‘Og today. 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15
Also, a nice set of clips from an interview with Dennis Lindley here. It’s so interesting to me to hear from people who were around in the foundational times working on developing the principles I take for granted now. Lindley was just trying to justify statistics based on foundational axioms, like other types of mathematics, and he was led straight to bayes.
I read a very compelling defense of the likelihood principle. Michael Lew has posted to arXiv his critique of the so-called paradox that Birnbaum claims refutes it [the principle]. The main problem with Birnbaum’s treatment is that he has not taken precautions to avoid undue influence from nuisance parameters; as such, one can re-parameterize the problem, eliminate the nuisance parameter, and the paradox is easily resolved. Seems clear cut, but perhaps Mayo will have something to say about it; Lew commented on her blog, and she agreed to read it.
Super snarknado in the ISCON fb group today. It started with a link to a blog post by Michael Ramscar, which was entirely too long for me to read. I didn’t read it, and I probably won’t. Just can’t be bothered. From what I gather, he mounts a defense of the original work by way of a moderator defense. There’s something about the frequency of words, and some graphs, and clearly I haven’t read it. But I will stand by what I’ve said before: If you want to mount a moderator defense, there must be strong evidence of something to be moderated. To me this seems like the rational order in which to proceed. Much like I would say to focus on estimating the parameters after you’ve established a reason to think they aren’t zero.
8/6
Today was a mixed day. The ISCON fb group exploded and I may not be going back. The mod there escalated the drama by calling people names and being, frankly, a dick. Then he had the gall to lock the thread and threaten to ban anyone who posted on it again. Just total nutso, there. Then he came to his senses and edited his comment so that it no longer said anything about banning people. But come on! When the guy running the group pulls shenanigans like that, it makes me hesitant to stay in the group. Lakens already said he left; I’m still considering whether I will or not.
Also, a few cool posts at Larry Wasserman’s blog. I never really kept up with it, but since I’ve been on a blog-archive kick lately I figured I’d go through it a bit. Sad that he shut it down; it got so many good comments on every other post. Anyway, a few that I found interesting: 1, 2, 3
Found a neat paper that I intend to read by Stephen Stigler called, “The Epic Story of Maximum Likelihood.” A cool title; looking forward to reading it.
Wrote to Lew about his paper that I’ve just read, asking what other instances of so-called counterexamples to the likelihood principle his approach might apply to. I’m thinking of the draw-one-card example. The nuisance parameter would be the identity of the card, and the re-parameterization would be to think in terms of the number of each type of card in the deck. So instead of H1: {52 Ace of spades} it would be H1: {52 of X}. Not sure if it is really analogous, but it feels quite similar.
This thread from Gelman’s blog is just baffling. Arguing over the correctness of P(A|B) vs P(A;B) is just crazy to me. Just goes to show that if you think probability is a limiting frequency then you’re nuts! (I kid, but I’m kind of serious)
8/7
Today I helped my sister move into a new apartment. It was a hot day! 102F hot. Too hot.
I considered writing a blog giving my personal bayesian take on the question Katie Corker posed on her blog. Her post is worth a read. My post wouldn’t take all that much time, but I realized half-way through outlining it that I didn’t really have a strong point to make. If I’m going to write a blog I think I want to write one of the few I’ve been mulling over for a while. Maybe one day I’ll come back to the topic but for now I’ll pass.
Thinking more about the facebook groups. I honestly don’t feel like I get anything out of either of them (ISCON or psych methods). It’s mostly Uli posting that crazy graph, or someone arguing over publication bias or p-values. But that’s so boring when it’s the only thing that is happening. I guess I can finally understand the folks in ISCON who asked for the psych methods group to be created in the first place. It’s just not interesting to see the same people discussing the same things over and over, when all they do is talk past one another.
Also, absolutely awesome old dialogue between Neyman and Milton Friedman at this post on Dave Giles’s blog. People have been misinterpreting confidence intervals in the exact same way for nearly 80 years! When will this stop?
8/8
Posted the new blog today, where I explain how Bayes factors work with gif animations. I am really proud of this one. I basically had to write a loop that creates the individual frame images for the gifs and then upload them to a gif-maker site. I think they turned out really cool; hopefully other people think so too.
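Roughly the idea of the frame loop, sketched here with a made-up Beta-Bernoulli example (not the actual blog code):

```python
# Minimal sketch of the frame-generating loop: plot a prior and an updated
# posterior at each step, save one PNG per frame, then stitch the PNGs
# together with any gif-maker tool. All settings below are hypothetical.
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

theta = np.linspace(0, 1, 500)
a0, b0 = 1, 1                                # hypothetical Beta(1, 1) prior
data = np.random.binomial(1, 0.7, size=50)   # hypothetical Bernoulli data

for i in range(1, len(data) + 1):
    heads = data[:i].sum()
    plt.figure(figsize=(6, 4))
    plt.plot(theta, stats.beta.pdf(theta, a0, b0), label="prior")
    plt.plot(theta, stats.beta.pdf(theta, a0 + heads, b0 + i - heads),
             label=f"posterior after n = {i}")
    plt.legend()
    plt.savefig(f"frame_{i:03d}.png")        # frames then go to the gif maker
    plt.close()
```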
8/9
Wow, stunned by the reaction to this piece. Many nice comments and quite a lot of shares on twitter. I never thought when I started blogging that very many people would actually care at all about a random statistics blog. I think statistics is a hot topic right now with the data science trend, so maybe that can account for it. Note to self: Start hash-tagging my post shares with #datascience or something.
Somebody even told me that my posts were instrumental in turning them into a Bayesian. Well that’s just about the nicest comment anyone has ever given me. I write this blog because I think Bayes is cool and statistics is cool. I advocate for Bayes because I think it is the one true way forward for psychology. Getting confirmation of a “conversion” is pretty damn cool.
8/10
[8/11 edit: had a rant here that was deleted. I was frustrated, but I don’t think it is appropriate to share after all. In part related to this kind of claim — made frequently — “this method described in this paper I wrote is what researchers really want to know”]
8/11
Deleted previous entry. I didn’t want to go back and edit the substance of previous entries when I started this, but there is a first time for everything, I suppose.
This thread on the OSF group turned from recommendations about meta-science to a debate about the relative value of effect size estimation vs hypothesis testing. The never-ending debate, which I think will continue forever. Related to a recurring theme of this diary, many arguments can be summed up as: Listen to me tell you what researchers really want to know.
Also, reading some of Jeffreys’s old papers. Wrinch & Jeffreys 1919 and 1923, and Jeffreys 1955. The 1919 paper is especially good, and I think it is the first place to propose that priors “wash out” in estimation with enough data. It might not be, though, and it definitely did not use that phrase, but a great paper. It also gives a good counter-argument to the proposal that probability should be thought of as a hypothetical limiting frequency. They give a strong argument that one cannot be confident that these limits necessarily exist for all problems, and so should we think probability does not apply in those cases? No, of course not. That definition of probability would have to change or be discarded. I say we throw it in the dumpster and forget the whole thing ever happened.
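The “wash out” point is easy to demonstrate with a toy example of my own (nothing from the paper, just two very different Beta priors and the same large data set):

```python
# Two very different priors end up with nearly identical posteriors once the
# data dominate. Data are hypothetical: 600 successes in 1000 trials.
from scipy import stats

n, heads = 1000, 600
for a0, b0 in [(1, 1), (20, 2)]:             # flat prior vs. strongly skewed prior
    post = stats.beta(a0 + heads, b0 + n - heads)
    print(f"Beta({a0},{b0}) prior -> posterior mean {post.mean():.3f}, "
          f"95% interval ({post.ppf(0.025):.3f}, {post.ppf(0.975):.3f})")
```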
I retweeted a blog from “Pierre Laplace” last night, and I see now that the post has 70 (!!!) comments. That’s a lot. Lots of talking past each other, as is common in discussions of statistics.
Andrew Gelman shared a post from Cristobal Young, titled, “Sociologists need to be better at replication.” As Andrew points out, lots of heated comments on this post.
Sean Carroll gives a breakdown of entropy (for physics and bayesian statistics). Somewhat over my head, but cool!
Also, as usual, I HATE paywalls. New York Times, I’m looking at you.
8/12
Noah Smith wrote a crazy blog post titled, “The backlash to the backlash against p-values.” I don’t buy his arguments at all.
“When a technique has been in wide use for decades, it’s certain that LOTS of smart scientists have had a chance to think carefully about it. The fact that we’re only now getting the backlash means that the cause is something other than the inherent uselessness of the methodology.”
Oh please. People have been criticizing p-values and significance tests since their inception. It just so happened that the people in favor of them wrote the textbooks that introduced most of the sciences to formal statistics. Saying that general scientists haven’t had serious beef with p-values until the last few years is flat wrong.
Also, a friend sent me a link to this news (?) article, which cites this paper (by the same authors). I tweeted this excerpt because I thought it was ridiculous. You run a correlation study with 30 participants and then are shocked and surprised when you find non-significant results? What did you think would happen? It also means that any effects that do achieve significance are necessarily large (some in the paper are on the order of r = .75). And then there was a median split, and tons of post-hoc tests. Although, to their credit (or the reviewers’), the post-hoc tests were clearly labeled as such and they recommend caution in trying to take much away from them.
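Quick sanity check on the “necessarily large” point: the smallest r that can reach p < .05 (two-tailed) with 30 participants is about .36.

```python
# Smallest correlation that reaches p < .05 (two-tailed) with n = 30,
# from the t-test for a correlation: t = r*sqrt(n-2)/sqrt(1-r^2).
from scipy import stats

n = 30
t_crit = stats.t.ppf(0.975, df=n - 2)
r_crit = t_crit / (n - 2 + t_crit**2) ** 0.5
print(round(r_crit, 3))   # ~0.36 -- anything significant here is at least "medium-large"
```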
And reading more Jeffreys. Dude was smart.
8/13
Reading some more Jeffreys today (1935 & 1936). He uses some hard-to-follow notation, but that’s understandable because he was writing 80 years ago!!
Finally got a chance to check out Richard Morey’s website for his fallacy of CIs paper (blog explaining it here). It’s REALLY cool. I love this idea. He has interactive notes, shiny apps for the figures, and more. So so so cool. Richard, if you read this, I think you’re doing awesome things!
Also, Uli Schimmack wrote a blog post about post-hoc power curves and assessing replicability. Here’s the fb discussion; I don’t know if it will become much, but worth linking to for future reference. Uli ignores the base rate, and at one point makes the incorrect claim that if only we had 90% power we could justify this 90% success rate. No, we couldn’t, unless P(H1) were nearly 1, and in that case there is no point in conducting significance tests at all.
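The arithmetic is simple enough to write down (illustrative numbers only):

```python
# The base-rate point: the expected rate of significant results mixes power
# and the false-positive rate according to the prior probability of H1.
def expected_sig_rate(p_h1, power, alpha=0.05):
    return p_h1 * power + (1 - p_h1) * alpha

for p_h1 in (0.2, 0.5, 0.9, 0.99):
    print(p_h1, round(expected_sig_rate(p_h1, power=0.9), 3))
# Only when P(H1) is near 1 does 90% power get you anywhere near a 90%
# success rate -- and at that point why bother testing at all?
```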
Wisdom from “Pierre Laplace” in this tweet. Who is this person? Pseudonyms!!
8/14
Reading more Jeffreys, and about the history of Fisher and Jeffreys and Haldane in Howie (2002) and their original papers (1932-33). Fisher really took it to Haldane. He was not happy that Haldane had co-opted and extended likelihoods for inverse probability problems.
Read through Robert’s recent take on the Jeffreys-Lindley paradox (2014). Robert has written some great stuff. His treatment of the paradox is awesome, and he also weighs in on Mayo’s error-statistical methods. He isn’t convinced, and he gives some strong reasons why it might not be a great method for inference.
Also re-reading DeGroot (1982) on the impropriety of improper flat priors. I always take away something new from re-reading this short paper. He gives a clear reason why they cannot be used for model comparison, and why a flat prior on the real line doesn’t represent ignorance; it’s more like it represents knowledge that |mu| is very probably quite large.
This paper by Barnard has some great bits of historical fun in it. For example.
8/15
Reading more of Howie’s book. Quite good, and he tells a great story. It’s very impressive how he re-creates a feeling of genuine conversation between Jeffreys and Fisher through quotes from their papers and personal communications (letters).
Went back and read Haldane again. I’m starting to get a better handle on the more complicated notation these guys used just by brute force reading and re-reading.
Getting through more of Barnard’s paper too. He calls for greater treatment of some of the historical periods of bayesian stats; I wonder if anyone went and worked on that. I know there’s been a good bit of work done by historians on the 20th century statisticians, but I wonder about the 19th century dudes. Something to look into, I suppose.
8/16
uhhh… no stats today. Busy busy, no time to study.
On a not really stats note: Someone launched a parody website for a so-called Dr. Primestein. He sells anti-priming tin foil hats and ego depletion energy drinks. Original prank, I’ll give whoever is doing this a thumbs up. It’s really quite elaborate, even going so far as to make thorough product pages. This is a true snarknado. They must be related to social psych / psych generally. Hmmm, who could this be?
8/17
Saw this interesting post by Jonathon Bartlett, in which Bartlett reacts to Richard and the gang’s latest confidence interval fallacy paper. He seems confused on some of the issues, which is not surprising because frequency stats are inherently confusing! Worth a read I think, but take it with a grain of salt. [and now Richard has commented]
Also- there’s this post by Shravan Vasishth on his experience teaching frequentists stats. It’s a bit ranty but I like it.
There’s also this super snarky post about replications. The post itself is snarky but the comments section is a total snarknado.
Some excerpts from the comments:
“What a chode.” “I’m surprised that someone with your experience is even asking this question.” “Of course, I have the advantage on you here, given that I’m a tenured stats prof at an R1.” “Stats geeks rarely understand anything about biomedical science and should be ignored for the most part. Our cows aren’t spherical.” “Wow, maybe you should blog about how to take critism and not take it personally.” “but — and this is key, listen up –” “I hope Jason comes back as a rat in the next karmic wheel turn.” “I’m going to guess Anonymous (aka Someone who is close to you) is a mid-career white dood with grant funding” “Some dickhead reviewer will lose his shit over the optics of it though I’m sure.” “I hate people. Thank you for reminding me.” “that’s ok. people hate you back.” “Pity his departmental colleagues, who must interact with him IRL.” “I have always enjoyed your discussions of baking, makeup and the foibles of being a lady PI. You should stick to those topics you know well. Leave the glam humping data massage to those of us who have years of experience with that and can use the power responsibly.” “I see there a lot of low rent imitators here, but I want to be clear that I am the one true Anonymous. I am a much higher caliber and more important form of jerk than those clowns.”
8/18
Today I found a blog post from a while back (April) by Noah Silbert, who reacted to the Simonsohn/Rouder/Hilgard conversation about biased bayes factors. And a similar, related post from 2013 about bayesian error rates. That sparked a twitter discussion. Apparently I ruined both Joachim’s and Richard’s day. Success. Sucks that comments aren’t allowed on those old posts. I would have liked to write a little something of my thoughts. I will just say this: The point of doing a “power” calculation as a bayesian is to get an idea of how the study may end up so that you can do some planning. It has literally no impact on actual inference. Its only use is to give you a very rough idea of how the study might go, so that you can increase the planned n or try to reduce variance, etc. But if you find compelling evidence before you reach the planned n, it makes no difference to the inference. Interpret the data you have, not the data you simulated or imagined. Same with false alarm rates. You can try to minimize your errors in simulations, but that doesn’t factor into inferences when you actually have real data.
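To be concrete about what I mean by planning, here is a toy simulation of my own (not anyone’s published method): assume a true effect, simulate studies at the planned n, and look at how often a simple normal-normal Bayes factor ends up compelling. None of it touches the inference from the data you actually collect.

```python
# Toy "bayesian power" planning simulation, all settings hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_delta, n_planned, sims = 0.4, 50, 2000
prior_sd = 1.0                       # assumed N(0, 1) prior on the effect under H1

bf10 = np.empty(sims)
for i in range(sims):
    x = rng.normal(true_delta, 1.0, n_planned)
    se = 1.0 / np.sqrt(n_planned)
    m1 = stats.norm.pdf(x.mean(), 0, np.sqrt(se**2 + prior_sd**2))  # marginal under H1
    m0 = stats.norm.pdf(x.mean(), 0, se)                            # marginal under H0
    bf10[i] = m1 / m0

print("P(BF10 > 10) at planned n:", (bf10 > 10).mean())
# Useful for deciding whether to bump n or cut variance -- nothing more.
```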
I realized by the end of that twitter conversation that we were having a discussion very similar to one from a few months ago. When should we make decisions as researchers, what about costs/benefits, how about evidence, etc. etc. Another time when the stat diary came in handy! All I had to do was look in May’s entries to find the link to the old thread.
Daniel Lakens linked to an article by Birnbaum (1977) in which he tries to unify the evidential and behavioral interpretations of NP tests. There’s actually a whole discussion about that paper in the same journal issue. Daniel seems to be convinced by Birnbaum’s arguments, but I am definitely not. I think the arguments in Lindley’s and Smith’s and Pratt’s comments are much stronger. I don’t see how Birnbaum’s arguments can possibly stand up to the commentary papers. I linked a few excerpts I liked from Smith (1977). I think they sum up my view perfectly: If you want to make actual decisions, you need to consider more than just which critical region your data end up in. A lot more.
Funny, apparently Fisher thought a high p value gives support to the null. Seems even he isn’t immune to delusions.
8/19
Finally getting around to reading and digesting Maime Guan and Joachim Vandekerckhove’s paper on bayesian correction for pub bias. It’s a really neat idea, and the implementation is really clever! They frame it as a way to mitigate bias in effect size estimates under different possible models of pub bias. Essentially they’ve taken the likelihood function for each publication model and modified it so that it gives the highest likelihood to the pattern of results expected under that model’s bias. So, for example, the areas right around the critical regions get spikes of likelihood, so that if multiple “just significant” findings are reported, the likelihood is nearly maximized and the posterior weight for that bias model goes up. You evaluate this for every model, then calculate relative bayes factors and mix the models to get an average effect size estimate. If the bias models are given high posterior weight, the ES estimate shrinks, and if they are given little weight the estimate isn’t changed much. Mixing the models gives more conservative (read: smaller) ES estimates, which matches the intuition that the significance filter inflates reported effect sizes.
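To keep the idea straight for myself, here is a stripped-down sketch of the general selection-model logic as I understand it — not their actual implementation. Two toy models (no bias vs. extreme “only significant results get published” bias), equal prior model probabilities, and made-up study results:

```python
import numpy as np
from scipy import stats

# Hypothetical set of "just significant" published effects and standard errors
d  = np.array([0.45, 0.52, 0.48, 0.55])
se = np.array([0.22, 0.25, 0.23, 0.26])

delta = np.linspace(-1.5, 1.5, 601)            # grid over the true effect
prior = stats.norm.pdf(delta, 0, 1)            # N(0, 1) prior on the effect
prior /= prior.sum()

# M0: no bias -- ordinary normal likelihood for each study
lik0 = np.prod(stats.norm.pdf(d[:, None], delta[None, :], se[:, None]), axis=0)

# M1: extreme bias -- only |d/se| > 1.96 gets published, so each study's
# likelihood is the normal density renormalized to the significant region
crit = 1.96 * se[:, None]
p_sig = stats.norm.sf(crit, delta[None, :], se[:, None]) + \
        stats.norm.cdf(-crit, delta[None, :], se[:, None])
lik1 = np.prod(stats.norm.pdf(d[:, None], delta[None, :], se[:, None]) / p_sig, axis=0)

# Marginal likelihoods, model weights (equal prior model probabilities assumed),
# posterior means under each model, and the model-averaged effect estimate
marg0, marg1 = (lik0 * prior).sum(), (lik1 * prior).sum()
w1 = marg1 / (marg0 + marg1)
est0 = (delta * lik0 * prior).sum() / marg0
est1 = (delta * lik1 * prior).sum() / marg1
print("bias-model weight:", round(w1, 2))
print("model-averaged effect:", round((1 - w1) * est0 + w1 * est1, 2))
```

The bias model pulls the estimate toward zero, and the more weight it earns, the more the averaged estimate shrinks — which is the whole point.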
Interesting 538 piece today. I posted my reaction here on twitter. People are worried about subjectivity in statistics (priors, etc.) entering their so-called objective science. But the many-analysts project showed pretty clearly that different models often come to similar conclusions, where the differences in the models are due to subjective judgments by the analyst on what to include in their model. Priors are just part of the model. Likelihoods are just part of the model. Posteriors are the outputs of models. Bayes factors tell you how your models change based on the data.
I’m also surprised that the piece focused so much on p-hacking that it didn’t get around to saying much about publication bias more generally. Publication bias is a much bigger threat to the validity of the scientific literature than p-hacking.
8/20
A post by “Pierre Laplace” titled, “The likelihood principle isn’t true” is about good notation and not really about the likelihood principle. I thought I was going to read a spirited attack on it but alas I’ll have to wait for Mayo to post more on it.
Interesting thought from Elizabeth Page-Gould in reply to my tweet yesterday. Should we be averaging these different models from the different scientists? I’d like to note that this is an explicit and encouraged method in Bayesian analysis. You take the different models you find relevant, calculate their posterior probabilities based on the data, and then weight and mix accordingly. See the publication bias paper for a very intuitive way to do it. It almost always gives you better estimates. It’s sort of related to multilevel modeling, but the models aren’t so similar that they can be put in a hierarchy. And because they don’t fit together nicely, it’s usually computationally hard.
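The weighting-and-mixing step itself is almost trivially simple (toy numbers; real marginal likelihoods would come from each analyst’s fitted model, and equal prior model probabilities are assumed):

```python
# Posterior model weights are proportional to marginal likelihoods; the pooled
# estimate is just the weighted average of the per-model estimates.
import numpy as np

marginal_lik = np.array([0.012, 0.020, 0.015])   # hypothetical, one per analyst's model
estimates    = np.array([0.30, 0.22, 0.26])      # hypothetical effect estimates

weights = marginal_lik / marginal_lik.sum()
print("weights:", weights.round(2))
print("model-averaged estimate:", round(weights @ estimates, 3))
```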
Also- A laughably bad exposition of subjective probability.
There’s an interesting interview between DeGroot and Lehmann, in which they discuss the downsides of automated statistical software. It seems there has always been a feeling that point-and-click stat software is a bad thing!
8/21
David Robinson wrote a post on bayesian A/B testing, wondering how it does under conditions of optional stopping. His comparison seems strange. He says, “By peeking, we ended up changing from A to B in 11.8% of cases [compared to 2.5% in non-peeking]”, but the relevant comparison is not how much “type 1 error” increases. The relevant question is: given that you stopped and have such-and-such evidence, what is the probability that you are in a situation where A is better vs. one where B is better? And this is naturally answered by posterior odds. That’s the whole point of Rouder’s paper (which he cites), so I’m surprised he missed the point so badly.
And there is no talk of what gain one expects from correctly choosing B over A. Surely the point of testing is not simply to ask what one stands to lose if B is worse than A, but also what one stands to gain if B is better than A. And the cost of implementation. And the cost of maintenance (if there is any? I know nothing of A/B testing). I’m not surprised you think the answer is unsatisfactory when you specify an incomplete loss function!
The mantra for bayesian decision analysis has always been: Maximize expected utility. If there is no possible way to gain utility by switching from A to B (i.e., there is only expected loss and not gain) then there is no point in testing and considering the switch. The best strategy is simply to never switch! There’s a twitter thread here where others and I discuss it.
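Here’s the kind of back-of-the-envelope calculation I have in mind — a toy sketch with Beta posteriors and made-up numbers for conversion value, future traffic, and switching cost, nothing from Robinson’s post:

```python
# Toy expected-utility calculation for the A/B decision (all numbers hypothetical):
# draw from the Beta posteriors for each variant's conversion rate and weigh the
# expected gain from switching against a fixed switching/maintenance cost.
import numpy as np

rng = np.random.default_rng(7)
post_a = rng.beta(1 + 120, 1 + 880, 100_000)     # A: 120 conversions / 1000 trials
post_b = rng.beta(1 + 140, 1 + 860, 100_000)     # B: 140 conversions / 1000 trials

value_per_conversion, n_future, switch_cost = 5.0, 10_000, 500.0
expected_gain = value_per_conversion * n_future * np.mean(post_b - post_a)

print("P(B better):", np.mean(post_b > post_a).round(3))
print("expected utility of switching:", round(expected_gain - switch_cost, 1))
# Switch only if the expected utility is positive -- "maximize expected utility",
# not "keep the type 1 error rate at 2.5%".
```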
Also- A cool exposition by “Pierre Laplace” that shows a natural development of bayes rule. It shows how what people might naturally desire is knowledge — independent of infinite repetitions. He also has a super snarky tweet that I found funny.
Matti Vuorre has a good post on priors and posteriors, with some sweet plots made in ggplot2. Very nice!
And some recs for power analysis that I gave. The question was what to do when there are no reliable effects in the field to base a power analysis on. My answer: use the “minimally interesting difference”. Or try sequential analyses.
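For reference, the minimally-interesting-difference approach is one line of arithmetic (normal-approximation sample size for a two-sample t-test; d = 0.30 is just an example value):

```python
# Size the study for the smallest standardized effect you'd still care about,
# rather than for someone's published (and likely inflated) effect.
from scipy import stats

def n_per_group(d_min, alpha=0.05, power=0.80):
    z_a, z_b = stats.norm.ppf(1 - alpha / 2), stats.norm.ppf(power)
    return 2 * (z_a + z_b) ** 2 / d_min ** 2   # normal approximation

print(round(n_per_group(0.30)))   # about 174 per group for a smallest effect of d = 0.30
```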
8/22
Morgan Tear thanks us twitterers for giving power recs. I suggested trying Bayes! And gave him a few papers in a dropbox folder that I thought were easy intros.
8/23
Funny threads here between “Pierre Laplace” and Senn and Mayo. They’re all so snarky, they’re perfect for each other.
Today I had to explain the central concept of the Haldane paper, so I had to make sure I had a really good understanding of it. I think I do at this point, so if this turns out to be the key paper in this project I’ll be able to explain it thoroughly.
Also did a little work on the paper with Daniel. He’s made a lot of edits to make it more reader friendly, and I think it is in a good spot!
8/24
Skyped with Daniel today to make a plan for the rest of the paper. There’s a bit of work to do yet! But I think we’re going to end up with a solid paper. There are some limitations to our approach that we’ll have to acknowledge, but this paper is much more an illustration for researchers, editors, and reviewers than a complete analysis setup, so I think we’ll be okay.
I think tomorrow or the next day I’ll try to get my review done for JMP. I don’t want to wait until the last minute like last time.
8/25
I get annoyed, though I really shouldn’t, when big figures who argue against bayesian ideas get things about them so wrong. Today that annoyance is due to this comment on Andrew Gelman’s blog. The phrase:
With a prior, we typically imagine selecting a single instance of a parameter. Priors serve very different purposes, so is one testing whether it works for a given purpose or whether it adequately describes the distribution of the parameter?
Let me belabor the point. We aren’t “selecting” a parameter out of a distribution of parameters. The prior is not a distribution of parameters. It is a distribution of knowledge. It is a distribution that characterizes different possible parameter values that our one parameter could take by their relative plausibility. Talking about whether it “adequately describes the distribution of the parameter” is nonsense. There is no distribution, so it really doesn’t make sense. Insofar as we grant that parameters exist, there is only the one parameter for our experiment. We aren’t trying to describe some sampling distribution of parameters. There is no distribution out there in the world that we are trying to make inferences about. We are trying to make inferences about one parameter, the one that was in our experiment. Did I belabor enough? I could go on.
And if there is a super population of parameters out there, and we select a parameter at random, then this machinery we call bayes theorem can deduce the range of parameters that ours could have taken. It is a cruel joke of the universe that bayes theorem gives the correct answer in both cases and so lends itself to confusion. An even crueler joke is that we use calculation methods such as mcmc that take advantage of this coincidence to compute posteriors. Keep this in mind: They are not the same thing even if they are calculated similarly. It’s no surprise that people are confused by this, but it’s not a valid criticism of bayesians to say we don’t know the “true” prior distribution of the super population of parameters. Of course we don’t, because it doesn’t exist. And we wouldn’t care if it did. There is only one parameter, and we try to estimate its value — not its parent distribution.
8/26
Mayo linked to one of her old posts. I went through and read the comments again, and some are quite funny. A few select quotes: “Nothing on earth will make me happy to use subjective probabilities!” “Only a Bayesian would say P(H) = 0 or 1 and I am not a subjective Bayesian.” “Yes, I’m being a little naughty here.” “The great advantage of simulation is that it doesn’t need much theory.” “The great *disadvantage* of simulations is that without theory, they are just a bunch of numbers.” “I could go on but this is Senn’s post, and I’m getting on a bus.”
8/27
The reproducibility project came out today, to a whirlwind of media coverage. Just a couple of things I think are worth highlighting here. Jason Mitchell is back, and he apparently approves of replications now. But then he says something really, really stupid. In case you were wondering, the Stroop effect in question has basically a 100% replication success rate anywhere it is attempted.
I thought this was pretty clever.
Daniel posts his thoughts about the project here. He uses the distribution of p-values to reassure himself that the studies finding null effects really should be taken at face value. The main idea is that if there were underlying effects, the distribution of p-values outside of p < .05 would be skewed toward smaller values, but it looks pretty uniform.
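The logic is easy to simulate (my toy version, not Daniel’s analysis):

```python
# Under the null the p-values are flat; under a true effect they pile up near zero.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, sims = 30, 20_000
for true_d in (0.0, 0.5):
    x = rng.normal(true_d, 1.0, size=(sims, n))
    t = x.mean(axis=1) / (x.std(axis=1, ddof=1) / np.sqrt(n))
    p = 2 * stats.t.sf(np.abs(t), df=n - 1)
    print(f"d = {true_d}: share of p-values below .25 = {(p < 0.25).mean():.2f}")
# Exactly a quarter of them when d = 0 (uniform), far more when d = 0.5 (skewed).
```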
I wonder why they don’t use 1-sided tests. It’s pretty much the exact thing they were thought up for.
These guys just don’t get it at all.
I think I’ll write a blog on my bayesian re-analysis of the reproducibility project. I think I’ll call it “The bayesian reproducibility project,” even though it should really be “A bayesian…” since there is no one single bayesian approach to a problem. There will be replication bayes factors, and once I get the analysis done it should be fairly easy to write up.
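The flavor of a replication bayes factor is easy to show with normal approximations (my sketch, made-up numbers; the post itself uses the RPP data and a proper method): the original study’s posterior plays the role of the replication proponent’s prior and is pitted against the skeptic’s point null.

```python
# Replication Bayes factor sketch with normal approximations and hypothetical numbers.
from scipy import stats

d_orig, se_orig = 0.50, 0.15       # hypothetical original effect and standard error
d_rep,  se_rep  = 0.10, 0.12       # hypothetical replication effect and standard error

# Proponent: effect ~ N(d_orig, se_orig^2), so the replication estimate is
# marginally N(d_orig, se_rep^2 + se_orig^2). Skeptic: effect = 0 exactly.
m_prop = stats.norm.pdf(d_rep, d_orig, (se_rep**2 + se_orig**2) ** 0.5)
m_null = stats.norm.pdf(d_rep, 0.0, se_rep)
print("BF (proponent vs. null):", round(m_prop / m_null, 2))
# Well below 1 with these numbers: this replication favors the skeptic.
```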
8/28
Just working on that post. Want to have it ready to go by Sunday morning. Did most of the work on the code; it’s pretty straightforward since the RPP provided all of their scripts.
8/29
Post is mostly ready. I found a couple of errors while double-checking the code, so good thing I did!
Another interesting post from “Pierre Laplace”, this time about big mistakes Bayesians make when they “convert” over from frequentism. They bring over the bad habits that have been instilled through years of confusing statistical theory.
8/30
Aaaand posted. Nervous what people will think. Check it out here. I think it turned out pretty good!
WOW lots of people reading and sharing this post. I am amazed that people find this so interesting. And also many great comments! I don’t mean to toot my own horn, but the tweet where I shared it has 91 retweets!! Many really nice compliments from people. This one in particular makes me feel really happy. This might just be my best blog yet!
Here’s a running list of very nice shares [edit: being updated periodically], saving mainly for myself so I can look back when I’m having a bad day:
Ivy Onyeador (fb link)
Ray Becker – This sparked a LOT of conversation (that I totally missed bc I was sleeping)
Joachim Vandekerckhove – This one is really really great.
Also this was great to hear. 🙂 I think it’s time to make a “Resources for becoming a Bayesian” page.
8/31
Many more great comments on the post. Sam has apparently come back into the blogosphere, happy to have him back!
Mayo posted her take on the reproducibility project here. I’m not sure about that title, but the post itself is very interesting.
Also this great piece by Randy Gallistel in the APS Observer introducing the difference between probability and likelihood.