This is really great (and not nearly as long as its breakdown into chapters would suggest!). It (hopefully!) won't contain much new information for economists past the first year or two of grad school, but nonetheless, I thought his explanation of how misinterpretation of p-values is an instance of base-rate neglect was clearer than any other discussion of problematic interpretation of p-values that I've read. I've taught base-rate neglect a few times in behavioral economics classes without realizing the connection.

That then made me realize a more rigorous justification for economic theory, beyond the standard "it forces you to think very carefully and clearly about what your assumptions imply, and helps you discover subtler implications that don't intuitively jump out from your assumptions." Economists routinely say things like "Our experiment design should be informed by theory" or "Our analysis should be informed by theory," which is pretty vague and doesn't imply anything deeper than the justification above.

But if you want to understand the chances of your statistically significant result being a false positive, rather than simply the chance that random data could have produced it, you need both a p-value and a prior belief. If an outcome differs between two treatments with p=0.05, but there was only a 1% prior chance that those treatments actually produce different outcomes on average, there's a good chance that your result is in fact a fluke,
*even though* random data would only produce your result 5% of the time. But if you are already sure of the assumptions of a model, and the model predicts a difference, your prior should be much higher than 1%. Any statistically significant results that corroborate the theory are much more informative than they otherwise would be.
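The base-rate arithmetic above can be sketched with Bayes' rule. This is a minimal illustration, not anything from the book: the 0.8 statistical power is my assumed value, and the function name is hypothetical.

```python
# Bayes' rule for P(effect is real | result is statistically significant).
# prior: P(effect is real) before seeing the data
# alpha: significance threshold, i.e. P(significant | no real effect)
# power: P(significant | real effect) -- assumed to be 0.8 here
def posterior_real_effect(prior, alpha=0.05, power=0.8):
    true_pos = prior * power          # real effect, detected
    false_pos = (1 - prior) * alpha   # no effect, fluke significance
    return true_pos / (true_pos + false_pos)

# Skeptical 1% prior: a "significant" result is probably still a fluke.
print(round(posterior_real_effect(0.01), 3))  # ~0.139

# Theory-backed 50% prior: the same p-value is far more convincing.
print(round(posterior_real_effect(0.50), 3))  # ~0.941
```

So with a 1% prior, roughly 86% of significant results are false positives despite p=0.05, while a theory that raises the prior to 50% pushes the false-positive share below 6%.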