Here are some things I’ve changed my mind about. Most of the changes are recent (because I can remember recent stuff more easily) but some of them happened 5+ years ago.

I’m a little nervous about writing this because a few of my old beliefs were really dumb. But I don’t think it would be fair to include only my smart beliefs.

Effective altruism

  1. My old belief: In my original 2021 version of A Comparison of Donor-Advised Fund Providers, I recommended Schwab Charitable as the best DAF provider for most people.

    What changed my mind: When I reviewed the post this year, I noticed Schwab’s default fund fees are too high, so it was a bad idea to recommend them. I don’t recall exactly what I thought about the default fund fees when I first wrote the article; perhaps I noticed the high fees and thought it didn’t matter because people can switch to cheaper funds. If I did think that, it was a mistake: a large proportion of people stick with the default option without ever looking at it, and if they do that with Schwab, they’ll get ripped off.

  2. My old belief: After writing Uncorrelated Investments for Altruists, I thought that the marginal donor’s philanthropic investment portfolio should aim for near zero correlation to equities.

    What changed my mind: In the process of writing Asset Allocation and Leverage for Altruists with Constraints, I wrote code to do portfolio optimization and ran it under various assumptions. I found results that contradicted my previous belief. Now I believe that the optimal marginal investment portfolio should still have some correlation to equities because getting the extra expected return is worth accepting some positive correlation.
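
    The kind of toy optimization that produces this result can be sketched in a few lines. The numbers below are made-up illustrations, not the inputs from the actual analysis: with an equity-like asset and an uncorrelated alternative, the unconstrained mean-variance optimum still puts meaningful weight on equities, so the optimal portfolio ends up positively correlated with them.

```python
import numpy as np

# Toy mean-variance optimization. All numbers are illustrative assumptions,
# not the inputs from the actual analysis in the post.
mu = np.array([0.05, 0.03])     # expected real returns: equities, uncorrelated asset
sigma = np.array([0.16, 0.10])  # volatilities
cov = np.diag(sigma ** 2)       # the two assets are assumed uncorrelated
risk_aversion = 2.0

# Closed-form unconstrained optimum: w = (1 / lambda) * inv(Cov) @ mu
w = np.linalg.solve(cov, mu) / risk_aversion

# Correlation of the optimal portfolio with equities
port_vol = np.sqrt(w @ cov @ w)
corr = (cov @ w)[0] / (sigma[0] * port_vol)

print(f"weights: {np.round(w, 2)}, correlation with equities: {corr:.2f}")
```

    (With these inputs the optimal weights happen to be levered, which fits the post’s framing. Scaling the whole portfolio up or down doesn’t change its correlation with equities, so the qualitative point survives leverage constraints.)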

  3. My old belief: In 2015 I donated to Raising for Effective Giving and argued for why they were my favorite donation target.

    What changed my mind: After 2015, their fundraising model didn’t keep working as well as I expected.

  4. My old belief: Metaculus Questions Suggest Money Will Do More Good in the Future

    What changed my mind: After I published that post, some commenters argued for a different interpretation of the Metaculus questions.

Finance

  1. My old belief: SBF and Alameda have skill at beating the market.

    What changed my mind: I’m sure you know what changed my mind. There’s some chance that they did actually have skill and they blew up due to bad luck (them committing fraud was bad behavior, not bad luck, but as I understand it, they blew up because they lost a bunch of money, and they might have gotten away with the fraud if they’d made money). But I now believe it’s more likely that the risks they took were not calculated and they didn’t have much skill. (Clearly SBF had a lot of skill at fundraising, but that’s not the same thing as trading skill.)

  2. My old belief: In 2021 and earlier, I estimated future market returns using Research Affiliates’ model (e.g. here), which assumes market valuations mean-revert after 10 years.

    What changed my mind: I read AQR’s capital market assumptions (see e.g. their 2024 publication) where they argue that there’s no strong reason to expect valuations to mean revert. Now I prefer the AQR model which uses the traditional “yield + growth” approach with no consideration for valuation. I still believe valuations ought to mean revert, but it could easily take more than 10 years, and they might only revert somewhat, so I think it’s reasonable to take an average of the AQR and Research Affiliates projections.

    Putting less weight on mean reversion makes equity market return projections cluster closer together. I would still order expected equity returns as emerging markets > developed markets ex-US > US, but I do not expect the differences to be as big as I used to. I used to quote something like a 6% real return for emerging markets and 0% for the US market. Now I expect more like 5% for emerging markets and 2% for the US.
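
    The two projection styles differ by a single valuation term, which makes the blend easy to compute. The inputs below are made-up round numbers, not AQR’s or Research Affiliates’ actual capital market assumptions:

```python
import math

# Two styles of expected-return projection, with made-up inputs (these are
# not AQR's or Research Affiliates' actual capital market assumptions).
dividend_yield = 0.015
real_growth = 0.015   # real earnings growth
current_cape = 30.0
fair_cape = 20.0
horizon = 10          # years over which valuations are assumed to revert

yield_plus_growth = dividend_yield + real_growth          # AQR-style, no valuation term
reversion = math.log(fair_cape / current_cape) / horizon  # annualized valuation drag
with_reversion = yield_plus_growth + reversion            # Research Affiliates-style
blended = (yield_plus_growth + with_reversion) / 2

print(f"yield+growth: {yield_plus_growth:.1%}, "
      f"with reversion: {with_reversion:.1%}, blend: {blended:.1%}")
```

    Averaging the two is equivalent to assuming valuations revert only halfway over the horizon, which matches the hedged view above.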

    I wrote more about my updated expectations here. I don’t pay too much attention to return projections, so changing my mind on this didn’t change my investment strategy.

  3. My old belief: From a purely financial perspective (ignoring personal taste etc.), renting a house is always a better decision than buying.1

    What changed my mind: Around 2020, I read the argument that owning a house works as a hedge against future housing expenditures, which means buying is better than renting in many cases.2

    I still believe most discourse on renting vs. buying is confused. For example:

    1. It doesn’t make any sense to directly compare monthly rent to monthly mortgage payments because with a mortgage, you’re accumulating equity in an asset.
    2. Maybe people understand point 1 and believe a mortgage is always better because renting is “throwing money away”. That’s also wrong because you have to account for the time value of money. Dumping most of your net worth into a house has a huge opportunity cost.
    3. When you account for opportunity costs, you have to consider the risk of a mortgage vs. the risk of the counterfactual investment (e.g., an index fund).

    I looked through a dozen online “rent vs. buy” calculators; only three of them properly accounted for equity and opportunity costs (Financial Mentor, Motley Fool, and the New York Times (paywalled)), and none of them accounted for risk. (The three good calculators use different methods, but they’re basically interchangeable.)
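
    For concreteness, here is a minimal sketch of a comparison that gets points 1 and 2 right: it counts mortgage principal as equity and charges the buyer the opportunity cost of the down payment. All numbers are made up, and like the calculators it still ignores risk (point 3).

```python
# Toy rent-vs-buy comparison that respects points 1 and 2: mortgage principal
# counts as equity, and the down payment is charged its opportunity cost.
# All inputs are made up, and risk (point 3) is still ignored.
price, down = 400_000, 80_000
loan = price - down
r = 0.06 / 12                                # monthly mortgage rate
payment = loan * r / (1 - (1 + r) ** -360)   # 30-year fixed monthly payment

rent = 1_800        # monthly rent on an equivalent home
alt = 0.05 / 12     # monthly return on the counterfactual index fund
months = 120        # compare net positions after 10 years

balance, renter_fund = loan, down   # the renter invests the down payment instead
for _ in range(months):
    balance += balance * r - payment                          # part interest, part principal
    renter_fund = renter_fund * (1 + alt) + (payment - rent)  # renter invests the difference

buyer_equity = price - balance      # assumes a flat house price, for simplicity
print(f"buyer equity: {buyer_equity:,.0f}, renter portfolio: {renter_fund:,.0f}")
```

    Change the inputs and either side can come out ahead, which is exactly why the naive rent-versus-mortgage-payment comparison is meaningless.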

  4. My old belief: In 2016 when I was looking for a full-time job at a startup, I evaluated equity compensation at face value.

    What changed my mind: I should have considered risk. Equity compensation is risky (2–4x riskier than an index fund), which makes it look much worse.

    What changed my mind again: Later, I realized there are some other factors that make equity compensation look better. In 2021 I did a more in-depth analysis here, and my current opinion is that equity in a good startup is worth more than its face value.

  5. My old belief: In 2013 I read some research on investing strategies like Greenblatt’s magic formula and thought they sounded like a great idea.

    What changed my mind: Actually, I still believe Greenblatt-esque strategies are a great idea (at least for some people in some contexts). But I believe I over-updated on the evidence I had at the time; I just got lucky that I was looking at weak evidence for true claims.

    I originally became convinced that Greenblatt’s magic formula worked when I read Abbey & Larkin (2012), “Can simple one and two-factor investing strategies capture the value premium?” The paper looked at US stocks over a 30-year period. Now, I would want to see more evidence than that. I like the five criteria given by Berkin & Swedroe’s Your Complete Guide to Factor-Based Investing: to take a market anomaly seriously, it must be (1) persistent across time, (2) pervasive across markets, (3) robust to different formulations, (4) investable, and (5) have a risk-based or behavioral explanation. The evidence from Abbey & Larkin (2012) established robustness and half-established persistence (30 years is decently long, so I give it half credit), but didn’t address the other three and a half criteria.

Health and fitness

(I’ve been thinking a lot about health and fitness lately.)

  1. My old belief: In 2011 I quit coffee cold turkey after reading You Are Not So Smart’s article Coffee. I believed that caffeine had no effect on a daily user except to reverse withdrawal symptoms.

    What changed my mind: In Does Caffeine Stop Working?, I investigated more deeply and now I believe that caffeine retains something like half its initial benefit.

  2. My old belief: I can trust the research on stuff like caffeine.

    What changed my mind: I read some caffeine studies and found that most of them were pretty bad. And not bad in the obvious way of having small sample sizes (which is honestly fine as long as you’re aware of it—weak evidence is still evidence). They were bad in the sense of “your study’s methodology is not capable even in principle of providing evidence for or against your hypothesis”. The majority of studies were like that (around 75% of them, if I remember correctly).

  3. My old belief: After reading Starting Strength in 2014,3 I thought Starting Strength-style training was the best in every situation, and bodybuilding-style isolation movements were dumb.

    What changed my mind: After a few months, I realized I had been on the “hill of novice overconfidence” and actually isolation exercises are fine. Later I re-read Starting Strength and realized it never even said you shouldn’t do isolation training. It made the more nuanced claims that (1) isolation training is not ideal for developing strength and (2) compound barbell movements are better for beginners.

    After learning more about the scientific literature and the diversity in how elite athletes train, now I tend to believe differences in training mostly don’t matter. You can get good results with any method as long as you lift heavy weights, increase the weight over time, and get sufficient food and rest.4

  4. My old belief: High-intensity interval training is the best kind of cardio.

    What changed my mind: I started reading some experts on exercise science, and they say low-intensity cardio is just as good most of the time, and the ideal routine consists of something like 80% easy cardio and 20% hard cardio (and the easy cardio should be really easy5).

    And thank God for that because I kinda hate doing moderate/hard cardio. I’ve started being way more consistent about aerobic exercise—I go for brisk hilly walks 3 times a week and haven’t missed a day in months.6 And my resting heart rate has gone down from 70–75 bpm a few years ago to 55–58 bpm today, so I guess it’s working. I’ve also noticed I can do high-rep squats and deadlifts without getting winded. I remember in 2016 I deadlifted 225 for 10 reps and I basically died (my heart rate hit 190 bpm). A few weeks ago I deadlifted 315 for 11 (note: my 1-rep max has barely changed since 2016) and I felt fine (heart rate 144 bpm).

    (I’ve read hardly any original research on cardio, but as I understand, the older research did show that HIIT was better than low-intensity cardio, and newer research changed that—see the 6-part series Extraordinary Claims in the Literature on High-Intensity Interval Training by Ekkekakis et al. (2023).7 So my beliefs basically tracked the research findings, although I was getting all my info second-hand.)

  5. My old belief: At different times, I believed (1) we don’t really know anything about nutrition; (2) food choice doesn’t matter as long as you don’t over-eat.

    What changed my mind: I read How Not To Die in 2017, which referenced a large quantity of nutrition research that contradicted my previous beliefs. I now believe the book was wrong about some things (which I will discuss in the next line item), but it was more correct than my pre-2017 self. Later on, I read a wider variety of evidence-based nutrition advice.

    My current position is that we are really pretty sure about some things in nutrition, and some foods are unhealthy even if you don’t over-eat. I believe conventional nutrition advice in educated circles is basically correct: trans fat, saturated fat, and added sugar are bad; processed food is generally bad; whole plant foods (especially fruits and veggies) are good.

    (I still don’t have a great sense of the distinction between foods that make it easy to overeat and foods that are unhealthy at any bodyweight. Like I know that sugar in small quantities isn’t bad for healthy-weight individuals, but is that because the badness is too minor to detect, or because there’s some threshold below which sugar causes zero harm whatsoever?)

    I updated my beliefs by following my “web of trust”: my layman friend trusts this dietitian; my other layman friend trusts this medical doctor who agrees with the first dietitian about most things; I trust Scott Alexander, and he likes this one researcher, who endorsed this book; these two guys know a lot about strength training and I like their epistemology,8 so they probably also know a thing or two about nutrition, and they agree with the guy Scott liked; etc.

    According to my web of trust, the best book on nutrition is Eat, Drink, and Be Healthy, which as far as I know is the only book that makes a 100% earnest effort to represent the state of nutrition science.

    I still haven’t made an effort to interpret the primary literature on nutrition science. All the really big studies are observational, and there’s an art to controlling for confounders, and I believe it would take a lot of work for me to understand how they do it. I trust that at least some researchers have a good conception of how to disentangle causality. (Willett & Skerrett, authors of Eat, Drink, and Be Healthy, do a good job of explaining why they believe observational studies establish causality in certain cases.9)

  6. My old belief: After reading How Not To Die in 2017, I believed processed unsaturated fats (such as olive oil) were unhealthy, and unsaturated fats should be consumed as whole foods (e.g. by eating nuts).

    What changed my mind: I read Eat, Drink, and Be Healthy, which said olive oil is healthy, and it presented some evidence that looked reasonable to me. There’s a plausible mechanism for oil being healthy (it helps the body produce HDL and HDL sucks loose cholesterol out of the arteries), and there are some empirical studies where oil (esp. olive oil) was associated with better health outcomes, including at least one RCT.

    The argument against olive oil in How Not to Die is that it’s processed to remove some of the nutrition of the olive, which makes it less healthy. That’s not wrong—raw olives are probably healthier than olive oil—but realistically I’m not gonna replace olive oil with eating handfuls of raw olives,10 and the empirical evidence suggests that adding olive oil to a diet makes it healthier.

    How Not to Die recommends replacing olive oil with nuts. Some RCT evidence does suggest that nuts are healthier than olive oil, but nuts often don’t work as a substitute for oil (you can’t pan-fry food in a bed of nuts).

    How Not to Die is biased toward veganism, which I knew before I read it so I didn’t update much on the stuff about how all animal products are unhealthy, although I was vegan anyway so it didn’t affect my behavior. I largely trusted the book on non-animal subjects because it cited a lot of research and seemed well-reasoned. Based on my current understanding of mainstream positions among nutrition scientists, almost all of the non-animal stuff (and most of the animal stuff) in the book is mainstream, but it over-emphasizes the badness of processed foods in general. The mainstream position among nutrition scientists is that you should avoid “processed foods” as a general rule, but plenty of specific processed foods are fine, like olive oil or protein powder.

  7. My old belief: The ideal BMI is around 18–20 (on the low end of the “healthy” range of 18.5 to 25).

    What changed my mind: My old belief wasn’t based on direct evidence. I just had a prior that official recommendations are gonna be too generous, for example recommending less exercise than is optimal because they don’t think people will actually do the optimal amount of exercise, or recommending a “healthy” BMI that’s actually a bit too generous because they think people will give up if they’re told to aim for a BMI of 20. I updated my belief after reading a large study11 on BMI and all-cause mortality. As of writing the first draft of this post, I weakly believed the ideal was on the middle-high side of the “healthy” range (so around 22–23), but I wrote in my first draft, “I want to investigate this more.”

    What changed my mind again: I investigated more. (I had to take a diversion from writing this post to write a different post about BMI.) Now I believe the ideal BMI is 20–22, for reasons I explain in the linked post. Lower than 20 is fine, maybe even better, if you have adequate lean mass. 22–23 appears to carry (slightly) greater health risks.

  8. My old belief: I believed that I had seen an RCT that found that sunscreen didn’t work, and I had written that down in a note.

    What changed my mind: I looked up my note and saw that, in actuality, my note said that sunscreen did work. I somehow flipped the sign of the outcome in my memory. I don’t understand how that happened.

Miscellaneous

  1. My old belief: RCTs are high-quality evidence.

    What changed my mind: I learned about the replication crisis, and (later) I read some actual RCTs. I now believe the median RCT is pretty badly designed and it shouldn’t change your beliefs much unless you’ve actually read it and understood its methodology. And, perhaps as a corollary, the median scientist isn’t very smart. (This doesn’t necessarily follow because there are reasons why smart scientists might publish dumb papers.)

    (Doctors and professors appear to have average IQs around 115–125—see Meritocracy, Cognitive Ability, and the Sources of Occupational Success—which is a full standard deviation below the IQs of most of my friends,12 and probably most of the people reading this. So maybe it’s fair after all to say the median scientist isn’t very smart. But you could still argue that a lifetime of expertise matters more than 15 IQ points.)

    I see one area where scientists routinely mis-interpret their own evidence. They often struggle to understand the difference between

    1. Our study found a large (in absolute terms) but non-significant effect because our study was underpowered.
    2. Our study robustly established no effect: the standard error in our data was small enough that any meaningful effect would show up, and it didn’t.

    You especially see this fallacy in areas where small effect sizes still matter. For example, a 0.1% decrease in mortality risk matters a lot, but it’s very hard to detect with a study, and study authors often incorrectly conclude that the effect doesn’t exist when they fail to find it.
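
    A quick simulation makes the distinction concrete. With arbitrary illustrative parameters, a small study of a real effect fails to reach significance most of the time, so a non-significant result is case 1, not case 2:

```python
import numpy as np

# Simulating the distinction above: a real but modest effect, studied with a
# small sample, usually fails to reach significance. Parameters are arbitrary
# illustrations, not taken from any particular study.
rng = np.random.default_rng(0)
true_effect = 0.2   # real standardized effect
n = 20              # per-group sample size
trials = 5_000      # number of simulated studies

hits = 0
for _ in range(trials):
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(true_effect, 1.0, n)
    se = np.sqrt(a.var(ddof=1) / n + b.var(ddof=1) / n)
    hits += abs((b.mean() - a.mean()) / se) > 1.96   # "significant at p < 0.05"

power = hits / trials
print(f"power = {power:.0%}")   # the study misses the real effect most of the time
```

    The study produces data just fine; it simply has too little power for a non-result to count as evidence of absence.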

    (David J. Balan says there are two kinds of “no evidence”. I’d say there are three kinds: (1) we haven’t looked for evidence; (2) we looked for evidence in a way that wasn’t gonna find any evidence, and we didn’t find any evidence; (3) we looked for evidence in a way that would have found it if it existed, and we still didn’t find it.)

  2. My old belief: RCTs on strength training are basically useless. I read Practical Programming for Strength Training, which argued that strength coaches know better than researchers because RCTs are deeply flawed: they test a group of untrained individuals over 12 weeks or less, and that sort of training context doesn’t generalize to a more-experienced individual who follows a program for a year or longer.

    What changed my mind: This one’s interesting because it’s the opposite of the previous line item.

    I started listening to some more research-driven experts like Barbell Medicine and Stronger by Science and hearing their perspective on scientific studies. While it’s true that many (most?) sports science RCTs don’t generalize, there are plenty of studies that correct for the criticisms made by Practical Programming for Strength Training.

    Why the difference between “RCTs are bad” in the previous line item and “RCTs are good actually” now? As best I can figure, this is the deal:

    1. RCTs are bad if you blindly accept all of them.
    2. RCTs are good if you know how to read a study and understand where it does and does not generalize.

    The science popularizers I pay attention to know how to distinguish between bad and good studies, and how to synthesize commonalities that repeatedly appear in many studies.

  3. My old belief: Before 2009, I disagreed with affirmative action because I thought it was anti-meritocratic.

    What changed my mind: In 2009, I saw a debate in which the pro-affirmative action side won. I did not actually read the debate but I thought the pro side seemed credible so I changed my position.

    What changed my mind again: I now disagree with affirmative action. The main argument that changed my mind was that affirmative action has been around for a generation so if it was going to work, we would certainly see the benefits by now, and we don’t.

    To be a little more specific, there are (broadly speaking) two theories for why affirmative action ought to work:

    1. Gatekeepers (e.g. hiring managers) unfairly discriminate against minorities, and affirmative action cancels this out.
    2. Minorities underperform in some areas because they haven’t been given sufficient opportunities. Affirmative action gives them those opportunities so that they can get better.

    The first theory is pretty easy to test: if minorities outperform after being accepted, then they were being discriminated against. Empirically, we see the opposite—for example, racial minorities at universities have on average lower GPAs than whites. The exception is Asians, who do actually outperform, which suggests they’re being discriminated against. But we don’t need affirmative action to fix anti-Asian discrimination; in fact, it’s caused by affirmative action.

    You can test the second theory in a similar way: after being accepted, do minorities on average improve their performance? Empirically, they don’t. You can wiggle out of this by saying they still face hurdles even after being accepted to (e.g.) university. But even the children of under-represented minorities who went to elite colleges still underperform (on average). If affirmative action doesn’t improve outcomes even for the children of the beneficiaries, it’s probably never going to work.

    That doesn’t prove that minorities don’t face hardships that hamper their performance; it just proves that affirmative action doesn’t do anything to rectify those hardships.

  4. My old belief: Regulation is basically good; free markets often hurt people.

    What changed my mind: I now believe there’s way too much regulation and free markets are almost always good. I wouldn’t go so far as to say all regulations are bad, but I think the developed world would be much better off if lawmakers simply deleted 75% of existing regulations. Nor would I say free markets are always good, but I’d guess that the ratio of economic problems caused by market restrictions to problems caused by overly free markets is about 20:1.

    I can’t pinpoint a specific period where I changed my mind but it happened somewhere between the beginning and the end of college. The main things that changed my mind were (1) getting better at applying basic economic reasoning (I took econ in 11th grade but I didn’t start applying it to life until later) and (2) reading economist polls and updating toward economists’ beliefs.

  5. My old belief: Maybe if I force myself to go to enough social events, I will learn to enjoy them.

    What changed my mind: I went to a whole bunch of social events that I didn’t want to go to and I never learned to enjoy them. I now believe it’s better to simply not go to social events that I don’t want to go to.

    (People often tell me things like, “You should come, you’ll start enjoying it once you get there!” These people are badly failing to model the fact that my brain does not work the same as their brain.)

  6. My old belief: In 8th grade Spanish class, the teacher had an accent I was unfamiliar with. I thought he was mis-pronouncing certain Spanish words.

    What changed my mind: I learned that his pronunciations were a regional accent.

    I don’t expect 8th graders to have particularly good reasoning abilities, but this mistake feels especially severe. Surely I could have figured out that I, a kid who had taken one year of Spanish, did not know more about Spanish pronunciation than this guy, who was a native Spanish speaker and taught Spanish for a living.

  7. My old belief: Nuclear power is too dangerous, mainly because we can’t safely manage radioactive waste.

    What changed my mind: In 2010, I heard about Kahan et al.’s Cultural Cognition of Scientific Consensus13. The paper studied how people form beliefs on two issues where many people disagree with the scientific consensus—climate change and the disposal of nuclear wastes. Kahan et al. cited a consensus report14 where scientists agreed that radioactive waste can be disposed of safely. This was the first time I came in contact with the notion that the scientific consensus supports nuclear power. I knew very little about nuclear waste disposal (and still know little), but I changed my belief to align with the scientific consensus.

  8. My old belief: Creationists are uniquely bad at reasoning.

    What changed my mind: I know very smart and educated people who believe things that are about as obviously wrong as creationism (such as the labor theory of value, or that infants are “blank slates”,15 or that nuclear waste can’t be disposed of safely—wait, that last one was me).

    I now believe it’s basically impossible to avoid believing dumb things. I still think creationism is extremely wrong, but I have a lot of sympathy for creationists, and I don’t think they’re particularly worse at reasoning than anyone else. I believe you should rule thinkers in, not out.

    The thing is, if you live in a bubble, you might not ever hear the arguments for why the labor theory of value is wrong, or why blank slatism is wrong. Or you might hear arguments but not good ones, perhaps because knowledgeable people would rather make fun of you than try to persuade you. So I find it totally understandable that people hold on to these beliefs. And I think of creationism the same way—some people have the misfortune to live in bubbles where everyone around them believes in creationism and they never hear good counter-arguments.

What patterns emerge?

Did I systematically change my mind in certain ways? Can I predict how I might change my mind in the future?

I can see four broad patterns:

  1. Initially, I overconfidently believed the first credible-sounding thing I heard. Later, I moderated my beliefs when I learned about other perspectives.
  2. I investigated an area where not much is known, so I had to figure things out on my own. My initial conclusion was wrong, and I changed my mind by investigating more deeply.
  3. I used to believe individual scientific studies, and now I don’t give them much credibility.
  4. I pay more attention to the scientific consensus.

On #4, it’s not so much that I ever wanted to disagree with the scientific consensus, but that it’s hard to know what scientists believe. On economics, nutrition, and nuclear power, I used to disagree with the consensus, but only because I didn’t know what the consensus was.

Notes

  1. My argument went like this:

    From a financial perspective, buying a house is isomorphic to buying a rental property and renting it out, while also paying rent to live at a second, identical house. A rental property is not a good investment because it’s highly non-diversified. If you shouldn’t buy a house as a rental property, then you shouldn’t buy a house to live in, either.

    (People can pretend houses aren’t risky because the price doesn’t update on a minute-to-minute basis, but a single house is a much riskier investment than an index fund.) 

  2. There are some caveats to this:

    1. Buying a house only works as a hedge if you plan on living there forever. If you ever plan on moving, you’re subject to fluctuations in the price of your current house relative to that of your future house.
    2. It only works if you get a competitive mortgage interest rate and if the value of the hedge balances out the opportunity cost of pouring a bunch of money at once into a house.
    3. Houses have maintenance expenditures and the like, although these should be priced into rent.

  3. I read Starting Strength based on a recommendation in a LessWrong comment: “The set of my friends who are strong is exactly the set of my friends who do / have done Starting Strength or a close variant.” 

  4. I like what Mike Israetel said in an interview (paraphrased): “If you can point out a dude to me and tell [based on how he looks] that he trains with sets of 20–30 and point out another dude and say that dude trains with sets of 5–8, I’d be super impressed, because I can’t.” 

  5. The standard rule of thumb is the “talk test”: if you can carry out a conversation with a little bit of difficulty, it’s the right intensity. 

  6. This sentence was true when I originally wrote it. Now I’m revising and I feel the need to note that I missed a day last week because I was sick. 

  7. Ekkekakis et al. (2022–2023). Extraordinary Claims in the Literature on High-Intensity Interval Training.

  8. I generally like the epistemology of the Barbell Medicine guys, but we do disagree sometimes. I’ve noticed a common pattern in how we disagree.

    Say there’s a question about whether A is good, and the state of the evidence is that

    1. There’s some theoretical reason to expect A to be good
    2. Some limited empirical research has failed to find that A is good

    In that case, they believe A is not good, and I believe A is good.

    For example, they say you shouldn’t take a multivitamin because RCTs generally haven’t found benefits. I say you should take a multivitamin because they’re cheap and we know vitamins are important in principle.

    I believe scientists and doctors tend to overweight weakly negative empirical findings relative to theory. I agree with what Scott Alexander wrote in A Failure, But Not Of Prediction: authority figures said (in April 2020, when he wrote the post) that masks don’t prevent the spread of disease because there’s no supporting RCT evidence. But Scott believes masks are worth using because there’s good theoretical reason to expect them to work.

    The Barbell Medicine guys understand perfectly well that sometimes theory matters more than empirical evidence; it’s just that I tend to favor theory a little bit more than they do. 

  9. For example, I was concerned that a lot of associations between diet and health were confounded by socioeconomic class (rich people eat more veggies) or by conscientiousness (conscientious people eat more veggies plus they’re less inclined to over-eat). But Willett & Skerrett give evidence that some nutritional findings can’t be explained by class or conscientiousness.

    • On socioeconomic class: Greeks today and Chinese people in the 1990s were healthier than Americans even though they were poorer.
    • On conscientiousness: Americans who conscientiously followed the 1990s USDA guidelines had worse health outcomes (because the guidelines were dumb).

    (Also: Isn’t it kind of insane that people who followed the USDA guidelines had worse health outcomes? In general, every diet works, whether it’s low-carb, low-fat, paleo, or whatever, because diets force you to pay more attention to what you eat and restrict your caloric intake. It’s perversely impressive that the USDA managed to come up with a diet (possibly the only diet ever?) that actually makes health worse.) 

  10. Where do you even buy raw olives? I’ve only ever seen jarred or canned olives soaked in brine. 

  11. Bhaskaran, K., dos-Santos-Silva, I., Leon, D. A., Douglas, I. J., & Smeeth, L. (2018). Association of BMI with overall and cause-specific mortality: a population-based cohort study of 3.6 million adults in the UK. 

  12. Earlier in this post, I wrote about how most caffeine studies had bad methodology. As a test, I described the methodology of one study to a friend and asked them what they thought about it, and they immediately pointed out the same flaw that I had noticed. So clearly it’s not just me. 

  13. Kahan, D. M., Jenkins‐Smith, H., & Braman, D. (2011). Cultural cognition of scientific consensus.

    (Note: The cited version was published in 2011 but the paper was originally posted online in 2010.) 

  14. National Research Council. Board on Radioactive Waste Management. (1990). Rethinking high-level radioactive waste disposal: A position statement of the Board on Radioactive Waste Management, Commission on Geosciences, Environment, and Resources, National Research Council. 

  15. It would derail the post too much for me to explain why I believe these beliefs are on par with creationism, but I can throw out some links:

    1. Criticisms of the labor theory of value on Wikipedia
    2. The Blank Slate, an article by Steven Pinker that summarizes his book, titled (surprise!) The Blank Slate