There Are Three Kinds of "No Evidence"

David J. Balan once proposed that there are two kinds of “no evidence”:

  1. There have been lots of studies directly on this point which came back with the result that the hypothesis is false.
  2. There is no evidence because there are few or no relevant studies.

I propose that there are three kinds of “no evidence”:

  1. The hypothesis has never been studied.
  2. There are studies, the studies failed to find supporting evidence, but they wouldn’t have found supporting evidence even if the hypothesis were true.
  3. There are studies, the studies should have found supporting evidence if the hypothesis were true, and they didn’t.

Example of type 1: A 2003 literature review found that there were no studies showing that parachutes could prevent injury when jumping out of a plane.

Example of type 2: In 2018, there was finally a randomized controlled trial on the effectiveness of parachutes, and it found no difference between the parachute group and the control group. However, participants only jumped from a height of 0.6 meters (~2 feet). I don’t know about you, but this result does not make me want to jump out of a plane without a parachute.

As in the parachute example, you see type-2 “no evidence” whenever the conditions of a study don’t match the real-world environment. You also see type-2 “no evidence” when a study is underpowered. Say you want to test the hypothesis that boys are taller than girls. So you go find your niece Sally and your neighbor’s son James, and it turns out Sally is an inch taller than James. Your methodology was valid—you can indeed test the hypothesis by finding some people and measuring their heights—but your sample size was too small.

(The difference between type 2 and type 3 can be a matter of degree. The more powerful a study is, the stronger its “no evidence” if it fails to find an effect.)
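The Sally-and-James example can be turned into a quick simulation. Assuming some hypothetical numbers (boys average an inch taller, with a couple of inches of individual variation), a sample of one child per group gets the wrong answer a large fraction of the time:

```python
import random

random.seed(0)

# Hypothetical numbers: boys average 1 inch taller, and individual
# heights vary with a standard deviation of 2 inches in each group.
def boys_measure_taller(n, trials=10_000):
    """Fraction of samples where the boys' average exceeds the girls'."""
    wins = 0
    for _ in range(trials):
        boys = sum(random.gauss(1.0, 2.0) for _ in range(n)) / n
        girls = sum(random.gauss(0.0, 2.0) for _ in range(n)) / n
        wins += boys > girls
    return wins / trials

print(boys_measure_taller(1))    # one Sally and one James: wrong roughly 1 time in 3
print(boys_measure_taller(100))  # 100 children per group: almost never wrong
```

With one child per group, the study produces type-2 “no evidence” (or worse, evidence for the wrong conclusion) about a third of the time even though the hypothesis is true.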


The 7 Best High-Protein Breakfast Cereals

Updated 2025-03-19 to add Catalina Crunch Cinnamon Toast.

(I write listicles now)

(there are only 7 eligible high-protein breakfast cereals, so the ones at the bottom are still technically among the 7 best even though they’re not good)

If you search the internet, you can find rankings of the best “high-protein” breakfast cereals. But most of the entries on those lists don’t even have that much protein. I don’t like that, so I made my own list.

This is my ranking of genuinely high-protein breakfast cereals, which I define as those getting at least 25% of their calories from protein.

Many food products like to advertise how many grams of protein they have per serving. That number doesn’t matter because it depends on how big a serving is. Hypothetically, if a food had 6g protein per serving but each serving contained 2000 calories, that would be a terrible deal. The actual number that matters is the proportion of calories from protein.
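The proportion is easy to compute from a nutrition label, since protein contributes roughly 4 calories per gram. A minimal sketch (the 11 g / 170-calorie cereal below is a made-up example, not an entry from the list):

```python
def protein_calorie_fraction(protein_g, total_calories):
    """Fraction of calories from protein, at ~4 kcal per gram of protein."""
    return 4 * protein_g / total_calories

# The hypothetical cereal from the text: 6 g protein, 2000-calorie serving.
print(protein_calorie_fraction(6, 2000))  # 0.012 — about 1% of calories

# A made-up cereal that clears the 25% bar: 11 g protein, 170 calories.
print(protein_calorie_fraction(11, 170))  # ~0.26
```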

My ranking only includes vegan cereals because I’m vegan. Fortunately most cereals are vegan anyway. The main exception is that some cereals contain whey protein, but that’s not too common—most of them use soy, pea, or wheat protein instead.

High-protein cereals, ranked by flavor


My submission for Worst Argument In The World

Scott Alexander once wrote:

  David Stove once ran a contest to find the Worst Argument In The World, but he awarded the prize to his own entry, and one that shored up his politics to boot. It hardly seems like an objective process.

  If he can unilaterally declare a Worst Argument, then so can I.

If those guys can unilaterally declare a Worst Argument, then so can I. I declare the Worst Argument In The World to be this:

“A long time ago, not-A, and also, not-B. Now, A and B. Therefore, A caused B.”

Example: In 1820, pirates were everywhere. Now you hardly ever see pirates, and global temperatures are rising. Therefore, the lack of pirates caused global warming.

(This particular argument was originally made as a joke, but I will give some real examples later.)

Naming fallacies is hard. Maybe we could call this the “two distant points in time fallacy”. For now I’ll just call it the Worst Argument.


I have whatever the opposite of a placebo effect is

Two personal stories:

A story about caffeine

When I first started working a full-time job, I started tracking my daily (subjective) productivity along with a number of variables that I thought might be relevant, like whether I exercised that morning or whether I took caffeine. I couldn’t perceive any differences in productivity based on any of the variables.

After collecting about a year of data, I ran a regression. I found that most variables had no noticeable effect, but caffeine had a huge effect—it increased my subjective productivity by about 20 percentage points, or an extra ~1.5 productive hours per day. Somehow I never noticed this enormous effect. Whatever the opposite of a placebo effect is, that’s what I had: caffeine had a large effect, but I thought it had no effect.
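I don’t have the original data, but the story can be sketched with made-up numbers: a regression recovering a ~20-point caffeine effect that is invisible day to day because the daily noise is just as large.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 365  # roughly a year of daily observations

# Fabricated data: binary predictors, a ~20-point caffeine effect,
# no exercise effect, and 15 points of day-to-day noise.
caffeine = rng.integers(0, 2, n)
exercise = rng.integers(0, 2, n)
productivity = 50 + 20 * caffeine + 0 * exercise + rng.normal(0, 15, n)

# Ordinary least squares: productivity ~ intercept + caffeine + exercise
X = np.column_stack([np.ones(n), caffeine, exercise])
coef, *_ = np.linalg.lstsq(X, productivity, rcond=None)
print(f"caffeine effect: {coef[1]:.1f}, exercise effect: {coef[2]:.1f}")
```

On any single day the noise swamps the effect, which is why it can go unnoticed; averaged over a year of data, the regression pins it down to within a point or two.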

A story about sleep

People always say that exercise helps them sleep better. I thought it didn’t work for me. When I do cardio, even like two hours of cardio, I don’t feel more tired in the evening and I don’t fall asleep (noticeably) faster.

Yesterday, I decided to test this. I wrote a script to predict how long I slept based on how many calories my phone says I burned. The idea is that if I sleep less after exercising, that probably means I didn’t need as much sleep because the sleep I got was higher quality. (I almost always wake up naturally without an alarm.)

Well, turns out exercise does help. For every 500 calories burned (which is about what I burn during a normal cardio session), I sleep 25 minutes less. Once again, exercise had a huge effect, and I thought it didn’t do anything.
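Again without the author’s data, here is a hypothetical reconstruction of what such a script might look like, with fake data built to have the reported slope (~25 minutes less sleep per 500 calories burned):

```python
import numpy as np

rng = np.random.default_rng(1)
days = 120

# Fake data with the reported relationship baked in: a slope of
# -0.05 minutes of sleep per calorie (-25 minutes per 500 calories),
# plus half an hour of nightly noise.
calories = rng.uniform(0, 800, days)
sleep_minutes = 480 - 0.05 * calories + rng.normal(0, 30, days)

# Fit a line: sleep duration as a function of calories burned.
slope, intercept = np.polyfit(calories, sleep_minutes, 1)
print(f"sleep change per 500 calories: {slope * 500:.0f} minutes")
```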

I guess I’m not very observant.


Just because a number is a rounding error doesn't mean it's not important

Sometimes, people call a number a “rounding error” as if to say it doesn’t matter. But a rounding error can still be very important!

Say I’m tracking my weight. If I’ve put on 0.1 pounds since yesterday, that’s a rounding error—my weight fluctuates by 3 pounds on a day-to-day basis, so 0.1 pounds means nothing. But if I continue gaining 0.1 pounds per day, I’ll be obese after 18 months, and by the time I’m 70 I’ll be the fattest person who ever lived.

Or if the stock market moves 1% in a day, that’s a rounding error. If it moves up 1% every day for a year, every individual day of which is a rounding error, it will be up 3700%, which would be the craziest thing that’s ever happened in the history of the global economy.

This happens whenever the standard deviation is much larger than the mean. A large standard deviation means a “real” change gets obscured by random movement. But over enough iterations, the random movements even out and the real changes persist. For example, the stock market has an average daily return of 0.02% and a standard deviation of 0.8%. The standard deviation is 40x larger than the mean, so a real trend in prices gets totally washed out by noise. The market’s daily average return is a rounding error, but it’s still important.
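A quick check of the arithmetic in the last two paragraphs:

```python
# A 1% gain compounded every day for a year:
growth = 1.01 ** 365 - 1
print(f"{growth:.0%}")  # roughly the +3700% figure above

# Daily market returns: mean 0.02%, standard deviation 0.8%.
mean, sd = 0.0002, 0.008
print(sd / mean)  # 40.0 — the daily trend is a rounding error next to the noise

# Over n days the trend grows like n but the noise only like sqrt(n),
# so the signal-to-noise ratio improves by a factor of sqrt(n):
n = 2520  # ~ten years of trading days
print(mean * n / (sd * n ** 0.5))  # > 1: the trend emerges from the noise
```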


Some Things I've Changed My Mind On

Here are some things I’ve changed my mind about. Most of the changes are recent (because I can remember recent stuff more easily) but some of them happened 5+ years ago.

I’m a little nervous about writing this because a few of my old beliefs were really dumb. But I don’t think it would be fair to include only my smart beliefs.

