My submission for Worst Argument In The World

Scott Alexander once wrote:

David Stove once ran a contest to find the Worst Argument In The World, but he awarded the prize to his own entry, and one that shored up his politics to boot. It hardly seems like an objective process.

If he can unilaterally declare a Worst Argument, then so can I.

If those guys can unilaterally declare a Worst Argument, then so can I. I declare the Worst Argument In The World to be this:

“A long time ago, not-A, and also, not-B. Now, A and B. Therefore, A caused B.”

Example: In 1820, pirates were everywhere. Now you hardly ever see pirates, and global temperatures are rising. Therefore, the lack of pirates caused global warming.

(This particular argument was originally made as a joke, but I will give some real examples later.)

Naming fallacies is hard. Maybe we could call this the “two distant points in time fallacy”. For now I’ll just call it the Worst Argument.


I have whatever the opposite of a placebo effect is

Two personal stories:

A story about caffeine

When I first started working a full-time job, I started tracking my daily (subjective) productivity along with a number of variables that I thought might be relevant, like whether I exercised that morning or whether I took caffeine. I couldn’t perceive any differences in productivity based on any of the variables.

After collecting about a year of data, I ran a regression. I found that most variables had no noticeable effect, but caffeine had a huge effect—it increased my subjective productivity by about 20 percentage points, or an extra ~1.5 productive hours per day. Somehow I never noticed this enormous effect. Whatever the opposite of a placebo effect is, that’s what I had: caffeine had a large effect, but I thought it had no effect.
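With a binary variable like "took caffeine," an OLS coefficient reduces to the difference in mean productivity between caffeine days and non-caffeine days. A minimal sketch of that calculation, with made-up data standing in for the real tracking log:

```python
# Hypothetical sketch: the effect of a 0/1 variable, estimated as a
# difference of group means. All numbers here are invented.
caffeine = [1, 0, 1, 1, 0, 0, 1, 0]                       # took caffeine?
productivity = [0.75, 0.55, 0.80, 0.70, 0.50, 0.60, 0.85, 0.45]

on = [p for c, p in zip(caffeine, productivity) if c]
off = [p for c, p in zip(caffeine, productivity) if not c]
effect = sum(on) / len(on) - sum(off) / len(off)
print(f"estimated caffeine effect: {effect:+.2f}")  # +0.25 with this data
```

With real data you'd want a multiple regression so the caffeine coefficient is estimated while holding the other tracked variables (exercise, etc.) fixed, but the single-variable case shows the shape of the estimate.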

A story about sleep

People always say that exercise helps them sleep better. I thought it didn’t work for me. When I do cardio, even like two hours of cardio, I don’t feel more tired in the evening and I don’t fall asleep (noticeably) faster.

Yesterday, I decided to test this. I wrote a script to predict how long I slept based on how many calories my phone says I burned. The idea is that if I sleep less, that probably means I didn’t need as much because my sleep was higher quality. (I almost always wake up naturally without an alarm.)

Well, turns out exercise does help. For every 500 calories burned (which is about what I burn during a normal cardio session), I sleep 25 minutes less. Once again, exercise had a huge effect, and I thought it didn’t do anything.
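The script's core is a least-squares slope of sleep duration on calories burned. A sketch with invented numbers chosen to land near the post's figure of roughly 25 fewer minutes per 500 calories:

```python
# Hypothetical sketch: least-squares slope of minutes slept on
# calories burned. The data points are made up for illustration.
xs = [0, 250, 500, 750, 1000]    # calories burned that day
ys = [480, 467, 455, 443, 430]   # minutes slept that night

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
print(f"change in sleep per 500 calories: {slope * 500:.1f} minutes")
```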

I guess I’m not very observant.


Just because a number is a rounding error doesn't mean it's not important

Sometimes, people call a number a “rounding error” as if to say it doesn’t matter. But a rounding error can still be very important!

Say I’m tracking my weight. If I’ve put on 0.1 pounds since yesterday, that’s a rounding error—my weight fluctuates by 3 pounds on a day-to-day basis, so 0.1 pounds means nothing. But if I continue gaining 0.1 pounds per day, I’ll be obese after 18 months, and by the time I’m 70 I’ll be the fattest person who ever lived.

Or if the stock market moves 1% in a day, that’s a rounding error. If it moves up 1% every day for a year, every individual day of which is a rounding error, it will be up 3700%, which would be the craziest thing that’s ever happened in the history of the global economy.
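Both compounding claims are easy to check directly, assuming the 0.1 lb/day gain runs for about 548 days and the 1% move repeats for 365 days:

```python
# Checking the compounding arithmetic from the two examples above.
weight_gain = 0.1 * 548        # pounds gained after ~18 months
market_gain = 1.01 ** 365 - 1  # cumulative return, 1% up every day

print(f"weight: +{weight_gain:.1f} lb")   # +54.8 lb
print(f"market: +{market_gain:.0%}")      # roughly +3700%
```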

This happens whenever the standard deviation is much larger than the mean. A large standard deviation means a “real” change gets obscured by random movement. But over enough iterations, the random movements even out and the real changes persist. For example, the stock market has an average daily return of 0.02% and a standard deviation of 0.8%. The standard deviation is 40x larger than the mean, so a real trend in prices gets totally washed out by noise. The market’s daily average return is a rounding error, but it’s still important.
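The averaging-out argument can be made concrete: the daily signal-to-noise ratio is mean/sd = 1/40, but averaging n days shrinks the noise by a factor of sqrt(n) while leaving the mean untouched. A sketch using the figures from the paragraph above:

```python
import math

# Daily mean return 0.02%, daily standard deviation 0.8% (from the
# post). Averaging n days divides the noise by sqrt(n), so the signal
# eventually dominates.
mean, sd = 0.0002, 0.008
for n in [1, 1600, 10000]:
    snr = mean / (sd / math.sqrt(n))
    print(f"n={n:5d}: signal/noise = {snr:.2f}")
```

At n = 1 the signal is 1/40th the size of the noise; by n = 1600 days (about six trading years) they're equal, and beyond that the trend wins.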


Some Things I've Changed My Mind On

Here are some things I’ve changed my mind about. Most of the changes are recent (because I can remember recent stuff more easily) but some of them happened 5+ years ago.

I’m a little nervous about writing this because a few of my old beliefs were really dumb. But I don’t think it would be fair to include only my smart beliefs.


Explicit Bayesian Reasoning: Don't Give Up So Easily

Recently, Saar Wilf, creator of Rootclaim, had a high-profile debate against Peter Miller on whether COVID originated from a lab. Peter won and Saar lost.

Rootclaim’s mission is to “overcome the flaws of human reasoning with our probabilistic inference methodology.” Rootclaim assigns odds to each piece of evidence and performs Bayesian updates to get a posterior probability. When Saar lost the lab leak debate, some people considered this a defeat not just for the lab leak hypothesis, but for Rootclaim’s whole approach.
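The mechanics of this kind of updating are simple in odds form: start with prior odds between two hypotheses, then multiply by a likelihood ratio for each piece of evidence. A minimal sketch — the numbers below are invented for illustration, not taken from any Rootclaim analysis:

```python
# Hypothetical sketch of odds-form Bayesian updating: prior odds
# times one likelihood ratio per piece of evidence. All numbers
# here are made up.
prior_odds = 1.0                     # 1:1 between the two hypotheses
likelihood_ratios = [3.0, 0.5, 4.0]  # one per piece of evidence

posterior_odds = prior_odds
for lr in likelihood_ratios:
    posterior_odds *= lr

posterior_prob = posterior_odds / (1 + posterior_odds)
print(f"posterior: {posterior_odds:.0f}:1 odds, "
      f"probability {posterior_prob:.0%}")  # 6:1, 86%
```

The math is trivial; the hard part, as the debate showed, is choosing defensible likelihood ratios and deciding which evidence is independent enough to multiply.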

In Scott Alexander’s coverage of the debate, he wrote:

While everyone else tries “pop Bayesianism” and “Bayes-inspired toolboxes”, Rootclaim asks: what if you just directly apply Bayes to the world’s hardest problems? There’s something pure about that, in a way nobody else is trying.

Unfortunately, the reason nobody else is trying this is because it doesn’t work. There’s too much evidence, and it’s too hard to figure out how to quantify it.

Don’t give up so easily! We as a society have spent approximately 0% of our collective decision-making resources on explicit Bayesian reasoning. Just because Rootclaim used Bayesian methods and then lost a debate doesn’t mean those methods will never work. That would be like saying, “randomized controlled trials were a great idea, but they keep finding that ESP exists. Oh well, I guess we should give up on RCTs and just form beliefs using common sense.”

(And it’s not even like the problems with RCTs were easy to fix. Scott wrote about 10 known problems with RCTs and 10 ways to fix them, and then wrote about an RCT that fixed all 10 of those problems and still found that ESP exists. If we’re going to give RCTs more than 10 tries, we should extend the same courtesy to Bayesian reasoning.)

I’m optimistic that we can make explicit Bayesian analysis work better. And I can already think of ways to improve on two problems with it.

