Thoughts on My Donation Process
I have some observations and half-baked ideas about my recent donation process. They weren’t important enough to include in the main post, but I want to talk about them anyway.
Continue reading
Psychologists have done experiments that supposedly show how people behave irrationally. But in some of those experiments, people do behave rationally, and it’s the psychologists’ expectations that are irrational.
Continue reading
Scott Alexander once wrote:
David Stove once ran a contest to find the Worst Argument In The World, but he awarded the prize to his own entry, and one that shored up his politics to boot. It hardly seems like an objective process.
If he can unilaterally declare a Worst Argument, then so can I.
If those guys can unilaterally declare a Worst Argument, then so can I. I declare the Worst Argument In The World to be this:
“A long time ago, not-A, and also, not-B. Now, A and B. Therefore, A caused B.”
Example: In 1820, pirates were everywhere. Now you hardly ever see pirates, and global temperatures are rising. Therefore, the lack of pirates caused global warming.
(This particular argument was originally made as a joke, but I will give some real examples later.)
Naming fallacies is hard. Maybe we could call this the “two distant points in time fallacy”. For now I’ll just call it the Worst Argument.
Continue reading
Two personal stories:
When I first started working a full-time job, I started tracking my daily (subjective) productivity along with a number of variables that I thought might be relevant, like whether I exercised that morning or whether I took caffeine. I couldn’t perceive any differences in productivity based on any of the variables.
After collecting about a year of data, I ran a regression. I found that most variables had no noticeable effect, but caffeine had a huge effect—it increased my subjective productivity by about 20 percentage points, or an extra ~1.5 productive hours per day. Somehow I never noticed this enormous effect. Whatever the opposite of a placebo effect is, that’s what I had: caffeine had a large effect, but I thought it had no effect.
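Here’s a minimal sketch of the kind of regression I mean, in Python. The file name and column names are placeholders, not my actual tracking data:

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per day: subjective productivity (0-100) plus the variables I
# tracked, e.g. whether I had caffeine or exercised that morning (0/1).
df = pd.read_csv("productivity_log.csv")

# Ordinary least squares: the coefficient on `caffeine` estimates how many
# percentage points of productivity a caffeine day adds, holding the other
# tracked variables fixed.
model = smf.ols("productivity ~ caffeine + morning_exercise", data=df).fit()
print(model.summary())
```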
People always say that exercise helps them sleep better. I thought it didn’t work for me. When I do cardio, even like two hours of cardio, I don’t feel more tired in the evening and I don’t fall asleep (noticeably) faster.
Yesterday, I decided to test this. I wrote a script to predict how long I slept based on how many calories my phone says I burned. The idea is that if I sleep less, that probably means I didn’t need as much because my sleep was higher quality. (I almost always wake up naturally without an alarm.)
Well, turns out exercise does help. For every 500 calories burned (which is about what I burn during a normal cardio session), I sleep 25 minutes less. Once again, exercise had a huge effect, and I thought it didn’t do anything.
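The script itself was trivial. A sketch of the idea, with made-up numbers standing in for my real data:

```python
from scipy.stats import linregress

# Illustrative data only: calories burned per day (from my phone) and
# minutes slept the following night.
calories = [150, 200, 300, 450, 600, 700, 900]
sleep_minutes = [475, 470, 465, 455, 450, 440, 430]

fit = linregress(calories, sleep_minutes)

# The slope is in minutes of sleep per calorie burned, so multiplying by
# 500 gives the change for a typical cardio session.
print(f"sleep change per 500 kcal burned: {fit.slope * 500:+.0f} minutes")
print(f"r = {fit.rvalue:.2f}, p = {fit.pvalue:.3g}")
```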
I guess I’m not very observant.
Sometimes, people call a number a “rounding error” as if to say it doesn’t matter. But a rounding error can still be very important!
Say I’m tracking my weight. If I’ve put on 0.1 pounds since yesterday, that’s a rounding error—my weight fluctuates by 3 pounds on a day-to-day basis, so 0.1 pounds means nothing. But if I continue gaining 0.1 pounds per day, I’ll be obese after 18 months, and by the time I’m 70 I’ll be the fattest person who ever lived.
Or if the stock market moves 1% in a day, that’s a rounding error. If it moves up 1% every day for a year, every individual day of which is a rounding error, it will be up 3700%, which would be the craziest thing that’s ever happened in the history of the global economy.
This happens whenever the standard deviation is much larger than the mean. A large standard deviation means a “real” change gets obscured by random movement. But over enough iterations, the random movements even out and the real changes persist. For example, the stock market has an average daily return of 0.02% and a standard deviation of 0.8%. The standard deviation is 40x larger than the mean, so a real trend in prices gets totally washed out by noise. The market’s daily average return is a rounding error, but it’s still important.
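To make the arithmetic concrete, here’s a quick back-of-the-envelope sketch in Python. It just replays the numbers above, plus a rough illustration of how the trend eventually beats the noise (the ten-year horizon is an arbitrary example):

```python
import math

# A 0.1 lb daily gain is invisible next to ~3 lb of daily fluctuation,
# but it accumulates linearly.
daily_gain_lb = 0.1
print(f"after 18 months: +{daily_gain_lb * 30 * 18:.0f} lb")    # ~54 lb
print(f"after 40 years:  +{daily_gain_lb * 365 * 40:.0f} lb")   # ~1460 lb

# A 1% daily market move is noise, but compounded daily for a year it isn't.
print(f"1% per day for a year: +{(1.01 ** 365 - 1) * 100:.0f}%")  # ~3700%

# Signal vs. noise: the mean return grows with n days, while the noise only
# grows like sqrt(n), so over enough days the real trend pokes through.
mean_daily, sd_daily = 0.0002, 0.008   # 0.02% and 0.8%
n = 2500                               # roughly ten years of trading days
print(f"ten-year trend:        {mean_daily * n:.2f}")
print(f"ten-year noise (1 sd): {sd_daily * math.sqrt(n):.2f}")
```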
Here are some things I’ve changed my mind about. Most of the changes are recent (because I can remember recent stuff more easily) but some of them happened 5+ years ago.
I’m a little nervous about writing this because a few of my old beliefs were really dumb. But I don’t think it would be fair to include only my smart beliefs.
Continue reading
I recently read Eat, Drink, and Be Healthy: The Harvard Medical School Guide to Healthy Eating. As I understand it, it’s the book that does the best job of representing the mainstream scientific perspective on nutrition for a lay audience. Here are the notes I took.
Last updated 2024-09-02.
Continue reading
In 2012, Scott Alexander defended social sciences against the claim that they can’t figure anything out. He gave a long list of well-established findings across a variety of social science disciplines.
12 years later, how well did that list hold up?
Continue reading
Recently, Saar Wilf, creator of Rootclaim, had a high-profile debate against Peter Miller on whether COVID originated from a lab. Peter won and Saar lost.
Rootclaim’s mission is to “overcome the flaws of human reasoning with our probabilistic inference methodology.” Rootclaim assigns odds to each piece of evidence and performs Bayesian updates to get a posterior probability. When Saar lost the lab leak debate, some people considered this a defeat not just for the lab leak hypothesis, but for Rootclaim’s whole approach.
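(For readers who haven’t seen the mechanics: a Bayesian update in odds form just multiplies prior odds by a likelihood ratio for each piece of evidence. Here’s a toy sketch with made-up numbers; Rootclaim’s actual models are far more elaborate.)

```python
# Toy sketch of an odds-form Bayesian update, in the spirit of Rootclaim's
# method. The prior and likelihood ratios are invented for illustration;
# they are not numbers from the actual debate.
prior_odds = 1.0  # 1:1, i.e. 50% before considering any evidence

# Each ratio is P(evidence | hypothesis A) / P(evidence | hypothesis B).
likelihood_ratios = {
    "piece of evidence 1": 3.0,   # favors A
    "piece of evidence 2": 0.25,  # favors B
    "piece of evidence 3": 2.0,   # favors A
}

posterior_odds = prior_odds
for ratio in likelihood_ratios.values():
    posterior_odds *= ratio

posterior_prob = posterior_odds / (1 + posterior_odds)
print(f"posterior odds {posterior_odds:g}:1 -> probability {posterior_prob:.0%}")
```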
In Scott Alexander’s coverage of the debate, he wrote:
While everyone else tries “pop Bayesianism” and “Bayes-inspired toolboxes”, Rootclaim asks: what if you just directly apply Bayes to the world’s hardest problems? There’s something pure about that, in a way nobody else is trying.
Unfortunately, the reason nobody else is trying this is because it doesn’t work. There’s too much evidence, and it’s too hard to figure out how to quantify it.
Don’t give up so easily! We as a society have spent approximately 0% of our collective decision-making resources on explicit Bayesian reasoning. Just because Rootclaim used Bayesian methods and then lost a debate doesn’t mean those methods will never work. That would be like saying, “randomized controlled trials were a great idea, but they keep finding that ESP exists. Oh well, I guess we should give up on RCTs and just form beliefs using common sense.”
(And it’s not even like the problems with RCTs were easy to fix. Scott wrote about 10 known problems with RCTs and 10 ways to fix them, and then wrote about an RCT that fixed all 10 of those problems and still found that ESP exists. If we’re going to give RCTs more than 10 tries, we should extend the same courtesy to Bayesian reasoning.)
I’m optimistic that we can make explicit Bayesian analysis work better. And I can already think of ways to improve on two problems with it.
Continue reading
In the eyes of popular culture (and in the eyes of many philosophy professors), the essence of utilitarianism is “it’s okay to do bad things for the greater good.” In my mind, that’s not the essence of utilitarianism. The essence is, “doing more good is better than doing less good.”
Utilitarianism is about doing the most good. You don’t do the most good by fretting over weird edge cases where you can harm someone to help other people. You do the most good by picking up massive free wins like donating to effective charities where money does 100x more good than it would if you spent it on yourself.
(Richard Y. Chappell might call this beneficentrism: “the view that promoting the general welfare is deeply important, and should be amongst one’s central life projects.” You can be a beneficentrist without being a utilitarian, but if you’re a utilitarian, you have to be a beneficentrist, and as a utilitarian, being a beneficentrist is much more important than being a “do bad things for the greater good”-ist.)