I was probably wrong about HIIT and VO2max

This research piece is not as rigorous or polished as usual. I wrote it quickly in a stream-of-consciousness style, which means it’s more reflective of my actual reasoning process.

My understanding of HIIT (high-intensity interval training) as of a week ago:

  1. VO2max is the best fitness indicator for predicting health and longevity.
  2. HIIT, especially long-duration intervals (4+ minutes), is the best way to improve VO2max.
  3. Intervals should be done at the maximum sustainable intensity.

I now believe those are all probably wrong.

Retroactive If-Then Commitments

An if-then commitment is a framework for responding to AI risk: “If an AI model has capability X, then AI development/deployment must be halted until mitigations Y are put in place.”

As an extension of this approach, we should consider retroactive if-then commitments. We should behave as if we wrote if-then commitments a few years ago, and we should commit to implementing whatever mitigations we would have committed to back then.

Imagine how an if-then commitment might have been written in 2020:

Pause AI development and figure out mitigations if:

Well, AI models have now done, or nearly done, all of those things.

We don’t know what mitigations are appropriate, so AI companies should pause development until (at a minimum) AI safety researchers agree on what mitigations are warranted, and those mitigations are then fully implemented.

(You could argue about whether AI really hit those capability milestones, but that doesn’t particularly matter. You need to pause and/or restrict development of an AI system when it looks potentially dangerous, not definitely dangerous.)

Notes

  1. Okay, technically it did not score well enough to qualify, but it scored well enough that there was some ambiguity about whether it qualified, which is only a little bit less concerning. 

The 7 Best High-Protein Breakfast Cereals

(I write listicles now)

(there are only 7 eligible high-protein breakfast cereals, so the ones at the bottom are still technically among the 7 best even though they’re not good)

If you search the internet, you can find rankings of the best “high-protein” breakfast cereals. But most of the entries on those lists don’t even have that much protein. I don’t like that, so I made my own list.

This is my ranking of genuinely high-protein breakfast cereals, which I define as containing at least 25% of calories from protein.

Many food products like to advertise how many grams of protein they have per serving. That number is meaningless on its own because it depends on how big a serving is. Hypothetically, if a food had 6 g of protein per serving but each serving contained 2000 calories, that would be a terrible deal. The number that actually matters is the proportion of calories from protein.
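To make that concrete (using the standard 4 calories per gram of protein; the cereal here is made up for illustration): a cereal with 10 g of protein in a 150-calorie serving gets \((10 \times 4) / 150 \approx 27\%\) of its calories from protein, clearing the 25% bar, while the hypothetical 6 g in a 2000-calorie serving works out to \((6 \times 4) / 2000 = 1.2\%\).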

My ranking only includes vegan cereals because I’m vegan. Fortunately, most cereals are vegan anyway; the main exception is cereals made with whey protein, but that’s not too common. Most use soy, pea, or wheat protein instead.

High-protein cereals, ranked by flavor

Charity Cost-Effectiveness Really Does Follow a Power Law

Conventional wisdom says charity cost-effectiveness obeys a power law. To my knowledge, this hypothesis has never been properly tested.1 So I tested it and it turns out to be true.

(Maybe. Cost-effectiveness might also be log-normally distributed.)

  • Cost-effectiveness estimates for global health interventions (from DCP3) fit a power law (a.k.a. Pareto distribution) with \(\alpha = 1.11\) (see the fitting sketch after this list). [More]
  • Simulations indicate that the true underlying distribution has a thinner tail than the empirically observed distribution. [More]
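For the first bullet, here is a minimal sketch of how a Pareto tail exponent like \(\alpha = 1.11\) can be estimated by maximum likelihood (the Hill estimator). This is my own illustration on synthetic data, with assumed choices (the function name, the cutoff x_min, and the sampling scheme are mine); it is not the post’s actual method or the DCP3 numbers.

```python
# Minimal sketch: maximum-likelihood fit of a Pareto tail index (Hill estimator).
# Assumptions are mine; this is not the post's code or data.
import numpy as np

def fit_pareto_alpha(samples, x_min):
    """MLE of the Pareto tail index alpha for observations above x_min."""
    x = np.asarray(samples, dtype=float)
    x = x[x >= x_min]  # keep only the tail above the cutoff
    return len(x) / np.sum(np.log(x / x_min))

# Synthetic stand-in data: inverse-CDF draws from Pareto(x_min=1, alpha=1.11).
rng = np.random.default_rng(0)
u = rng.uniform(size=100_000)
draws = (1.0 - u) ** (-1.0 / 1.11)

print(fit_pareto_alpha(draws, x_min=1.0))  # prints roughly 1.11
```

The same log-likelihood machinery is what you would use to compare a Pareto fit against a log-normal one, which is the ambiguity the parenthetical above leaves open.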

Where I Am Donating in 2024

Summary

Last updated 2024-11-20.

It’s been a while since I last put serious thought into where to donate. Well, I’m putting serious thought into it this year, and I’m changing my mind on some things.

I now place higher priority on existential risk (especially AI risk), and lower priority on animal welfare and global priorities research. I believe I previously gave too little consideration to x-risk for emotional reasons, and I’ve managed to reason myself out of those emotions.

Within x-risk:

  • AI is the most important source of risk.
  • There is a disturbingly high probability that alignment research won’t solve alignment by the time superintelligent AI arrives. Policy work seems more promising.
  • Specifically, I am most optimistic about policy advocacy for government regulation to pause/slow down AI development.

In the rest of this post, I will explain:

  1. Why I prioritize x-risk over animal-focused longtermist work and global priorities research.
  2. Why I prioritize AI policy over AI alignment research.
  3. My beliefs about what kinds of policy work are best.

Then I provide a list of organizations working on AI policy, my evaluation of each, and where I plan to donate.

Cross-posted to the Effective Altruism Forum.

My submission for Worst Argument In The World

Scott Alexander once wrote:

    David Stove once ran a contest to find the Worst Argument In The World, but he awarded the prize to his own entry, and one that shored up his politics to boot. It hardly seems like an objective process.

    If he can unilaterally declare a Worst Argument, then so can I.

If those guys can unilaterally declare a Worst Argument, then so can I. I declare the Worst Argument In The World to be this:

“A long time ago, not-A, and also, not-B. Now, A and B. Therefore, A caused B.”

Example: In 1820, pirates were everywhere. Now you hardly ever see pirates, and global temperatures are rising. Therefore, the lack of pirates caused global warming.

(This particular argument was originally made as a joke, but I will give some real examples later.)

Naming fallacies is hard. Maybe we could call this the “two distant points in time fallacy”. For now I’ll just call it the Worst Argument.
