Songs of the Day

Starting about seven years ago, every time I heard a song that I really liked and that stuck with me for the rest of the day, I recorded it in my journal on a list of “Songs of the Day”.

This list shows how my musical tastes have shifted over the years. It isn’t entirely representative because there are plenty of songs I love that never made it onto this list.

Here’s the complete list up to the time of this writing. An asterisk means that I liked the song before it was Song of the Day, but I gained a new appreciation for it on that day. I have a corresponding Spotify playlist, although it only includes songs up to November 2015.


How Sentient Are Farm Animals?

I wrote this as a quick explanation of why I value non-human animals the way I do. It’s not particularly thorough, and my explanation has some clear holes; this is just a general outline.

When we’re considering charitable interventions that help animals, it’s important to have some sense of how valuable it is to help those animals, which means we want to know how sentient they are.

How sentient an animal is–that is, how strongly it experiences pleasure and pain–almost certainly relates to how its brain works. I see four reasonably plausible ways that sentience could relate to brain size (a rough numerical sketch of how far these assumptions diverge follows the list):

  1. Suffering is caused by certain fixed brain structures, and for certain types of physical pain (like what chickens experience on factory farms), humans and chickens have the same brain parts and therefore experience this pain equally.
  2. Sentience is linear with brain size.
  3. Sentience is sub-linearly related to brain size; for example, sentience may be logarithmic with brain size.
  4. Less intelligent animals are generally more sentient because they “are [not] capable of intelligently working out what is good for [them], and what damaging events [they] should avoid”, so they need a stronger pain response to compensate.
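To make the gap between these four assumptions concrete, here is a rough numerical sketch (my own illustration, not part of the original post) of how much a chicken’s pain would count relative to a human’s under each one, using approximate neuron counts as a crude stand-in for brain size:

```python
import math

# Approximate neuron counts, used as a crude stand-in for "brain size".
# These are rough, commonly cited estimates, not figures from the post.
HUMAN_NEURONS = 86e9      # ~86 billion
CHICKEN_NEURONS = 0.22e9  # ~220 million

linear_weight = CHICKEN_NEURONS / HUMAN_NEURONS
log_weight = math.log(CHICKEN_NEURONS) / math.log(HUMAN_NEURONS)

# Relative weight of a chicken's pain compared to a human's under each assumption:
chicken_weight = {
    "1. fixed pain structures": 1.0,             # same brain parts, same pain
    "2. linear in brain size": linear_weight,    # roughly 0.003
    "3. logarithmic in brain size": log_weight,  # roughly 0.76
    "4. inverse relationship": "> 1 (no specific functional form implied)",
}

for assumption, weight in chicken_weight.items():
    print(f"{assumption}: {weight}")
```

The spread is enormous: under the fixed-structures assumption a chicken’s pain counts fully, under the linear assumption it counts for roughly a quarter of a percent, and under the logarithmic assumption for about three quarters. Which assumption is closest to the truth largely determines how valuable helping farm animals is.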

The Myth that Reducing Wild Animal Suffering Is Intractable

Lots of people accept that wild animal suffering is a big problem, but they believe it’s completely intractable. I even see some people claim that it’s one of the biggest problems in the world but that we still shouldn’t try to do anything about it. Wild animal suffering is in fact much more tractable than most people believe.

If we think wild animal suffering is a pressing problem and we want to do something about it, what can we do?


What Would Change My Mind About Where to Donate

If I’m wrong about anything, I want you to change my mind. I want to make that as easy as possible, so I’m going to give a list of charities/interventions and say what would convince me to support each of them.

Please try to change my mind! I prefer public discussions, so the best thing to do is to comment on this post or on Facebook, but if you want to talk to me privately, you can email me or message me on Facebook.

My Current Position

I discuss how I got to my current position here. Here’s a quick summary:


Feedback Loops for Values Spreading

I recently wrote about values spreading, and came out weakly in favor of focusing on global catastrophic risks over values spreading. However, I neglected an important consideration in favor of values spreading: feedback loops.

When we try to take actions that will benefit the long-term future but don’t get immediate feedback on those actions, it’s easy to end up taking actions that do nothing to achieve our goals. For instance, it is surprisingly difficult to predict in advance how effective a social intervention will be. This gives us reason to be skeptical about the effectiveness of interventions with long feedback loops.

Interventions on global catastrophic risks have really, really bad feedback loops. It’s nearly impossible to tell if anything we do reduces the risk of a global pandemic or unfriendly AI. An intervention focused on spreading good values is substantially easier to test. An organization like Animal Ethics can produce immediate, measurable changes in people’s values. Measuring these changes is difficult, and evidence for the effectiveness of advocacy is a lot weaker than the evidence for, say, insecticide-treated bednets to prevent malaria. But short-term values spreading still has an advantage over GCR reduction in that it’s measurable in principle.

Still, will measurable short-term changes in values result in sustainable long-term changes? That’s a harder question to answer. It certainly seems plausible that values shifts today will lead to shifts in the long term; but, as mentioned above, interventions that sound plausible frequently turn out not to work. Values spreading may not actually have a stronger case here than GCR reduction.

We can find feedback loops for GCR reduction by measuring proxy variables. This is particularly easy in the case of climate change, where we can measure whether an intervention reduces greenhouse gas levels in the atmosphere. But we can also find feedback loops for something like AI safety research: we might say MIRI is more successful if it publishes more technical papers. This is not a particularly direct metric of whether MIRI is reducing AI risk, but it’s still a place where we can get quick feedback.

Given that short-term value shifts don’t necessarily predict long-term shifts, and that we can measure proxy variables for global catastrophic risk reduction, it’s non-obvious that values spreading has better feedback loops than GCR reduction. There does seem to be some sense in which value shifts today and value shifts in a thousand years are more strongly linked than, say, the number of AI risk papers published and a reduction in AI risk; although this might just be because both involve value shifts–they may not actually be that strongly tied, or tied at all.

Values spreading appears to have the advantage of short-term feedback loops. But it’s not clear that these changes have long-term effects, and this claim isn’t any easier to test than the claim that GCR work today reduces global catastrophic risk.


Response to the Global Priorities Project on Human and Animal Interventions

Owen Cotton-Barratt of the Global Priorities Project wrote an article on comparing human and animal interventions. His major conclusions include:

  1. Indirect long-term effects dominate considerations.
  2. Changing behavior of far-future humans matters more than alleviating immediate animal suffering.
  3. Helping humans has better flow-through effects than helping non-human animals.

The analysis effectively concludes that helping humans is more important than helping non-human animals, but I believe it misses a few important considerations.

(These are fairly quick thoughts about which I have a lot of uncertainty; I’m publishing them here for the sake of making the conversation public.)


Cause Prioritization Research I Would Like to See

Here are some research topics on cause prioritization that look important and neglected, in no particular order.

  1. Look at historical examples of speculative causes (especially ones that were meant to affect the long-ish-term future) that succeeded or failed and examine why.
  2. Try to determine how well picking winning companies translates to picking winning charities.
  3. In line with 2, consider whether there exist simple strategies analogous to value investing that can find good charities.
  4. Find plausibly effective biosecurity charities.
  5. Develop a rigorous model for comparing the value of existential risk reduction to values spreading.
  6. Perform basic analyses of lots of EA-neglected or weird cause areas (e.g. depression, argument mapping, increasing savings, personal productivity–see here) and identify which ones look most promising.
  7. Reason about the expected value of the far future.
  8. Investigate neglected x-risk and meta charities (FHI, CSER, GPP, etc.).
  9. Reason about expected value estimates in general. How accurate are they? Do they tend to be overconfident? How overconfident? Do some things predictably make them more reliable?

Observations on Consciousness

What is consciousness?

We can divide theories about consciousness into three categories:

  1. Consciousness is a special non-physical property (dualism).
  2. Consciousness is the result of the physical structures of the brain (identity theory).
  3. Conscious mental states are the result of their functional role within a process (functionalism).

In particular, I want to talk about Turing machine functionalism, a specific form of functionalism which states that consciousness is computation on a Turing machine. I focus on Turing machine functionalism because it is probably correct.


Charities I Would Like to See

There are a few cause areas that are plausibly highly effective, but as far as I know, no one is working on them. If there existed a charity working on one of these problems, I might consider donating to it.

Happy Animal Farm

The closest thing we can make to a hedonium shockwave with current technology is a farm of many small animals that are made as happy as possible. Presumably the animals would be cared for by people who know a lot about their psychology and welfare and can make sure they’re happy. One plausible species choice is rats: rats are small (and therefore easy to take care of and don’t consume a lot of resources), they are definitively sentient, and we have a reasonable idea of how to make them happy.

I am not aware of any public discussion on this subject, so I will perform a quick ad-hoc effectiveness estimate.
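As a rough illustration of the shape such an estimate could take (this is a sketch of my own with placeholder figures, not the numbers from the post), one might multiply the number of rats a budget can support by an assumed moral weight per happy rat-year:

```python
# All figures below are hypothetical placeholders, chosen only to show the
# structure of the calculation; they are not the post's actual estimates.

cost_per_rat_year = 20.0      # placeholder: dollars to house and care for one rat for a year
happiness_fraction = 0.9      # placeholder: fraction of a maximally happy rat-year achieved
rat_year_moral_weight = 0.01  # placeholder: value of a rat-year relative to a human-year

annual_budget = 10_000.0      # placeholder donation in dollars

rats_supported = annual_budget / cost_per_rat_year
happy_human_equivalent_years = rats_supported * happiness_fraction * rat_year_moral_weight

print(f"Rats supported per year: {rats_supported:.0f}")
print(f"Human-equivalent happy years produced per year: {happy_human_equivalent_years:.1f}")
```

The result is driven almost entirely by the moral-weight parameter, which is exactly the kind of number that discussions of animal sentience try to pin down.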


Some Writings on Cause Selection

Cross-posted to the EA Forum.

The cause selection blogging carnival is well under way, and we already have a few submissions. But before the blogging carnival began, some folks had already written some of their thoughts on cause selection. Here I’ve compiled a short list of links to a few such writings supporting a variety of cause areas. Maybe some of these will give you ideas or even convince you to change your mind.

Jeff Kaufman explains why he supports global poverty.

Brian Tomasik discusses where he donates and why.

Topher Hallquist explains why he believes animal rights organizations look most promising.

Luke Muehlhauser claims that AI safety is the most important cause.

GiveWell staff members discuss their personal donations for 2014.

