The Myth that Reducing Wild Animal Suffering Is Intractable

Lots of people accept that wild animal suffering is a big problem, but they believe it’s completely intractable. Some even claim that it’s one of the biggest problems in the world while maintaining that we still shouldn’t try to do anything about it. In fact, wild animal suffering is much more tractable than most people believe.

If we think wild animal suffering is a pressing problem and we want to do something about it, what can we do?

Continue reading

What Would Change My Mind About Where to Donate

If I’m wrong about anything, I want you to change my mind. I want to make that as easy as possible, so I’m going to give a list of charities/interventions and say what would convince me to support each of them.

Please try to change my mind! I prefer public discussions so the best thing to do is to comment on this post or on Facebook, but if you want to talk to me privately you can email me or message me on Facebook.

My Current Position

I discuss how I got to my current position here. Here’s a quick summary:

Continue reading

Feedback Loops for Values Spreading

I recently wrote about values spreading, and came out weakly in favor of focusing on global catastrophic risks over values spreading. However, I neglected an important consideration in favor of values spreading: feedback loops.

When we try to take actions that will benefit the long-term future but where we don’t get immediate feedback on our actions, it’s easy to end up taking actions that do nothing to achieve our goals. For instance, it is surprisingly difficult to predict in advance how effective a social intervention will be. This gives reason to be skeptical about the effectiveness of interventions with long feedback loops.

Interventions on global catastrophic risks have really, really bad feedback loops. It’s nearly impossible to tell if anything we do reduces the risk of a global pandemic or unfriendly AI. An intervention focused on spreading good values is substantially easier to test. An organization like Animal Ethics can produce immediate, measurable changes in people’s values. Measuring these changes is difficult, and evidence for the effectiveness of advocacy is a lot weaker than the evidence for, say, insecticide-treated bednets to prevent malaria. But short-term values spreading still has an advantage over GCR reduction in that it’s measurable in principle.

Still, will measurable short-term changes in values result in sustainable long-term changes? That’s a harder question to answer. It certainly seems plausible that values shifts today will lead to shifts in the long term; but, as mentioned above, interventions that sound plausible frequently turn out not to work. Values spreading may not actually have a stronger case here than GCR reduction.

We can find feedback loops on GCR reduction that measure proxy variables. This is particularly easy in the case of climate change, where we can measure whether an intervention reduces greenhouse gas levels in the atmosphere. But we can also find feedback loops for something like AI safety research: we might say MIRI is more successful if it publishes more technical papers. This is not a particularly direct metric of whether MIRI is reducing AI risk, but it’s still a place where we can get quick feedback.

Given that short-term value shifts don’t necessarily predict long-term shifts, and that we can measure proxy variables for global catastrophic risk reduction, it’s not obvious that values spreading has better feedback loops than GCR reduction. There does seem to be some sense in which value shifts today and value shifts in a thousand years are more strongly linked than, say, the number of AI risk papers published and a reduction in AI risk; although this might just be because both involve value shifts, and they may not actually be that strongly tied, or tied at all.

Values spreading appears to have the advantage of short-term feedback loops. But it’s not clear that these changes have long-term effects, and this claim isn’t any easier to test than the claim that GCR work today reduces global catastrophic risk.


Response to the Global Priorities Project on Human and Animal Interventions

Owen Cotton-Barratt of the Global Priorities Project wrote an article on comparing human and animal interventions. His major conclusions include:

  1. Indirect long-term effects dominate considerations.
  2. Changing behavior of far-future humans matters more than alleviating immediate animal suffering.
  3. Helping humans has better flow-through effects than helping non-human animals.

The analysis effectively concludes that helping humans is more important than helping non-human animals, but I believe it misses a few important considerations.

(These are fairly quick thoughts about which I have a lot of uncertainty; I’m publishing them here for the sake of making the conversation public.)

Continue reading

Cause Prioritization Research I Would Like to See

Here are some research topics on cause prioritization that look important and neglected, in no particular order.

  1. Look at historical examples of speculative causes (especially ones that were meant to affect the long-ish-term future) that succeeded or failed and examine why.
  2. Try to determine how well picking winning companies translates to picking winning charities.
  3. In line with 2, consider if there exist simple strategies analogous to value investing that can find good charities.
  4. Find plausibly effective biosecurity charities.
  5. Develop a rigorous model for comparing the value of existential risk reduction to values spreading.
  6. Perform basic analyses of lots of EA-neglected or weird cause areas (e.g. depression, argument mapping, increasing savings, personal productivity–see here) and identify which ones look most promising.
  7. Reason about the expected value of the far future.
  8. Investigate neglected x-risk and meta charities (FHI, CSER, GPP, etc.).
  9. Reason about expected value estimates in general. How accurate are they? Do they tend to be overconfident? How overconfident? Do some things predictably make them more reliable?
Continue reading

Observations on Consciousness

What is consciousness?

We can divide theories about consciousness into three categories:

  1. Consciousness is a special non-physical property (dualism).
  2. Consciousness is the result of the physical structures of the brain (identity theory).
  3. Conscious mental states are the result of their functional role within a process (functionalism).

In particular, I want to talk about Turing machine functionalism, a specific form of functionalism which states that consciousness is computation on a Turing machine. I want to talk about Turing machine functionalism in particular because it is probably correct.

Continue reading

Charities I Would Like to See

There are a few cause areas that are plausibly highly effective, but as far as I know, no one is working on them. If there existed a charity working on one of these problems, I might consider donating to it.

Happy Animal Farm

The closest thing we can make to a hedonium shockwave with current technology is a farm of many small animals that are made as happy as possible. Presumably the animals are cared for by people who know a lot about their psychology and welfare and can make sure they’re happy. One plausible species choice is rats, because rats are small (and therefore easy to take care of and don’t consume a lot of resources), definitively sentient, and we have a reasonable idea of how to make them happy.

I am not aware of any public discussion on this subject, so I will perform a quick ad-hoc effectiveness estimate.
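As a sketch of the shape such an estimate might take: every number below is a placeholder I made up for illustration, not a real figure about rat care costs or welfare.

```python
def estimate(donation, cost_per_rat_year, welfare_per_rat_year):
    """Quick ad-hoc cost-effectiveness shape: how many rat-years of happy
    life a donation buys, and the total welfare produced. Both the cost
    and welfare parameters are hypothetical placeholders."""
    rat_years = donation / cost_per_rat_year
    return rat_years, rat_years * welfare_per_rat_year

# Hypothetical inputs: a $1,000 donation, $200 per rat-year of care
# (food, housing, caretaker time), 1 welfare unit per happy rat-year.
print(estimate(1_000, 200, 1))  # (5.0, 5.0)
```

The interesting work is in pinning down the two parameters, which is what the estimate below attempts.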

Continue reading

Some Writings on Cause Selection

Cross-posted to the EA Forum.

The cause selection blogging carnival is well under way, and we already have a few submissions. But before the blogging carnival began, some folks had already written some of their thoughts on cause selection. Here I’ve compiled a short list of links to a few such writings supporting a variety of cause areas. Maybe some of these will give you ideas or even convince you to change your mind.

Jeff Kaufman explains why he supports global poverty.

Brian Tomasik discusses where he donates and why.

Topher Hallquist explains why he believes animal rights organizations look most promising.

Luke Muehlhauser claims that AI safety is the most important cause.

GiveWell staff members discuss their personal donations for 2014.


Meditations on Basic Income Guarantees

Disclaimer: I haven’t researched or thought about this much, and a lot of what I’m saying is probably derivative or completely wrong. I just wanted to work through some of my thoughts.

What would happen if we implemented basic income guarantees tomorrow?

Assume we’re just talking about the United States here. Assume we don’t have any major technological advances between today and tomorrow, so we can’t automate every single person’s job. Let’s say that the income guarantee is enough to live off of—maybe $30,000.

What would people do? And would the economy continue to generate enough money to be able to pay for everyone’s income guarantee?

Change in Incentives

When people automatically get $30,000, this dramatically reduces their willingness to work. There are a lot of jobs that people take only because they desperately need the income, and that they would really prefer not to do. Once they get a basic income guarantee, demand for these jobs will drop dramatically. If the jobs are important, wages will increase until some people once again become willing to take them.

Exactly how much people are willing to work depends on the tax rate. Let’s say we have a progressive taxation scheme which starts much higher than the current tax rate—maybe 50% at the lowest bracket and 90% at the highest (I’m just making up numbers here). That means if you make $30,000 a year for doing nothing and take a job that pays $30,000, now you’re making $45,000 after taxes. People have diminishing marginal utility of money, so people will be less willing to do this, but there should still be a lot of people who want to make more than the basic income and end up taking jobs.
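To make the arithmetic concrete, here is a minimal sketch. The 50%/90% rates and the $30,000 guarantee are the made-up numbers from above, and the $100,000 bracket cutoff is an additional made-up assumption:

```python
def after_tax_income(earned, basic_income=30_000):
    """Total income under a hypothetical basic income plus a made-up
    two-bracket progressive tax: 50% on the first $100,000 of earnings,
    90% on everything above that. All numbers are illustrative."""
    bracket_cutoff = 100_000     # hypothetical boundary between brackets
    low_rate, high_rate = 0.50, 0.90
    tax = (min(earned, bracket_cutoff) * low_rate
           + max(earned - bracket_cutoff, 0) * high_rate)
    return basic_income + earned - tax

# A $30,000 job on top of the guarantee nets $45,000 total:
print(after_tax_income(30_000))  # 45000.0
print(after_tax_income(0))       # 30000.0
```

So each dollar earned at the bottom bracket only adds fifty cents of take-home income, which is why diminishing marginal utility bites.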

Which jobs will they take?

Jobs

When people have a basic income, that dramatically changes their incentives to work. In economic terms, supply of labor drops. Which jobs continue to be prominent depends on which jobs have high or low price elasticity of demand for labor.

To get more concrete, let’s think about two jobs: garbage collector and fast food burger flipper. Probably a small minority of the people in these jobs actually enjoy them; if these people suddenly had a guaranteed $30,000 a year, how would the market respond?

People really need garbage collectors, so they have a high willingness to pay for their salaries. Or, more precisely, they have a high willingness to accept higher taxes so that the government can employ garbage collectors. In all likelihood, not enough people will be willing to work as garbage collectors for their current salaries. Demand for garbage collectors is highly inelastic, so as supply of willing workers decreases, wages will increase by a lot. The increase in wages should be enough to incentivize people to continue working as garbage collectors.

The labor supply for burger flippers would similarly decrease. Fast food companies would have to raise salaries by a lot in order to get people to keep working for them, which means they would have to increase food prices. The increased food prices would decrease quantity demanded, and fast food companies would shrink (and possibly disappear entirely). I am probably okay with this.
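The contrast between the two jobs can be sketched with a toy linear supply-and-demand model. Everything here, including the curve shapes and the size of the supply shift, is a made-up illustration of the elasticity argument, not an empirical claim:

```python
def equilibrium(demand_slope, supply_shift):
    """Toy linear labor market: demand Q = 100 - demand_slope * w,
    supply Q = 20 - supply_shift + 2 * w, where a basic income removes
    `supply_shift` willing workers at every wage. Returns the new
    equilibrium (wage, employment). Illustrative numbers only."""
    supply_intercept = 20 - supply_shift
    wage = (100 - supply_intercept) / (demand_slope + 2)
    employment = 100 - demand_slope * wage
    return wage, employment

# Inelastic demand (garbage collectors, demand_slope=0.5): wages jump
# from 32 to 40 while employment barely falls (84 -> 80).
print(equilibrium(demand_slope=0.5, supply_shift=20))
# Elastic demand (burger flippers, demand_slope=5.0): wages rise much
# less in dollar terms, and employment falls by about a third.
print(equilibrium(demand_slope=5.0, supply_shift=20))
```

The same leftward supply shift produces a big wage increase and small employment loss when demand is inelastic, and the reverse when demand is elastic, which is the mechanism the two examples above rely on.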

The Broad Market

Since people have less need to work for money, they should become more willing to produce intrinsically enjoyable goods, so we should see an increase in the supply of short films, music, and other similar goods. Interestingly, writing books seems to be so intrinsically enjoyable that the market’s already over-saturated even without a basic income guarantee—publishers get way more manuscripts than they can use.

There’s a spectrum between “everybody intrinsically enjoys this” and “nobody intrinsically enjoys this”, and every job lands somewhere on it. Even among jobs that most people don’t intrinsically enjoy, we will still see differences. A lot fewer people will work in factory farms, since I can’t imagine that anybody would actually want to do that. But we probably won’t see that big a reduction in the number of auto mechanics. A lot of people like working on cars—people often do it as a hobby. We’d expect these people to be willing to work as auto mechanics for relatively little pay.

Software Development

I want to talk a little extra about software development since it’s my field. Generally speaking, a lot of programmers enjoy programming, but some kinds of programming are a lot more fun than others. We’d probably see more people starting their own companies and fewer people working software jobs that involve a lot of boring repetition.

This changes the incentives for companies hiring developers. Boring routine work becomes more expensive since fewer developers are willing to do it, so companies have stronger incentives to automate as much work as possible.

There probably won’t be a huge effect here, since developers tend to make well above $30,000, so the extra money doesn’t do as much for them; the most affected jobs will be those that pay less than or about as much as the basic income.

Does It Work?

An economy with a basic income guarantee would reduce or remove unimportant jobs while still retaining important jobs. Prices would be higher and people probably wouldn’t buy as much, but the things they’d buy less of would mostly be the things that weren’t really important to begin with. People aren’t perfectly rational; a lot of purchases people make just keep them going on the hedonic treadmill and don’t actually improve their lives.

Perhaps a world with McDonald’s is better than one without, but if it is, it’s certainly not much better, and I wouldn’t feel too bad about it if McDonald’s went out of business after all the low-level employees quit.

Please explain in the comments why I’m wrong about everything. I think the economic effects of a basic income guarantee could be really interesting and possibly surprising, and I want to hear what you think.

