On Values Spreading

Cross-posted to the EA Forum.

Introduction

Note: When I speak of extinction risk in this essay, I mean not just complete extinction but any event that collapses civilization to the point where we cannot achieve highly good outcomes for the far future.

There are two major interventions for shaping the far future: reducing human extinction risk and spreading good values. Although we don’t really know how to reduce extinction risk, the problem itself is fairly clear and has seen a lot of discussion among effective altruists. Values spreading is less clear.

A lot of EA activities could be classified as values spreading, but of very different sorts. Meta-organizations like Giving What We Can and Charity Science try to encourage people to value charity more highly; animal charities like The Humane League and Animal Ethics try to get people to assign greater weight to non-human animals. Many supporters of animal welfare interventions believe that these interventions have a large positive effect on the far future via spreading values that cause people to behave in ways that make the world better.

I believe that reducing extinction risk has a higher expected value than spreading good values, and there are a number of concerns with values spreading that make me reluctant to support it. This essay lays out my reasoning.

Personal note: In 2014 I directed my entire donations budget to The Humane League, and in 2015 I directed it to Animal Charity Evaluators. At the time, I generally agreed with the arguments that values spreading is the most important intervention. But recently I have considered this claim more carefully and now I am more skeptical, for the reasons outlined below.


Some Writings on Cause Selection

Cross-posted to the EA Forum.

The cause selection blogging carnival is well under way, and we already have a few submissions. But before the blogging carnival began, some folks had already written some of their thoughts on cause selection. Here I’ve compiled a short list of links to a few such writings supporting a variety of cause areas. Maybe some of these will give you ideas or even convince you to change your mind.

Jeff Kaufman explains why he supports global poverty.

Brian Tomasik discusses where he donates and why.

Topher Hallquist explains why he believes animal rights organizations look most promising.

Luke Muehlhauser claims that AI safety is the most important cause.

GiveWell staff members discuss their personal donations for 2014.


Is Preventing Human Extinction Good?

If humans become extinct, wild animal suffering will continue indefinitely on earth (unless all other animals go extinct as well, which is unlikely but possible). Wild animals’ lives are likely not worth living, so this would be bad, but it’s not the worst thing that could happen.

Preventing human extinction obviously means that humans will continue to exist, but this direct effect is trivial compared to the effects described below.

Major reasons why preventing human extinction might be bad:

  • We sustain or worsen wild animal suffering on earth.
  • We colonize other planets and fill them with wild animals whose lives are not worth living.
  • We create lots of computer simulations of extremely unhappy beings.
  • We eventually create an AI with evil values that creates lots of suffering on purpose. (But this seems highly unlikely.)

Major reasons why preventing human extinction might be good:

  • We colonize other planets and fill them with wild animals whose lives are worth living.
  • We successfully create a hedonium shockwave–i.e. we fill the universe with beings experiencing the maximum amount of pleasure that it is possible for beings to experience.
  • Even if we don’t create eudaimonia, we fix the problem of wild animal suffering and make most of the beings in the universe very happy.
  • We find other planets with wild animals whose lives are net negative and we make their lives better.

Emotional disclosure: I’m biased toward optimism here because I don’t like the idea of humans becoming extinct, and I definitely don’t like the idea that this could be the best outcome in expectation.

Even people who are pessimistic about wild animal suffering generally assume that preventing human extinction is good, but I do not often see this justified. Let’s consider some arguments for and against.


Why Effective Altruists Should Use a Robo-Advisor

Cross-posted to the Effective Altruism Forum.

TL;DR: Go sign up for Wealthfront right now and transfer all your savings into it. If you’re young and/or you plan on donating most of your savings, choose the highest risk tolerance Wealthfront allows.

Investing Basics

You probably want to save money for retirement, or some future large purchase like a house. Many effective altruists have money that they want to donate eventually, but want to hold onto it for now. What should you do with that money while you’re keeping it?

The simplest option would be to keep all your money in a savings account at your bank. This way you’re guaranteed not to lose your money, but savings accounts earn hardly any interest. If you’re willing to put your money into some riskier investments, you will probably end up with a lot more money than when you started.

The two most important investment vehicles are stocks and bonds. You can buy these on your own, but you don’t need to.

Robo-Advisors

There are services like Wealthfront, called robo-advisors, that manage your money for you automatically. You give the robo-advisor some basic facts about yourself such as your age and how much risk you can tolerate, and it figures out a good way to allocate your money. You deposit your savings and the robo-advisor does the rest–you never have to worry about your savings again. A good robo-advisor invests your money to get the best possible returns for your risk tolerance.

Individual and professional investors alike rarely outperform the market in the long run, so a robo-advisor like Wealthfront will probably manage your money better than either you or a professional would. Even better, Wealthfront has low fees–far lower than anything you’d get from a human money manager–so you get to keep more of your money.

When you sign up for Wealthfront, it will give you a short quiz to determine how much risk it thinks you’re willing to take on. The more risk you accept, the higher expected return you can get. Whatever this quiz tells you, it might be smart for you to choose the most aggressive, highest-risk allocation. As Carl Shulman explains in “Salary or startup? How do-gooders can gain more from risky careers”, effective altruists can afford to take on more risk than most people. To borrow his example, your tenth Ferrari isn’t as valuable as your first, but with your tenth vaccine, you can vaccinate a tenth kid and do just as much good as with your first vaccine. Most investors are highly risk-averse: not losing money is much more important to them than gaining money. But as effective altruists, we can afford to take risky bets because if we win big, we can do massively more good in the world.
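The risk-neutrality argument can be made concrete with a toy calculation. The numbers below are made up for illustration (they are not from Shulman’s post): a risk-neutral donor simply compares expected values, so the risky option wins even though a risk-averse investor would avoid it.

```python
# Toy comparison of a safe bet vs. a risky bet for a
# (nearly) risk-neutral donor. Numbers are illustrative only.
def expected_value(outcomes):
    """outcomes is a list of (probability, payoff) pairs."""
    return sum(p * payoff for p, payoff in outcomes)

safe = [(1.0, 100)]             # $100 for sure
risky = [(0.5, 250), (0.5, 0)]  # $250 half the time, nothing otherwise

# A risk-neutral donor just compares expected values:
print(expected_value(safe))   # 100.0
print(expected_value(risky))  # 125.0
```

The risky bet has a 50% chance of total loss, which is exactly why most investors shy away from it; but its expected value is higher, which is what matters if your tenth dollar does as much good as your first.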

For the curious, Colby Davis’s A Guide to Rational Investing explains in more detail why investing on your own or with a (human) advisor is usually a bad idea, and why it’s possible to do better than simply buying a total-market index fund. Wealthfront is likely to outperform a total-market index fund because it puts some of your money into emerging markets, which probably outperform the U.S. market in the long run.

Why not Betterment?

Betterment is another popular robo-advisor that offers a similar service to Wealthfront. I slightly prefer Wealthfront, but if you already use Betterment and you don’t want to switch, that’s probably fine. It would be counterproductive to get into a debate about the minor points in favor of one or the other–if you prefer to use Betterment, by all means do so. The main benefits to be had here come from putting your money into a good robo-advisor. After that, it doesn’t matter much which one you pick.

There are a few other robo-advisors on the market which might be just as good. I haven’t spent much time looking into any others, but I feel comfortable recommending either Betterment or Wealthfront.

Why not manage my own basket of index funds?

(If you don’t want to do this, you don’t need to read this section.)

Actually, if you choose a good asset allocation and stick with it, you can probably get better results managing your own assets than using a robo-advisor. This approach requires more dedication, and you need to have a strong stomach to stick with your strategy even when it performs badly. But if that sounds like you, you might want to pursue this approach instead.

For nearly risk-neutral investors, even Wealthfront’s highest-risk, highest-return allocation still leaves a lot of room to squeeze out more returns. You could earn considerably more money by putting a larger percentage of your portfolio into high-return assets, and the best way to do this is to manually manage your investments.

This means buying a basket of index funds with a high weighting in asset classes that have historically outperformed the broad market, which could include small-capitalization stocks, value stocks, and emerging market stocks. You should NOT simply buy a total U.S. or total world index fund. This will both perform worse than Wealthfront (because it is not weighted toward high-return asset classes) and have higher risk (because it is less diversified). It might sound like a total world index fund is maximally diversified, and in one sense it is because it holds stocks from every part of the world. But Wealthfront’s asset allocation has better diversification properties because it holds a higher weighting in asset classes that tend to be less correlated with each other.
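To illustrate why lower correlation means better diversification, here’s a sketch of the standard two-asset portfolio volatility formula. The numbers are illustrative only and have nothing to do with Wealthfront’s actual allocation:

```python
import math

def portfolio_vol(sigma1, sigma2, w1, w2, rho):
    """Volatility of a two-asset portfolio with weights w1, w2,
    asset volatilities sigma1, sigma2, and correlation rho."""
    var = ((w1 * sigma1) ** 2 + (w2 * sigma2) ** 2
           + 2 * w1 * w2 * rho * sigma1 * sigma2)
    return math.sqrt(var)

# Two equally weighted assets, each with 20% volatility.
print(round(portfolio_vol(0.20, 0.20, 0.5, 0.5, 1.0), 4))  # 0.2    (perfectly correlated)
print(round(portfolio_vol(0.20, 0.20, 0.5, 0.5, 0.3), 4))  # 0.1612 (weakly correlated)
```

Holding the same two assets, simply lowering the correlation between them cuts portfolio volatility, which is the sense in which a less-correlated basket is better diversified than a broader but highly correlated one.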

I plan on writing a future post with some recommendations for nearly risk-neutral investors who want to manage their own investments. For anyone who wants to learn more now, I recommend William Bernstein’s The Intelligent Asset Allocator, which lays out which asset classes perform best and how to find a good allocation.

Is this just for effective altruists?

No, not really. Most people would be better off if they used a robo-advisor. But it’s particularly important that effective altruists are able to make money on their investments, because it means they will have more money to donate.

Disclaimers: I am not affiliated with Wealthfront; I just think robo-advisors are awesome. I am not a financial advisor and you should use your own judgment when making significant financial decisions.


Meditations on Basic Income Guarantees

Disclaimer: I haven’t researched or thought about this much, and a lot of what I’m saying is probably derivative or completely wrong. I just wanted to work through some of my thoughts.

What would happen if we implemented basic income guarantees tomorrow?

Assume we’re just talking about the United States here. Assume we don’t have any major technological advances between today and tomorrow, so we can’t automate every single person’s job. Let’s say that the income guarantee is enough to live off of—maybe $30,000.

What would people do? And would the economy continue to generate enough money to be able to pay for everyone’s income guarantee?

Change in Incentives

When people automatically get $30,000, this dramatically reduces their willingness to work. There are a lot of jobs that people only work because they desperately need a job, and they would really prefer not to. Once they get a basic income guarantee, demand for these jobs will drop dramatically. If the jobs are important, wages will increase until some people once again become willing to take those jobs.

Exactly how much people are willing to work depends on the tax rate. Let’s say we have a progressive taxation scheme which starts much higher than the current tax rate—maybe 50% at the lowest bracket and 90% at the highest (I’m just making up numbers here). That means if you make $30,000 a year for doing nothing and take a job that pays $30,000, now you’re making $45,000 after taxes. People have diminishing marginal utility of money, so people will be less willing to do this, but there should still be a lot of people who want to make more than the basic income and end up taking jobs.
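As a toy sketch of that arithmetic, using the same made-up numbers as above (a $30,000 basic income and a flat 50% rate for simplicity, ignoring brackets):

```python
# Hypothetical take-home income under the scheme described above.
# The $30,000 basic income and 50% rate are the made-up numbers
# from the text, not a real proposal.
BASIC_INCOME = 30_000

def take_home(salary, tax_rate=0.5):
    """Basic income plus after-tax wages (flat rate for simplicity)."""
    return BASIC_INCOME + salary * (1 - tax_rate)

print(take_home(0))       # 30000.0  (no job)
print(take_home(30_000))  # 45000.0  (a $30k job adds only $15k after tax)
```

Taking the job raises your income by 50%, not 100%, which is why diminishing marginal utility of money bites here.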

Which jobs will they take?

Jobs

When people have a basic income, that dramatically changes their incentives to work. In economic terms, supply of labor drops. Which jobs continue to be prominent depends on which jobs have high or low price elasticity of demand for labor.

To get more concrete, let’s think about two jobs: garbage collector and fast food burger flipper. Probably a small minority of the people in these jobs actually enjoy them; if these people suddenly had a guaranteed $30,000 a year, how would the market respond?

People really need garbage collectors, so they have a high willingness to pay for their salaries. Or, more precisely, they have a high willingness to accept higher taxes so that the government can employ garbage collectors. In all likelihood, not enough people will be willing to work as garbage collectors for their current salaries. Demand for garbage collectors is highly inelastic, so as supply of willing workers decreases, wages will increase by a lot. The increase in wages should be enough to incentivize people to continue working as garbage collectors.

The labor supply for burger flippers would similarly decrease. Fast food companies would have to raise salaries by a lot in order to get people to keep working for them, which means they would have to increase food prices. The increased food prices would decrease quantity demanded, and fast food companies would shrink (and possibly disappear entirely). I am probably okay with this.

The Broad Market

Since people have less need to work, they should become more willing to produce intrinsically enjoyable goods, so we should see an increase in the supply of short films, music, and other similar goods. Interestingly, writing books seems to be so intrinsically enjoyable that the market’s already over-saturated even without a basic income guarantee–publishers get way more manuscripts than they can use.

There’s a spectrum between “everybody intrinsically enjoys this” and “nobody intrinsically enjoys this”, and every job lands somewhere on the spectrum. Even among jobs that most people don’t intrinsically enjoy, we will still see differences. A lot fewer people will work in factory farms, since I can’t imagine that anybody would actually want to do that. But we probably won’t see that big a reduction in the quantity of auto mechanics. A lot of people like working on cars—people often do it as a hobby. We’d expect these people to be willing to work as auto mechanics for only relatively little pay.

Software Development

I want to talk a little extra about software development since it’s my field. Generally speaking, a lot of programmers enjoy programming, but some kinds of programming are more fun than others. We’d probably see more people starting their own companies and fewer people working software jobs that involve a lot of boring repetition.

This changes the incentives for companies hiring developers. Boring routine work becomes more expensive since fewer developers are willing to do it, so companies have stronger incentives to automate as much work as possible.

There probably won’t be a huge effect, since developers tend to make well above $30,000, so the extra money doesn’t do as much for them; the jobs most affected will be those that pay less than or about as much as the basic income.

Does It Work?

An economy with a basic income guarantee would reduce or remove unimportant jobs while still retaining important jobs. Prices would be higher and people probably wouldn’t buy as much, but the things they’d buy less of would mostly be the things that weren’t really important to begin with. People aren’t perfectly rational; a lot of purchases people make just keep them going on the hedonic treadmill and don’t actually improve their lives.

Perhaps a world with McDonald’s is better than one without, but if it is, it’s certainly not much better, and I wouldn’t feel too bad about it if McDonald’s went out of business after all the low-level employees quit.

Please explain in the comments why I’m wrong about everything. I think the economic effects of a basic income guarantee could be really interesting and possibly surprising, and I want to hear what you think.


Haskell Is Actually Practical

Haskell has all these language features that seem cool, but then you wonder, what is this actually good for? When am I ever going to need lazy evaluation? What’s the point of currying?

As it turns out, these language constructs come in handy more often than you’d expect. This article gives some real-world examples of how Haskell’s weird features can actually help you write better programs.

Lazy Evaluation

Lazy evaluation means that Haskell will only evaluate an expression if its value becomes needed. This means you can do cool things like construct infinite lists.

To take a trivial example, suppose you want to write a function that finds the first n even numbers. You could implement this in a lot of different ways, but let’s look at one possible implementation (in Python):

def first_n_evens(n):
    numbers = range(1, 2*n + 1)
    return [ x for x in numbers if x % 2 == 0 ]

Here we construct a list with 2n numbers and then take every even number from that list. Here’s how we could do the same in Haskell:

firstNEvens n = take n $ [ x | x <- [1..], even x ]

Instead of constructing a list with 2n elements, we construct an infinite list of even numbers and then take the first n.
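For comparison, Python can approximate this style with generators, which are lazy sequences. Here’s a rough equivalent of the Haskell version built on itertools:

```python
from itertools import count, islice

def first_n_evens(n):
    # count(1) plays the role of Haskell's infinite list [1..];
    # the generator expression filters it lazily, and islice pulls
    # just the first n matches, so we never build a 2n-element list.
    evens = (x for x in count(1) if x % 2 == 0)
    return list(islice(evens, n))

print(first_n_evens(5))  # [2, 4, 6, 8, 10]
```

The difference is that in Python you must opt in to laziness with generators, whereas in Haskell it is the default everywhere.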

Okay, so that’s pretty cool, but what’s the point? When am I ever going to use this in real life?

Why It’s Useful

I recently wrote a simple spam classifier in Python. To classify a text as spam or not-spam, it counts the number of blacklisted words in the text. If the number reaches some threshold, the text is classified as spam.1

Before reading further, think for a minute about how you could implement this.

Originally, I wanted to write something like this.

  1. filter the list for only blacklisted words
  2. see if the length of the list reaches the threshold

Here’s the equivalent Python code:

class LazyClassifier(Classifier):
    def classify(self, text):
        # list() is needed in Python 3, where filter returns an iterator
        return len(list(filter(self.blacklisted, text.split()))) >= self.threshold

This code is simple and concise. The problem is, it requires iterating through the entire list before returning, which wastes a huge amount of time. The text might contain tens of thousands of words, but could be identified as spam within the first hundred.

I ended up implementing it like this:

class ImperativeClassifier(Classifier):
    def classify(self, text):
        words = text.split()
        count = 0
        for word in words:
            if self.blacklisted(word):
                count += 1
                if count >= self.threshold:
                    return 1
        return 0

Instead of using higher-order functions like filter, this implementation manually iterates over the list, keeping track of the number of blacklisted words, and breaks out once the number reaches the threshold. It’s faster, but much uglier.

What if we could write our code using the first approach, but with the speed of the second approach? This is where lazy evaluation comes in.

If our program is lazily evaluated, it can figure out when the count reaches the threshold and return immediately instead of waiting around to evaluate the whole list.

Here’s a Haskell implementation:

classify text = (length $ filter blacklisted $ words text) >= threshold

(For those unfamiliar with Haskell syntax, see note.2)

Unfortunately, this doesn’t quite work. If the condition is true for the first k elements of the list then it will also be true for the first k+1 elements, but Haskell has no way of knowing that. If you call classify on an infinite list, it will run forever.

We can get around this problem like so:

classify text =
    (length $ take threshold $ filter blacklisted $ words text) == threshold

Note that the take operation takes the first k elements of the list and drops the rest. (If you call take k on a list with n elements where n < k, it will simply return the entire list.)

So this function will take the first threshold blacklisted words. If it runs through the entire list before finding threshold blacklisted words, it returns False. If it ever successfully finds threshold blacklisted words, it immediately stops and returns True.

Using lazy evaluation, we can write a concise implementation of classify that runs as efficiently as our more verbose implementation above.

(If you want, it is also possible to do this in Python using generators.)
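Here’s a rough sketch of that generator approach, with blacklisted and threshold as plain parameters rather than attributes of a classifier class:

```python
from itertools import islice

def classify(text, blacklisted, threshold):
    # text.split() still builds the whole word list eagerly, but the
    # generator below is consumed lazily: islice stops pulling words
    # as soon as `threshold` blacklisted words have been found, just
    # like `take threshold` in the Haskell version.
    hits = (w for w in text.split() if blacklisted(w))
    return len(list(islice(hits, threshold))) == threshold
```

As in the Haskell version, if the text runs out before threshold blacklisted words appear, islice returns fewer than threshold items and the function returns False.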

Partial Application

In Haskell, functions are automatically curried. This means you can call a function with some but not all of the arguments and it will return a partially-applied function.

This is easier to understand if we look at an example. Let’s take a look at some Haskell code:

add :: Int -> Int -> Int
add x y = x + y

add is a simple function that takes two arguments and returns their sum. You can call it by writing, for example, add 2 5 which would return 7.

You can also partially apply add. If you write add 2, instead of returning a value, it returns a function that takes a single argument and returns that number plus 2. In effect, add 2 returns a function that looks like this:

add2 y = 2 + y

You could also think of it as taking the original add function and replacing all occurrences of x with 2.

Then we can pass in 5 to this new function:

(add 2) 5 == 7

In fact, in Haskell, (add 2) 5 is equivalent to add 2 5: it calls add 2, which returns a unary function, and then passes in 5 to that function.

A similar function could be constructed in Python like so:

def add(x):
    return lambda y: x + y

Then you could call (add(2))(5) to get 7.

Why It’s Useful

To take a simple example, suppose you want to add 2 to every element in a list. You could map over the list using a lambda:

map (\x -> x + 2) myList

Or you could do this more concisely by partially applying the + function:

map (+2) myList

It might seem like this just saves you from typing a few characters once in a while, but this sort of pattern comes up all the time.
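For comparison, Python’s functools.partial gives a rough analogue of partial application, though it’s noticeably clunkier than Haskell’s built-in currying:

```python
from functools import partial
import operator

# Roughly Haskell's (add 2) / the section (+2):
add2 = partial(operator.add, 2)

print(add2(5))                     # 7
print(list(map(add2, [1, 2, 3])))  # [3, 4, 5]
```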

This summer I was working on a program that required merging a series of rankings. I had a list of maps where each map represented a ranking along a different dimension, and I needed to find the sum ranking for each key. I could have done it like this:

mergeRanks :: (Ord k) => [Map.Map k Int] -> Map.Map k Int
mergeRanks rankings = Map.unionsWith (\x y -> x + y) rankings

(Note: unionsWith takes the union of a list of maps by applying the given function to each map’s values.)

With partial application, we can instead write:

mergeRanks = Map.unionsWith (+)

This new function uses partial application in two ways. First, it passes in + instead of creating a lambda.

Second, it partially applies unionsWith. This call to unionsWith gives a function that takes in a list of maps and returns the union of the maps.

Notice also how mergeRanks is not defined with any arguments. Because the call to unionsWith returns a function, we can simply assign mergeRanks to the value of that function.
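As an aside, a rough Python analogue of this merge can be built on collections.Counter, whose addition unions keys and sums values, much like unionsWith (+):

```python
from collections import Counter

def merge_ranks(rankings):
    # `rankings` is a list of dicts mapping key -> rank. Counter
    # addition sums values across maps, so summing the Counters
    # plays the role of Map.unionsWith (+).
    return sum(map(Counter, rankings), Counter())

print(merge_ranks([{"a": 1, "b": 2}, {"a": 3}]))  # Counter({'a': 4, 'b': 2})
```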

Perhaps this example is a bit on the confusing side; I intentionally chose a complex example that has real-world value. Once you grok partial application, it shows up more often than you might think, and you can use it to perform some pretty sophisticated operations.

And I haven’t even mentioned function composition.

Here’s a more complicated usage of partial application combined with function composition that I wrote this summer. See if you can figure out what it does.

yearSlice year stocks =
  filter ((>0) . length) $
  map (filter ((==year) . fyearq)) stocks

In one program of about 500 lines, I wrote about a dozen pieces of code similar to this one.

Pattern Matching

Pattern matching gives us a new way of writing functions. To take the canonical example, let’s look at the factorial function. Here’s a simple Python implementation.

def fac(n):
    if n == 0:
        return 1
    else:
        return n * fac(n - 1)

And the same program written in Haskell:

fac n =
    if n == 0
    then 1
    else n * fac (n - 1)

But we could also write this using pattern matching.

fac 0 = 1
fac n = n * fac (n - 1)

Think of this as saying

  • the factorial of 0 is 1
  • the factorial of some number n is n * fac (n - 1)

So pattern matching is declarative rather than imperative–a declarative program describes the way things are rather than what to do.

Why It’s Useful

Wait, isn’t this just a different way of writing the same thing? Sure, it’s interesting, but what can pattern matching do that if statements can’t?

Well, quite a lot, actually.3 Pattern matching makes it trivial to deconstruct a data structure into its component values. Haskell’s pattern matching intricately relates to how Haskell handles data types.

Suppose we want to implement the map function. Recall that map takes a function and a list and returns the list obtained by applying the function to each element of the list. So map (*2) [1,2,3] == [2,4,6]. (Notice how I used partial application there?)

You may wish to take a moment to consider how you would implement map.

Without using pattern matching, we could implement map like this:

map f xs =
    if null xs
    then []
    else (f (head xs)) : (map f (tail xs))

(The : operator is explained in note 4.)

But this is a bit clunky, and we can do a lot better by using pattern matching. Think about how to define map recursively:

  • The map of an empty list is just an empty list.
  • The map of a list is the updated head of the list plus the map of the tail of the list.

map f [] = []
map f (x:xs) = (f x) : (map f xs)

So much nicer!

This sort of design pattern comes in handy when you’re operating over data structures. To take a real-world example, I recently wrote a function that operated over an intersection of three values:

scoreIntersection (Intersection x y z) = whatever

I could pass in the Intersection type and pattern matching made it easy to pull out the three values into the variables x, y, and z.

Summary

Haskell has a number of language features that appear strange to someone with an imperative-programming background. But not only do these language features allow the programmer to write more concise and elegant functions, they teach lessons that you can carry with you when you use more imperative programming languages.

Many modern languages partially or fully support some of these features; Python, for example, supports lazy evaluation with generator expressions, and it’s possible to implement pattern matching in Lisp. And I’m excited to see that Rust supports sophisticated pattern matching much like Haskell.

If you want to learn more about Haskell, check out Learn You a Haskell for Great Good! Or if you’ve already dipped your toes into the Haskell ocean and want to go for a dive, Real World Haskell can teach you how to use Haskell to build real programs.

P.S. This site is relatively new, so if you see a mistake, please leave a comment and I’ll try and fix it.

Notes

  1. I realize this is a terrible way to implement a spam classifier. 

  2. Note on Haskell Syntax

    The $ operator groups expressions, so

    length $ filter blacklisted $ words text
    

    is equivalent to

    length (filter blacklisted (words text))
    

    The words function splits a string into a list of words. words text is roughly equivalent to Python’s text.split()

  3. Well, technically nothing, because every Turing-complete language is computationally equivalent. Anything that can be written in Python can also be written in assembly; that doesn’t mean you want to write everything in assembly. 

  4. The : operator is a cons–given a value and a list, it prepends the value to the head of the list. 


Utilitarianism Resources

This is a collection of some of my favorite resources on utilitarianism.

Light Books
Practical Ethics, Peter Singer
Animal Liberation, Peter Singer
The Life You Can Save, Peter Singer

Heavy Books
The Methods of Ethics, Henry Sidgwick
Utilitarianism, John Stuart Mill
The Principles of Morals and Legislation, Jeremy Bentham

Introductions
Introduction to Utilitarianism, William MacAskill
Consequentialism FAQ
Utilitarian FAQ
Common Criticisms of Utilitarianism

Organizations
80,000 Hours
Effective Animal Activism
GiveWell
The High Impact Network
Giving What We Can

Collections
Utilitarian Philosophers
Utilitarianism: past, present and future
Recommended Reading

Communities
Felicifia
Effective Altruists on Facebook
Less Wrong

Essays and Blogs
Essays on Reducing Suffering
The Effective Altruism Blog
Reflective Disequilibrium
Everyday Utilitarian
Measuring Shadows
Philosophy, et cetera
Reducing Suffering


Voting to Do the Most Good

If you are a United States citizen and you want to do as much good as possible with your vote, then how should you use it? (These principles apply outside the US as well, but my analysis focuses on US elections.)

Expected Value of Voting

For those who care about maximizing the welfare of society, the importance of voting increases as the population increases. Below is the mathematical justification for this claim. These calculations assume that you know the correct person to vote for. If you wish to avoid math, you can skip to the next section.


Free Will, Moral Responsibility, and Justice

Free will is an illusion [1]. What does this say about moral responsibility?

If the purpose of morality is to maximize the happiness of sentient beings, as I often claim, then whether free will exists is irrelevant. More generally, free will does not matter for any morality that focuses on the consequences of actions rather than their motives.

The traditional argument goes: if free will is an illusion, then we are not in control of our own actions, which means we cannot be held responsible for them. So it doesn’t matter what actions we take, right? We can run around killing people, right? Well, no. Our actions still matter just as much as they ever did: they affect the outside world whether they are the product of free will or the result of deterministic processes. Others are still affected by our actions. We still feel emotions, even if those emotions arise deterministically.

