Archive

  • Where I Am Donating in 2024
  • Outlive: A Critical Review
  • Obvious Investing Facts
  • Should Earners-to-Give Work at Startups Instead of Big Companies?
  • A Comparison of Donor-Advised Fund Providers
  • Investors Can Simulate Leverage via Concentrated Stock Selection
  • Asset Allocation and Leverage for Altruists with Constraints
  • The Risk of Concentrating Wealth in a Single Asset
  • Estimating the Philanthropic Discount Rate
  • All Expected Utility Distributions Have One of Two Big Problems
  • A Complete Quantitative Model for Cause Selection
  • GiveWell's Charity Recommendations Require Taking a Controversial Stance on Population Ethics
  • My Cause Selection 2015
  • Do Protests Work? A Critical Review
  • Charity Cost-Effectiveness Really Does Follow a Power Law
  • "You can't calculate the expected utility of a communist revolution"
  • Thoughts on My Donation Process
  • Where I Am Donating in 2024
  • Where Some People Donated in 2017
  • Good Ventures/Open Phil Should Make Riskier Grants
  • Where I Am Donating in 2016
  • Is Preventing Global Poverty Good?
  • Why I'm Prioritizing Animal-Focused Values Spreading
  • Altruistic Organizations Should Consider Counterfactuals When Hiring
  • Why the Open Philanthropy Project Should Prioritize Wild Animal Suffering
  • Evaluation Frameworks (or: When Importance / Neglectedness / Tractability Doesn't Apply)
  • How Should a Large Donor Prioritize Cause Areas?
  • What Would Change My Mind About Where to Donate
  • Are GiveWell Top Charities Too Speculative?
  • More on REG's Room for More Funding
  • Response to the Global Priorities Project on Human and Animal Interventions
  • Cause Prioritization Research I Would Like to See
  • Excessive Optimism About Far Future Causes
  • My Cause Selection: Michael Dickens
  • Charities I Would Like to See
  • On Values Spreading
  • Some Writings on Cause Selection
  • Is Preventing Human Extinction Good?
  • Why would AI companies use human-level AI to do alignment research?
  • Retroactive If-Then Commitments
  • Where I Am Donating in 2024
  • Existential Risk Reduction Is Naive (And That's a Good Thing)
  • How Do AI Timelines Affect Giving Now vs. Later?
  • "Disappointing Futures" Might Be As Important As Existential Risks
  • Giving Now vs. Later for Existential Risk: An Initial Approach
  • Should We Prioritize Long-Term Existential Risk?
  • The Importance of Unknown Existential Risks
  • Altruistic Investors Care About Other Altruists' Portfolios
  • Should Earners-to-Give Work at Startups Instead of Big Companies?
  • Low-Hanging (Monetary) Fruit for Wealthy EAs
  • Investment Strategies for Donor-Advised Funds
  • A Comparison of Donor-Advised Fund Providers
  • Asset Allocation and Leverage for Altruists with Constraints
  • Uncorrelated Investments for Altruists
  • Donor-Advised Funds vs. Taxable Accounts for Patient Donors
  • The Risk of Concentrating Wealth in a Single Asset
  • Do Theoretical Models Accurately Predict Optimal Leverage?
  • How Much Leverage Should Altruists Use?
  • Should Altruists Leverage Donations?
  • Why Effective Altruists Should Use a Robo-Advisor
  • Return stacked funds: A new way to get leverage
  • A 401(k) Sometimes Isn't Worth It
  • Index Funds That Vote Are Active, Not Passive
  • How I Estimate Future Investment Returns
  • Obvious Investing Facts
  • More Evidence on Concentrated Stock Selection
  • The True Cost of Leveraged ETFs
  • Investors Can Simulate Leverage via Concentrated Stock Selection
  • Newcomb's Problem and Efficient Markets
  • Do Investors Put Too Much Stock in the US?
  • Philanthropists Probably Shouldn't Mission-Hedge AI Progress
  • A Preliminary Model of Mission-Correlated Investing
  • Quantifying the Far Future Effects of Interventions
  • A Complete Quantitative Model for Cause Selection
  • On Priors
  • Preventing Human Extinction, Now With Numbers!
  • Expected Value Estimates You Can (Maybe) Take Literally
  • How Valuable Are GiveWell Research Analysts?
  • Should Patient Philanthropists Invest Differently?
  • Future Funding/Talent/Capacity Constraints Matter, Too
  • How Do AI Timelines Affect Giving Now vs. Later?
  • Metaculus Questions Suggest Money Will Do More Good in the Future
  • Reverse-Engineering the Philanthropic Discount Rate
  • Giving Now vs. Later for Existential Risk: An Initial Approach
  • Estimating the Philanthropic Discount Rate
  • Correction on Giving Now vs. Later
  • Should Global Poverty Donors Give Now or Later?
  • Why Do Small Donors Give Now, But Large Donors Give Later?
  • Philanthropists Probably Shouldn't Mission-Hedge AI Progress
  • A Preliminary Model of Mission-Correlated Investing
  • Mission Hedgers Want to Hedge Quantity, Not Price
  • Mission Hedging via Momentum Investing
  • Let's take a moment to marvel at how bad the original USDA food pyramid was
  • Can you maintain lean mass in a calorie deficit?
  • The Triple-Interaction-Effects Argument
  • I was probably wrong about HIIT and VO2max
  • The 7 Best High-Protein Breakfast Cereals
  • Outlive: A Critical Review
  • Protein Quality (DIAAS) Calculator
  • Continuing My Caffeine Self-Experiment
  • Notes on Eat, Drink, and Be Healthy
  • What's the Healthiest Body Composition?
  • What's the Healthiest BMI?
  • Caffeine Cycling Self-Experiment
  • Does Caffeine Stop Working?
  • Avoiding Caffeine Tolerance
  • The Triple-Interaction-Effects Argument
  • There Are Three Kinds of "No Evidence"
  • Subjects in Psych Studies Are More Rational Than Psychologists
  • My submission for Worst Argument In The World
  • How well did Scott Alexander's list of social science findings hold up?
  • Two Types of Scientific Misconceptions You Can Easily Disprove
  • High School Science Experiments
  • Why We Can See Stars
  • Utilitarianism Isn't About Doing Bad Things for the Greater Good. It's About Doing the Most Good
  • Two Types of Scientific Misconceptions You Can Easily Disprove
  • Are All Actions Impermissible Under Kantian Deontology?
  • All Expected Utility Distributions Have One of Two Big Problems
  • A Consciousness Decider Must Itself Be Conscious
  • Observations on Consciousness
  • There Are Three Kinds of "No Evidence"
  • My submission for Worst Argument In The World
  • Just because a number is a rounding error doesn't mean it's not important
  • Some Things I've Changed My Mind On
  • Explicit Bayesian Reasoning: Don't Give Up So Easily
  • Do I Read My Own Citations?
  • Argumentative Tactics I Would Like to Stop Seeing
  • New Page: Convert Credences into a Bet
  • You can now read my reading notes
  • I have whatever the opposite of a placebo effect is
  • The United States Is Weird
  • Most Theories Can't Explain Why Game of Thrones Went Downhill
  • The Review Service Vortex of Death
  • Can Good Writing Be Taught?
  • Summaries Are Important
  • My Experience Trying to Force Myself to Do Deep Work
  • What Are the Best TV Shows (According to IMDb Episode Ratings)?
  • New Comment System
  • Ideas Too Short for Essays
  • Songs of the Day
  • Robin Hanson Emulator
  • Meditations on Basic Income Guarantees
  • Haskell Is Actually Practical

Copyright © 2025 Michael Dickens. Powered by Jekyll, adapted from theme by Matt Harzewski