Not-Discovered-Here Syndrome
An investor is considering putting her money into a mutual fund. “I will just invest some money for the next six months,” she says, “and see how it goes.”
A philanthropist is considering donating to a charity. “I will donate some money and see how it goes.”
Harvard University is considering whether SAT scores are all that important for admissions. “Let’s make SAT scores optional and see what happens.”
A child climbs to the top of a slide and is about to jump off the edge. “Don’t jump off of that,” his mom says, “you’ll get hurt.” He jumps off the slide. He gets hurt.
Not-invented-here syndrome is when an organization unnecessarily re-invents products or tools that already exist elsewhere. The cousin of this phenomenon is not-discovered-here syndrome, in which people refuse to consider evidence unless they’ve collected it themselves.
“A wise man learns from his mistakes, but a wiser man learns from the mistakes of others.” Not-discovered-here syndrome is what happens when you insist on making mistakes for yourself.
Institutional investors like to “try out” new investments for six months or a year. That doesn’t make any sense. Whatever you learn in the six months of holding the fund, you could’ve learned by looking at a price chart of the prior six months. (In fact you probably could’ve learned a lot more, because most funds have more than six months of history.) Or you could keep an eye on the fund for the next six months without investing. Putting money into the fund doesn’t teach you anything.
Harvard made the SAT optional in 2020. Prior to 2020, there already existed a mountain of data showing that a student’s SAT score is a good predictor of college success. It was predictable in advance that if colleges stopped requiring the SAT, they would do a worse job of identifying good candidates. But they ignored the data and learned that lesson the hard way instead.
I had a similar criticism of the book Outlive; I decided not to include it in my book review, but I’ll mention it here since it’s on-topic. In the book, Peter Attia is a big proponent of continuous glucose monitors (CGMs), which show how your blood sugar goes up after you eat.
This sort of confuses me. You can easily find websites that list the glycemic loads of different foods. I can tell you what will happen if I eat white flour: my blood sugar will go up a lot. I know because white flour has a high glycemic load. I can also tell you that if I eat walnuts, my blood sugar will only go up a little bit. A CGM doesn’t tell me anything I don’t already know.
Wearing a CGM is a psychological tool that works for some people, but that’s kind of my point: those people have not-discovered-here syndrome. It’s not enough for me to know which foods have high glycemic load; I have to wear a monitor showing that, yes, this food does raise my blood sugar.
Sometimes people really do make better decisions when they collect the evidence themselves. But there’s no inherent reason why it has to be that way. It feels like there ought to be some way to get people to pay attention to evidence that they didn’t personally discover, but I don’t know how.