EA and AI Safety's Humility Problem

Note: The anecdotes here are not meant to represent any group or movement in its entirety; they are included to give concrete context for particular situations.

I believe many EAs are too confident in the beliefs they arrive at through reasoning built on their own lived experience, which (in my opinion) contributes to the public’s perception of the community as cultish.

A person’s experiences make up 100% of their perception of the world, despite representing only 0.000000001% of what has actually occurred. Yet many EAs seem strongly convinced of certain beliefs because they build on that 0.000000001% with reason and logic. Some of these convictions may well be true regardless of the limited foundational evidence, but I believe many are likely mistaken—and, when shared so readily, they contribute to public perception problems.

The other day, I was talking to a friend who is heavily involved in the AI safety movement—someone who is convinced of the risks and doing considerable work to help mitigate them. They told me about a time when they were speaking with a researcher they thought highly of and agreed with on many notable points. However, the conversation turned unpleasant when the researcher began discussing acausal trade simulations as if this were almost certainly what was happening with humans. This casualness toward such an out-there concept ultimately made my friend question their beliefs about AI safety quite heavily: if these people can so readily believe such "crazy" ideas, should they really be agreeing with them on other topics like AI safety? I empathized with that questioning, but I didn't see acausal trade simulations as far enough out there to concern me.

Last night, however, I was talking with a sizable group of EAs and safety researchers, and the conversation culminated in around half the group agreeing that there are implementations in which intelligence-based eugenics is a good idea. That made me deeply question my place in the AI safety field.

I believe scenarios like these stem from a lack of humility in the EA space and are ultimately damaging to public perception (and, as a result, EA’s impact). I believe reasoning can be incredibly impactful when working on the margin, but it generalizes very poorly to more “final,” long-term scenarios. Yet I find many people quite confident in their ability to comprehend very long-term or vastly alternative worlds—so much so that they have a propensity to share these beliefs with a high level of conviction. I believe these overconfident assertions greatly diminish EA’s credibility on other topics in the public’s eyes.

In conclusion, I believe a considerable part of the EA community lacks sufficient humility to acknowledge that conclusions reached through logic—but built on limited experience and a limited ability to comprehend abstract worlds—can often be mistaken, and that overconfidence in these beliefs can damage the community's reputation. If you care about impact, you should care about people's perception, which entails some level of humility and empathy toward others.


← Back to Blog