When I started writing this post weeks ago, I wasn't sure I was going to put it up.
See, I've been taking notes to beat up on Utilitarianism and its cousins for years now.
This sham of a moral theory appeals to nerds who love objective, universal principles but have a hard time relating to actual, real-life, flesh-and-guts human beings right in front of them.
The same people who will utter slogans like "I love humanity but hate people".
The kids are confused these days.
As a professional career contrarian, I don't agree a whit with Utilitarianism, Effective Altruism, consequentialism, or any other sort of moral [sic] theory that puts an arrogant geek in charge of the Good and the Right like a manager running the office.
After spending many years digging into dusty tomes where none of these Effective Altruists would dare roam, I've discovered many a magical incantation to keep them at bay.
No sooner had I blown the dust off this piece than I discovered that none other than Curtis Yarvin, reactionary political thinker turned self-help guru, had released his own Substack slam-track against the Effective Altruism hustle.
I can't say I find much to disagree with in Uncle Yarv's piece. He's nailed the major issues with the EA cult and the philosophy (such as it is) behind it.
Which made me wonder if I should bother.
I'm going to take a slightly different path here. Just a friendly chat over tea about the ethical arguments.
Remember arguments? Back when you could talk about an idea and nothing but an idea without worrying about who said it and what for?
When people ask what I think about ethics and moral theories, I answer "virtue ethics".
But – as I've spent far too many life-years writing about – what that means has almost nothing to do with what you think it means. Virtue ethics isn't about finding a theory to tell you what you ought to do. It's way deeper than that.
This article is informed by that POV, but it isn't about that.
Consider this a brow-beating of the middle-managers from HR who pretend they've got a deep understanding of the Good and the Right because objective numbers, bro.
And why would you care about this? For starters, much of the EA movement is all about a fantastical techno-future made up of lots and lots of machines... and it's our duty to make sure they all feel real good.
It's the stuff of sci-fi fantasy (in the worst way), but I can't deny there's serious storytelling potential.
If you want to write interesting characters with interesting problems, this kind of thinking can't hurt you one bit.