Why care for the world when it is so bad?

My cliffs notes from Caring about something larger than yourself:

People are terrible and annoying, so it doesn’t go without saying that we should care about each other — and many people simply don’t, questioning why they should. (I suspect I care in large part because I was taught to intellectually, and because I feel a compelling emotional connection with people striving against oppression. I’ve never really thought through whether the intellectual case I was taught is rationally defensible.)

Some people confuse feelings with caring. Caring broadly for the remote masses can be dispassionate. Caring is “about not having the emotional compulsion and doing the right thing anyway”. But you can care for strangers just as much as you do for friends.

My default settings, roughly speaking, make it easy for me to feel for my friends and to hate my competitors. But my default settings also come with a sense of aesthetics that prefers fairness, that prefers compassion. My default feelings are strong for those who are close to me, and my default sensibilities are annoyed that it’s not possible to feel strongly for people who could have been close to me. My default feelings are negative towards people antagonizing me, and my default sensibilities are sad that we didn’t meet in a different context, sad that it’s so hard for humans to communicate their point of view.

My point is, I surely don’t lack the capacity to feel frustration with fools, but I also have a quiet sense of aesthetics and fairness which does not approve of this frustration. There is a tension there.

I choose to resolve the tension in favor of the people rather than the feelings.

Why? Because when I reflect upon the source of the feelings, I find arbitrary evolutionary settings that I don’t endorse, but when I reflect upon the sense of aesthetics, I find something that goes straight to the core of what I value.

What we feel helped our genes navigate evolution; it isn’t about what’s deeply good or true.

So I look upon myself, and I see that I am constructed to both (a) care more about the people close to me, for whom I have deeper feelings, and (b) care about fairness, impartiality, and aesthetics. I look upon myself and I see that I both care more about close friends, and disapprove of any state of affairs in which I care more for some people due to a trivial coincidence of time and space.

We can examine this deeply and conclude that the feelings are tribal vestiges while the aesthetics reflect deep values; the aesthetics win the argument. We are capable of acting on realizations like this to change not necessarily what we feel, but how we act on what we feel.

Thus we can choose to care even for others we don’t feel for.

For those unswayed, who still see too much bad in humanity to be mustered to care, an uneasy thought experiment: Consider how much easier it is to sympathize with a mistreated animal (for example, a dog) than with a mistreated human. Does that default setting for our feelings seem correct?

(Nate quotes a description of the “Machiavellian Intelligence” hypothesis from Sue Blackmore’s “The Meme Machine”, a theory that explains our brains’ makeup as the result of a spiraling biological ‘arms race’ of power through social and political maneuvering.)

I’ve already quoted excessively from the source essay, but this is too good to pass up:

I mean, look at us. Humans are the sort of creature that sees lightning and postulates an angry sky-god, because angry sky-gods seem much more plausible to us than Maxwell’s equations — this despite the fact that Maxwell’s equations are far simpler to describe (by a mathematical standard) than a generally intelligent sky-god. Think about it: we can write down Maxwell’s equations in four lines, and we can’t yet describe how a general intelligence works. Thor feels easier for us to understand, but only because we have so much built-in hardware for psychologically modeling humans.
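For reference, the four lines the quote gestures at are Maxwell’s equations in differential form (SI units):

$$\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0} \qquad \nabla \cdot \mathbf{B} = 0$$

$$\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t} \qquad \nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}$$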

We see in other humans suspicious agents plotting against us. When they lash out at us, we lash back. We see cute puppies as innocent, and we sympathize with them even when they are angry.

Which is why, every so often, I take a mental step back and try to see the other humans around me, not as humans, but as innocent animals full of wonder, exploring an environment they can never fully understand, following the flows of their lives.

Other analogies for humans: Angels who missed their shot at heaven. Monkeys struggling to be comfortable outside the comfort of the trees.

Why care about others? Separate your feelings from the aesthetic judgements that are in tension with them, follow where that leads, and choose to care (not feel) as seems right to you.

Incidentally, this is from the introduction:

As with previous posts, don’t treat this as a sermon about why you should care about things that are larger than yourself; treat it as a reminder that you can, if you want to.

Nate assumes this posture throughout this essay series. Arguing that the reader should take his advice would be antithetical to a part of his argument (one that’s coming soon). All he needs to do is point out that the reader can choose to think a certain way. That’s compelling enough.


You’re allowed to fight for something

My cliffs notes from You’re allowed to fight for something:

This series is all about removing guilt. But certain forms of guilt are easier to remove; others are easier to first shift to these easier-to-remove forms. And to that end:

’Tis better to feel guilty for a specific reason — say, playing video games all day instead of practicing resistance — than to feel “listless” guilt for no particular reason: a vague guilt that there must be something you should be doing but aren’t.

The listless guilt comes from intuitively knowing there could be something more — more good that you’d like to do, for non-selfish reasons. The nihilist trap convinces some people that it is impossible to want to take an altruistic action simply because you care about others, without any selfish reason; but listless guilt is the disproof of this.

A thought experiment: Imagine someone offered you a deal to shoot your pet, erase your memory of the pet (infallibly; they would also alter the memories of those around you and any traces in your environment), and give you a dollar. You don’t take the dollar. Why not? That’s proof you can care for something outside yourself, even when there’s no selfish motivation.

And you are allowed to want something for non-selfish reasons, without needing to understand or explain.

To shake the listless guilt, ask what you’d like to be different in the world, and look for ideas that compel you to make a difference if you can.

The listless guilt is a guilt about not doing anything. To remove it, we must first turn it into a guilt about not doing something in particular.


Let altruism be altruism

My cliffs notes from The Stamp Collector:

This is an argument against nihilism, the belief that nothing does or can matter. Dispensing with nihilism is necessary to make altruism accessible as a source of intrinsic motivation, to offset listless guilt — the guilt of doing nothing when it seems like there should be something more to life.

People will tell you that humans always and only ever do what brings them pleasure. People will tell you that there is no such thing as altruism, that people only ever do what they want to.

People will tell you that, because we’re trapped inside our heads, we only ever get to care about things inside our heads, such as our own wants and desires.

But I have a message for you: You can, in fact, care about the outer world.

And you can steer it, too. If you want to.

The evidence for this is the analogy of the stamp collector — a robot designed to take actions that increase the number of stamps in its inventory — and the analogy of human altruists working on the same principle.

“Naïve philosophers” fall into the homunculus fallacy when attempting to understand the robot. They refuse to see what it is in fact doing: taking the actions that, on its best available information, lead to the outcome it seeks. Differentiating between its internal representation of its inventory and its actual inventory is fallacious, because the robot has no more meaningful access to its internal representation of the inventory than to the inventory itself.
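To make the structure of the analogy concrete, here is a minimal sketch of such an agent (my own illustration, not code from the essay; the names predict_stamps and choose_action are hypothetical). The decision criterion refers only to a prediction about the world; there is no separate “internal inventory” being optimized.

```python
# Toy stamp collector: ranks candidate actions by the predicted number
# of stamps in the world, using its best available world model.

def predict_stamps(world, action):
    """Predicted stamp count in the world if `action` is taken."""
    return world["stamps"] + world["yields"].get(action, 0)

def choose_action(world, actions):
    # The criterion below is a prediction about the *world*, not a
    # reading of some internal pleasure/satisfaction register.
    return max(actions, key=lambda a: predict_stamps(world, a))

world = {"stamps": 10, "yields": {"buy": 5, "trade": 3, "wait": 0}}
print(choose_action(world, ["buy", "trade", "wait"]))  # -> buy
```

The internal representation is just the machinery the agent uses to steer; the quantity it steers toward lives outside it.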

Similarly, the naïve philosophers mistake human altruistic behavior for pleasure maximization. But behaviors such as giving away all of your money to charity, or jumping in front of a moving car to save a child, stress that theory to the breaking point.

We can and do choose to care about things outside of our heads. Don’t get bogged down in whether altruism is real; just accept that it’s accessible to you.
