You don’t get to know what you’re fighting for

My cliffs notes from You don’t get to know what you’re fighting for:

Knowing what specifically you are working towards is hard.

It is easy to specify what you want negatively — to list the outcomes you don’t want — but that only tells you what you don’t want, not what you do. Stating positively what you want is much harder.

It is also often true that our goals shift as we pursue them. For example, “help the poor” can turn into “get rich to donate a lot.”

With our current understanding of philosophy, it is highly likely, if not inevitable, that even simple altruistic goals will shift as we progress toward them or as we investigate the philosophies that underpin them.

[This shakiness of the philosophies Nate points at to question several example goals is why I’ve limited the time I invest in philosophical understanding. It has always seemed that the further down philosophical trails I go, the less they help with anything.]

Even when you think you know what you’re fighting for, there’s no guarantee you are right, and since there’s no clearly established objective morality, there’s likely to be an argument against your stance.

There is no objective morality writ on a tablet between the galaxies. There are no objective facts about what “actually matters.” But that’s because “mattering” isn’t a property of the universe. It’s a property of a person.

There are facts about what we care about, but they aren’t facts about the stars. They are facts about us.

It is also possible that our brains are lying to us about our intentions. We don’t have enough introspective capability to truly understand our motivations, which are often grounded in the hidden and arbitrary rules of natural selection.

My values were built by dumb processes smashing time and a savannah into a bunch of monkey generations, and I don’t entirely approve of all of the result, but the result is also where my approver comes from. My appreciation of beauty, my sense of wonder, and my capacity to love, all came from this process.

I’m not saying my values are dumb; I’m saying you shouldn’t expect them to be simple.

We’re a thousand shards of desire forged of coincidence and circumstance and death and time. It would be really surprising if there were some short, simple description of our values. […]

Don’t get me wrong, our values are not inscruitable [sic]. They are not inherently unknowable. If we survive long enough, it seems likely that we’ll eventually map them out.

We don’t need to know exactly what motivates us or what we want; “The world’s in bad enough shape that you don’t need to.” We can have something to fight for without knowing exactly what it is. We have a good enough idea of which direction to go, and relying on what we are pretty sure of is enough to decide what to do next.

This post is part of the thread: Replacing Guilt Cliffs Notes – an ongoing story on this site. View the thread timeline for more context on this post.

Let altruism be altruism

My cliffs notes from The Stamp Collector:

This is an argument against nihilism, the belief that nothing does or can matter. Dispensing with nihilism is necessary to make altruism accessible as a source of intrinsic motivation, to offset listless guilt — the guilt of doing nothing when it seems like there should be something more to life.

People will tell you that humans always and only ever do what brings them pleasure. People will tell you that there is no such thing as altruism, that people only ever do what they want to.

People will tell you that, because we’re trapped inside our heads, we only ever get to care about things inside our heads, such as our own wants and desires.

But I have a message for you: You can, in fact, care about the outer world.

And you can steer it, too. If you want to.

The evidence offered is the analogy of the stamp collector — a robot designed to take actions that increase the number of stamps in its inventory — together with the analogy of human altruists working on the same principle.

“Naïve philosophers” fall into the homunculus fallacy when attempting to understand the robot. They refuse to see what it is in fact doing: taking the actions that, given its best available information, result in the outcome it seeks. Differentiating between its internal representation of its inventory and its actual inventory is fallacious, because it has no more privileged access to that internal representation than it has to the inventory itself.

Similarly, the naïve philosophers mistake human altruistic behavior for pleasure maximization. But behaviors such as giving away all of your money to charity, or jumping in front of a moving car to save a child, stress that theory to the breaking point.

We can and do choose to care about things outside of our heads. Don’t get bogged down in whether altruism is real; just accept that it’s accessible to you.
