Trillions Upon Trillions of Viruses Fall From the Sky Each Day. Most of them aren’t bad for us. They might be responsible for a majority of our genome. I had no idea.
You don’t get to know what you’re fighting for
My cliffs notes from You don’t get to know what you’re fighting for:
Knowing what specifically you are working towards is hard.
It is easy to specify what you want negatively — which outcomes you don’t want — but that only tells you what you don’t want, not what you do want. It’s much harder to positively state what you want.
It’s also often true that our goals shift as we pursue them. For example, “help the poor” can turn into “get rich to donate a lot”.
With our current understanding of philosophy, it is highly likely, if not inevitable, that even simple altruistic goals will shift as we progress toward them or as we investigate the philosophies that underpin them.
[This shakiness of the philosophies Nate points to in questioning several example goals is why I’ve limited the time I spend investing in philosophical understanding. It has always seemed that the further down philosophical trails I go, the less it helps with anything.]
Even when you think you know what you’re fighting for, there’s no guarantee you are right, and since there’s no clearly established objective morality, there’s likely to be an argument against your stance.
There is no objective morality writ on a tablet between the galaxies. There are no objective facts about what “actually matters.” But that’s because “mattering” isn’t a property of the universe. It’s a property of a person.
There are facts about what we care about, but they aren’t facts about the stars. They are facts about us.
It is also possible that our brains are lying to us about our intentions. We don’t have enough introspective capability to truly understand our motivations, which are often grounded in hidden and arbitrary rules of natural selection.
My values were built by dumb processes smashing time and a savannah into a bunch of monkey generations, and I don’t entirely approve of all of the result, but the result is also where my approver comes from. My appreciation of beauty, my sense of wonder, and my capacity to love, all came from this process.
I’m not saying my values are dumb; I’m saying you shouldn’t expect them to be simple.
We’re a thousand shards of desire forged of coincidence and circumstance and death and time. It would be really surprising if there were some short, simple description of our values. […]
Don’t get me wrong, our values are not inscruitable [sic]. They are not inherently unknowable. If we survive long enough, it seems likely that we’ll eventually map them out.
We don’t need to know exactly what motivates us or what we want; “The world’s in bad enough shape that you don’t need to.” We can have something to fight for without knowing exactly what it is. We have a good enough idea of which direction to go, and relying on what we’re pretty sure of is enough to decide what to do next.
This post is part of the thread: Replacing Guilt Cliffs Notes – an ongoing story on this site. View the thread timeline for more context on this post.
Why care for the world when it is so bad?
My cliffs notes from Caring about something larger than yourself:
People are terrible and annoying, so it doesn’t go without saying that we should care about each other — and many people simply don’t, questioning why they should. (I suspect I care in large part because I was taught to, intellectually, and because I feel a compelling emotional connection with people striving against oppression. I’ve never really thought through whether the intellectual concept I was taught is rationally defensible.)
Some people confuse feelings with caring. Caring broadly for the remote masses can be dispassionate; caring is “about not having the emotional compulsion and doing the right thing anyway”. You can care for strangers just as much as you do for friends.
My default settings, roughly speaking, make it easy for me to feel for my friends and hate at my competitors. But my default settings also come with a sense of aesthetics that prefers fairness, that prefers compassion. My default feelings are strong for those who are close to me, and my default sensibilities are annoyed that it’s not possible to feel strongly for people who could have been close to me. My default feelings are negative towards people antagonizing me, and my default sensibilities are sad that we didn’t meet in a different context, sad that it’s so hard for humans to communicate their point of view.
My point is, I surely don’t lack the capacity to feel frustration with fools, but I also have a quiet sense of aesthetics and fairness which does not approve of this frustration. There is a tension there.
I choose to resolve the tension in favor of the people rather than the feelings.
Why? Because when I reflect upon the source of the feelings, I find arbitrary evolutionary settings that I don’t endorse, but when I reflect upon the sense of aesthetics, I find something that goes straight to the core of what I value.
What we feel helped our genes navigate evolution; it isn’t about what’s deeply good or true.
So I look upon myself, and I see that I am constructed to both (a) care more about the people close to me, that I have deeper feelings for, and (b) care about fairness, impartiality, and aesthetics. I look upon myself and I see that I both care more about close friends, and disapprove of any state of affairs in which I care more for some people due to a trivial coincidence of time and space.
We can examine this deeply and conclude that the feelings are tribal vestiges while the aesthetics reflect deep values, and so the aesthetics win the argument. We are capable of acting on realizations like this to change, not necessarily what we feel, but how we act on what we feel.
Thus we can choose to care even for others we don’t feel for.
For those unswayed, who still see too much bad in humanity to be mustered to care, an uneasy thought experiment: consider how much easier it is to sympathize with a mistreated animal (for example, a dog) than with a mistreated human. Does that default setting for our feelings seem correct?
(Nate quotes a description of the “Machiavellian Intelligence” hypothesis from Sue Blackmore’s The Meme Machine, a theory that explains our brains’ makeup as the result of a spiraling biological arms race of power gained through social and political maneuvering.)
I’ve already quoted excessively from the source essay, but this is too good to pass up:
I mean, look at us. Humans are the sort of creature that sees lightning and postulates an angry sky-god, because angry sky-gods seem much more plausible to us than Maxwell’s equations — this despite the fact that Maxwell’s equations are far simpler to describe (by a mathematical standard) than a generally intelligent sky-god. Think about it: we can write down Maxwell’s equations in four lines, and we can’t yet describe how a general intelligence works. Thor feels easier for us to understand, but only because we have so much built-in hardware for psychologically modeling humans.
We see in other humans suspicious agents plotting against us; when they lash out at us, we lash back. We see cute puppies as innocent, and we sympathize with them even when they are angry.
Which is why, every so often, I take a mental step back and try to see the other humans around me, not as humans, but as innocent animals full of wonder, exploring an environment they can never fully understand, following the flows of their lives.
Other analogies for humans: Angels who missed their shot at heaven. Monkeys struggling to be comfortable out of the comfort of the trees.
Why care about others? Separate the feelings and the aesthetic judgements that are in tension, follow where that leads, and choose to care (not necessarily feel) as seems right to you.
Incidentally, this is from the introduction:
As with previous posts, don’t treat this as a sermon about why you should care about things that are larger than yourself; treat it as a reminder that you can, if you want to.
Nate assumes this posture throughout the essay series. Arguing that the reader should take his advice would be antithetical to a part of his argument (one that’s coming soon). All he needs to do is point out that the reader can choose to think a certain way. That’s compelling enough.
This post is part of the thread: Replacing Guilt Cliffs Notes – an ongoing story on this site. View the thread timeline for more context on this post.