Wed, 20 Sep 2017 11:32:19 EDT

I wrote a post last year on two different kinds of expectations: anticipations and entitlements. I realized sometime later that there is a third, very important kind of expectation. I’ve spent a lot of time trying to find a good name for them but haven’t found one, so I’m just calling them “the third kind of expectation”. On reflection, while this is unwieldy, it is an absolutely fantastic name by the sparkly pink purple ball thing criterion.

First, a recap on the other two kinds of expectations in my model: anticipations and entitlements. An anticipation is an expectation in the predictive sense: what you think will happen, whether implicitly or explicitly. An entitlement is what you think should happen, whether implicitly or explicitly. If your anticipation is broken, you feel surprised or confused. If your entitlement is broken, you feel indignant or outraged.

I made the claim in my previous article that entitlements are in general problematic, both because they create interpersonal problems and because they’re a kind of rationalization.

But isn’t entitlement okay when…?

Since then, some people have pointed out to me that there’s an important role that entitlements play. Or more precisely, situations where an angry response may make sense. What if someone breaks a promise? Or oversteps a boundary? It’s widely believed that an experience of passionate intensity like anger is an appropriate response to having one’s boundaries violated.

I continue to think entitlements aren’t helpful, and that what you’re mostly looking for in these situations is something shaped more like this third kind of expectation.

The difference is primarily a matter of whether or not the angry person feels a sense that the violation should not have happened—that they deserved something else. These stances inhabit a belief that the past is somehow wrong, which… first of all, doesn’t make any sense, because it’s already happened. Moreover, they have a quality of closedness to them, being unwilling to let in information about why the other person might have done what they did.

What’s left when that confusion and closedness are removed, though?

A painful and beautiful response. I’ve watched a friend of mine realize how messed up his childhood was, and he got angry at how let down he was by his parents. And there was a purity to his experience, and his expression, because he wasn’t trying to deny or negate what had happened, or trying to oversimplify the situation and find a counterfactual “somehow” in which the trauma had never happened. He was just feeling the full depth of what had happened. Letting in the reality of it, and feeling angry.

He hadn’t been able to feel angry at the time, because—like many people experiencing abuse—he had made sense of the experience in part by assuming that somehow he deserved it.

You don’t have to hold “I deserved something other than this” to feel angry. But you do have to not hold “I did deserve this.”

These are different. Contradiction isn’t simple negation. “I deserve X” is different than “I don’t deserve not-X”, because the latter is compatible with “I don’t deserve anything. Nobody deserves anything. ‘Deserving’ is broken.” A view of things that is beyond rights and entitlements.

I expect I’ll write a blog post specifically breaking down “deserving” at some point, but for now I’m going to move on.

This third type of expectation isn’t about whether or not you deserve something; it’s about whether or not you can reasonably ask it of someone, or something.

A “something” example would be if you bought a product that was advertised as doing a particular thing. You can expect, in this third sense, that it will do this thing. It may be that you also have an entitlement here, to eg a refund if the product was flawed. “Satisfaction guaranteed.” Implicitly, you can also reasonably ask of the company that made the product why it didn’t work when you tried it.

[Image: a graphic illustrating the 3 kinds of expectations, summarizing this post. Anticipations are illustrated with falling dominoes, entitlements with a justice-scales symbol, and the third kind of expectation with a graphic symbolizing “commitment” from a previous post of mine.]

Commitments: a basis for the third kind of expectation

When two people get married, they make a set of vows to each other. These represent a contract that they’re making with each other, or in other words, a basis for each partner having an expectation_3 that the other partner will act in a certain way. In this sense, each partner can expect this behavior from each other—even if the vows are pretty bold and the people don’t anticipate that their partner will be able to achieve that level of performance consistently (particularly at the beginning). It’s still a standard that they can hold each other to. They’re answerable to each other, and to that.

I was counselling two friends on their relationship some time ago, and was present while they had a profound and connective conversation about the quality of relating that they each wanted in their relationship.

A few weeks later, I was talking to one of them about the utter absence of that level of performance on their partner’s part. She was upset, and was wondering if that upsetness had an entitled quality. It was this conversation that first prompted me to realize that there had to be a third kind of expectation.

It made sense for her to ask that of him: there was a shared basis on which that was wanted by both of them. This wasn’t someone who was lost in an abstract concept of what a (good) relationship should be, trying to fit that shape onto their relationship and generating entitlements in the process. This was someone who was genuinely confused: you said you wanted this, but then you disappeared.

This concept, of a third kind of expectation, is very closely related to my article on Common-Knowledge Self-Commitments. If you’re making commitments to yourself that are being held by others, then you’re establishing a basis for this third kind of expectation. You’re making yourself answerable for that level of performance; equivalently, you’re giving someone a sense of what they can ask of you.

When one’s commitment to one’s commitments is unclear

The most basic form of feedback that having common-knowledge self-commitments is intended to produce is of the form: “Hey, you’re committed to doing X, and I observe you not doing X… I thought you’d want to know that.” This happening is a win for common-knowledge self-commitments.

But sometimes, someone has a thought of the form, “Person said they were committed to doing X, but they’re not doing X, and… I kind of think they maybe don’t want to know, because I kind of think maybe they’re not actually committed…”

What then?

Well, speaking as someone who has made many bold commitments and several times failed to act committed… I can tell you!

A few years ago, I started talking about being committed to taking responsibility for humanity’s survival and thrival. Several years later, I still hadn’t really done anything towards that end in particular, but I was still talking about it. And some of my friends were kind enough to sharply call me out on this. Essentially what they were communicating was something to the effect of “It doesn’t feel like you’re on the team [of people actually working on this] so we can’t actually give you feedback or advice as a member of the team, but you keep saying you’re on the team. Cut it out.”

There was a fierce anger to this, and reasonably so! Not only was I not doing what I was committed to, I was undermining the meaning of such a commitment by continuing to assert that I was committed. Such a cancer does not a viable superorganism make.

In response to this intervention—this challenge—I started seriously reflecting and doing internal work to align my sense of priorities, and I think that I’m now much more pointed towards this, not just hypothetically. Would probably be worth checking in with other people again too.

My other major stated commitment is to a new kind of post-judgment, post-blame, post-entitlement mindset. This commitment is also primarily known within the context of a group of people who are similarly committed. And sometimes I’ve done things that have called my commitment to this into question.

It’s not merely failing at the thing itself: everyone who has joined the project in the time I’ve known it has had hundreds of judgments, blames, etc come up over the years, as we’re learning this new way of thinking. This is expected (in the anticipation sense).

It’s… something else. Something that’s subtle and hard to point at—I know this because I’ve tried, twice, to write directly about it. Text that would have gone right here, but was instead scrapped, due to being misleading or confused.

At any rate, the point is that there have been a few times when Jean (who is spearheading this whole post-judgment project) has had some sense that she can’t rely on my commitment to it.

As with the previous example, giving feedback about this isn’t easy or smooth, because the very nature of the situation is that the would-be giver of feedback is speaking to someone with unclear commitments, and therefore the speaker needs to figure out how to frame the message so that it can land clearly in an unclear landscape.

With Jean, and me, this has included a fierce protectiveness on her part for the integrity of the work as she sees it. It’s not anger, per se, but it has a similar kind of passionate, care-driven energy.

And that energy invites me to embody my own sense of caring for the integrity of the work, while simultaneously recognizing the threat that my ambivalence would be if it were to continue operating in relation to the project. This creates a challenge in balancing

  • my sense of desire to be part of the project, with
  • my sense of desire to protect it from myself

This challenge isn’t a problem, however—it’s something I necessarily need to lean into, in order to create an integrated commitment/desire to doing this work.

These conversations have been immensely challenging to experience and to integrate, and they’ve been really valuable, for me individually and for the system as a whole. I, and the system, are still in the process of integrating the most recent (and most intense) conversation of this nature, and me writing and sharing this blog post is part of that. I have a sense now that I can anticipate more consistent alignment on my part, but in order for others to anticipate that, they’ll need to build a new sense based on experiences of me.

And these conversations were made possible by this third type of expectation. If someone in Jean’s position had anticipated from the outset that I would operate out of this new mindset 100% of the time, that would have been naïve and inaccurate: if it were that easy, we’d be done already. If someone had felt entitled to me operating out of this new mindset 100% of the time, that would actually make it much harder to give clear feedback, because now it’s about them and their entitlement, not about my performance. That would get in the way.

Summary: 3 kinds of expectations

My original concept for this post didn’t foresee the intense section above. But writing is one way that I make sense of my experiences, and so it doesn’t surprise me that it made its way in there. I think those experiences help to powerfully illustrate the concept, although it’s important to remember that it can show up in much less extreme situations.

I intend for this article to act as a reference for this model, so in service of that, here’s a summary:

The simplest kind of expectation is an anticipation: what you think will happen. If it doesn’t, you get surprised or confused. This leads to a feeling of curiosity and possibly an investigation into why things happened differently from expected.

Another common kind of expectation is an entitlement: what you think should happen. If it doesn’t, you get indignant or outraged. This leads to judgment and potentially punishment (or threat of punishment). (Sometimes that is self-blame and self-punishment.)

Then there’s the third kind of expectation: what you can ask of someone, based on what they’ve committed to or promised. What they’re answerable to. If something unexpected happens, you feel awareness, and possibly also passion or even anger to the extent that the commitment matters to you. This leads to an inquiry into the nature of the commitment, and feedback based on your experience.

[Image: a graphic illustrating the 3 kinds of expectations, summarizing this post. Anticipations are illustrated with falling dominoes, entitlements with a justice-scales symbol, and the third kind of expectation with a graphic symbolizing “commitment” from a previous post of mine.]

Just because someone promises you something doesn’t make you entitled to it, in any moral sense. I mean, go ahead and feel entitled if you want. I’m just making the case that the majority of the value of entitlement is actually contained in the third form of expectation, and once you have that, you can let go of entitlement.

If you’re encountering a challenge disambiguating these in your own experience, that’s okay! The distinction is tricky, because it’s based on shifting some assumptions that have underpinned a lot of language and thinking for centuries if not millennia.

If you’re encountering a challenge disambiguating these conceptually, then comment below and I’ll try to explain it more clearly 🙂


Wed, 20 Sep 2017 11:09:42 EDT

I.

I always wanted to meditate more, but never really got around to it. And (I thought) I had an unimpeachable excuse. The demands of a medical career are incompatible with such a time-consuming practice.

Enter Daniel Ingram MD, an emergency physician who claims to have achieved enlightenment just after graduating medical school. His book is called Mastering The Core Teachings Of The Buddha, but he could also have called it Buddhism For ER Docs. ER docs are famous for being practical, working fast, and thinking everyone else is an idiot. MCTB delivers on all three counts. And if you’ve ever had an attending quiz you on the difference between type 1 and type 2 second-degree heart block, you’ll love Ingram’s taxonomy of the stages of enlightenment.

The result is a sort of perfect antidote to the vague hippie-ism you get from a lot of spirituality. For example, from page 324:

I feel the need to address, which is to say shoot down with every bit of rhetorical force I have, the notion promoted by some teachers and even traditions that there is nothing to do, nothing to accomplish, no goal to obtain, no enlightenment other than the ordinary state of being…which, if it were true, would have been very nice of them, except that it is complete bullshit. The Nothing To Do School and the You Are Already There School are both basically vile extremes on the same basic notion that all effort to attain to mastery is already missing the point, an error of craving and grasping. They both contradict the fundamental premise of this book, namely that there is something amazing to attain and understand and that there are specific, reproducible methods that can help you do that. Here is a detailed analysis of what is wrong with these and related perspectives…

…followed by a detailed analysis of what’s wrong with this position, which he compared to “let[ting] a blind and partially paralyzed untrained stroke victim perform open-heart surgery on your child based on the notion that they are already an accomplished surgeon but just have to realize it”.

This isn’t to say that MCTB isn’t a spiritual book, or that it shies away from mysticism or the transcendent. MCTB is very happy to discuss mysticism and the transcendent. It just quarantines the mystery within a carefully explained structure of rationally-arranged progress, so that it looks something like “and at square 41B in our perfectly rectangular grid you’ll encounter a mind-state which is impossible to explain even in principle, here are a few woefully inadequate metaphors for this mind-state so you’ll know when you’ve found it and should move on to square 41C.”

This is a little jarring. But – Ingram argues – it’s also very Buddhist. If you read the sutras with an open mind, the Buddha sounds a lot more like an ER doctor than a hippie. MCTB has a very Protestant fundamentalist feeling of digging through the exterior trappings of a religion to try to return to the purity of its origins. As far as I can tell, it succeeds – and in succeeding helped me understand Buddhism a whole lot better than anything else I’ve read.

II.

Ingram follows the Buddha in dividing the essence of Buddhism into three teachings: morality, concentration, and wisdom.

Morality seems like the odd one out here. Some Buddhists like to insist that Buddhism isn’t really a “religion”. It’s less like Christianity or Islam than it is like (for example) high intensity training at the gym – a highly regimented form of practice that improves certain faculties if pursued correctly. Talking about “morality” makes this sound kind of hollow; nobody says you have to be a good person to get bigger muscles from lifting weights.

MCTB gives the traditional answer: you should be moral because it’s the right thing to do, but also because it helps meditation. The same things that make you able to sleep at night with a clear mind make you able to meditate with a clear mind:

One more great thing about the first training [morality] is that it really helps with the next training: concentration. So here’s a tip: if you are finding it hard to concentrate because your mind is filled with guilt, judgment, envy or some other hard and difficult thought pattern, also work on the first training, kindness. It will be time well spent.

That leaves concentration (samatha) and wisdom (vipassana). You do samatha to get a powerful mind; you get a powerful mind in order to do vipassana.

Samatha meditation is the “mindfulness” stuff you’re always hearing about: concentrate on the breath, don’t let yourself get distracted, see if you can just attend to the breath and nothing else for minutes or hours. I read whole books about this before without understanding why it was supposed to be good, aside from vague things like “makes you feel more serene”. MCTB gives two reasons: first, it gets you into jhanas. Second, it prepares you for vipassana.

Jhanas are unusual mental states you can get into with enough concentration. Some of them are super blissful. Others are super tranquil. They’re not particularly meaningful in and of themselves, but they can give you heroin-level euphoria without having to worry about sticking needles in your veins. MCTB says, understatedly, that they can be a good encouragement to continue your meditation practice. It gives a taxonomy of eight jhanas, and suggests that a few months of training in samatha meditation can get you to the point where you can reach at least the first.

But the main point of samatha meditation is to improve your concentration ability so you can direct it to ordinary experience. Become so good at concentrating that you can attain various jhanas – but then, instead of focusing on infinite bliss or whatever other cool things you can do with your new talent, look at a wall or listen to the breeze or just try to understand the experience of existing in time.

This is vipassana (“insight”, “wisdom”) meditation. It’s a deep focus on the tiniest details of your mental experience, details so fleeting and subtle that without a samatha-trained mind you’ll miss them entirely. One such detail is the infamous “vibrations”, so beloved of hippies. Ingram notes that every sensation vibrates in and out of consciousness at a rate of between five and forty vibrations per second, sometimes speeding up or slowing down depending on your mental state. I’m a pathetic meditator and about as far from enlightenment as anybody in this world, but with enough focus even I have been able to confirm this to be true. And this is pretty close to the frequency of brain waves, which seems like a pretty interesting coincidence.

But this is just an example. The point is that if you really, really examine your phenomenological experience, you realize all sorts of surprising things. Ingram says that one early insight is a perception of your mental awareness of a phenomenon as separate from your perception of that phenomenon:

This mental impression of a previous sensation is like an echo, a resonance. The mind takes a crude impression of the object, and that is what we can think about, remember, and process. Then there may be a thought or an image that arises and passes, and then, if the mind is stable, another physical pulse. Each one of these arises and vanishes completely before the other begins, so it is extremely possible to sort out which is which with a stable mind dedicated to consistent precision and not being lost in stories. This means the instant you have experienced something, you know that it isn’t there any more, and whatever is there is a new sensation that will be gone in an instant. There are typically many other impermanent sensations and impressions interspersed with these, but, for the sake of practice, this is close enough to what is happening to be a good working model.

Engage with the preceding paragraphs. They are the stuff upon which great insight practice is based. Given that you know sensations are vibrating, pulsing in and out of reality, and that, for the sake of practice, every sensation is followed directly by a mental impression, you now know exactly what you are looking for. You have a clear standard. If you are not experiencing it, then stabilize the mind further, and be clearer about exactly when and where there are physical sensations.

With enough of this work, you gain direct insight into what Buddhists call “the three characteristics”. The first is impermanence, and is related to all the stuff above about how sensations flicker and disappear. The second is called “unsatisfactoriness”, and involves the inability of any sensation to be fulfilling in some fundamental way. And the last is “no-self”, an awareness that these sensations don’t really cohere into the classic image of a single unified person thinking and perceiving them.

The Buddha famously said that “life is suffering”, and placed the idea of suffering – dukkha – as the center of his system. This dukkha is the same as the “unsatisfactoriness” above.

I always figured the Buddha was talking about life being suffering in the sense that sometimes you’re poor, or you’re sick, or you have a bad day. And I always figured that making money or exercising or working to make your day better sounded like a more promising route to dealing with this kind of suffering than any kind of meditative practice. Ingram doesn’t disagree that things like bad days are examples of dukkha. But he explains that this is something way more fundamental. Even if you were having the best day of your life and everything was going perfectly, if you slowed your mind down and concentrated perfectly on any specific atomic sensation, that sensation would include dukkha. Dukkha is part of the mental machinery.

MCTB acknowledges that all of this sounds really weird. And there are more depths of insight meditation, all sorts of weird things you notice when you look deep enough, that are even weirder. It tries to be very clear that nothing it’s writing about is going to make much sense in words, and that reading the words doesn’t really tell you very much. The only way to really make sense of it is to practice meditation.

When you understand all of this on a really fundamental level – when you’re able to tease apart every sensation and subsensation and subsubsensation and see its individual components laid out before you – then at some point your normal model of the world starts running into contradictions and losing its explanatory power. This is very unpleasant, and eventually your mind does some sort of awkward Moebius twist on itself, adopts a better model of the world, and becomes enlightened.

III.

The rest of the book is dedicated to laying out, in detail, all the steps that you have to go through before this happens. In Ingram’s model – based on but not identical to the various models in various Buddhist traditions – there are fifteen steps you have to go through before “stream entry” – the first level of enlightenment. You start off at the first step, after meditating some number of weeks or months or years you pass to the second step, and so on.

A lot of these are pretty boring, but Ingram focuses on the fourth step, Arising And Passing Away. Meditators in this step enter what sounds like a hypomanic episode:

In the early part of this stage, the meditator’s mind speeds up more and more quickly, and reality begins to be perceived as particles or fine vibrations of mind and matter, each arising and vanishing utterly at tremendous speed…As this stage deepens and matures, meditators let go of even the high levels of clarity and the other strong factors of meditation, perceive even these to arise and pass as just vibrations, not satisfy, and not be self. They may plunge down into the very depths of the mind as though plunging deep underwater to where they can perceive individual frames of reality arise and pass with breathtaking clarity as though in slow motion […]

Strong sensual or sexual feelings and dreams are common at this stage, and these may have a non-discriminating quality that those attached to their notion of themselves as being something other than partially bisexual may find disturbing. Further, if you have unresolved issues around sexuality, which we basically all have, you may encounter aspects of them during this stage. This stage, its afterglow, and the almost withdrawal-like crash that can follow seem to increase the temptation to indulge in all manner of hedonistic delights, particularly substances and sex. As the bliss wears off, we may find ourselves feeling very hungry or lustful, craving chocolate, wanting to go out and party, or something like that. If we have addictions that we have been fighting, some extra vigilance near the end of this stage might be helpful.

This stage also tends to give people more of an extroverted, zealous or visionary quality, and they may have all sorts of energy to pour into somewhat idealistic or grand projects and schemes. At the far extreme of what can happen, this stage can imbue one with the powerful charisma of the radical religious leader.

Finally, at nearly the peak of the possible resolution of the mind, they cross something called “The Arising and Passing Event” (A&P Event) or “Deep Insight into the Arising and Passing Away”…Those who have crossed the A&P Event have stood on the ragged edge of reality and the mind for just an instant, and they know that awakening is possible. They will have great faith, may want to tell everyone to practice, and are generally evangelical for a while. They will have an increased ability to understand the teachings due to their direct and non-conceptual experience of the Three Characteristics. Philosophy that deals with the fundamental paradoxes of duality will be less problematic for them in some way, and they may find this fascinating for a time. Those with a strong philosophical bent will find that they can now philosophize rings around those who have not attained to this stage of insight. They may also incorrectly think that they are enlightened, as what they have seen was completely spectacular and profound. In fact, this is strangely common for some period of time, and thus may stop practicing when they have actually only really begun.

This is a common time for people to write inspired dharma books, poetry, spiritual songs, and that sort of thing. This is also the stage when people are more likely to join monasteries or go on great spiritual quests. It is also worth noting that this stage can look an awful lot like a manic episode as defined in the DSM-IV (the current diagnostic manual of psychiatry). The rapture and intensity of this stage can be basically off the scale, the absolute peak on the path of insight, but it doesn’t last. Soon the meditator will learn what is meant by the phrase, “Better not to begin. Once begun, better to finish!”

If this last part sounds ominous, it probably should. If the fourth stage looks like a manic episode, the next five or six stages all look like some flavor of deep clinical depression. Ingram discusses several spiritual traditions and finds that they all warn of an uncanny valley halfway along the spiritual path; he himself adopts St. John’s phrase “Dark Night Of The Soul”. Once you have meditated enough to reach the A&P Event, you’re stuck in the (very unpleasant) Dark Night Of The Soul until you can meditate your way out of it, which could take months or years.

Ingram’s theory is that many people have had spiritual experiences without deliberately pursuing a spiritual practice – whether this be from everyday life, or prayer, or drugs, or even things you do in dreams. Some of these people accidentally cross the A&P Event, reach the Dark Night Of The Soul, and – not even knowing that the way out is through meditation – get stuck there for years, having nothing but a vague spiritual yearning and sense that something’s not right. He says that this is his own origin story – he got stuck in the Dark Night after having an A&P Event in a dream at age 15, was low-grade depressed for most of his life, and only recovered once he studied enough Buddhism to realize what had happened to him and how he could meditate his way out:

When I was about 15 years old I accidentally ran into some of the classic early meditation experiences described in the ancient texts and my reluctant spiritual quest began. I did not realize what had happened, nor did I realize that I had crossed something like a point of no return, something I would later call the Arising and Passing Away. I knew that I had had a very strange dream with bright lights, that my entire body and world had seemed to explode like fireworks, and that afterwards I somehow had to find something, but I had no idea what that was. I philosophized frantically for years until I finally began to realize that no amount of thinking was going to solve my deeper spiritual issues and complete the cycle of practice that had already started.

I had a very good friend that was in the band that employed me as a sound tech and roadie. He was in a similar place, caught like me in something we would later call the Dark Night and other names. He also realized that logic and cognitive restructuring were not going to help us in the end. We looked carefully at what other philosophers had done when they came to the same point, and noted that some of our favorites had turned to mystical practices. We reasoned that some sort of nondual wisdom that came from direct experience was the only way to go, but acquiring that sort of wisdom seemed a daunting task if not impossible […]

I [finally] came to the profound realization that they have actually worked all of this stuff out. Those darn Buddhists have come up with very simple techniques that lead directly to remarkable results if you follow instructions and get the dose high enough. While some people don’t like this sort of cookbook approach to meditation, I am so grateful for their recipes that words fail to express my profound gratitude for the successes they have afforded me. Their simple and ancient practices revealed more and more of what I sought. I found my experiences filling in the gaps in the texts and teachings, debunking the myths that pervade the standard Buddhist dogma and revealing the secrets meditation teachers routinely keep to themselves. Finally, I came to a place where I felt comfortable writing the book that I had been looking for, the book you now hold in your hands.

Once you meditate your way out of the Dark Night, you go through some more harrowing experiences, until you finally reach the fifteenth stage, Fruition, and achieve “stream entry” – the first level of enlightenment. Then you do it all again on a higher level, kind of like those video games where when you beat the game you get access to New Game+. Traditionally it takes four repetitions of the spiritual path before you attain complete perfect enlightenment, but Ingram suggests this is metaphorical and says it took him approximately twenty-seven repetitions over seven years.

He also says – and here his usual lucidity deserted him and I ended up kind of confused – that once you’ve achieved stream entry, you’re going to be going down paths whether you like it or not – the “stream” metaphor is apt insofar as it suggests being borne along by a current. The rest of your life – even after you achieve complete perfect enlightenment – will be spent cycling through the fifteen stages, with each stage lasting a few days to months.

This seems pretty bad, since the stages look a lot like depression, mania, and other more arcane psychiatric and psychological problems. Even if you don’t mind the emotional roller coaster, a lot of them sound just plain exhausting, with your modes of cognition and perception shifting and coming into question at various points. MCTB offers some tips for dealing with this – you can always slow your progress down the path by gorging on food, refusing to meditate, and doing various other unspiritual things, but the whole thing lampshades a question that MCTB profoundly fails at giving anything remotely like an answer to:

IV.

Why would you want to do any of this?

The Buddha is supposed to have said: “I gained nothing whatsoever from Supreme Enlightenment, and for that reason it is called Supreme Enlightenment”. And sure, that’s the enigmatic Zen-sounding sort of statement we expect from our spiritual leaders. But if Buddhist practice is really difficult, and makes you perceive every single sensation as profoundly unsatisfactory in some hard-to-define way, and can plunge you into a neverending depression which you might get out of if you meditate hard enough, and then gives you a sort of permanent annoying low-grade bipolar disorder even if you succeed, then we’re going to need something better than pithy quotes.

Ingram dedicates himself hard to debunking a lot of the things people would use to fill the gap. Pages 261-328 discuss the various claims Buddhist schools have made about enlightenment, mostly to deny them all. He has nothing but contempt for the obviously silly ones, like how enlightened people can fly around and zap you with their third eyes. But he’s equally dismissive of things that sort of seem like the basics. He denies claims about how enlightened people can’t get angry, or effortlessly resist temptation, or feel universal unconditional love, or things like that. Some of this he supports with stories of enlightened leaders behaving badly; other times he cites himself as an enlightened person who frequently experiences anger, pain, and the like. Once he’s stripped everything else away, he says the only thing one can say about enlightenment is that it grants a powerful true experience of the non-dual nature of the world.

But still, why would we want to get that? I am super in favor of knowledge-for-knowledge’s-sake, but I’ve also read enough Lovecraft to have strong opinions about poking around Ultimate Reality in ways that tend to destroy your mental health.

The best Ingram can do is this:

I realize that I am not doing a good job of advertising enlightenment here, particularly following my descriptions of the Dark Night. Good point. My thesis is that those who must find it will, regardless of how it is advertised. As to the rest, well, what can be said? Am I doing a disservice by not selling it like nearly everyone else does? I don’t think so. If you want grand advertisements for enlightenment, there is a great stinking mountain of it there for you to partake of, so I hardly think that my bringing it down to earth is going to cause some harmful deficiency of glitz in the great spiritual marketplace.

[Meditation teacher] Bill Hamilton had a lot of great one-liners, but my favorite concerned insight practices and their fruits, of which he said, “Highly recommended, can’t tell you why.” That is probably the safest and most accurate advertisement for enlightenment that I have ever heard.

V.

I was reading MCTB at the same time I read Surfing Uncertainty, and it was hard not to compare them. Both claim to be guides to the mysteries of the mind – one from an external scientific perspective, the other from an internal phenomenological perspective. Is there any way to link them up?

Remember this quote from Surfing Uncertainty?:

Plausibly, it is only because the world we encounter must be parsed for action and intervention that we encounter, in experience, a relatively unambiguous determinate world at all. Subtract the need for action and the broadly Bayesian framework can seem quite at odds with the phenomenal facts about conscious perceptual experience: our world, it might be said, does not look as if it is encoded in an intertwined set of probability density distributions. Instead, it looks unitary and, on a clear day, unambiguous…biological systems, as mentioned earlier, may be informed by a variety of learned or innate “hyperpriors” concerning the general nature of the world. One such hyperprior might be that the world is usually in one determinate state or another.

Taken seriously, it suggests that some of the most fundamental factors of our experience are not real features of the sensory world, but very strong assumptions to which we fit sense-data in order to make sense of them. And Ingram’s theory of vipassana meditation looks a lot like concentrating really hard on our actual sense-data to try to disentangle them from the assumptions that make them cohere.

In the same way that our priors “snap” phrases like “PARIS IN THE THE SPRINGTIME” to a more coherent picture with only one “the”, or “snap” our saccade-jolted and blind-spot-filled visual world into a reasonable image, maybe they snap all of this vibrating and arising and passing away into something that looks like a permanent stable image of the world.

And in the same way that concentrating on “PARIS IN THE THE SPRINGTIME” really hard without any preconceptions lets you sniff out the extra “the”, so maybe enough samatha meditation lets you concentrate on the permanent stable image of the world until it dissolves into whatever the brain is actually doing. Maybe with enough dedication to observing reality as it really is rather than as you predict it to be, you can expose even the subjective experience of an observer as just a really strong hyperprior on all of the thought-and-emotion-related sense-data you’re getting.
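(To make that concrete, here is a toy sketch of my own – not anything from Clark or Ingram, and all the numbers are invented. In predictive processing models, a percept is often treated as a precision-weighted average of the top-down prediction and the bottom-up sense-data. Crank the prior’s precision up and noisy input gets “snapped” onto the prediction; crank it down – which is roughly what I’m imagining meditation doing to the hyperpriors – and the raw data shows through.)

```python
import numpy as np

def perceive(sense_data, prior_mean, prior_precision, sensory_precision):
    """Precision-weighted fusion of a top-down prediction with sense-data.

    The percept is pulled toward whichever signal carries more precision
    (inverse variance): a strong prior "snaps" noisy input onto the
    prediction, while a weak prior lets the raw data through.
    """
    total_precision = prior_precision + sensory_precision
    return (prior_precision * prior_mean
            + sensory_precision * sense_data) / total_precision

rng = np.random.default_rng(0)
# Noisy samples of a signal that is really 1.0 while the brain predicts 0.0
# (think: the extra "THE" your reading-prior insists shouldn't be there).
data = 1.0 + rng.normal(0.0, 0.5, size=5)

print(perceive(data, prior_mean=0.0, prior_precision=10.0, sensory_precision=1.0))
# strong hyperprior: percepts hug the prediction (~0.1) -- the anomaly vanishes
print(perceive(data, prior_mean=0.0, prior_precision=0.1, sensory_precision=1.0))
# weakened hyperprior: percepts track the raw data (~1.0) -- seeing what's there
```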

That leaves dukkha, this weird unsatisfactoriness that supposedly inheres in every sensation individually as well as life in general. If the goal of the brain is minimizing prediction error, if all of our normal forms of suffering like hunger and thirst and pain are just special cases of predictive error in certain inherent drives, then – well, this is a very fundamental form of badness which is inherent in all sensation and perception, and which a sufficiently-concentrated phenomenologist might be able to notice directly. Relevant? I’m not sure.

Mastering The Core Teachings Of The Buddha is a lucid guide to issues surrounding meditation practice and a good rational introduction to the Buddhist system. Parts of it are ultimately unsatisfactory, but apparently this is true of everything, so whatever.


Also available for free download here

Wed, 13 Sep 2017 6:41:06 EDT

Epistemic status: mostly facts, a few speculations.

TW: lots of mentions of violence, abuse, and rape.

There is a tremendous difference, in pre-modern societies, between those that farmed with the plow and those that farmed with the hoe.

If you’re reading this, you live in a plow culture, or are heavily influenced by one. Europe, the Middle East, and most of Asia developed plow cultures. These are characterized by reliance on grains such as wheat and rice, which provide a lot of calories per acre in exchange for a lot of hard physical labor.  They also involve large working livestock, such as horses, donkeys, and oxen.

Hoe cultures, by contrast, arose in certain parts of sub-Saharan Africa, the Americas, southeast Asia, and Oceania.

Hoe agriculture is sometimes called horticulture, because it is more like planting a vegetable garden than farming.  You clear land with a machete and dig it with a hoe.  This works for crops such as bananas, breadfruit, coconuts, taro, yam, calabashes and squashes, beans, and maize.  Horticulturalists also keep domestic animals like chickens, dogs, goats, sheep, and pigs — but never cattle. They may hunt or fish.  They engage in small-scale home production of pottery and cloth.[1]

Hoe agriculture is extremely productive per hour of labor, much more so than preindustrial grain farming, but requires a vast amount of land for a small population. Horticulturists also tend to practice shifting cultivation, clearing new land when the old land is used up, rather than repeatedly plowing the same field — something that is only possible when fertile land is “too cheap to meter.”  Hoe cultures therefore have lots of leisure, but low population density, low technology, and few material objects.[1]

I live with a toddler, so I’ve seen a lot of the Disney movie Moana, which had a lot of consultation with Polynesians to get the culture right. This chipper little song is a pretty nice illustration of hoe culture: you see people digging with hoes, carrying bananas and fish, singing about coconuts and taro root, making pottery and cloth, and you see a pig and a chicken tripping through the action.

Hoe Culture and Gender Roles

Ester Boserup, in her 1970 book Woman’s Role in Economic Development [2], notes that in hoe cultures women do the hoeing, while in plow cultures men do the plowing.

This is because plowing is so physically difficult that men, with greater physical strength, have a comparative advantage at agricultural labor, while they have no such advantage in horticulture.

Men in hoe cultures clear the land (which is physically challenging; machete-ing trees is quite the upper-body workout), hunt, and engage in war. But overall, hour by hour, they spend most of their time in leisure.  (Or in activities that are not directly economically productive, like politics, ritual, or the arts.)

Women in hoe cultures, as in all known human cultures, do most of the childcare.  But hoeing is light enough work that they can take small children into the fields with them and watch them while they plant and weed. Plowing, hunting, and managing large livestock, by contrast, are forms of work too heavy or dangerous to accommodate simultaneous childcare.

The main gender difference between hoe and plow cultures is, then, that women in hoe cultures are economically productive while women in plow cultures are largely not.

This has strong implications for marriage customs.  In a plow culture, a husband supports his wife; in a hoe culture, a wife supports her husband.

Correspondingly, plow cultures tend to have a tradition of dowry (the bride’s parents compensate the groom financially for taking an extra mouth to feed off their hands) while hoe cultures tend to practice bride price (the groom compensates the bride’s family financially for the loss of a working woman) or bride service (the groom labors for the bride’s family, again as compensation for taking her labor.)

Hoe cultures are much more likely to be polygamous than plow cultures.  Since land is basically free, a man in a hoe culture is rich in proportion to how much labor he can accumulate — and labor means women. The more wives, the more labor.  In a plow culture, however, extra labor must come from men, which usually means hired labor, or slaves or serfs.  Additional wives would only mean more mouths to feed.

Because hoe cultures need women for labor, they allow women more autonomy.  Customs like veiling or seclusion (purdah) are infeasible when women work in the fields.  Hoe-culture women can usually divorce their husbands if they pay back the bride-price.

Barren women, widows, and unchaste women or rape victims in pre-modern plow cultures often face severe stigma (and practices like sati and honor killings) which do not occur in hoe cultures. Women everywhere are valued for their reproductive abilities, and men everywhere have an evolutionary incentive to prefer faithful mates; but in a hoe culture, women have economic value aside from reproduction, and thus society can’t afford to kill them as soon as their reproductive value is diminished.

“Matriarchy” is considered a myth by modern anthropologists; there is no known society, present or past, where women ruled. However, there are matrilineal societies, where descent is traced through the mother, and matrilocal societies, where the groom goes to live near the bride and her family.  All matrilineal and matrilocal societies in Africa are hoe cultures (though some hoe cultures are patrilineal and/or patrilocal.)[3]

The Seneca, a Native American people living around Lake Ontario, are a good example of a hoe culture where women enjoyed a great deal of power. [4] Traditionally, they cultivated the Three Sisters: maize, beans, and squash.  The women practiced horticulture, led councils, had rights over all land, and distributed food and household stores within the clan.  Descent was matrilineal, and marriages (which were monogamous) were usually arranged by the mothers. Of the Seneca wife, Henry Dearborn noted wistfully in his journal, “She lives with him from love, for she can obtain her own means of support better than he.”  Living, childrearing, and work organization were communal within a clan (living within a longhouse) and generally organized by elder women.

Hoe and Plow Cultures Today

A 2013 study [5] found that people descended from plow cultures are more likely to agree with the statements “When jobs are scarce, men should have more right to a job than women” and “On the whole, men make better political leaders than women do” than people descended from hoe cultures.

“Traditional plough-use is positively correlated with attitudes reflecting gender inequality and negatively correlated with female labor force participation, female firm ownership, and female participation in politics.”  This remains true after controlling for a variety of societal variables, such as religion, race, climate, per-capita GDP, history of communism, civil war, and others.

Even among immigrants to Europe and the US, history of ancestral plow-use is still strongly linked to female labor force participation and attitudes about gender roles.

Patriarchy Through a Materialist Lens

Friedrich Engels, in The Origin of the Family, Private Property and the State, was the first to argue that patriarchy was a consequence of the rise of (plow) agriculture.  Alesina et al. summarize him as follows:

He argued that gender inequality arose due to the intensification of agriculture, which resulted in the emergence of private property, which was monopolized by men. The control of private property allowed men to subjugate women and to introduce exclusive paternity over their children, replacing matriliny with patrilineal descent, making wives even more dependent on husbands and their property. As a consequence, women were no longer active and equal participants in community life.

Hoe societies (and hunter-gatherer societies) have virtually no capital. Land can be used, but not really owned, as its produce is unreliable or non-renewable, and its boundaries are too large to guard. Technology is too primitive for any tool to be much of a capital asset.  This is why they are poor in material culture, and also why they are egalitarian; nobody can accumulate more than his neighbors if there just isn’t any way to accumulate stuff at all.

I find the materialistic approach to explaining culture appealing, even though I’m not a Marxist.  Economic incentives — which can be inferred by observing the concrete facts of how a people makes its living — provide elegant explanations for the customs, traditions, and ideals that emerge in a culture.  We do not have to presume that those who live in other cultures are stupid or fundamentally alien; we can assume they respond to incentives just as we do.  And, when we see the world through a materialist lens, we do not hope to change culture by mere exhortation. Oppression occurs when people see an advantage in oppressing; it is subdued when the advantage disappears, or when the costs become too high.  Individual people can follow their consciences even when they differ from the surrounding pressures of their culture, but when we talk about aggregates and whole populations, we don’t expect personal heroism to shift systems by itself.

A materialist analysis of gender relations would say that women are not going to escape oppression until they are economically independent.  And, even in the developed world, women mostly are not.

Women around the world, including in America, are much more likely to live in poverty than men.  This is because women have lower-paying jobs and struggle to support single-mother households. Women everywhere do most of the childcare, and most women have children at some point in their lives, so an economy that does not allow a woman to support and care for children with her own labor is not an economy that will ever allow most women to be economically independent.

Just working outside the home does not make a woman economically independent. If a family is living in a “two-income trap”[6], in which the wife’s income is just enough to pay for the childcare she does not personally provide, then the wife’s net economic contribution to the family is zero.

Sure, much of the “gender pay gap” disappears after controlling for college major and career choice [7][8]. Men report more interest in making a lot of money and being leaders, while women report more interest in being helpful and working with people rather than things. But a lot of this is probably due to the fact that most women rationally assume that they will take time to raise children, and that their husband will be the primary breadwinner, so they are less likely to make early education and career choices on the basis of earning the most money.

Economist Claudia Goldin believes the main reason for the gender pay gap is the cost of temporal flexibility; women want more work flexibility in order to raise children, and so they are paid less.  Childless men and women have virtually no wage disparity.[9]

Since women who will ever have children (which is most women) are still usually economically dependent on men even in the developed world, and strongly disadvantaged if they don’t have a male provider, is it any wonder that women are still more submissive and agreeable, higher in neuroticism and mood disorders, and subject to greater pressure to appeal sexually?  Their livelihood still depends on finding a mate to support them.

In order to change the economic incentives to make women financially independent, it would have to be no big deal to be a single mother. This probably means an economy whose resources were shifted from luxury towards leisure. Mothers of young children need a lot of time away from economic work; if we “bought” time instead of fancy goods with our high-tech productivity gains, a single mother in a technological economy might be able to support children by herself.  But industrial-age workplaces are not set up to allow employees flexibility, and modern states generally put up heavy barriers to easy, flexible self-employment or ultra-frugal living, through licensing laws, zoning regulations, and quality regulations on goods.

Morality and Religion under Hoe Societies

It’s hard to trust what we read about hoe-culture mores, because these generally aren’t societies that develop writing, and what we read is filtered through the opinions of Western researchers or missionaries. But, as far as I can tell, they are mostly animist and polytheist cultures. There are many “spirits” or “gods”, some friendly and some unfriendly, but none supreme.  Magical practices (“if you do this ritual, you’ll get that outcome”)  seem to be common.

Monotheist and henotheist cultures (one god, or one god above all other gods, usually male) seem to be more of a plow-culture thing, though not all plow cultures follow that pattern.

The presence of goddesses doesn’t correlate that much to the condition of women in a society, contrary to the (now falsified) belief that pre-agrarian societies were matriarchal and goddess-worshipping.

The Code of Handsome Lake is an interesting example of a moral and religious code written by a man from a hoe culture. Handsome Lake was a religious reformer among the Iroquois in the 18th century.  His Code is heavily influenced by Christianity (his account of Hell and of the apocalypse closely follow the New Testament and are not found in earlier Iroquois beliefs) but includes some distinctively Iroquois features.

Notably, he was strongly against spousal and child abuse, and in favor of family harmony, including this touching passage:

“Parents disregard the warnings of their children. When a child says, “Mother, I want you to stop wrongdoing,” the child speaks straight words and the Creator says that the child speaks right and the mother must obey. Furthermore the Creator proclaims that such words from a child are wonderful and that the mother who disregards them takes the wicked part. The mother may reply, “Daughter, stop your noise. I know better than you. I am the older and you are but a child. Think not that you can influence me by your speaking.” Now when you tell this message to your people say that it is wrong to speak to children in such words.”

Are Hoe Societies Good?

They’re not paradise. (Though, note that Adam and Eve were gardeners in Eden.)

As stated before, horticulturalists are poor. People in hoe cultures don’t necessarily have less to eat than their pre-modern agrarian peers, but they have less stuff, and they are much poorer than anyone in industrialized societies.

Polygamy also has distinct disadvantages.  It promotes venereal disease. It also excludes a population of unmarried men from society, which leads to violence and exposes the excluded men to poverty and isolation.

And you can’t replicate hoe societies across the globe even if you wanted to.  Hoe agriculture is so land-intensive that it couldn’t possibly support a population of seven billion.

Furthermore, while women in hoe societies have more autonomy and are subject to less gendered violence than women in pre-modern plow societies, it’s not clear how that compares to women in modern societies with rule of law. Hoe societies are still traditionalist and communitarian. Men’s and women’s spheres are still separate. Life in a hoe society is not going to exactly match a modern feminist’s ideal.  These aren’t WEIRD people, they’re something quite different, for better or for worse, and it’s hard to know exactly how the experience is different just by reading a few papers.

Hoe cultures are interesting not because we should model ourselves after them, but because they are an existence proof that non-patriarchal societies can exist for millennia.  Conservatives can always argue that a new invention hasn’t been proved stable or sustainable. Hoe cultures have been proved incredibly long-lasting.

References

[1] Braudel, Fernand. Civilization and capitalism, 15th-18th century: The structure of everyday life. Vol. 1. University of California Press, 1992.

[2] Boserup, Ester. Woman’s role in economic development. Earthscan, 2007.

[3] Goody, Jack, and Joan Buckley. “Inheritance and women’s labour in Africa.” Africa 43.2 (1973): 108-121.

[4] Jensen, Joan M. “Native American women and agriculture: A Seneca case study.” Sex Roles 3.5 (1977): 423-441.

[5] Alesina, Alberto, Paola Giuliano, and Nathan Nunn. “On the origins of gender roles: Women and the plough.” The Quarterly Journal of Economics 128.2 (2013): 469-530.

[6] Warren, Elizabeth, and Amelia Warren Tyagi. The two-income trap: Why middle-class parents are going broke. Basic Books, 2007.

[7] Daymont, Thomas N., and Paul J. Andrisani. “Job preferences, college major, and the gender gap in earnings.” Journal of Human Resources (1984): 408-428.

[8] Zafar, Basit. “College major choice and the gender gap.” Journal of Human Resources 48.3 (2013): 545-595.

[9] Waldfogel, Jane. “Understanding the ‘family gap’ in pay for women with children.” The Journal of Economic Perspectives 12.1 (1998): 137-156.



Wed, 13 Sep 2017 6:05:14 EDT

[Epistemic status: Total wild speculation]

I.

The predictive processing model offers compelling accounts of autism and schizophrenia. But Surfing Uncertainty and related sources I’ve read are pretty quiet about depression. Is there a possible PP angle here?

Chekroud (2015) has a paper trying to apply the model to depression. It’s scholarly enough, and I found it helpful in figuring out some aspects of the theory I hadn’t yet understood, but it’s pretty unambitious. The overall thesis is something like “Predictive processing says high-level beliefs shape our low-level perceptions and actions, so maybe depressed people have some high-level depressing beliefs.” Don’t get me wrong, CBT orthodoxy is great and has cured millions of patients – but in the end, this is just CBT orthodoxy with a neat new coat of Bayesian paint.

There’s something more interesting in Section 7.10 of Surfing Uncertainty, “Escape From The Darkened Room”. It asks: if the brain works to minimize prediction error, isn’t its best strategy to sit in a dark room and do nothing forever? After all, then it can predict its sense-data pretty much perfectly – it’ll always just stay “darkened room”.

Section 7.10 gives a kind of hand-wave-y answer here, saying that of course organisms have some drives, and probably it makes sense for them to desire novelty and explore new options, and so on. Overall this isn’t too different from PCT’s idea of “intrinsic error”, and as long as we remember that it’s not really predicting anything in particular, it seems like a fair response.

But I notice that this whole “sit in a dark room and never leave” thing sounds a lot like what depressed people say they wish they could do (and how the most severe cases of depression actually end up). Might there be a connection? Either a decrease in the mysterious intrinsic-error-style factors that counterbalance the dark room scenario, or an increase in the salience of prediction error that makes failures less tolerable?

(also, there’s one way to end all prediction error forever, and it’s something depressed people think about a lot)

II.

Corlett, Frith, and Fletcher claim that an amphetamine-induced mania-like state may involve pathologically high confidence in neural predictions. I don’t remember if they took the obvious next step and claimed that depression was the opposite, but that sounds like another fruitful avenue to explore. So: what if depression is pathologically low confidence in neural predictions?

Chekroud’s theory of depression as high-level-depressing-beliefs bothers me because there are so many features of depression that aren’t cognitive or emotional or related to any of these higher-level functions at all. Depressed people move more slowly, in a characteristic pattern called “psychomotor retardation”. They display perceptual abnormalities. They’re more likely to get sick. There are lots of results like this.

Depression has to be about something more than just beliefs; it has to be something fundamental to the nervous system. And low confidence in neural predictions would do it. Since neural predictions are the basic unit of thought, encoding not just perception but also motivation, reward, and even movement – globally low confidence levels would have devastating effects on a whole host of processes.

Perceptually, they would make sense-data look less clear and distinct. Depressed people describe the world as gray, washed-out, losing its contrast. This is not metaphorical. You can do psychophysical studies on color perception in depressed people, you can stick electrodes on their eyeballs, and all of this will tell you that depressed people literally see the world in washed-out shades of gray. Descriptions of their sensory experience sound intuitively like the sensory experience you would get if all your sense organs were underconfident in their judgments.
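To make “underconfident senses” concrete, here is a minimal sketch of the precision-weighted Gaussian fusion that PP expositions standardly use – every number is invented, and the fuse function is just an illustrative helper. Scaling all the precisions down leaves the content of the percept unchanged but makes it strictly less distinct: same world, washed out.

```python
def fuse(mu_prior, prec_prior, x, prec_data):
    """Precision-weighted fusion of a top-down prediction (mu_prior)
    with bottom-up sense data (x). Standard Gaussian math."""
    prec_post = prec_prior + prec_data
    mu_post = (prec_prior * mu_prior + prec_data * x) / prec_post
    return mu_post, 1.0 / prec_post  # posterior mean and variance

# Healthy system: confident prediction, confident data.
print(fuse(0.0, 4.0, 1.0, 4.0))            # (0.5, 0.125): a crisp percept

# "Depressed" system: identical signals, all precisions scaled by k < 1.
k = 0.25
print(fuse(0.0, 4.0 * k, 1.0, 4.0 * k))    # (0.5, 0.5): same content, blurrier
```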

Mechanically, they would make motor movements less forceful. Remember, in PP movements are “active inferences” – the body predicts that the limb it wants to move is somewhere else, then counts on the motor system’s drive toward minimizing prediction error to do the rest. If your predictions are underconfident, your movements are insufficiently forceful and you get the psychomotor retardation that clinicians describe in depressed people. And what’s the closest analog to depressive psychomotor retardation? Parkinsonian bradyphrenia. What causes Parkinsonian bradyphrenia? We know the answer to this one – insufficient dopamine, where dopamine is known to encode the confidence level of motor predictions.
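As a cartoon of that active-inference story – my own toy model, not anything from Clark or the Parkinson’s literature – suppose each motor update cancels only a confidence-weighted fraction of the prediction error. Underconfidence then shows up directly as slowed movement:

```python
def steps_to_reach(target, confidence, tol=0.01):
    """Move a 'limb' toward the predicted position by cancelling a
    confidence-weighted fraction of the error each step (toy model;
    assumes 0 < confidence <= 1)."""
    position, steps = 0.0, 0
    while abs(target - position) > tol:
        position += confidence * (target - position)
        steps += 1
    return steps

print(steps_to_reach(1.0, confidence=0.9))  # 2 steps: brisk, decisive motion
print(steps_to_reach(1.0, confidence=0.1))  # 44 steps: "psychomotor retardation"
```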

Motivationally – well, I’m less certain, I still haven’t found a good predictive processing account of motivation I understand on an intuitive level. But if we draw the analogy to perceptual control theory, some motivations (like hunger) are probably a kind of “intrinsic error” that can be modeled as higher-level processes feeding reference points to lower-level control systems. If we imagine the processes predicting eg hunger, then predicting with low confidence sure sounds like the sort of thing where you should be less hungry. If they’re predicting “you should get out of bed”, then predicting that with low confidence sure sounds like the sort of thing where you don’t feel a lot of motivation to get out of bed.

I’m hesitant to take “low self-confidence” as a gimme – it seems to rely too much on a trick of the English language. But I think there really is a connection. Suppose you’re taking a higher-level math class and you’re really bad at it. No matter how hard you study, you always find the material a bit confusing and are unsure whether you’re applying the concepts correctly. You start feeling kind of like a loser: your low confidence in your beliefs (eg answers to test questions) and actions (eg problem-solving strategies) creates general low self-confidence and feelings of worthlessness. Eventually you decide math isn’t for you and drop the class, moving on to something you’re more talented at.

If you have global low confidence, the world feels like a math class you don’t understand that you can’t escape from. This feeling might be totally false – you might be getting everything right – but you still feel that way. And there’s no equivalent to dropping out of the math class – except committing suicide, which is how far too many depressed people end up.

One complicating factor – how do we explain depressed people’s frequent certainty that they’ll fail? A proper Bayesian, barred from having confident beliefs about anything, will be maximally uncertain about whether she’ll fail or succeed – but some depressed people have really strong opinions on this issue. I’m not really sure about this, and admit it’s a point against this theory. I can only appeal to the math class example again – if there was a math class where I just had no confidence about anything I thought or said, I would probably be pretty sure I’d fail there too.

(just so I’m not totally just-so-storying here, here’s a study of depressed people’s probability calibration, which shows that – yup – they’re underconfident!)

This could tie into the “increased salience of prediction error” theory in Part I. If for some reason the brain became “overly conservative” – if it assigned very high cost to a failed prediction relative to the benefit of a successful prediction – then it would naturally lower its confidence levels in everything, the same way a very conservative bettor who can’t stand losing money is going to make smaller bets.
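The bettor analogy can be made quantitative with a toy Kelly-style calculation – my own illustration, nothing from the papers above. Give a log-utility bettor a 60% edge, weight losses more heavily, and the optimal stake (read: confidence) shrinks toward zero:

```python
import numpy as np

def best_stake(p_win, loss_weight):
    """Bankroll fraction maximizing expected log utility when losses
    are felt loss_weight times as strongly as gains (toy model)."""
    f = np.linspace(0.0, 0.99, 1000)
    utility = p_win * np.log(1 + f) + loss_weight * (1 - p_win) * np.log(1 - f)
    return f[np.argmax(utility)]

for lam in (1.0, 1.2, 1.5):
    print(lam, round(float(best_stake(0.6, lam)), 3))
# 1.0 -> ~0.2 (classic Kelly), 1.2 -> ~0.11, 1.5 -> 0.0:
# raising the felt cost of failure lowers "confidence" across the board.
```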

III.

But why would low confidence cause sadness?

Well, what, really, is emotion?

Imagine the world’s most successful entrepreneur. Every company they found becomes a multibillion-dollar success. Every stock they pick shoots up and never stops. Heck, even their personal life is like this. Every vacation they take ends up picture-perfect and creates memories that last a lifetime; every date they go on leads to passionate soul-burning love that never ends badly.

And imagine your job is to advise this entrepreneur. The only advice worth giving would be “do more stuff”. Clearly all the stuff they’re doing works, so aim higher, work harder, run for President. Another way of saying this is “be more self-confident” – if they’re doubting whether or not to start a new project, remind them that 100% of the things they’ve ever done have been successful, odds are pretty good this new one will too, and they should stop wasting their time second-guessing themselves.

Now imagine the world’s most unsuccessful entrepreneur. Every company they make flounders and dies. Every stock they pick crashes the next day. Their vacations always get rained-out, their dates always end up with the other person leaving halfway through and sticking them with the bill.

What if your job is advising this guy? If they’re thinking of starting a new company, your advice is “Be really careful – you should know it’ll probably go badly”. If they’re thinking of going on a date, you should warn them against it unless they’re really sure. A good global suggestion might be to aim lower, go for low-risk-low-reward steady payoffs, and wait on anything risky until they’ve figured themselves out a little bit more.

Corlett, Frith and Fletcher linked mania to increased confidence. But mania looks a lot like being happy. And you’re happy when you succeed a lot. And when you succeed a lot, maybe having increased confidence is the way to go. If happiness were a sort of global filter that affected all your thought processes and said “These are good times, you should press really hard to exploit your apparent excellence and not worry too much about risk”, that would be pretty evolutionarily useful. Likewise, if sadness were a way of saying “Things are going pretty badly, maybe be less confident and don’t start any new projects”, that would be useful too.

Depression isn’t normal sadness. But if normal sadness lowers neural confidence a little, maybe depression is the pathological result of biological processes that lower neural confidence. To give a totally fake example which I’m not saying is what actually happens, if you run out of whatever neurotransmitter you use to signal high confidence, that would give you permanent pathological low confidence and might look like depression.

One problem with this theory is the time course. Sure, if you’re eternally successful, you should raise your confidence. But eternally successful people are rarely eternally happy. If we’re thinking of happiness-as-felt-emotion, it seems more like they’re happy for a few hours after they win an award or make their first million or whatever, then go back down to baseline. I’m not sure it makes sense to start lots of new projects in the hour after you win an award.

One way of resolving this: maybe happiness is the derivative of neural confidence? It’s the feeling of your confidence levels increasing, the same way acceleration is the feeling of your speed increasing?
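That resolves cleanly in a toy model – pure speculation, numbers invented, matching the guess above: let confidence track a running average of recent success, and let happiness be its rate of change. A step up in success produces a spike of happiness that decays back to baseline even though success stays high:

```python
# Success rate jumps from 0.5 to 0.9 at t = 20 (all constants invented).
successes = [0.5] * 20 + [0.9] * 30
confidence, happiness = 0.5, []
for s in successes:
    new_confidence = 0.8 * confidence + 0.2 * s    # running average of success
    happiness.append(new_confidence - confidence)  # happiness as the derivative
    confidence = new_confidence
print([round(h, 3) for h in happiness[18:26]])
# [0.0, 0.0, 0.08, 0.064, 0.051, 0.041, 0.033, 0.026] -- a transient spike,
# then back toward zero: happy for an hour after the award, not forever.
```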

Of course, that’s three layers of crackpot – its own layer, under the layer of emotions as confidence level, under the layer of depression as change in prediction strategies. Maybe I should dial back my own confidence levels and stop there.


Wed, 13 Sep 2017 5:13:48 EDT

Epistemic Status: speculative. We’ve got some amateur Biblical exegesis in here, and some mentions of abuse.

I’m starting to believe that patriarchy is the root of destructive authoritarianism, where patriarchy simply means the system of social organization where families are hierarchical and headed by the father. To wit:

  • Patriarchy justifies abuse of wives by husbands and abuse of children by parents
  • The family is the model of the state; pretty much everybody, from Confucius to Plato, believes that governmental hierarchy evolved from familial hierarchy; rulers from George Washington to Ataturk are called “the father of his country”
  • There is no clear separation between hierarchy and abuse. The phenomenon of dominant/submissive behavior among primates closely parallels what humans would consider domestic abuse.

Abuse in Mammalian Context

A study of male vervet monkeys [1] gives an illustration of what I mean by abuse.

Serotonin levels closely track a monkey’s status in the dominance hierarchy. When a monkey is dominant, his serotonin is high, and is sustained at that high level by observing submissive displays from other monkeys.  The more serotonin a dominant monkey has in his system, the more affection and the less aggression he displays; you can see this experimentally by injecting him with a serotonin precursor. When a high status monkey is full of serotonin, he relaxes and becomes more tolerant towards subordinates[2]; the subordinates, feeling less harassed, offer him fewer submissive displays; this rapidly drops the dominant’s serotonin levels, leaving him more anxious and irritable; he then engages in more dominance displays; the submissive monkeys then display more submission, thereby raising the dominant’s serotonin level and starting all over again.

This cycle (known as regulation-dysregulation theory, or RDT) is basically the same as the cycle of abuse in humans, whose stages are rising tension (the dominant is low in serotonin), acute violence (dominance display), reconciliation/honeymoon (the dominant’s serotonin spikes after the subordinate submits), and calm (the dominant is high in serotonin and tolerant towards subordinates.)
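The loop is easy to caricature in code. This toy simulation invents every constant and claims nothing beyond the qualitative cycle just described; it only shows the four phases chasing each other indefinitely:

```python
# Toy regulation-dysregulation loop (all dynamics and constants invented).
serotonin, submission = 0.8, 0.8
for t in range(40):
    # Serotonin is sustained by submissive displays and otherwise decays.
    serotonin = min(1.0, max(0.0, serotonin + 0.2 * submission - 0.1))
    display = serotonin < 0.3                     # low serotonin -> dominance display
    if display:
        submission = min(1.0, submission + 0.5)   # subordinate appeases
    else:
        submission = max(0.0, submission - 0.15)  # tolerance, so less appeasing
    phase = "acute" if display else ("calm" if serotonin > 0.6 else "tension")
    print(t, round(serotonin, 2), round(submission, 2), phase)
```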

In each case, tolerance extends only as long as submissive behavior continues.  Anger, threats, and violence are the result of any slackening of submissive displays.  I consider this to be a working definition of both dominance and abuse: the abuser is easily slighted and considers any lèse-majesté to be grounds for an outburst.

Most conditions of oppression among humans follow this pattern.  Slaves would be harshly punished for “disrespecting” masters, subordinates must show “respect” to gangsters and warlords on pain of violence, despots require rituals of submission or tribute, etc.  I believe it to be an ancient and even pre-human pattern.

The prototypical opposite of freedom, I think, is slavery, imprisonment, or captivity.  Concepts like “rights” are more modern and less universal. But even ancient peoples would agree that to be subject to the arbitrary will of another, and not free to physically escape from him, is an unhappy state. These are more or less the conditions that cause CPTSD — kidnapping, imprisonment and institutionalization, concentration camps and POW camps, slavery, and domestic abuse — situations in which one is at another’s mercy for a prolonged period of time and unable to escape.

A captive subordinate must appease the abuser in order to avoid retaliation; this has a soul-warping effect. Symptoms of CPTSD include “a chronic and pervasive sense of helplessness, paralysis of initiative, shame, guilt, self-blame, a sense of defilement or stigma” and “attributing total power to the perpetrator, becoming preoccupied with the relationship to the perpetrator, including a preoccupation with revenge, idealization or paradoxical gratitude, seeking approval from the perpetrator, a sense of a special relationship with the perpetrator or acceptance of the perpetrator’s belief system or rationalizations.”  In other words, captives are at risk for developing something like Nietzsche’s “slave morality”, characterized by shame, submission, and appeasement towards the perpetrator.

Here’s John Darnielle talking about the thing:

“My stepfather wanted me to write Marxist poetry; if it didn’t serve the revolution, it wasn’t worthwhile.” I asked him what his mother thought, and he let out a sad laugh. “You have to understand the dynamic of the abused household. What you think doesn’t matter. Your thoughts are passing. They are positions you adopt to survive.”

The physical behaviors of shame (gaze aversion, shifty eyes, nervous smiles, downcast head, and slouched, forward-leaning postures)[3] are also common mammalian appeasement displays; subordinate monkeys and apes also have a “fear smile” and don’t meet the gaze of dominants.[4] It seems quite clear that the psychological problem of chronic shame as a result of abuse is a result of having to engage in prolonged appeasement behavior on pain of punishment.

A subordinate primate is not a healthy primate. Robert Sapolsky [5] has an overview article about how low-ranked primates are more stressed and more susceptible to disease in hierarchical species.

“When the hierarchy is stable in species where dominant individuals actively subjugate subordinates, it is the latter who are most socially stressed; this can particularly be the case in the most extreme example of a stable hierarchy, namely, one in which rank is hereditary. This reflects the high rates of physical and psychological harassment of subordinates, their relative lack of social control and predictability, their need to work harder to obtain food, and their lack of social outlets such as grooming or displacing aggression onto someone more subordinate.”

…The inability to physically avoid dominant individuals is associated with stress, and the ease of avoidance varies by ecosystem. The spatial constraints of a two-dimensional terrestrial habitat differ from those of a three-dimensional arboreal or aquatic setting, and living in an open grassland differs from living in a plain dense with bushes. As an extreme example, subordinate animals in captivity have many fewer means to evade dominant individuals than they would in a natural setting.

This coincides with the CPTSD model — social stress correlates with inability to escape.

The physiological results of social stress are cardiovascular and immune:

Prolonged stress adversely affects cardiovascular function, producing (i) hypertension and elevated heart rate; (ii) platelet aggregation and increased circulating levels of lipids and cholesterol, collectively promoting atherosclerotic plaque formation in injured blood vessels; (iii) decreased levels of protective high-density lipoprotein (HDL) cholesterol and/or elevated levels of endangering low-density lipoprotein (LDL) cholesterol; and (iv) vasoconstriction of damaged coronary arteries…In general, mild to moderate transient stressors enhance immunity, particularly the first phase of the immune response, namely innate immunity. Later phases of the stress response are immunosuppressive, returning immune function to baseline. Should the later phase be prolonged by chronic stress, immunosuppression can be severe enough to compromise immune activation by infectious challenges (47, 48). In contrast, a failure of the later phase can increase the risk of the immune overactivity that constitutes autoimmunity.

Autoimmune disorders and weakened disease resistance are characteristic of people with PTSD as well.

Being a captive abuse victim is bad for one’s physical and mental health.  While abuse is “natural” (it appears frequently in nature), it is bad for flourishing in a quite direct and unmistakable way.  Individuals are not, in general, better off under conditions of captivity and abuse.

This abuse/dominance/submission/CPTSD thing is basically about dysfunctions in the second circuit in Leary’s eight-circuit model.  It’s the part of the mind that forms intuitions about social power relations.  Every social interaction between humans has some dominance/submission content; this is normal and probably inevitable, given our mammalian heritage. But Leary’s model is somewhat developmental — to be stuck in the mindset of dominance/submission means that you cannot reach the “higher” functions, such as intellectual thought or more mature moral reasoning.  Prolonged abuse can make people so stuck in submission that they cannot think.

Morality-As-Submission vs. Morality-As-Pattern

Most primates have something like abuse, and thus I’d believe all human societies have it. Patriarchal societies have a normative form of abuse: if the hierarchical family is established as standard, then husbands have certain rights of control and violence over wives, and parents have certain rights of control and violence over children.  In societies with land ownership and monarchs, there are also rights of control and violence of landowners over serfs and slaves, and of rulers over subjects.  Historically, higher-population agrarian societies (think Sumer or neolithic China) had larger and firmer hierarchies than earlier hunter-gatherer and horticultural societies, and probably worse treatment of women.  As Sapolsky notes, stable and particularly inherited hierarchies put greater stress on subordinates. (More about that in a later post.)

To give a stereotypical picture, think of patriarchal agrarian society as Blue in the Spiral Dynamics paradigm.  (This is horoscopey and ahistorical but it gives good archetypes.)  Blue culture means grain cultivation, pyramids and ziggurats, god-kings, temple sacrifices, and the first codes of law.

Not all humans are descended from agrarian-patriarchal cultures, but almost all Europeans and Asians are.

When you have stability, high population, and accumulation of resources, as intensive agriculture allows, you begin to have laws and authorities in a much stronger sense than tribal elders.  Your kings can be richer; your monuments can last longer.  I believe that notions of the absolute and the eternal in morality or religion might develop alongside the ability to have physically permanent objects and lasting power.

And, so, I suspect that this is the origin of the belief that to do right means to obey the father/king, and the worship of supreme gods modeled after a father or king.

To say morality is obedience is not merely to say that it is moral to obey.  Rather, we’re talking about divine command theory.  Goodness is identified with the will of the dominant individual. Inside this headspace, you ask “but what would morality even be if it weren’t a rock to crush me or a chain to bind me?”  It’s fear and submission melded with a sense of the rightness and absolute legitimacy of the dominator.

The “Song of the Sea” is considered by modern Biblical scholars to be the chronologically oldest part of the Bible, dating from the 15th to 5th centuries BC, and echoing praise songs to Mesopotamian gods and kings. God is here no abstract principle or sole creator; he is a “man of war” who defeats other peoples and their gods in battle.  He is to be worshiped not because he is good but because he is fearsome.

But philosophers, even in patriarchal societies, have often had some notion of a “good” which is less like a terrifying warlord and more like a natural law, a pattern in the universe, something to discern rather than someone to submit to.

The ancient Egyptians had ma’at and the Chinese had Heaven, as concepts of abstract justice which wicked earthly rulers could fall short of.  The ancient Greeks had logos, a faculty of reason or speech that allowed one to discern what was good.

Plato neatly disposes of divine command theory in the Euthyphro: if “good” is simply what the gods want, then what should one do if the gods disagree? Since in Greek mythology the gods plainly do disagree, the Good must be something that lies beyond the mere opinion of a powerful individual, human or divine.

As Ben Hoffman put it:

When morality is seen as rules society imposes on us to keep us in line, the superego or parent part is the internalized voice of moral admonition. Likewise, I suspect that in contemporary societies this often includes the internalized voice of the schoolteacher telling you how to do the assignment. This internalized voice of authority feels like an external force compelling you. People often feel tempted to rebel against their own superego or internalized parent.

By contrast, logos and sattva are not seen as internalized narratives – they are described as perceptive faculties. You see what’s right, by seeing the deep structure of reality. The same thing that lets you see the deep patterns in mathematics, lets you see the deep decision-theoretic symmetries underlying truly moral behavior.

This is why it matters so much that theologians such as Maimonides and Augustine were so insistent on the point that God has no body and anthropomorphic references in the Bible are metaphors, and why this point had to be repeated so often and seemed so difficult for their contemporaries to grasp. (Seriously, read The Guide to the Perplexed. It explains separately how each individual Biblical reference to a body part of God is a metaphor — it’s a truly incredible amount of repetition.)

If God has no body, this means that modern (roughly post-Roman-Empire) Jews and Christians worship something more like a principle of goodness than a warlord, even if God is frequently likened to a father or king.  It’s not “might makes right”, but “right makes right.”

The abuse-victim logic of morality-as-submission can have no concept that might might not make right.

But more “mature” ethical philosophies, even if they emerge from authoritarian societies — Christian, Jewish, Confucian, Classical Greek, to name a few that I’m familiar with — can be used as grounds to oppose tyranny and abuse, because they contain the concept of a pattern of justice that transcends the will of any particular man.

Once you can generalize, once you can see pattern, once you notice that humans disagree and kings can be toppled, you have the potential to escape the second-circuit, primate-level, dominant/submissive paradigm.  You can ask “what is right?” and not just “who’s on top?”

An Example of Morality-As-Submission: The Golden Calf

It is generally bad scholarship to read the literal text of the Bible as evidence for what contemporary Jews or Christians believe; that ignores thousands of years of interpretation.  But if you just look at the Bible without context, raw, you can get some kind of an unfiltered impression of the mindset of whoever wrote it — which is quite different from how moderns (religious or not) think, but which still influences us deeply.

So let’s look at Exodus 32–34.

The People of Israel, impatient with Moses taking so long on Mount Sinai, build a golden calf and worship it. Now God gets mad.

7 And the LORD spoke unto Moses: ‘Go, get thee down; for thy people, that thou broughtest up out of the land of Egypt, have dealt corruptly; 8 they have turned aside quickly out of the way which I commanded them; they have made them a molten calf, and have worshipped it, and have sacrificed unto it, and said: This is thy god, O Israel, which brought thee up out of the land of Egypt.’ 9 And the LORD said unto Moses: ‘I have seen this people, and, behold, it is a stiffnecked people.

“Stiff-necked”, meaning stubborn. Meaning “you just do as you damn well please.”  Meaning “you have a will, you choose to do things besides obey me, and that is just galling.”  This is abuser/authoritarian logic: the abuser feels entitled to obedience and especially submission. To be stiff-necked is not to bow the neck.

10 Now therefore let Me alone, that My wrath may wax hot against them, and that I may consume them; and I will make of thee a great nation.’ 11 And Moses besought the LORD his God, and said: ‘LORD, why doth Thy wrath wax hot against Thy people, that Thou hast brought forth out of the land of Egypt with great power and with a mighty hand? 12 Wherefore should the Egyptians speak, saying: For evil did He bring them forth, to slay them in the mountains, and to consume them from the face of the earth? Turn from Thy fierce wrath, and repent of this evil against Thy people. 13 Remember Abraham, Isaac, and Israel, Thy servants, to whom Thou didst swear by Thine own self, and saidst unto them: I will multiply your seed as the stars of heaven, and all this land that I have spoken of will I give unto your seed, and they shall inherit it for ever.’ 14 And the LORD repented of the evil which He said He would do unto His people.

Moses pleads with God to remember his promises and not kill everyone. He even calls the plan of genocide “evil”!  And God, who is here not an implacable force of justice but out-of-control angry, calms down in response to the pleading and moderates his behavior.

But then Moses comes down the mountain, and he gets angry, and he slaughters, not everyone, but 3000 men.

27 And he [Moses] said unto them: ‘Thus saith the LORD, the God of Israel: Put ye every man his sword upon his thigh, and go to and fro from gate to gate throughout the camp, and slay every man his brother, and every man his companion, and every man his neighbour.’ 28 And the sons of Levi did according to the word of Moses; and there fell of the people that day about three thousand men.

 

Notice how, if you’re at all familiar with abusive family dynamics, God is the primary abusive parent, and Moses is the less-abusive, appeasing parent, who tries to protect the children somewhat but still terrorizes them.

Now, God is going to make sure the Israelites know how grateful they should be for his mercy, and beware lest he do anything worse:

1. And the LORD spoke unto Moses: ‘Depart, go up hence, thou and the people that thou hast brought up out of the land of Egypt, unto the land of which I swore unto Abraham, to Isaac, and to Jacob, saying: Unto thy seed will I give it– 2. and I will send an angel before thee; and I will drive out the Canaanite, the Amorite, and the Hittite, and the Perizzite, the Hivite, and the Jebusite– 3. unto a land flowing with milk and honey; for I will not go up in the midst of thee; for thou art a stiffnecked people; lest I consume thee in the way.’ 4. And when the people heard these evil tidings, they mourned; and no man did put on him his ornaments.  5 And the LORD said unto Moses: ‘Say unto the children of Israel: Ye are a stiffnecked people; if I go up into the midst of thee for one moment, I shall consume thee; therefore now put off thy ornaments from thee, that I may know what to do unto thee.’

Note the mourning and the refusal to put on ornaments. You have to show contrition, you can’t relax and make merry, as long as the parent is angry. It’s a submission behavior. The whole house has to be thrown into gloom until the parent says your punishment is over.

Now Moses goes into the Tent of Meeting to pray, very humbly, for God’s forgiveness of the people.  And here, in this context, is where you find the famous Thirteen Attributes of God’s Mercy.

6. And the LORD passed by before him, and proclaimed: ‘The LORD, the LORD, God, merciful and gracious, long-suffering, and abundant in goodness and truth;  7 keeping mercy unto the thousandth generation, forgiving iniquity and transgression and sin; and that will by no means clear the guilty; visiting the iniquity of the fathers upon the children, and upon the children’s children, unto the third and unto the fourth generation.’ 8. And Moses made haste, and bowed his head toward the earth, and worshipped. 9. And he said: ‘If now I have found grace in Thy sight, O Lord, let the Lord, I pray Thee, go in the midst of us; for it is a stiffnecked people; and pardon our iniquity and our sin, and take us for Thine inheritance.’

God is “long-suffering” because he doesn’t kill literally everyone, when he is begged not to.  This “mercy” is more like the “tolerance” that dominant primates display when they get “enough” appeasement behaviors from subordinates.  Of course, people have long taken this passage as an inspiration for real mercy and grace; but in context and without theological interpretation that is not what it looks like.

Now, there’s a long interval of the new tablets of the law being brought down, and instructions being given for the tabernacle and how to give sin-offerings. Eight days later, in Leviticus 10,  God’s explained how to give a sin-offering and Aaron and his sons are actually going to do it, to make atonement for their sins…

…and they do it WRONG.

1. And Nadab and Abihu, the sons of Aaron, took each of them his censer, and put fire therein, and laid incense thereon, and offered strange fire before the LORD, which He had not commanded them. 2. And there came forth fire from before the LORD, and devoured them, and they died before the LORD. 3. Then Moses said unto Aaron: ‘This is it that the LORD spoke, saying: Through them that are nigh unto Me I will be sanctified, and before all the people I will be glorified.’ And Aaron held his peace. 4. And Moses called Mishael and Elzaphan, the sons of Uzziel the uncle of Aaron, and said unto them: ‘Draw near, carry your brethren from before the sanctuary out of the camp.’  5 So they drew near, and carried them in their tunics out of the camp, as Moses had said. 6 And Moses said unto Aaron, and unto Eleazar and unto Ithamar, his sons: ‘Let not the hair of your heads go loose, neither rend your clothes, that ye die not, and that He be not wroth with all the congregation; but let your brethren, the whole house of Israel, bewail the burning which the LORD hath kindled. And ye shall not go out from the door of the tent of meeting, lest ye die; for the anointing oil of the LORD is upon you.’ And they did according to the word of Moses.

Not only does the appeasement ritual of the sin-offering have to be done, it has to be done exactly right, and if you make an error, the world will explode. And note  the form of the error — the priests take initiative, they light a fire that God didn’t specifically tell them to light.  “Did I tell you to light that?”  And now, since God is angry, nobody else is allowed to act upset about the punishment, lest they get in trouble too.

These are not abstract theological ideas that the authors got out of nowhere. These are things that happen in families.

Growing in Poisoned Soil

I don’t mean to make this an anti-religious rant, or imply that religious people systematically support domestic abuse and tyranny. It was, after all, the story of Exodus that inspired American slaves in their fight for freedom.

The point is that this pattern — abuser-logic and abuse-victim logic — is a recurrent feature in the moral intuitions of everyone in a culture with patriarchal roots.

Here we have punishment, not as a deterrent or as a natural consequence of wrong action, but as rage, the fury of an authority who didn’t get the proper “respect.”

Here we have appeasement of that rage interpreted as the virtue of “humility” or “atonement.”

Here we have an intuitive sense that even generic moral words like “should” or “ought” are blows; they are what a dominant individual forces upon a subordinate.

Look at Psalm 51.  This is a prayer of repentance; this is what David sings after he realizes that he was wrong to commit adultery and murder. Sensible things to repent of, no doubt. But the internal logic, though beautiful and emotionally resonant, is crazypants.

“Behold, I was brought forth in iniquity, and in sin did my mother conceive me.” (Wait, you didn’t do anything wrong when you were a fetus, we’re talking about what you did wrong just now.)

“Purge me with hyssop, and I shall be clean; wash me, and I shall be whiter than snow.”  (Yes, guilt does create a desire for cleansing; but you’re expecting God to do the washing?  Only an external force can make you clean?)

“Hide Thy face from my sins, and blot out all mine iniquities.”  (Um, I’m pretty sure your victim is still dead.)

“The sacrifices of God are a broken spirit; a broken and a contrite heart, O God, Thou wilt not despise.” (AAAAAAAAAAA.)

Even legitimate guilt for serious wrongdoing gets conflated with submission and “broken-spiritedness” and pleading for mercy and an intuition of innate taintedness.  This is how morality works when you process it through the second circuit, through the native mammalian intuitions around dominance/submission.

It’s natural, it’s human, it’s easy to empathize with — and it’s quite insane.

It’s also, I think, related to problems specific to women.

If women are traditionally subordinate in your society — and forty years of women’s lib is nowhere near enough to overcome thousands of years of tradition — then women will disproportionately suffer domestic abuse, and even those who don’t will still inherit the kinds of intuitions that captives always do.

A “good girl” is obedient, “innocent” (i.e. lacking in experience, especially sexual experience), and never takes initiative, because initiative can get you in trouble. A “good girl” internalizes that to be “good” is simply to submit and to appease and please.

How can you possibly eliminate those dysfunctions until you attack their roots?

Women have higher rates of depression and anxiety than men. Girl toddlers also have higher rates of shame in response to failure than boy toddlers [6].  Women also have a significantly lower salivary cortisol response to social stress than men.[7] Blunted cortisol response to stress is what you see in PTSD, CFS, and atypical depression, which are all more common in women than men; it occurs more in low-status individuals than high-status ones.[8][9] The psychological and physiological problems most specific to women are also the illnesses associated with low social status and chronic shame.

If we have a society that runs on shame and appeasement, especially for women, then women will be hurt.  Everything we do and think today, including modern liberalism, is built on a base that includes granting legitimacy to abusive power.  I don’t mean this in the sense of “everything is tainted, you must see the world through mud-colored glasses”, but in the sense that this is where our inheritance comes from, these influences are still visible, this is the soil we grew from.

It’s not trivial to break away and create alternatives. People do.  Every concept of goodness-as-pattern or of universal justice is an alternative to abuse-logic, which is always personal and emotional.  But it’s hard to break away completely.

References 

[1]McGuire, Michael T., M. J. Raleigh, and C. Johnson. “Social dominance in adult male vervet monkeys: Behavior-biochemical relationships.” Information (International Social Science Council) 22.2 (1983): 311-328.

[2]Gilbert, Paul, and Michael T. McGuire. “Shame, status, and social roles: Psychobiology and evolution.” (1998).

[3]Keltner, Dacher. “Signs of appeasement: Evidence for the distinct displays of embarrassment, amusement, and shame.” Journal of personality and social psychology 68.3 (1995): 441.

[4]Leary, Mark R., and Robin M. Kowalski. Social anxiety. Guilford Press, 1997.

[5]Sapolsky, Robert M. “The influence of social hierarchy on primate health.” Science 308.5722 (2005): 648-652.

[6]Lewis, Michael, Steven M. Alessandri, and Margaret W. Sullivan. “Differences in shame and pride as a function of children’s gender and task difficulty.” Child development 63.3 (1992): 630-638.

[7]Kirschbaum, Clemens, Stefan Wüst, and Dirk Hellhammer. “Consistent sex differences in cortisol responses to psychological stress.” Psychosomatic Medicine 54.6 (1992): 648-657.

[8]Gruenewald, Tara L., Margaret E. Kemeny, and Najib Aziz. “Subjective social status moderates cortisol responses to social threat.” Brain, behavior, and immunity 20.4 (2006): 410-419.

[9]Miller and Tangney. “Threat, Social-Evaluative, and Self-Conscious Emotion.” 1994.



Mon, 11 Sep 2017 16:20:37 EDT

[Epistemic status: I guess instincts clearly exist, so take this post more as an expression of confusion than as a claim that they don’t.]

Predictive processing isn’t necessarily blank-slatist. But its focus on building concepts out of attempts to generate/predict sense data poses a problem for theories of innate knowledge. PP is more comfortable with deviations from a blank slate that involve the rules of cognition than with those that involve the contents of cognition.

For example, the theory shouldn’t mind the existence of genes for IQ. If the brain works on Bayesian math, some brains might be able to do the calculations more effectively than others. It shouldn’t even mind claims like “girls are more emotional than boys” – that’s just a question of how different hormones affect the Bayesian weighting of logical vs. emotional input.

But evolutionary psychologists make claims like “Men have been evolutionarily programmed to like women with big breasts, because those are a sign of fertility.” Forget for a second whether this is politically correct, or cross-culturally replicable, or anything like that. From a neurological point of view, how could this possibly work?

In Clark’s version of PP, infants laboriously construct all their priors out of sensory evidence. Object permanence takes months. Sensory coordination – the belief that eg the auditory and visual streams describe the same world, so that the same object might be both visible and producing sound – is not assumed. Clark even flirts with the possibility that some really basic assumptions might be learned:

Plausibly, it is only because the world we encounter must be parsed for action and intervention that we encounter, in experience, a relatively unambiguous determinate world at all. Subtract the need for action and the broadly Bayesian framework can seem quite at odds with the phenomenal facts about conscious perceptual experience: our world, it might be said, does not look as if it is encoded in an intertwined set of probability density distributions. Instead, it looks unitary and, on a clear day, unambiguous…biological systems, as mentioned earlier, may be informed by a variety of learned or innate “hyperpriors” concerning the general nature of the world. One such hyperprior might be that the world is usually in one determinate state or another.

I realize he’s not coming out and saying that maybe babies see the world as a probability distribution over hypotheses and only gradually “figure out” that a determinate world is more pragmatic. But he’s sure coming closer to saying that than anybody else I know.

In any case, we work up from these sorts of deep hyperpriors to testing out new models and ideas. Presumably we eventually gain concepts like “breast” after a lot of trial-and-error in which we learn that they generate successful predictions about the sensory world.

In this model, the evolutionary psychological theory seems like a confusion of levels. How do our genes reach out and grab this particular high-level category in the brain, “breast”, to let us know that we’re programmed to find it attractive?

To a first approximation, all a gene does is code for a protein. How, exactly, do you design a protein that makes men find big-breasted women attractive? I mean, I can sort of imagine that if you know what neurons carry the concept of “breast”, you can sort of wire them up to whatever region of the hypothalamus handles sexual attraction, so that whenever you see breasts you feel attraction. But number one, are you sure there’s a specific set of neurons that carry the concept “breast”? And number two, how do you get those neurons (and no others) to express a certain gene?

And if you want to posit an entire complicated breast-locating system made up of hundreds of genes, remember that we only have about 20,000 genes total. Most of these are already involved in doing things like making the walls of lysosomes flexible enough or something really boring like that. Really it’s a miracle that a mere 20,000 genes can make a human at all. So how many of these precious resources do you want to take up constructing some kind of weird Rube-Goldbergesque breast-related brain circuit?

The only excuse I can think of for the evo psych perspective is that it obviously works sometimes. Animals do have instincts; it can’t be learning all the way down.

Sometimes when we really understand those instincts, they do look like weird Rube Goldberg contraptions made of brain circuits. The classic example is baby gulls demanding food from their mother. Adult gulls have a red dot on their beaks, and the baby bird algorithm seems to be “The first thing you see with a red dot is your mother; demand food from her.” Maybe “red dot” is primitive enough that it’s easier to specify genetically than “thing that looks like a mother bird”?
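That heuristic is simple enough to write out – here it is as runnable pseudocode, my paraphrase of the classic ethology result rather than actual gull neuroscience. The point is that keying on one cheap low-level feature is far easier to specify genetically than “thing that looks like a mother bird”:

```python
def first_red_dot_bearer(objects):
    """The baby-gull rule: the first thing seen with a red dot is 'mother'."""
    for obj in objects:
        if obj.get("has_red_dot"):
            return obj
    return None

scene = [
    {"name": "rock"},
    {"name": "cardboard model with red dot", "has_red_dot": True},  # the decoy
    {"name": "actual mother", "has_red_dot": True},
]
print("demand food from:", first_red_dot_bearer(scene)["name"])
# -> the cardboard decoy, exactly the failure mode the experiments exploit
```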

The clearest example I can think of where animals clearly have an instinctive understanding of a high level concept is sex/gender – a few gay humans and penguins aside, Nature seems pretty good at keeping its creatures heterosexual. But this is one of the rare cases where evolution might really want to devote some big fraction of the 20,000 genes it has to work with to building a Rube Goldberg circuit.

Also, maybe we shouldn’t set those few gender-nonconforming humans aside. Remember, autistic people have some kind of impairment in top-down prior-based processing relative to the bottom-up evidence-based kind, and they’re about eight times more likely to be trans than the general population. It sure looks like there’s some kind of process in which people have to infer their gender. And even though evolution seems to be shouting some really loud hints, maybe if you weigh streams of evidence in unusual ways you can end up somewhere unexpected. Evolution may be able to bias the process or control its downstream effects, but it doesn’t seem able to literally hard-code it.

Someone once asked me how to distinguish between good and bad evolutionary psychology. One heuristic might be to have a strong prior against any claim in which genes can just reach into the level of already-formed concepts and tweak them around, unless there’s a really strong reason for evolution to go through a lot of trouble to make it happen.


Mon, 11 Sep 2017 16:14:12 EDT


This is an account of how magical thinking made us modern. When people talk about magical thinking, it is usually as a cognitive feature of children, uneducated people, the mushy-minded, or the mentally ill. If we notice magical thinking in ourselves, it is with a pang of shame: literate adults are supposed to be more […]

Thu, 7 Sep 2017 16:39:05 EDT

[Related to: It’s Bayes All The Way Up, Why Are Transgender People Immune To Optical Illusions?, Can We Link Perception And Cognition?]

I.

Sometimes I have the fantasy of being able to glut myself on Knowledge. I imagine meeting a time traveler from 2500, who takes pity on me and gives me a book from the future where all my questions have been answered, one after another. What’s consciousness? That’s in Chapter 5. How did something arise out of nothing? Chapter 7. It all makes perfect intuitive sense and is fully vouched for by unimpeachable authorities. I assume something like this is how everyone spends their first couple of days in Heaven, whatever it is they do for the rest of Eternity.

And every so often, my fantasy comes true. Not by time travel or divine intervention, but by failing so badly at paying attention to the literature that by the time I realize people are working on a problem it’s already been investigated, experimented upon, organized into a paradigm, tested, and then placed in a nice package and wrapped up with a pretty pink bow so I can enjoy it all at once.

The predictive processing model is one of these well-wrapped packages. Unbeknownst to me, over the past decade or so neuroscientists have come up with a real theory of how the brain works – a real unifying framework theory like Darwin’s or Einstein’s – and it’s beautiful and it makes complete sense.

Surfing Uncertainty isn’t pop science and isn’t easy reading. Sometimes it’s on the border of possible-at-all reading. Author Andy Clark (a professor of logic and metaphysics, of all things!) is clearly brilliant, but prone to going on long digressions about various esoteric philosophy-of-cognitive-science debates. In particular, he’s obsessed with showing how “embodied” everything is all the time. This gets kind of awkward, since the predictive processing model isn’t really a natural match for embodiment theory, and describes a brain which is pretty embodied in some ways but not-so-embodied in others. If you want a hundred pages of apologia along the lines of “this may not look embodied, but if you squint you’ll see how super-duper embodied it really is!”, this is your book.

It’s also your book if you want to learn about predictive processing at all, since as far as I know this is the only existing book-length treatment of the subject. And it’s comprehensive, scholarly, and very good at giving a good introduction to the theory and why it’s so important. So let’s be grateful for what we’ve got and take a look.

II.

Stanislas Dehaene writes of our senses:

We never see the world as our retina sees it. In fact, it would be a pretty horrible sight: a highly distorted set of light and dark pixels, blown up toward the center of the retina, masked by blood vessels, with a massive hole at the location of the “blind spot” where cables leave for the brain; the image would constantly blur and change as our gaze moved around. What we see, instead, is a three-dimensional scene, corrected for retinal defects, mended at the blind spot, stabilized for our eye and head movements, and massively reinterpreted based on our previous experience of similar visual scenes. All these operations unfold unconsciously—although many of them are so complicated that they resist computer modeling. For instance, our visual system detects the presence of shadows in the image and removes them. At a glance, our brain unconsciously infers the sources of lights and deduces the shape, opacity, reflectance, and luminance of the objects.

Predictive processing begins by asking: how does this happen? By what process do our incomprehensible sense-data get turned into a meaningful picture of the world?

The key insight: the brain is a multi-layer prediction machine. All neural processing consists of two streams: a bottom-up stream of sense data, and a top-down stream of predictions. These streams interface at each level of processing, comparing themselves to each other and adjusting themselves as necessary.

The bottom-up stream starts out as all that incomprehensible light and darkness and noise that we need to process. It gradually moves up all the cognitive layers that we already knew existed – the edge-detectors that resolve it into edges, the object-detectors that shape the edges into solid objects, et cetera.

The top-down stream starts with everything you know about the world, all your best heuristics, all your priors, everything that’s ever happened to you before – everything from “solid objects can’t pass through one another” to “e=mc^2” to “that guy in the blue uniform is probably a policeman”. It uses its knowledge of concepts to make predictions – not in the form of verbal statements, but in the form of expected sense data. It makes some guesses about what you’re going to see, hear, and feel next, and asks “Like this?” These predictions gradually move down all the cognitive layers to generate lower-level predictions. If that uniformed guy was a policeman, how would that affect the various objects in the scene? Given the answer to that question, how would it affect the distribution of edges in the scene? Given the answer to that question, how would it affect the raw-sense data received?

Both streams are probabilistic in nature. The bottom-up sensory stream has to deal with fog, static, darkness, and neural noise; it knows that whatever forms it tries to extract from this signal might or might not be real. For its part, the top-down predictive stream knows that predicting the future is inherently difficult and its models are often flawed. So both streams contain not only data but estimates of the precision of that data. A bottom-up percept of an elephant right in front of you on a clear day might be labelled “very high precision”; one of a vague form in a swirling mist far away might be labelled “very low precision”. A top-down prediction that water will be wet might be labelled “very high precision”; one that the stock market will go up might be labelled “very low precision”.

As these two streams move through the brain side-by-side, they continually interface with each other. Each level receives the predictions from the level above it and the sense data from the level below it. Then each level uses Bayes’ Theorem to integrate these two sources of probabilistic evidence as best it can. This can end up a couple of different ways.

First, the sense data and predictions may more-or-less match. In this case, the layer stays quiet, indicating “all is well”, and the higher layers never even hear about it. The higher levels just keep predicting whatever they were predicting before.

Second, low-precision sense data might contradict high-precision predictions. The Bayesian math will conclude that the predictions are still probably right, but the sense data are wrong. The lower levels will “cook the books” – rewrite the sense data to make it look as predicted – and then continue to be quiet and signal that all is well. The higher levels continue to stick to their predictions.

Third, there might be some unresolvable conflict between high-precision sense-data and predictions. The Bayesian math will indicate that the predictions are probably wrong. The neurons involved will fire, indicating “surprisal” – a gratuitously-technical neuroscience term for surprise. The higher the degree of mismatch, and the higher the supposed precision of the data that led to the mismatch, the more surprisal – and the louder the alarm sent to the higher levels.

When the higher levels receive the alarms from the lower levels, this is their equivalent of bottom-up sense-data. They ask themselves: “Did the even-higher-levels predict this would happen?” If so, they themselves stay quiet. If not, they might try to change their own models that map higher-level predictions to lower-level sense data. Or they might try to cook the books themselves to smooth over the discrepancy. If none of this works, they send alarms to the even-higher-levels.

All the levels really hate hearing alarms. Their goal is to minimize surprisal – to become so good at predicting the world (conditional on the predictions sent by higher levels) that nothing ever surprises them. Surprise prompts a frenzy of activity adjusting the parameters of models – or deploying new models – until the surprise stops.
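Here is what one layer’s bookkeeping might look like – a deliberately crude sketch using Gaussian precision-weighting, where the alarm threshold and the particular surprisal measure are my inventions rather than anything from Clark. It covers the cases above: low-precision mismatches get the books cooked, high-precision mismatches send an alarm upstairs.

```python
import math

def level_update(mu_pred, prec_pred, x, prec_data, alarm_threshold=2.0):
    """One toy PP layer: fuse the top-down prediction with bottom-up data,
    and alarm the level above only when confident data contradicts it."""
    prec_post = prec_pred + prec_data
    mu_post = (prec_pred * mu_pred + prec_data * x) / prec_post
    # Mismatch measured in units of the data's own uncertainty.
    surprisal = abs(x - mu_pred) * math.sqrt(prec_data)
    return mu_post, surprisal > alarm_threshold

# Low-precision data vs. a confident prediction: books get cooked, no alarm.
print(level_update(mu_pred=0.0, prec_pred=9.0, x=3.0, prec_data=0.1))
# High-precision data vs. the same prediction: surprisal, alarm goes up.
print(level_update(mu_pred=0.0, prec_pred=9.0, x=3.0, prec_data=9.0))
```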

All of this happens several thousand times a second. The lower levels constantly shoot sense data at the upper levels, which constantly adjust their hypotheses and shoot them down at the lower levels. When surprise is registered, the relevant levels change their hypotheses or pass the buck upwards. After umpteen zillion cycles, everyone has the right hypotheses, nobody is surprised by anything, and the brain rests and moves on to the next task. As per the book:

To deal rapidly and fluently with an uncertain and noisy world, brains like ours have become masters of prediction – surfing the waves of noisy and ambiguous sensory stimulation by, in effect, trying to stay just ahead of them. A skilled surfer stays ‘in the pocket’: close to, yet just ahead of the place where the wave is breaking. This provides power and, when the wave breaks, it does not catch her. The brain’s task is not dissimilar. By constantly attempting to predict the incoming sensory signal we become able – in ways we shall soon explore in detail – to learn about the world around us and to engage that world in thought and action.

The result is perception, which the PP theory describes as “controlled hallucination”. You’re not seeing the world as it is, exactly. You’re seeing your predictions about the world, cashed out as expected sensations, then shaped/constrained by the actual sense data.

III.

Enough talk. Let’s give some examples. Most of you have probably seen these before, but it never hurts to remind:

[Image: dalmatian_cow1.png – the classic hidden-Dalmatian and hidden-cow illusions]

This demonstrates the degree to which the brain depends on top-down hypotheses to make sense of the bottom-up data. To most people, these two pictures start off looking like incoherent blotches of light and darkness. Once they figure out what they are (spoiler) the scene becomes obvious and coherent. According to the predictive processing model, this is how we perceive everything all the time – except usually the concepts necessary to make the scene fit together come from our higher-level predictions instead of from clicking on a spoiler link.

[Image: topdown2.gif]

This demonstrates how the top-down stream’s efforts to shape the bottom-up stream and make it more coherent can sometimes “cook the books” and alter sensation entirely. The real picture says “PARIS IN THE THE SPRINGTIME” (note the duplicated word “the”!). The top-down stream predicts this should be a meaningful sentence that obeys English grammar, and so replaces the the bottom-up stream with what it thinks that it should have said. This is a very powerful process – how many times have I repeated the the word “the” in this paragraph alone without you noticing?

[Image: fake_writing.png]

A more ambiguous example of “perception as controlled hallucination”. Here your experience doesn’t quite deny the jumbled-up nature of the letters, but it superimposes a “better” and more coherent experience which appears naturally alongside.

Next up – this low-quality video of an airplane flying at night. Notice how after an instant, you start to predict the movement and characteristics of the airplane, so that you’re no longer surprised by the blinking light, the movement, the other blinking light, the camera shakiness, or anything like that – in fact, if the light stopped blinking, you would be surprised, even though naively nothing could be less surprising than a dark portion of the night sky staying dark. After a few seconds of this, the airplane continuing on its (pretty complicated) way just reads as “same old, same old”. Then when something else happens – like the camera panning out, or the airplane making a slight change in trajectory – you focus entirely on that, the blinking lights and movement entirely forgotten or at least packed up into “airplane continues on its blinky way”. Meanwhile, other things – like the feeling of your shirt against your skin – have been completely predicted away and blocked from consciousness, freeing you to concentrate entirely on any subtle changes in the airplane’s motion.

In the same vein: this is Rick Astley’s “Never Gonna Give You Up” repeated again and again for ten hours (you can find some weird stuff on YouTube). The first hour, maybe you find yourself humming along occasionally. By the second hour, maybe it’s gotten kind of annoying. By the third hour, you’ve completely forgotten it’s even on at all.

But suppose that one time, somewhere around the sixth hour, it skipped two notes – just the two syllables “never”, so that Rick said “Gonna give you up.” Wouldn’t the silence where those two syllables should be sound as jarring as if somebody set off a bomb right beside you? Your brain, having predicted sounds consistent with “Never Gonna Give You Up” going on forever, suddenly finds its expectations violated and sends all sorts of alarms to the higher levels, where they eventually reach your consciousness and make you go “What the heck?”

IV.

Okay. You’ve read a lot of words. You’ve looked at a lot of pictures. You’ve listened to “Never Gonna Give You Up” for ten hours. Time for the payoff. Let’s use this theory to explain everything.

1. Attention. In PP, attention measures “the confidence interval of your predictions”. Sense-data within the confidence intervals counts as a match and doesn’t register surprisal. Sense-data outside the confidence intervals fails and alerts higher levels and eventually consciousness.

This modulates the balance between the top-down and bottom-up streams. High attention means that perception is mostly based on the bottom-up stream, since every little deviation is registering an error and so the overall perceptual picture is highly constrained by sensation. Low attention means that perception is mostly based on the top-down stream, and you’re perceiving only a vague outline of the sensory image with your predictions filling in the rest.
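
As a sketch – mine, with invented numbers – you can think of attention as a gain on sensory precision, sliding perception between the two streams:

    # Attention scales how much the data are trusted relative to the prior.

    def perceive(prediction, sense_data, attention):
        prior_precision = 1.0
        data_precision = attention   # high attention = narrow confidence intervals
        total = prior_precision + data_precision
        return (prior_precision * prediction + data_precision * sense_data) / total

    print(perceive(prediction=0.0, sense_data=1.0, attention=10.0))  # ~0.91: mostly data
    print(perceive(prediction=0.0, sense_data=1.0, attention=0.1))   # ~0.09: mostly prior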

There’s a famous experiment which you can try below – if you’re trying it, make sure to play the whole video before moving on:

About half of subjects, told to watch the players passing the ball, don’t notice the gorilla. Their view of the ball-passing is closely constrained by the bottom-up stream; they see mostly what is there. But their view of the gorilla is mostly dependent on the top-down stream. Their confidence intervals are wide. Somewhere in your brain is a neuron saying “is that a guy in a gorilla suit?” Then it consults the top-down stream, which says “This is a basketball game, you moron”, and it smooths out the anomalous perception into something that makes sense like another basketball player.

But if you watch the video with the prompt “Look for something strange happening in the midst of all this basketball-playing”, you see the gorilla immediately. Your confidence intervals for unusual things are razor-thin; as soon as that neuron sees the gorilla it sends alarms to higher levels, and the higher levels quickly come up with a suitable hypothesis (“there’s a guy in a gorilla suit here”) which makes sense of the new data.

There’s an interesting analogy to vision here, where the center of your vision is very clear, and the outsides are filled in in a top-down way – I have a vague sense that my water bottle is in the periphery right now, but only because I kind of already know that, and it’s more of a mental note of “water bottle here as long as you ask no further questions” than a clear image of it. The extreme version of this is the blind spot, which gets filled in entirely with predicted imagery despite receiving no sensation at all.

2. Imagination, Simulation, Dreaming, Etc. Imagine a house. Now imagine a meteor crashing into the house. Your internal mental simulation was probably pretty good. Without even thinking about it, you got it to obey accurate physical laws like “the meteor continues on a constant trajectory”, “the impact happens in a realistic way”, “the impact shatters the meteorite”, and “the meteorite doesn’t bounce back up to space like a basketball”. Think how surprising this is.

In fact, think how surprising it is that you can imagine the house at all. This really high level concept – “house” – has been transformed in your visual imaginarium into a pretty good picture of a house, complete with various features, edges, colors, et cetera (if it hasn’t, read here). This is near-miraculous. Why do our brains have this apparently useless talent?

PP says that the highest levels of our brain make predictions in the form of sense data. They’re not just saying “I predict that guy over there is a policeman”, they’re generating the image of a policeman, cashing it out in terms of sense data, and colliding it against the sensory stream to see how it fits. The sensory stream gradually modulates it to fit the bottom-up evidence – a white or black policeman, a mustached or clean-shaven policeman. But the top-down stream is doing a lot of the work here. We are able to imagine the meteor, using the same machinery that would guide our perception of the meteor if we saw it up in the sky.

All of this goes double for dreaming. If perception is “controlled hallucination” – the top-down drivers of perception constrained by bottom-up evidence – then dreams are those top-down drivers playing around with themselves unconstrained by anything at all (or else very weakly constrained by bottom-up evidence, like when it’s really cold in your bedroom and you dream you’re exploring the North Pole).

A lot of people claim higher levels of this – lucid dreaming, astral projection, you name it, worlds exactly as convincing as our own but entirely imaginary. Predictive processing is very sympathetic to these accounts. The generative models that create predictions are really good; they can simulate the world well enough that it rarely surprises us. They also connect through various layers to our bottom-level perceptual apparatus, cashing out their predictions in terms of the lowest-level sensory signals. Given that we’ve got a top-notch world-simulator plus perception-generator in our heads, it shouldn’t be surprising when we occasionally perceive ourselves in simulated worlds.
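
One way to caricature this in code (my illustration, nothing official): perception is the generator pinned down by sense data, and dreaming is the same generator sampling freely:

    import random

    def experience(prior_mean, sense_data=None, data_weight=0.9):
        sample = random.gauss(prior_mean, 1.0)   # the top-down generator
        if sense_data is None:                   # asleep: nothing to constrain it
            return sample
        return data_weight * sense_data + (1 - data_weight) * sample

    print(experience(5.0, sense_data=5.2))  # perception: pinned near the data
    print(experience(5.0))                  # dream: wanders around the prior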

3. Priming. I don’t mean the weird made-up kinds of priming that don’t replicate. I mean the very firmly established ones, like the one where, if you flash the word “DOCTOR” at a subject, they’ll be much faster and more skillful in decoding a series of jumbled and blurred letters into the word “NURSE”.

This is classic predictive processing. The top-down stream’s whole job is to assist the bottom-up stream in making sense of complicated fuzzy sensory data. After it hears the word “DOCTOR”, the top-down stream is already thinking “Okay, so we’re talking about health care professionals”. This creeps through all the lower levels as a prior for health-care related things; when the sense organs receive data that can be interpreted in a health-care related manner, the high prior boosts that interpretation until it quickly becomes the overwhelming leading hypothesis.
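
A toy Bayesian version of why the prime helps – my numbers, purely illustrative:

    # The prime raises the prior on health-care words, so the same smudged
    # evidence pushes "NURSE" over the threshold much sooner.

    def posterior(prior, likelihood, alt_prior, alt_likelihood):
        return prior * likelihood / (prior * likelihood + alt_prior * alt_likelihood)

    blurry = 0.6   # likelihood of the smudged letters given "NURSE"
    other = 0.4    # likelihood given some other word

    print(posterior(0.01, blurry, 0.99, other))  # unprimed: ~0.015
    print(posterior(0.30, blurry, 0.70, other))  # primed by "DOCTOR": ~0.39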

4. Learning. There’s a philosophical debate – which I’m not too familiar with, so sorry if I get it wrong – about how “unsupervised learning” is possible. Supervised learning is when an agent tries various stuff, and then someone tells the agent whether it’s right or wrong. Unsupervised learning is when nobody’s around to tell you, and it’s what humans do all the time.

PP offers a compelling explanation: we create models that generate sense data, and keep those models if the generated sense data match observation. Models that predict sense data well stick around; models that fail to predict the sense data accurately get thrown out. Because of all those lower layers adjusting out contingent features of the sensory stream, any given model is left with exactly the sense data necessary to tell it whether it’s right or wrong.
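
As a sketch of what “keep the models that predict well” might look like – mine, not the book’s; the stream and the candidate models are made up:

    # No teacher labels anything: each generative model is scored purely by
    # how well its generated sense data match what actually arrives.

    def prediction_error(model, observations):
        return sum((model(t) - obs) ** 2 for t, obs in enumerate(observations))

    observations = [0, 1, 2, 3, 4, 5]  # invented sensory stream

    candidates = {
        "world is constant": lambda t: 2.5,
        "world drifts upward": lambda t: float(t),
    }

    best = min(candidates, key=lambda name: prediction_error(candidates[name], observations))
    print(best)  # -> "world drifts upward"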

PP isn’t exactly blank slatist, but it’s compatible with a slate that’s pretty fricking blank. Clark discusses “hyperpriors” – extremely basic assumptions about the world that we probably need to make sense of anything at all. For example, one hyperprior is sensory synchronicity – the idea that our five different senses are describing the same world, and that the stereo we see might be the source of the music we hear. Another hyperprior is object permanence – the idea that the world is divided into specific objects that stick around whether or not they’re in the sensory field. Clark says that some hyperpriors might be innate – but says they don’t have to be, since PP is strong enough to learn them on its own if it has to. For example, after enough examples of, say, seeing a stereo being smashed with a hammer at the same time that music suddenly stops, the brain can infer that connecting the visual and auditory evidence together is a useful hack that helps it to predict the sensory stream.

I can’t help thinking here of Molyneux’s Problem, a thought experiment about a blind-from-birth person who navigates the world through touch alone. If suddenly given sight, could the blind person naturally connect the visual appearance of a cube to her own concept “cube”, which she derived from the way cubes feel? In 2003, some researchers took advantage of a new cutting-edge blindness treatment to test this out; they found that no, the link isn’t intuitively obvious to the newly sighted. Score one for learned hyperpriors.

But learning goes all the way from these really basic hyperpriors up to normal learning, like what the capital of France is – which, if nothing else, helps predict what’s going to be on the other side of your geography flashcard, and which high-level systems might keep as a useful concept to help them make sense of the world and predict events.

5. Motor Behavior. About a third of Surfing Uncertainty is on the motor system; it mostly didn’t seem that interesting to me, and I don’t have time to do it justice here (I might make another post on one especially interesting point). But it’s been kind of ignored so far. If the brain is mostly just in the business of making predictions, what exactly is the motor system doing?

Based on a bunch of really excellent experiments that I don’t have time to describe here, Clark concludes: it’s predicting action, which causes the action to happen.

This part is almost funny. Remember, the brain really hates prediction error and does its best to minimize it. With failed predictions about eg vision, there’s not much you can do except change your models and try to predict better next time. But with predictions about proprioceptive sense data (ie your sense of where your joints are), there’s an easy way to resolve prediction error: just move your joints so they match the prediction. So (and I’m asserting this, but see Chapters 4 and 5 of the book to hear the scientific case for this position) if you want to lift your arm, your brain just predicts really really strongly that your arm has been lifted, and then lets the lower levels’ drive to minimize prediction error do the rest.

Under this model, the “prediction” of a movement isn’t just the idle thought that a movement might occur, it’s the actual motor program. This gets unpacked at all the various layers – joint sense, proprioception, the exact tension level of various muscles – and finally ends up in a particular fluid movement:

Friston and colleagues…suggest that precise proprioceptive predictions directly elicit motor actions. This means that motor commands have been replaced by (or as I would rather say, implemented by) proprioceptive predictions. According to active inference, the agent moves body and sensors in ways that amount to actively seeking out the sensory consequences that their brains expect. Perception, cognition, and action – if this unifying perspective proves correct – work together to minimize sensory prediction errors by selectively sampling and actively sculpting the stimulus array. This erases any fundamental computational line between perception and the control of action. There remains [only] an obvious difference in direction of fit. Perception here matches neural hypotheses to sensory inputs…while action brings unfolding proprioceptive inputs into line with neural predictions. The difference, as Anscombe famously remarked, is akin to that between consulting a shopping list (thus letting the list determine the contents of the shopping basket) and listing some actually purchased items (thus letting the contents of the shopping basket determine the list). But despite the difference in direction of fit, the underlying form of the neural computations is now revealed as the same.
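
A toy version of that direction-of-fit point – my illustration, not Clark’s or Friston’s code. The proprioceptive “prediction” stays fixed, and the arm moves to cancel the error:

    def lift_arm(current_angle, predicted_angle=90.0, gain=0.2, steps=30):
        for _ in range(steps):
            error = predicted_angle - current_angle  # proprioceptive prediction error
            current_angle += gain * error            # resolved by *moving*, not by
        return current_angle                         # revising the prediction

    print(lift_arm(0.0))  # -> ~89.9: the arm ends up where it was "predicted" to be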

6. Tickling Yourself. One consequence of the PP model is that organisms are continually adjusting out their own actions. For example, if you’re trying to predict the movement of an antelope you’re chasing across the visual field, you need to adjust out the up-down motion of your own running. So one “hyperprior” that the body probably learns pretty early is that if it itself makes a motion, it should expect to feel the consequences of that motion.

There’s a really interesting illusion called the force-matching task. A researcher exerts some force against a subject, then asks the subject to exert exactly that much force against something else. Subjects’ forces are usually biased upwards – they exert more force than they were supposed to – probably because their brain’s prediction engines are “cancelling out” their own force. Clark describes one interesting implication:

The same pair of mechanisms (forward-model-based prediction and the dampening of resulting well-predicted sensation) have been invoked to explain the unsettling phenomenon of ‘force escalation’. In force escalation, physical exchanges (playground fights being the most common exemplar) mutually ramp up via a kind of step-ladder effect in which each person believes the other one hit them harder. Shergill et al describe experiments that suggest that in such cases each person is truthfully reporting their own sensations, but that those sensations are skewed by the attenuating effects of self-prediction. Thus, ‘self-generated forces are perceived as weaker than externally generated forces of equal magnitude.’

This also explains why you can’t tickle yourself – your body predicts and adjusts away your own actions, leaving only an attenuated version.
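
Here’s a toy model of that step-ladder – the attenuation figure is an assumption of mine, not a number from Shergill et al:

    # Each side feels its own blow at a discount, so "hitting back just as
    # hard" quietly escalates turn by turn.

    attenuation = 0.7  # assumed: self-generated force feels 70% as strong
    force = 1.0
    for turn in range(6):
        felt_incoming = force                # the other's blow, felt at full strength
        force = felt_incoming / attenuation  # a "matching" reply, corrected for numb fist
        print(f"turn {turn}: force = {force:.2f}")
    # grows ~43% per exchange even though both sides think they're matching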

7. The Placebo Effect. We hear a lot about “pain gating” in the spine, but the PP model does a good job of explaining what this is: adjusting pain based on top-down priors. If you believe you should be in pain, the brain will use that as a filter to interpret ambiguous low-precision pain signals. If you believe you shouldn’t, the brain will be more likely to assume ambiguous low-precision pain signals are a mistake. So if you take a pill that doctors assure you will cure your pain, then your lower layers are more likely to interpret pain signals as noise, “cook the books” and prevent them from reaching your consciousness.

Psychosomatic pain is the opposite of this; see Section 7.10 of the book for a fuller explanation.

8. Asch Conformity Experiment. More speculative, and not from the book. But remember this one? A psychologist asked subjects which lines were the same length as other lines. The lines were all kind of similar lengths, but most subjects were still able to get the right answer. Then he put the subjects in a group with confederates; all of the confederates gave the same wrong answer. When the subject’s turn came, usually they would disbelieve their eyes and give the same wrong answer as the confederates.

The bottom-up stream provided some ambiguous low-precision evidence pointing toward one line. But in the final Bayesian computation, that evidence was swamped by the strong top-down prediction that it would be another. So the middle layers “cooked the books” and replaced the perceived sensation with the predicted one. From Wikipedia:

Participants who conformed to the majority on at least 50% of trials reported reacting with what Asch called a “distortion of perception”. These participants, who made up a distinct minority (only 12 subjects), expressed the belief that the confederates’ answers were correct, and were apparently unaware that the majority were giving incorrect answers.

9. Neurochemistry. PP offers a route to a psychopharmacological holy grail – an explanation of what different neurotransmitters really mean, on a human-comprehensible level. Previous attempts to do this, like “dopamine represents reward, serotonin represents calmness”, have been so wildly inadequate that the whole question seems kind of disreputable these days.

But as per PP, the NMDA glutamatergic system mostly carries the top-down stream, the AMPA glutamatergic system mostly carries the bottom-up stream, and dopamine mostly carries something related to precision, confidence intervals, and surprisal levels. This matches a lot of observational data in a weirdly consistent way – for example, it doesn’t take a lot of imagination to think of the slow, hesitant movements of Parkinson’s disease as having “low motor confidence”.

10. Autism. Various research in the PP tradition has coalesced around the idea of autism as an unusually high reliance on bottom-up rather than top-down information, leading to “weak central coherence” and constant surprisal as the sensory data fails to fall within pathologically narrow confidence intervals.

Autistic people classically can’t stand tags on clothing – they find them too scratchy and annoying. Remember the example from Part III about how you successfully predicted away the feeling of the shirt on your back, and so manage never to think about it when you’re trying to concentrate on more important things? Autistic people can’t do that as well. Even though they have a layer in their brain predicting “will continue to feel shirt”, the prediction is too precise; it predicts that next second, the shirt will produce exactly the same pattern of sensations it does now. But realistically as you move around or catch passing breezes the shirt will change ever so slightly – at which point autistic people’s brains will send alarms all the way up to consciousness, and they’ll perceive it as “my shirt is annoying”.
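
To put toy numbers on “the prediction is too precise” (mine, not the book’s): for a Gaussian prediction, surprisal scales with the squared mismatch divided by the predicted variance, so shrinking the predicted spread blows tiny fluctuations up into alarms:

    def surprisal(x, mean, sigma):
        return 0.5 * ((x - mean) / sigma) ** 2

    tiny_breeze = 1.02  # the shirt feels 2% different from a second ago

    print(surprisal(tiny_breeze, mean=1.0, sigma=0.5))    # ~0.0008: predicted away
    print(surprisal(tiny_breeze, mean=1.0, sigma=0.005))  # ~8.0: alarms reach consciousness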

Or consider the classic autistic demand for routine, and misery as soon as the routine is disrupted. Because their brains can only make very precise predictions, the slightest disruption to routine registers as strong surprisal, strong prediction failure, and “oh no, all of my models have failed, nothing is true, anything is possible!” Compare to a neurotypical person in the same situation, who would just relax their confidence intervals a little bit and say “Okay, this is basically 99% like a normal day, whatever”. It would take something genuinely unpredictable – like being thrown on an unexplored continent – to give neurotypical people the same feeling of surprise and unpredictability.

This model also predicts autistic people’s strengths. We know that polygenic risk for autism is positively associated with IQ. This would make sense if the central feature of autism was a sort of increased mental precision. It would also help explain why autistic people seem to excel in high-need-for-precision areas like mathematics and computer programming.

11. Schizophrenia. Converging lines of research suggest this also involves weak priors, apparently at a different level to autism and with different results after various compensatory mechanisms have had their chance to kick in. One especially interesting study asked neurotypicals and schizophrenics to follow a moving light, much like the airplane video in Part III above. When the light moved in a predictable pattern, the neurotypicals were much better at tracking it; when it was a deliberately perverse video specifically designed to frustrate expectations, the schizophrenics actually did better. This suggests that neurotypicals were guided by correct top-down priors about where the light would be going; schizophrenics had very weak priors and so weren’t really guided very well, but also didn’t screw up when the light did something unpredictable. Schizophrenics are also famous for not being fooled by the “hollow mask” (below) and other illusions where top-down predictions falsely constrain bottom-up evidence. My guess is they’d be more likely to see both ‘the’s in the “PARIS IN THE THE SPRINGTIME” image above.

[Image: spinning_mask.gif]

The exact route from this sort of thing to schizophrenia is really complicated, and anyone interested should check out Section 2.12 and the whole of Chapter 7 from the book. But the basic story is that it creates waves of anomalous prediction error and surprisal, leading to the so-called “delusions of significance” where schizophrenics believe that eg the fact that someone is wearing a hat is some sort of incredibly important cosmic message. Schizophrenics’ brains try to produce hypotheses that explain all of these prediction errors and reduce surprise – which is impossible, because the prediction errors are random. This results in incredibly weird hypotheses, and eventually in schizophrenic brains being willing to ignore the bottom-up stream entirely – hence hallucinations.

All this is treated with antipsychotics, which antagonize dopamine, which – remember – represents confidence level. So basically the medication is telling the brain “YOU CAN IGNORE ALL THIS PREDICTION ERROR, EVERYTHING YOU’RE PERCEIVING IS TOTALLY GARBAGE SPURIOUS DATA” – which turns out to be exactly the message it needs to hear.

An interesting corollary of all this – because all of schizophrenics’ predictive models are so screwy, they lose the ability to use the “adjust away the consequences of your own actions” hack discussed in part 6 of this section. That means their own actions don’t get predicted out, and seem like the actions of a foreign agent. This is why they get so-called “delusions of agency”, like “the government beamed that thought into my brain” or “aliens caused my arm to move just now”. And in case you were wondering – yes, schizophrenics can tickle themselves.

12. Everything else. I can’t possibly do justice to the whole of Surfing Uncertainty, which includes sections in which it provides lucid and compelling PP-based explanations of hallucinations, binocular rivalry, conflict escalation, and various optical illusions. More speculatively, I can think of really interesting connections to things like phantom limbs, creativity (and its association with certain mental disorders), depression, meditation, etc, etc, etc.

The general rule in psychiatry is: if you think you’ve found a theory that explains everything, diagnose yourself with mania and check yourself into the hospital. Maybe I’m not at that point yet – for example, I don’t think PP does anything to explain what mania itself is. But I’m pretty close.

V.

This is a really poor book review of Surfing Uncertainty, because I only partly understood it. I’m leaving out a lot of stuff about the motor system, debate over philosophical concepts with names like “enactivism”, descriptions of how neurons form and unform coalitions, and of course a hundred pages of apologia along the lines of “this may not look embodied, but if you squint you’ll see how super-duper embodied it really is!”. As I reread and hopefully come to understand some of this better, it might show up in future posts.

But speaking of philosophical debates, there’s one thing that really struck me about the PP model.

Voodoo psychology suggests that culture and expectation tyrannically shape our perceptions. Taken to an extreme, objective knowledge is impossible, since all our sense-data is filtered through our own bias. Taken to a very far extreme, we get things like What The !@#$ Do We Know?‘s claim that the Native Americans literally couldn’t see Columbus’ ships, because they had no concept of “caravel” and so the percept just failed to register. This sort of thing tends to end by arguing that science was invented by straight white men, and so probably just reflects straight white maleness, and so we should ignore it completely and go frolic in the forest or something.

Predictive processing is sympathetic to all this. It takes all of this stuff, like priming and the placebo effect, and predicts it handily. But it doesn’t give up. It (theoretically) puts it all on a sound mathematical footing, explaining exactly how much our expectations should shape our reality, and in which ways our expectations should shape our reality. I feel like someone armed with predictive processing and a bit of luck should have been able to predict that the placebo effect and basic priming would work, but stereotype threat and social priming wouldn’t. Maybe this is total retrodictive cheating. But I feel like it should be possible.

If this is true, it gives us more confidence that our perceptions should correspond – at least a little – to the external world. We can accept that we may be misreading “PARIS IN THE THE SPRINGTIME” while remaining confident that we wouldn’t misread “PARIS IN THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE THE SPRINGTIME” as containing only one “the”. Top-down processing very occasionally meddles in bottom-up sensation, but (as long as you’re not schizophrenic), it sticks to an advisory role rather than being able to steamroll over arbitrary amounts of reality.

The rationalist project is overcoming bias, and that requires both an admission that bias is possible, and a hope that there’s something other than bias which we can latch onto as a guide. Predictive processing gives us more confidence in both, and helps provide a convincing framework we can use to figure out what’s going on at all levels of cognition.


Tue, 5 Sep 2017 17:05:40 EDT

I.

Philosopher Amanda Askell questions the practice of moral offsetting.

Offsetting is where you compensate for a bad thing by doing a good thing, then consider yourself even. For example, an environmentalist takes a carbon-belching plane flight, then pays to clean up the same amount of carbon she released.

This can be pretty attractive. If you’re a committed environmentalist, but also really want to take a vacation to Europe, you could be pretty miserable not knowing whether your vacation is worth the cost to the planet. But if you can calculate that it would take about $70 to clean up more carbon than you release, that’s such a small addition to the overall cost of the trip that you can sigh with relief and take the flight guilt-free.

Or use offsets instead of becoming vegetarian. A typical person’s meat consumption averages 0.3 cows and 40 chickens per year. Animal Charity Evaluators believes that donating to a top animal charity can spare this many animals’ lives for less than $5; others note this number is totally wrong and made up. But it’s hard to believe charities could be less cost-effective than just literally buying the animals; this would cap the price of offsetting a year’s meat consumption at around $500. Would I pay between $5 and $500 a year not to have to be a vegetarian? You bet.
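
For what it’s worth, here’s the back-of-the-envelope version of that upper bound – the animal prices are my assumptions, picked only to show how a figure like $500 falls out:

    cows_per_year = 0.3
    chickens_per_year = 40
    cow_price = 1_000  # assumed price to just buy a whole cow, USD
    chicken_price = 5  # assumed price to just buy a whole chicken, USD

    upper_bound = cows_per_year * cow_price + chickens_per_year * chicken_price
    print(upper_bound)  # -> 500.0, the ~$500/year ceiling mentioned above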

Askell is uncomfortable with this concept for the same reasons I was when I first heard about it. Can we kill an enemy, then offset it with enough money to save somebody else’s life? Smash other people’s property, then give someone else enough money to buy different property? Can Bill Gates nuke entire cities for fun, then build better cities somewhere else?

She concludes:

There are a few different things that the harm-based ethicist could say in response to this, however. First, they could point out that as the immorality of the action increases, it becomes far less likely that performing this action and morally offsetting is the best option available, even out of those options that actualists would deem morally relevant. Second, it is very harmful to undermine social norms where people don’t behave immorally and compensate for it (imagine how terrible it would be to live in a world where this was acceptable). Third, it is – in expectation – bad to become the kind of person who offsets their moral harms. Such a person will usually have a much worse expected impact on the world than someone who strives to be as moral as they can be.

I think that these are compelling reasons to think that, in the actual world, we are – at best – morally permitted to offset trivial immoral actions, but that more serious immoral actions are almost always not the sorts of things we can morally offset. But I also think that the fact that these arguments all depend on contingent features of the world should be concerning to those who defend harm-based views in ethics.

I think Askell gets the right answer here – you can offset carbon emissions but not city-nuking. And I think her reasoning sort of touches on some of the important considerations. But I also think there’s a much more elegant theory that gives clear answers to these kinds of questions, and which relieves some of my previous doubts about the offsetting idea.

II.

Everything below is taken from vague concepts philosophers talk about all the time, but which I can’t find a single good online explanation of. I neither deserve credit for anything good about the ideas, nor can I avoid blame for any mistakes or confusions in the phrasing. That having been said: consider the distinction between axiology, morality, and law.

Axiology is the study of what’s good. If you want to get all reductive, think of it as comparing the values of world-states. A world-state where everybody is happy seems better than a world-state where everybody is sad. A world-state with lots of beautiful art is better than a world-state containing only featureless concrete cubes. Maybe some people think a world-state full of people living in harmony with nature is better than a world-state full of gleaming domed cities, and other people believe the opposite; when they debate the point, they’re debating axiology.

Morality is the study of what the right thing to do is. If someone says “don’t murder”, they’re making a moral commandment. If someone says “Pirating music is wrong”, they’re making a moral claim. Maybe some people believe you should pull the lever on the trolley problem, and other people believe you shouldn’t; when they debate the point, they’re debating morality.

(this definition elides a complicated distinction between individual conscience and social pressure; fixing that would be really hard and I’m going to keep eliding it)

Law is – oh, come on, you know this one. If someone says “Don’t go above the speed limit, there’s a cop car behind that corner”, that’s law. If someone says “my state doesn’t allow recreational marijuana, but it will next year”, that’s law too. Maybe some people believe that zoning restrictions should ban skyscrapers in historic areas, and other people believe they shouldn’t; when they debate the point, they’re debating law.

These three concepts are pretty similar; they’re all about some vague sense of what is or isn’t desirable. But most societies stop short of making them exactly the same. Only the purest act-utilitarianesque consequentialists say that axiology exactly equals morality, and I’m not sure there is anybody quite that pure. And only the harshest of Puritans try to make state law exactly identical to the moral one. To bridge the whole distance – to directly connect axiology to law and make it illegal to do anything other than the most utility-maximizing action at any given time – is such a mind-bogglingly bad idea that I don’t think anyone’s even considered it in all of human history.

These concepts stay separate because they each make different compromises between goodness, implementation, and coordination.

One example: axiology can’t distinguish between murdering your annoying neighbor vs. not donating money to save a child dying of parasitic worms in Uganda. To axiology, they’re both just one life snuffed out of the world before its time. If you forced it to draw some distinction, it would probably decide that saving the child dying of parasitic worms was more important, since they have a longer potential future lifespan.

But morality absolutely draws this distinction: it says not-murdering is obligatory, but donating money to Uganda is supererogatory. Even utilitarians who deny this distinction in principle will use it in everyday life: if their friend was considering not donating money, they would be a little upset; if their friend was considering murder, they would be horrified. If they themselves forgot to donate money, they’d feel a little bad; if they committed murder in the heat of passion, they’d feel awful.

Another example: Donating 10% of your income to charity is a moral rule. Axiology says “Why not donate all of it?”, Law says “You won’t get in trouble even if you don’t donate any of it”, but at the moral level we set a clear and practical rule that meshes with our motivational system and makes the donation happen.

Another example: “Don’t have sex with someone who isn’t mature enough to consent” is a good moral rule. But it doesn’t make a good legal rule; we don’t trust police officers and judges to fairly determine whether someone’s mature enough in each individual case. A society which enshrined this rule in law would be one where you were afraid to have sex with anyone at all – because no matter what your partner’s maturity level, some police officer might say your partner seemed immature to them and drag you away. On the other hand, elites could have sex with arbitrarily young people, expecting police and judges to take their side.

So the state replaces this moral rule with the legal rule “don’t have sex with anyone below age 18”. Everyone knows this rule doesn’t perfectly capture reality – there’s no significant difference between 17.99-year-olds and 18.01-year-olds. It’s a useful hack that waters down the moral rule in order to make it more implementable. Realistically it gets things wrong sometimes; sometimes it will incorrectly tell people not to have sex with perfectly mature 17.99-year-olds, and other times it will incorrectly excuse sex with immature 18.01-year-olds. But this beats the alternative, where police have the power to break up any relationship they don’t like, and where everyone has to argue with everybody else about whether their relationships are okay or not.

A final example: axiology tells us a world without alcohol would be better than our current world: ending alcoholism could avert millions of deaths, illnesses, crimes, and abusive relationships. Morality only tells us that we should be careful drinking and stop if we find ourselves becoming alcoholic or ruining our relationships. And the law protests that it tried banning alcohol once, but it turned out to be unenforceable and gave too many new opportunities to organized crime, so it’s going to stay out of this one except to say you shouldn’t drink and drive.

So fundamentally, what is the difference between axiology, morality, and law?

Axiology is just our beliefs about what is good. If you defy axiology, you make the world worse.

At least from a rule-utilitarianesque perspective, morality is an attempt to triage the infinite demands of axiology, in order to make them implementable by specific people living in specific communities. It makes assumptions like “people have limited ability to predict the outcome of their actions”, “people are only going to do a certain amount and then get tired”, and “people do better with bright-line rules than with vague gradients of goodness”. It also admits that it’s important that everyone living in a community is on at least kind of the same page morally, both in order to create social pressure to follow the rules, and in order to build the social trust that allows the community to keep functioning. If you defy morality, you still make the world worse. And you feel guilty. And you betray the social trust that lets your community function smoothly. And you get ostracized as a bad person.

Law is an attempt to formalize the complicated demands of morality, in order to make them implementable by a state with police officers and law courts. It makes assumptions like “people’s vague intuitive moral judgments can sometimes give different results on the same case”, “sometimes police officers and legislators are corrupt or wrong”, and “we need to balance the benefits of laws against the cost of enforcing them”. It also tries to avert civil disorder or civil war by assuring everybody that it’s in their best interests to appeal to a fair universal law code rather than try to solve their disagreements directly. If you defy law, you still get all the problems with defying axiology and morality. And you make your country less peaceful and stable. And you go to jail.

In a healthy situation, each of these systems reinforces and promotes the other. Morality helps you implement axiology from your limited human perspective, but also helps prevent you from feeling guilty for not being God and not being able to save everybody. The law helps enforce the most important moral and axiological rules but also leaves people free enough to use their own best judgment on how to pursue the others. And axiology and morality help resolve disputes about what the law should be, and then lend the support of the community, the church, and the individual conscience in keeping people law-abiding.

In these healthy situations, the universally-agreed priority is that law trumps morality, and morality trumps axiology. First, because you can’t keep your obligations to your community from jail, and you can’t work to make the world a better place when you’re a universally-loathed social outcast. But also, because you can’t work to build strong communities and relationships in the middle of a civil war, and you can’t work to make the world a better place from within a low-trust defect-defect equilibrium. But also, because in a just society, axiology wants you to be moral (because morality is just a more-effective implementation of axiology), and morality wants you to be law-abiding (because law is just a more-effective way of coordinating morality). So first you do your legal duty, then your moral duty, and then if you have energy left over, you try to make the world a better place.

(Katja Grace has some really good writing on this kind of stuff here)

In unhealthy situations, you can get all sorts of weird conflicts. Most “moral dilemmas” are philosophers trying to create perverse situations where axiology and morality give opposite answers. For example, the fat man version of the trolley problem sets axiology (“it’s obviously better to have a world where one person dies than a world where five people die”) against morality (“it’s a useful rule that people generally shouldn’t push other people to their deaths”). And when morality and state law disagree, you get various acts of civil disobedience, from people hiding Jews from the Nazis all the way down to Kentucky clerks refusing to perform gay marriages.

I don’t have any special insight into these. My intuition (most authoritative source! is never wrong!) says that we should be very careful reversing the usual law-trumps-morality-trumps-axiology order, since the whole point of having more than one system is that we expect the systems to disagree and we want to suppress those disagreements in order to solve important implementation and coordination problems. But I also can’t deny that for enough gain, I’d reverse the order in a heartbeat. If someone told me that by breaking a promise to my friend (morality) I could cure all cancer forever (axiology), then f@$k my friend, and f@$k whatever social trust or community cohesion would be lost by the transaction.

III.

With this framework, we can propose a clearer answer to the moral offsetting problem: you can offset axiology, but not morality.

Emitting carbon doesn’t violate any moral law at all (in the stricter sense of morality used above). It does make the world a worse place. But there’s no unspoken social agreement not to do it, it doesn’t violate any codes, nobody’s going to lose trust in you because of it, you’re not making the community any less cohesive. If you make the world a worse place, it’s perfectly fine to compensate by making the world a better place. So pay to clean up some carbon, or donate to help children in Uganda with parasitic worms, or whatever.

Eating meat doesn’t violate any moral laws either. Again, it makes the world a worse place. But there aren’t any bonds of trust between humans and animals, nobody’s expecting you not to eat meat, there aren’t any written or unwritten codes saying you shouldn’t. So eat the meat and offset it by making the world better in some other way.

(the strongest counterargument I can think of here is that you’re not betraying animals, but you might be betraying your fellow animals-rights-activists! That is, if they’re working to establish a social norm against meat-eating, the sort of thing where being spotted with a cheeseburger on your plate produces the same level of horror as being spotted holding a bloody knife above a dead body, then your meat-eating is interfering with their ability to establish that norm, and this is a problem that requires more than just offsetting the cost of the meat involved)

Murdering someone does violate a moral law. The problem with murder isn’t just that it creates a world in which one extra person is dead. If that’s all we cared about, murdering would be no worse than failing to donate money to cure tropical diseases, which also kills people.

(and the problem isn’t just that it has some knock-on effects in terms of making people afraid of crime, or decreasing the level of social trust by 23.5 social-trustons, or whatever. If that were all, you could do what 90% of you are probably already thinking – “Just as we’re offsetting the murder by donating enough money to hospitals to save one extra life, couldn’t we offset the social costs by donating enough money to community centers to create 23.5 extra social-trustons?” There’s probably something like that which would work, but along with everything else we’re crossing a Schelling fence, breaking rules, and weakening the whole moral edifice. The cost isn’t infinite, but it’s pretty hard to calculate. If we’re positing some ridiculous offset that obviously outweighs any possible cost – maybe go back to the example of curing all cancer forever – then whatever, go ahead. If it’s anything less than that, be careful. I like the metaphor of these three systems being on three separate tiers – rather than two Morality Points being worth one Axiology Point, or whatever – exactly because we don’t really know how to interconvert them)

This is more precise than Askell’s claim that we can offset “trivial immoral actions” but not “more serious” ones. For example, suppose I built an entire power plant that emitted one million tons of carbon per year. Sounds pretty serious! But if I offset that with environmental donations or projects that prevented 1.1 million tons of carbon somewhere else, I can’t imagine anyone having a problem with it.

On the other hand, consider spitting in a stranger’s face. In the grand scheme of things, this isn’t so serious – certainly not as serious as emitting a million tons of carbon. But I would feel uncomfortable offsetting this with a donation to my local Prevent Others From Spitting In Strangers’ Face fund, even if the fund worked.

Askell gave a talk where she used the example of giving your sister a paper cut, and then offsetting that by devoting your entire life to helping the world and working for justice and saving literally thousands of people. Pretty much everyone agrees that’s okay. I guess I agree it’s okay. Heck, I guess I would agree that murdering someone in order to cure cancer forever would be okay. But now we’re just getting into the thing where you bulldoze through moral uncertainty by making the numbers so big that it’s impossible to be uncertain about them. Sure. You can do that. I’d be less happy about giving my sister a paper cut, and then offsetting by preventing one paper cut somewhere else. But that seems to be the best analogy to the “emit one ton of carbon, prevent one ton of carbon” offsetting we’ve been talking about elsewhere.

I realize all this is sort of hand-wavy – more of a “here’s one possible way we could look at these things” rather than “here’s something I have a lot of evidence is true”. But everyone – you, me, Amanda Askell, society – seems to want a system that tells us to offset carbon but not murder, and when we find such a system I think it’s worth taking it seriously.


Mon, 4 Sep 2017 15:44:14 EDT

Our new book, The Elephant in the Brain, argues that hidden motives drive much of our behavior. If so, then to make fiction seem realistic, those who create it will need to be aware of such hidden motives. For example, back in 2009 I wrote:

Impro, a classic book on theatre improvisation, convincingly shows that people are better actors when they notice how status moves infuse most human interactions. Apparently we are designed to be very good at status moves, but to be unconscious of them.

The classic screenwriting text Story, by Robert McKee, agrees more generally, and explains it beautifully:

Text means the sensory surface of a work of art. In film, it’s the images onscreen and the soundtrack of dialogue, music, and sound effects. What we see. What we hear. What people say. What people do. Subtext is the life under that surface – thoughts and feelings both known and unknown, hidden by behavior.

Nothing is what it seems. This principle calls for the screen-writer’s constant awareness of the duplicity of life, his recognition that everything exists on at least two levels, and that, therefore, he must write a simultaneous duality: First, he must create a verbal description of the sensory surface of life, sight and sound, activity and talk. Second, he must create the inner world of conscious and unconscious desire, action and reaction, impulse and id, genetic and experiential imperatives. As in reality, so in fiction: He must veil the truth with a living mask, the actual thoughts and feelings of characters behind their saying and doing.

An old Hollywood expression goes “If the scene is about what the scene is about, you’re in deep shit.” It means writing “on the nose,” writing dialogue and activity in which a character’s deepest thoughts and feelings are expressed by what the character says and does – writing the subtext directly into the text.

Writing this, for example: Two attractive people sit opposite each other at a candlelit table, the lighting glinting off the crystal wineglasses and the dewy eyes of the lovers. Soft breezes billow the curtains. A Chopin nocturne plays in the background. The lovers reach across the table, touch hands, look longingly into each other’s eyes, say, “I love you, I love you” .. and actually mean it. This is an unactable scene and will die like a rat in the road. ..

An actor forced to do the candlelit scene might attack it like this: “Why have these people gone out of their way to create this movie scene? What’s with the candlelight, soft music, billowing curtains? Why don’t they just take their pasta to the TV set like normal people? What’s wrong with this relationship? Because isn’t that life? When do the candles come out? When everything’s fine? No. When everything’s fine we take our pasta to the TV set like normal people.” So from that insight the actor will create a subtext. Now as we watch, we think: “He says he loves her and maybe he does, but look, he’s scared of losing her. He’s desperate.” Or from another subtext: “He says he loves her, but look, he’s setting her up for bad news. He’s getting ready to walk out.”

The scene is not about what it seems to be about. It’s about something else. And it’s that something else – trying to regain her affection or softening her up for the bad news – that will make the scene work. There’s always a subtext, an inner life that contrasts with or contradicts the text. Given this, the actor will create a multilayered work that allows us to see through the text to the truth that vibrates beyond the eyes, voice and gestures of life. ..

In truth, it’s virtually impossible for anyone, even the insane, to fully express what’s going on inside. No matter how much we wish to manifest our deepest feelings, they elude us. We never fully express the truth, for in fact we rarely know it. .. Nor does this mean that we can’t write powerful dialogue in which desperate people try to tell the truth. It simply means that the most passionate moments must conceal an even deeper level. ..

Subtext is present even when a character is alone. For if no one else is watching us, we are. We wear masks to hide our true selves from ourselves. Not only do individuals wear masks, but institutions do as well and hire public relations experts to keep them in place. (pp.252-257)

