Fri, 21 Jul 2017 7:34:30 EDT

We were only pretending to engage with each other. But it wasn’t our fault. We had to pretend, because talking about bad faith is Not OK.

Related: Assuming Positive Intent, Bad intent is a disposition, not a feeling

Dialogue on a friend’s Facebook wall

He wrote: While patriarchy hurts men, it oppresses women.
I asked: What does oppress mean here?
He responded: “The exercise of authority or power in a burdensome, or unjust manner.” Same thing it has always meant.
I asked: Do I understand you right that you think women, but not men, are victims of the exercise of authority or power in a burdensome, or unjust manner?
He responded: Yes. To think otherwise is to be willfully ignorant.
But later in the same comment: I am saying that our society is cruel and unjust to both men and women, but it’s more cruel and unjust to women.

At this point I called him out for arguing in bad faith. But with a bit of perspective, I can see that I was arguing in bad faith as well.

What went wrong?

Oppression is a loaded term without a clear, consistent definition. There were reasonable things he might have meant, and there were unreasonable things he might have meant, and I wanted to know which he meant. Or so went my narrative. So I asked what it meant in this context.

The point at which I started expecting a bad-faith interaction is the point at which I asked what oppression meant in this context, and was given a dictionary definition that clearly wasn’t the definition he was using.

There's no plausible model on which that was a sincere attempt to help me understand what he meant. Corroborating evidence - the definition he gave is the first definition offered on the first Google result I got for the term.

But if he wasn’t trying to help me understand, what was he doing? Offering an Official Definition was a purely defensive move, an attempt to score points for "answering the question" (or avoid social penalties for not doing so) on the assumption that people aren't keeping track of the overall content, but instead only responding to each transaction as a one-off.

One honest alternative would have been to try to think about why I might be confused about the meaning of "oppress" in this context, and explain what it meant here. But defensive moves are often responses to perceived bad faith on the other side. The other thing he could have done - and perhaps the more realistic option - was to actually tell me that he didn't think my question was in good faith, and explain why.

Sadly, being honest about your sense that someone else is arguing in bad faith is Officially Not OK. It is read as a grave and inappropriate attack. And as long as that is the case, he could reasonably expect that bringing it up would lead to getting yelled at by everyone and losing the interaction. So maybe he felt and feels like he has no good options here.

And I’m guilty of the same offense. When I asked my second question, I too was being dishonest. I didn’t expect an honest answer, but was pretending that I did.

I'm so sorry for that ghost I made you be
Only one of us was real and that was me

I heard the snake was baffled by his sin
He shed his scales to find the snake within
But born again is born without a skin
The poison enters into everything

And I wish there was a treaty we could sign
I do not care who takes this bloody hill
I'm angry and I'm tired all the time
I wish there was a treaty, I wish there was a treaty
Between your love and mine


Fri, 21 Jul 2017 7:22:47 EDT

A tribesman from a hot place points at what you’re wearing. “What is that?”

“A jacket,” you say.

“What is a jacket?” he asks.

What he wants to know is the purpose for which the jacket is used, and so you tell him “It keeps me warm. It protects me from the sun. It is very fashionable.”

A computer compiling information about the world is trying to fill in gaps in knowledge. It scans you and asks “what is that?”

“A jacket,” you say.

“What is a jacket?” the computer asks.

What the computer wants to know is what it matches to most closely in its existing stored knowledge. You tell it, “It is like a trenchcoat, a sweater, a coat, or a hoodie.”

An alien artist is unfamiliar with the structure of your world. It gestures its tendrils at you and asks “what is that?”

“A jacket,” you say.

“What is a jacket?” the alien asks.

What the alien wants to know is what it is that gives rise to the jacket, what the essence of jacketness is. You tell it, “It is a bunch of pieces of fabric stitched together with some thread.”

These are three ways in which a word can be ‘defined’ – the role it plays in the world around it (the up-definition), synonyms (lateral-definition), and the parts which construct the thing (down-definition).

Generally speaking, up-definitions are the most commonly used and the most practical. What we want to know about an object is what we can do with it. The same applies to concepts – Love is “the thing we have for our children or parents,” surprise is “the thing that happens at a birthday you thought everyone forgot about,” and “existence” is “all this stuff you’re looking at.”

An up-definition is also one of those things that can ‘feel like’ a satisfactory answer when what you really need is a down-definition. Discussions about morality frequently fall into the up-definition trap, where everybody’s idea of ‘wrong’ is a strictly functional thing, and then people get into conflicts over why different functional ideas are clashing with each other.

I’ve seen a few discussions of free will that also fail to recognize down-definitions; the up-definition of free will is something like ‘making decisions independently’ or ‘conscious choices’ – or lateral definitions like “agency” or “my soul.” To ask about a down-definition is to ask about the fabric and thread of free will, about what little bits that idea has been built out of. Generally the down-definition I like the best is “a specific subjective sense”.

Up-definitions are useful, but down-definitions aid in presenting a more cohesive idea of what your mind is doing when it thinks. With some concepts it’s difficult to put any down-definition into words, but paying attention to the feeling of thinking about the concepts can also suffice.

Probably all concepts we use are built out of many smaller concepts, and those built out of smaller still, and oftentimes we forget this so deeply that as soon as we identify an idea like free will, we view it and wield it as a solid unit, and our debates with others feature challenging how our solid units serve functionally in the world around us. It’s like knowing how to swordfight without any knowledge of what swords are made out of – it works just fine, but it’s not holistic, and might one day prevent advancing to an expert level.

Thu, 20 Jul 2017 4:25:57 EDT

Epistemic Status: Public service reminder (I want to be able to link to this in the future)

Almost all changes are bad.


People forget that. They say they want change. They say things like:

At the end of the day, I want to see change come about, whatever it would take. Whatever it would take to see change come about, I would welcome. – Chumbawamba, Be With You

What they actually want is one of those rare, carefully chosen, good, friendly changes. They do exist within change space.

Change space, like mind space, is deep and wide. Friendly change space isn’t quite to change space what friendly mind space is to mind space, but before you apply any filters of common sense, it’s remarkably close.

The more optimized things currently are, the less likely any given change is to be good.

The more time people have had to optimize other things around the current state of the thing you are trying to change, the less likely any given change is to be good.

The more effort people have put into optimizing other things, based on the thing you are looking to change, the less likely any given change is to be good. You could break a lot of things.

When you break those things, you cause harm. Since people hate losses more than they love gains, even a net improvement can make a person or group feel worse off.

If you do break things, often they stay broken. It is usually a lot harder to build or repair something than it is to break that thing.

The faster and bigger you make changes, the more other things you are likely to break, and the more critically you will break them. At a minimum, even when your change is strictly for the better, those things then must change to adapt. In many cases, they are broken entirely, beyond repair, and this goes on to break other things.

Modern life is highly optimized. It’s far from perfect. Often it is optimizing for the wrong things. Nevertheless, it is highly, highly optimized. We certainly do not lack for hill climbers. In some ways it is too optimized!

If I currently have a highly optimized package of things, such as a house or apartment (which is implicitly a collection of things tied to my location), often I will have put a lot of work and sunk costs into the current package of goods I’ve assembled, so breaking up that package or adding new things to it is pretty bad if I still need to pay market price. You can easily make everything better, and still mostly make things worse until equilibrium establishes itself again.

Older people have had more time to optimize and have less time to optimize again, so it makes sense that they hate change even more than others. They should.

Seriously, though, before you go around talking about how everything is awful and we need to call for revolution and change everything right away because the world is a nightmare, I would like to remind you that things are really, really good right now relative to baseline. We have a lot of nice things, even if we’re having trouble building new ones. More people than ever, in absolute and in percentage terms, have high technology, connection, information and entertainment. More of us have enough food and water and shelter and peace and prosperity, and we mostly have our freedom and mostly get along pretty well all things considered.

Respect that. Do not screw with this lightly.

By all means, get mad about the planetary death rate and the people who are still starving. Get mad about housing costs and lack of employment opportunities and infection rates. Rage at the irrationality of it all. Realize that unless we do something to prevent it, unfriendly artificial general intelligence will probably destroy all value in the universe. We might want to do something about that.

First, count your blessings.

Once that’s done, yes, we need change. The only constant is change, other things are changing and breaking your things, forcing them, too, to change, and all that. We’re all slowly dying of old age and forces move to wipe out all value in the universe, plus we’re getting pretty tired of the same old restaurants and albums and TV shows.

So what are we to do?

First of all, we don’t choose our change at random. No matter how bad a person is at deciding what to do, they’re a lot better than random.

We hopefully don’t fall for ‘something must be done, this is something, therefore we must do it,’ even if we do occasionally go for ‘something should be done, doesn’t matter as much what it is, this is something reasonable, so let’s do that’ since we also don’t want to fall for ‘something better could be done, so until we figure it out no one change anything.’

We try to explore and experiment, so that we have a better idea of what we are breaking before we implement things on too large a scale, and we think carefully about what the consequences might be.

Then we change things anyway, because we have to, but with our eyes open.

We don’t capture all the upside of our changes; companies that provide valuable services, and people who do useful work, usually get only a small fraction of the value they create. In exchange, neither do they end up paying for most of the things they break and the harm they do. Which is how it must be. The policy question is where we can usefully change the rules about what people can change, and what they get in exchange, in a way that gets better results than letting events run their course.

On an individual level, you hope enough things roughly offset that the incentives are good enough, and you respond to them.

Then hopefully, as Raymond often advises, we make good choices.

Wed, 19 Jul 2017 3:25:33 EDT

Let’s play a game. Some of you may already know this game; if so, please don’t spoil it for everyone else.

Everybody, pick a number between 0 and 100. Whoever guesses closest to 2/3rds of the average of all the guesses, wins.

These kinds of puzzles are called Keynesian Beauty Contests and they are interesting, as they explore a unique area of ideaspace. Most kinds of contests are direct: Guess The Correct Answer. But the KBC adds a layer of indirection to the process. You’re not supposed to guess The Correct Answer. You’re supposed to guess What Everyone Thinks The Correct Answer Is. When experiments are run comparing the results of these different types of questions, researchers get radically different results.
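
To make the indirection concrete, here is a minimal sketch (in Python, with made-up numbers, not part of the original game) of what happens when every player best-responds to the previous level of reasoning: guesses collapse toward the Nash equilibrium of zero.

    # Iterated best response in the 2/3-of-the-average game.
    # A level-0 player guesses 50; a level-k player best-responds
    # to a population of level-(k-1) players.
    guess = 50.0
    for level in range(8):
        print(f"level {level}: guess {guess:.2f}")
        guess *= 2 / 3  # best response to the previous level's average

A level-1 reasoner says 33, a level-2 reasoner says 22, and so on; real groups reportedly stop after only a level or two of this reasoning, which is why the winning guess is rarely zero.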

It occurs to me that, in a sense, leadership means “winning the Keynesian Beauty Contest game”. We like to imagine that great leaders do The Right Thing, but if they did that, they wouldn’t stay in charge for long. Like it or not, a large group of people will not all agree on what The Right Thing is. Perhaps they have different starting data, coming to different conclusions (which may or may not be legitimate). Perhaps they have conflicting interests, and associate The Right Thing (generally) with The Right Thing for them. Perhaps they are not the smartest people, and cannot properly grasp The Right Thing. Or perhaps it’s simply a noisy communications channel, and something gets corrupted between their brains and yours.

Leaders find themselves playing this game. Their actions are constrained by popular opinion, and they must, in general, attempt to predict what everyone thinks the right thing is, and do that. Oftentimes, this will already coincide with what they think the right thing is (this somewhat follows from the definition of “good leader”), but sometimes there will be a conflict. When such a conflict arises, what do they do?

There is a flaw in my formulation. Doing “What everyone thinks the right thing is” pre-supposes that everyone has coherent ideas of what the right thing is. It pre-supposes that people have ideas about the right thing that are communicable to and understandable by a third party. It also pre-supposes that these ideas have a foundation. They will not change suddenly or arbitrarily. But are these assumptions true?

Here is a graph of Trump and Clinton approval ratings last year. It claims to be from Nate Silver, though I found it in a Google image search so maybe it is not. In any case, it highlights my point.

Imagine the polling question is changed from “who do you want to win the 2016 presidential race?” to the equivalent question “who do you want to have control over the button that ends life on planet Earth?” While somewhat manipulative, my question is still literally true. Now, while imagining that all along we were asking this manipulative question of survey respondents, ask yourself: do you really think that people’s answers to this question would swing so wildly? Do you really, truly believe that a rational person would on day 1 say “I have evaluated the data and decided that Trump is to be trusted with the nuclear football”, but, only a month later, change their minds and say “woops actually no he’s bad, I’d rather Hillary have it”?

I don’t believe this, and this implies a rather unpleasant conclusion: public opinion (“The Thing Everyone Thinks The Right Thing To Do Is”) doesn’t real. It does not represent the deep-seated beliefs and goals of the populace, at least not entirely. There is a component of it, a large component, that is highly context sensitive and subject to change with no fundamental reason. This admits a second, degenerate strategy, for rulers who aren’t really that great: The people don’t like your leadership? Let’s dissolve the people and elect a new one!

Framed another way: there are two different kinds of “what people think the right thing is” (hereby shortened to “the standard”). There is the descriptive standard, the platonic ideal of what I’m talking about. This is the honest expression of peoples’ collective goals, “the real answer” to the KBC. But there are also prescriptive standards. After all, if there is a lot of leeway in what people think The Standard is, why not pressure that leeway to shift in a way that makes your life easier?

This may be a mechanism by which preference falsification cascades happen. Because of the indirection of the question “what do you think other people think”, someone who influences sources of information can put a thumb on the scale. Depending on one’s influence within society, as long as the gap between “what people think” and “what people think that people think” doesn’t get too big or noticeable, a bad leader can coast along on prescriptive standards for quite some time.

Even worse than this, though, is the power of prescription in the hands of a would-be social reformer. Someone who sincerely believes in the moral superiority of their position. Convinced of the fundamental goodness of their position, they will not exert as much self-control over the prescription of standards.

Ideally, if done slowly enough, this can cause lasting changes in society and, possibly, for the better. By slowly convincing everyone that everyone else is becoming more pro-social, you can incentivize everyone to become more pro-social! But it’s a knife edge. Human nature is only so malleable. And each individual case of success empowers more and larger future prescriptions.

This is a viable model to understand a lot of the culture war that we’ve seen recently. There have been moral reformers at the top. Undoubtedly some have been cynical and self-serving, but we’ll exercise charity today and say that they sincerely believe in the goodness of what they’re doing. But they long ago stopped being good leaders. As I said earlier, good leaders honestly and fairly reflect the standards that are already there. They do not attempt to change those standards through manipulative fudging of the numbers. Ours, on the other hand, do.

Maybe this was ok. Maybe it was even good. But it can only go so far. We appear now to have become caught up in a feedback loop. Leaders twist, manipulate, and falsify information at the margins in order to push the prescriptive standard towards an ideal. Then they observe the twisted reports as evidence that their prescription changed the descriptive standard. They follow through with another round of prescription, unaware that the first one had not yet completed.

Eventually, that gets us to where we are now: bubbled off elites, attempting to do good, but provided with such absurdly incorrect information that everything they try falls somewhere between ineffective and dangerous. Meanwhile, frank and honest discussion of the descriptive standard becomes impossible, as enough people outwardly (and possibly inwardly?) accept the prescription as the description, and write off dissidents as evil or stupid. It’s a powder keg, waiting to go off.

It’s as if someone came along to the 2/3rds game and started telling everyone: “2/3rds of the average will be X. You should pick X”. It might work once or twice, but it will not work forever.

The solution? Radical honesty and realism. A recognition that people aren’t perfect, and we shouldn’t pretend they are. A deep-seated skepticism towards utopians. A healthy, rational, self-informed and self-aware electorate that is not snowed by manipulated data. Open and honest communication between people, so that people have a well-grounded idea of what others think. A recognition that the power of leaders to change culture is dangerous and should be avoided.



Mon, 17 Jul 2017 13:28:07 EDT

When people talk about general intelligence in humans, they tend to talk about measured IQ. While a lot of variation in IQ is really just variation in brain health, and probably related to variation in general health, there are at least two distinct modes of general intelligence in humans: fluid intelligence and crystallized intelligence.

Fluid intelligence is pretty much anything you can use a spatial metaphor to think about, and is measured pretty directly by Raven's Progressive Matrices. It's used for puzzle-solving.

Crystallized intelligence, on the other hand, relies on your conceptual vocabulary. You can do analogical reasoning with it – so it lends itself to a fortiori style arguments.

I don't think it's just a coincidence that I know of two main ways people have discovered disjunctive, structural reasoning – once in geometry, and once in the courts.

Geometry and the rules of structured argument

Supposedly, in ancient Greece, it was common for temples or other religious sites to have an inscription above the door saying, “let no impious (or, sometimes, unjust) person enter.” But, according to legend, above Plato’s academy was the inscription, “let no ungeometrical person enter.” No one unschooled in geometry. Why? Because mathematical reasoning was believed to be a prerequisite for serious philosophical inquiry.

This legend has some origin in fact. The Greek philosophical tradition was originally not distinct from mathematics – the Pythagoreans were a cult of mathematician-musician-philosophers, and Plato himself, in the dialogue Meno, has Socrates demonstrate a proposition in geometry as a paradigmatic case of how one might come to truly learn.

Nor is it a coincidence that the Greeks gave the science of shape, quantity, and number, the name mathematics, Greek for “that which is learnable.”

Nor is it a coincidence that J.S. Bach, whose music is a regular favorite of mathematicians, intellectuals, and people on psychedelic trips, whose music is more like a structured argument, more symmetrical, than any other great music that's survived intact, who was one of the inspirations for the great modern mathematical artwork Gödel, Escher, Bach, was also a member of the Pythagorean Society.

How does spatial reasoning lead to formal, logical reasoning?

Perhaps at some point before the invention of mathematics as a subject, people needed to reason about the shapes of stones in order to do architecture, or to construct tools or devices. Eventually, to survey land for planning or tax assessment.

At first, your spatial intuitions will be good enough. You can rotate objects in your mind to see whether they'll fit together properly, before constructing them, as long as they're only a few, and not too complex. In Use the Native Architecture, Marcello Herreshoff points out that mathematical reasoning is much easier if we use our minds' native hardware for solving math problems, instead of algebraic formalism:

Some things the brain can do quickly and intuitively, and some things the brain has to emulate using many more of the brain’s native operations. […] In particular, visualizing things is part of the brain’s native architecture, but abstract symbolic manipulation has to be learned.  Thus, visualizing mathematics is usually a good idea.

When was the last time you made a sign error?

When was the last time you visualized something upside-down by mistake?

I thought so.

But eventually you come up against problems that have too many parts to just directly use your spatial intuitions. So you come up with rules of thumb, like that the circumference of a circle is about thrice its diameter. But these will not always yield consistent results.

It’s not enough that this part fits, and separately that part fits – to build an archway, or even just a colonnade, all the parts have to fit together at the same time.

With enough investigation, you can start to develop rules that generalize better, by composing them from simpler rules that are easier to check in the general case. If your spatial intuitions are already sure that triangles have this property, and they're sure that things with this property have that property, then you know that triangles have that property too. This is enough for beautiful purely visual proofs like the classic visual demonstration of the Pythagorean theorem, which uses only one word: "Behold!".

Spatial reasoning has the nice quality that you're always already thinking structurally. A line connecting A and B also always connects B and A. An argument that one angle in a triangle is large is an argument that the sum of the other two must be small. It's just a matter of backing out the attributes of the thing, from your intuitions, and then using that to extend your predictive power to places your intuitions aren't big enough for.

If you pursue geometry long enough, and have enough of a tradition of verbal articulation – say, because you have a tradition of public deliberation through debate, like the Athenians – you eventually arrive somewhere very different. Euclid's Elements starts with a fairly tractable set of rules and definitions, and demonstrates a series of logical propositions, proceeding from shapes, to magnitudes, to multitudes. It's not obvious starting out, that multitudes are at all the same sort of thing as the things your spatial reasoning works on – for one thing, they're discrete, not continuous – for another, they don't seem to have dimensionality – and yet, the same sort of reasoning that was tested on geometry and found valid, can prove things about numbers.

And then, from this, it's easy enough to generalize the idea of formal argument, predication, syllogism, analogy, and logical implication.

The actual sequence of developments seems to support this, at least among the Greeks; the Pythagoreans substantially predate Aristotle's formalization of the rules of logic.

More than a millennium after Euclid compiled the Elements, it was still directly inspiring philosophers to aspire to a higher standard of rigor. Aubrey relates this story about Thomas Hobbes:

He was forty years old before he looked on geometry; which happened accidentally. Being in a gentleman's library Euclid's Elements lay open, and 'twas the forty-seventh proposition in the first book. He read the proposition. 'By G—,' said he, 'this is impossible!' So he reads the demonstration of it, which referred him back to such a proof; which referred him back to another, which he also read. Et sic deinceps, that at last he was demonstratively convinced of that truth. This made him in love with geometry.

Note what went on here. Hobbes thought the proposition absurd. Then, he saw that it referred back to other claims. He granted, that if those things were true, then this thing would surely be true as well. He looked back at the prior proofs, and saw that they were well-structured, so long as you granted their premises. Then, finally, he got to the beginning of the book, and found the earliest proofs persuasive, so long as he accepted the axioms – which seemed fine. Then, and only then, did he assent – immediately and enthusiastically – to the final proposition.

This is very different from being hit with a lot of independent arguments or pieces of evidence for a point of view. It’s not that Hobbes was eventually beaten into submission by a great many arguments. There was a single argument, with a formal structure. Nor did the argument gradually make an impression on him over time – he went from total incredulity to total belief, perhaps within a few minutes. Rather, he could affirm the structure of the argument – assess its validity – before he had properly evaluated its soundness and truth. Once he'd traced it back to a beginning that he was persuaded was sound, the information cascaded like a line of dominos, to the final proposition that he had been investigating in the first place.

This is not well modeled by the “marketplace of ideas” alone, or by the notion of independent “memes” competing for mindshare. Something else was going on here. And I think it’s quite likely that this was what made his Leviathan such a carefully argued and well thought-out book, even though it never attained the sort of certainty one might find in geometry.

Rotating a shape or imagining how multiple shapes might fit together is more than just associative reasoning – there is a structure to it, and anyone can make inferences on this basis. Excepting ungeometrical persons.

The rules of law

The second way humans reached towards generalized principles of reasoning seems to have come from our innate talent for keeping track of social norms.

Social groups have an interest in enforcing norms and punishing people who break the rules. People have to track what the rules are and make inferences about what would violate the norm. Even if they want to secretly violate norms, they need to know what's forbidden, and be able to track who could know. And they need to keep an internal record of what's where.

Crows, ravens, and other corvids are clearly not generally intelligent the way humans are – there's a lot less they can do, and their brains are much, much smaller than ours are. And yet, they seem to have some facility for analogical reasoning and prospective planning. They hide food, and don't want other crows to steal it. They can hide the food better if they keep track of when they might be observed (h/t Corvid Research). This requires keeping track of particular known facts, but also being able to think from multiple perspectives in order to reason about consequences. As a result, they can use tools and plan ahead, both faculties that seem related to what makes humans so powerful, faculties that require recursive and structured thinking.

While corvids seem to have a specialized capacity for inference and analogy about things related to food storage, concealment, and retrieval, humans seem to have a specialized capacity for inference and analogy about social rules.

The Wason card task is a classic test of facility at logical inference, which most people are not very good at, but humans tend to perform better when the same logic problem is filled in with content about social rules. Sociopaths, however, showed no such improvement. The Economist's summary is good here, as is The Last Psychiatrist's summary. I'll quote The Economist here:

[The] first presentation might be of four cards, each with a number on one side and a colour on the other. The cards are placed on a table to show 3, 8, red and brown. The rule to be tested is: “If a card shows an even number on one side, then it is red on the other.” Which cards do you need to turn over to tell if the rule has been broken?

That sounds simple, but most people get it wrong. Now consider this problem. The rule to be tested is: “If you borrow the car, then you have to fill the tank with petrol.” Once again, you are shown four cards, one side of which says who did or did not borrow the car and the other whether or not that person filled the tank:

Dave did not borrow the car
Helen borrowed the car
Brianne filled up the tank with petrol
Kirk did not fill up the tank with petrol

Once again, also, you have to decide which cards to turn to see if the rule was broken.

In terms of formal logic, the problems are the same. But most people have an easier time answering the second one than the first. (In both cases it is cards number two and four that need to be turned.)

[…] When the two researchers probed the prisoners' abilities on the general test, they discovered that the psychopaths did just as well—or just as poorly, if you like—as everyone else. In this case the average score for all was to get it right about a fifth of the time. For problems cast as social contracts or as questions of risk avoidance, by contrast, non-psychopaths got it right about 70% of the time. Psychopaths scored much less—around 40%—and those in the middle of the psychopathy scale scored midway between the two.

The Wason test suggests that analysing social contracts and analysing risk are what evolutionary psychologists call cognitive modules—bundles of mental adaptations that act like bodily organs in that they are specialised to a particular job. This new result suggests that in psychopaths these modules have been switched off.
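
For readers who want the formal rule spelled out, here is a minimal sketch (hypothetical Python, not from the studies) of the selection logic: to test “if P then Q” you turn exactly the cards that could falsify it, the P cards and the not-Q cards.

    # Wason selection task: a card can falsify "if P then Q" only if
    # its visible face shows P or shows not-Q.
    def cards_to_turn(cards, is_p, is_not_q):
        return [c for c in cards if is_p(c) or is_not_q(c)]

    # Abstract version: "if a card shows an even number, it is red."
    print(cards_to_turn(["3", "8", "red", "brown"],
                        is_p=lambda c: c.isdigit() and int(c) % 2 == 0,
                        is_not_q=lambda c: c == "brown"))  # ['8', 'brown']

    # Social-contract version: "if you borrow the car, fill the tank."
    print(cards_to_turn(["did not borrow", "borrowed", "filled tank", "did not fill"],
                        is_p=lambda c: c == "borrowed",
                        is_not_q=lambda c: c == "did not fill"))  # ['borrowed', 'did not fill']

Both calls pick out cards two and four, matching the answer in the quote; the logic is identical, only the framing differs.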

But, again, the human mind has limited scope. Even with native hardware designed to help on a class of tasks, if you're managing a large enough group, you need to formalize the structure. So courts started noticing patterns. Once they've decided a case on a principle, all cases where the principle applies even more strongly are implicitly decided the same way.

Law students, at least in the United States, are familiar with oddities such as the rule against perpetuities. The reason this rule is famous is that it pops up in lots of cases it seems like it shouldn't, because rules have to be consistently applied – nonobvious conclusions are the result of applying and composing simple principles.

The rule is basically that you can't create a trust to hold assets for many generations. The point of this rule is that without such a limitation, typical rates of return would quickly lead to a situation where trusts that just kept reinvesting their assets would dominate the economy, and be able to impose their will on the comparatively asset-poor living. Since this consequence seemed bad, lawmakers forbade it.

However, there are lots of cases where someone could accidentally set something like this up. For instance, a legacy left in trust for the not-yet-born children of someone still living could easily have an unlikely but possible loophole that lets it last beyond the maximum legal term. As a result, it's common practice to include weird stipulations that definitely satisfy the rule, as upper bounds, to stop the whole thing from being struck down. A trust created in my grandmother's will terminates no later than twenty-one years after the death of the last surviving member of the British royal family who was alive when the trust was created.

This is nuts. In fact, the common practice of copying large volumes of legal boilerplate word-for-word comes precisely from a desire to avoid the need to engage in novel structural thinking, in order to avoid introducing errors of this sort. This sort of application is what happens when people try to copy old magical spells without understanding magic.

But while this sort of structural inference about social rules may be something of a lost art – I've been involved in contracts where the legal team of the other side was literally unable to explain the meaning of a clause in the contract they'd asked me to sign - the faculty that created this kind of text, when generalized and put to the right sort of work, is extremely useful. It enables coordination over long stretches of time and space. It is necessary for the meaningful rule of law. It enables later scholars to build on the work of earlier ones. Many types of contracts would be infeasible without it; how could you possibly expect an insurance contract to mean anything without structured thinking including complex conditionals?

The Talmud, a record of a very different legal tradition, provides some more examples. Like other legal traditions, it admits of analogical “a fortiori” or “kal vakhomer” argumentation – if X motivates Y, then a stronger version of X must motivate Y at least as much. There's an attempt to apply precedents consistently. But there are some other oddities that seem to me to capture the flavor of this style of structural thinking very well.

For instance, when discussing the evidence for a legal opinion, the Talmud will often address a series of arguments for the proposition, only to point out the flaws of each argument in turn, rejecting them one by one. Then, at the end, an argument is provided for which there is no refutation, so it is accepted as a valid justification for the opinion.

This is not the sort of thing one does if one is just trying to figure out what the law is. In that case, the Talmud would only consider the strongest arguments.

Nor is this the sort of thing one does to maximize the rhetorical force applied towards the favored conclusion. If the Talmud were trying to do that, it would have knocked down a series of arguments against the position. Instead, it only makes sense if you care about what constitutes an acceptable argument, not just which legal conclusion happens to be true in this case. If you wanted to be clear, not just on which things you think are true, but which premises they depend on and which premises they don't. In short, if you cared about the structure, not just the content. Because the structure affects every part of the law.

There's another quirk of Talmudic discourse that I find illuminating, and is illustrated by this example from the Babylonian Talmud, Berachot 40a:

MISHNAH. If one says over the fruit of the tree the benediction, 'who creates the fruit of the ground', he has performed his obligation, but if he said over the produce of the ground, 'who createst the fruit of the tree', he has not performed his obligation. If he says, 'by whose word all things exist' over any of them, he has performed his obligation.

GEMARA. What authority maintains that the essence of the tree is in the ground? – R. Nahman b. Isaac replied: It is R. Judah, as we have learnt: If the spring has dried up or the tree has been cut down,[2] he brings the first-fruits but does not make the declaration.[3] R. Judah, however, says that he both brings them and makes the declaration.[4]

(2) If one has gathered first-fruits, and before he takes them to Jerusalem the spring which fed the tree dries up, or the tree is cut down. (3) V. Deut. XXVI, 5-10, because it contains the words 'of the land which Thou, O Lord, hast given me', and the land is valueless without the tree or the spring. (4) Because the land is the essence, not the tree; v. Bik. I, 6.[1]

Rabbi Judah was already identified with the opinion that the important thing about a tree is the potentially productive land it stands on, and not vice versa. Then, there was another, unattributed opinion supporting this position. The response was to attribute this opinion to Rabbi Judah. Why?

This is not the sort of thing a critical historian would do. If you were mainly interested in which historical individual said the thing, you'd never treat agreement in principle as evidence of personal identity. An historian would look to things like historical context to explain someone's opinion, or perhaps to matters of word usage to determine whether it might plausibly have been said by someone of that era.

But, to the Talmudic mind, the important thing is not to identify a single historical person, but a single line of argument. The enactive details of issuing an opinion on a particular law, in a particular time and place, are just accidental – what's real and important is the principle. And, when Rabbi Judah makes a judgment based on that principle, he is ipso facto making all the judgments implied by the principle. The attribution of the anonymous opinion to Rabbi Judah isn't to be taken literally, or enactively, but structurally. The law has no respect for persons:

Ye shall not respect persons in judgment; but ye shall hear the small as well as the great; ye shall not be afraid of the face of man; for the judgment is God's

–Deuteronomy 1:17

There are different ways law is organized - in some cases codes will try to account for everything and then judges are supposed to directly apply the code, in other cases there's gradual accretion of precedent. But generally, there's the implication that the law already has an opinion, even if we haven't yet fully worked out what it is.

This is literally orthodox Jewish doctrine: the "Oral Torah" was already given at Mount Sinai along with the written bible, even though it wasn't written down fully until the Talmud. The Talmud is, of course, a record of active investigation and debate. In what sense could it possibly have already been revealed? Perhaps in the same sense that while the practice of geometry was invented, its content was only discovered through that practice.

Thus, we again have the idea of logical implication working both forwards and backwards, timelessly. This is very different from the idea of argument as a way of building momentum for a case, racking up evidence on one side. The court has to decide, not just this case, but every case this one will be precedent for, and all the upstream implications of principle.

So, an adversarial system of law doesn't just have two important parts (the adversaries), it has three: a bias one way, a bias the other way, and a structural, symmetrist judge deciding what's allowable and what's not, what gets counted and what doesn't. And this third feature is what distinguishes a trial by law from a trial by combat, or a vulgar political debate.

Universal claims about the behavior of nature are often called descriptions of natural laws by analogy to this sort of reasoning – a law is the sort of thing that is true everywhere, so that you can reason disjunctively about it. Newton gave us three laws of motion in his geometric text, the Mathematical Principles of Natural Philosophy.

So, there's a second pathway to structural thinking, based on a more specialized faculty common to most humans. Perhaps excepting sociopaths.

On that day, the Lord shall be one, and his name one.

Both law and geometry start with cognitive domains humans evolved to be especially good at dealing with (spatial configurations of physical objects, social configurations of norms), a specialized faculty of structured interpretation, and then some process of generalization. They both end up using structural features of language like predication, negation, and recursion, to articulate and generalize abstract structures. In both cases, human beings had a sense that they were thus being connected with the divine.

Of course, there is in a sense only one such thing as a general intelligence. Just like any Universal Turing Machine can simulate any other with enough translational labor, any faculty that enables general intelligence can do anything in the domain of any other general-intelligence faculty, even if they have different specialties.

In the algebraic revolution, mathematics – once powered by our brains' spatial reasoning modules – was recast as a verbal formalism. This enabled qualitatively different advances, but could express everything geometry had already discovered.

Noam Chomsky is famous for his claim that there is a universal faculty of grammar among humans – that some basic language structures are universal, even if we have to learn how to fill them in. A key part of universal grammar is a way for parts of sentences to refer, not just to objects out there in the material world, but to other parts of sentences, recursively. This is necessary for things like complex conditionals.

It does seem like humans are unique in our ability to learn grammar. Other primates can learn some signs and meanings, parrots can learn spoken words, and cetaceans seem to be doing something, but as far as we know, only humans can conceive and describe abstract structural models to order our ideas. This is a likely explanation for humans’ unique ability to control our environment.

However, even we humans do not seem to have a uniform, fully general ability to implement universal grammar. At the least, skill in applying it to any given domain seems to vary quite a lot.

When writing about actors and scribes I mentioned how the President of the United States does not seem to be using properly recursive language. But this problem was already well-known to people trying to hire programmers. Coding Horror, in Why Can't Programmers.. Program?, provides a good overview:

I was incredulous when I read this observation from Reginald Braithwaite:

Like me, the author is having trouble with the fact that 199 out of 200 applicants for every programming job can't write code at all. I repeat: they can't write any code whatsoever.

The author he's referring to is Imran, who is evidently turning away lots of programmers who can't write a simple program:

After a fair bit of trial and error I've discovered that people who struggle to code don't just struggle on big problems, or even smallish problems (i.e. write a implementation of a linked list). They struggle with tiny problems.

So I set out to develop questions that can identify this kind of developer and came up with a class of questions I call "FizzBuzz Questions" […] An example of a Fizz-Buzz question is the following:

Write a program that prints the numbers from 1 to 100. But for multiples of three print "Fizz" instead of the number and for the multiples of five print "Buzz". For numbers which are multiples of both three and five print "FizzBuzz".

Most good programmers should be able to write out on paper a program which does this in a under a couple of minutes. Want to know something scary? The majority of comp sci graduates can't. I've also seen self-proclaimed senior programmers take more than 10-15 minutes to write a solution.

Dan Kegel had a similar experience hiring entry-level programmers:

A surprisingly large fraction of applicants, even those with masters' degrees and PhDs in computer science, fail during interviews when asked to carry out basic programming tasks. For example, I've personally interviewed graduates who can't answer "Write a loop that counts from 1 to 10" or "What's the number after F in hexadecimal?" Less trivially, I've interviewed many candidates who can't use recursion to solve a real problem. These are basic skills; anyone who lacks them probably hasn't done much programming.

Speaking on behalf of software engineers who have to interview prospective new hires, I can safely say that we're tired of talking to candidates who can't program their way out of a paper bag. If you can successfully write a loop that goes from 1 to 10 in every language on your resume, can do simple arithmetic without a calculator, and can use recursion to solve a real problem, you're already ahead of the pack!

Between Reginald, Dan, and Imran, I'm starting to get a little worried. I'm more than willing to cut freshly minted software developers slack at the beginning of their career. Everybody has to start somewhere. But I am disturbed and appalled that any so-called programmer would apply for a job without being able to write the simplest of programs. That's a slap in the face to anyone who writes software for a living.

The vast divide between those who can program and those who cannot program is well known. I assumed anyone applying for a job as a programmer had already crossed this chasm. Apparently this is not a reasonable assumption to make. Apparently, FizzBuzz style screening is required to keep interviewers from wasting their time interviewing programmers who can't program.

Lest you think the FizzBuzz test is too easy – and it is blindingly, intentionally easy – a commenter to Imran's post notes its efficacy:

I'd hate interviewers to dismiss [the FizzBuzz] test as being too easy - in my experience it is genuinely astonishing how many candidates are incapable of the simplest programming tasks.

[…]

It's a shame you have to do so much pre-screening to have the luxury of interviewing programmers who can actually program. It'd be funny if it wasn't so damn depressing.

These problems are, in some sense, very simple. They don't require detailed technical knowledge. As you can see, it is difficult for competent programmers to comprehend the arrangement of a mind that cannot solve such a problem trivially. But they do require an understanding that language can be something other than declarative: the ability to track conditionals and recursion. The ability to think with formal structure. To think in grammar.
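
For reference, a complete FizzBuzz solution (a minimal Python sketch) is just one loop and three conditionals, which is exactly the capacity at issue:

    # FizzBuzz, per the spec quoted above.
    for n in range(1, 101):
        if n % 15 == 0:      # multiple of both three and five
            print("FizzBuzz")
        elif n % 3 == 0:
            print("Fizz")
        elif n % 5 == 0:
            print("Buzz")
        else:
            print(n)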

References

1. The Babylonian Talmud: Seder Zera’im, ed. Rabbi Dr. I. Epstein, trans. Maurice Simon (The Soncino Press: London, 1961), p. 248

Sun, 16 Jul 2017 13:36:24 EDT

Imagine that there is a certain class of “core” mental tasks, where a single “IQ” factor explains most variance in such task ability, and no other factor explains much variance. If one main factor explains most variation, and no other factors do, then variation in this area is basically one dimensional plus local noise. So to estimate performance on any one focus task, usually you’d want to average over abilities on many core tasks to estimate that one dimension of IQ, and then use IQ to estimate ability on that focus task.

Now imagine that you are trying to evaluate someone on a core task A, and you are told that ability on core task B is very diagnostic. That is, even if a person is bad on many other random tasks, if they are good at B you can be pretty sure that they will be good at A. And even if they are good at many other tasks, if they are bad at B, they will be bad at A. In this case, you would know that this claim about B being very diagnostic on A makes the pair A and B unusual among core task pairs. If there were a big clump of tasks strongly diagnostic about each other, that would show up as another factor explaining a noticeable fraction of the total variance. Making this world higher dimensional. So this claim about A and B might be true, but your prior is against it.
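
A toy simulation (hypothetical numbers, in Python) makes the prior concrete: if every core task is a noisy readout of one shared factor, the correlation between any two tasks is roughly the product of their loadings, so no single pair can be much more diagnostic than the rest without a second factor showing up in the variance.

    # One-factor ("IQ") model of core task abilities.
    import numpy as np

    rng = np.random.default_rng(0)
    n_people, n_tasks = 10_000, 12
    g = rng.standard_normal(n_people)          # the single shared factor
    loadings = rng.uniform(0.5, 0.9, n_tasks)  # how strongly each task reflects g
    noise = rng.standard_normal((n_people, n_tasks))
    scores = g[:, None] * loadings + noise * np.sqrt(1 - loadings**2)

    corr = np.corrcoef(scores, rowvar=False)
    print(corr.round(2))  # off-diagonal entries cluster near loadings[i] * loadings[j]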

Now consider the question of how “human-like” something is. Many indicators may be relevant to judging this, and one may draw many implications from such a judgment. In principle this concept of “human-like” could be high dimensional, so that there are many separate packages of indicators relevant for judging matching packages of implications. But anecdotally, humans seem to have a tendency to “anthropomorphize,” that is, to treat non-humans as if they were somewhat human in a simple low-dimensional way that doesn’t recognize many dimensions of difference. That is, things just seem more or less human. So the more ways in which something is human-like, the more you can reasonably guess that it will be human like in other ways. This tendency appears in a wide range of ordinary environments, and its targets include plants, animals, weather, planets, luck, sculptures, machines, and software.

We feel more morally responsible for how we treat more human-like things. We are more inclined to anthropomorphize things that seem more similar to humans in their actions or appearance, when we more desire to make sense of our environment, and when we more desire social connection. When these conditions are less met, we are more inclined to “dehumanize”, that is to treat human things as less than fully human. We also dehumanize to feel less morally responsible for our treatment of out-groups.

One study published in Science in 2007 asked 2400 people to make 78 pair-wise comparisons between 13 characters (a baby, chimp, dead woman, dog, fetus, frog, girl, God, man, vegetative man, robot, woman, you) on 18 mental capacities and 6 evaluation judgements. An “experience” factor explained 88% of capacity variation, being correlated with capacities for hunger, fear, pain, pleasure, rage, desire, personality, consciousness, pride, embarrassment, and joy. This factor had a strong 0.85 correlation with a desire to avoid harm to the character. A second “agency” factor explained 8% of the variance, being correlated with capacities for self-control, morality, memory, emotion recognition, planning, communication, and thought. This factor had a strong 0.82 correlation with a desire to punish for wrongdoing. Both factors correlated with liking a character, wanting it to be happy, and seeing it as having a soul (Gray et al. 2007).

Though it would be great to get more data, especially on more than 13 characters, this study does confirm the usual anecdotal description that anthropomorphizing is essentially a low-dimensional phenomenon. And if true, this fact has implications for how biological humans would treat ems.

My colleague Bryan Caplan insists that because ems would not be made out of familiar squishy carbon-based biochemicals, humans would feel confident that ems have no conscious feelings, and thus eagerly enslave and harshly treat ems, as Bryan says that our moral reluctance is the main reason why most humans today are not harshly treated slaves. However, this in essence claims the existence of a big added factor explaining judgements related to “human-like”, a factor beyond those seen in the above survey.

After all, “consciousness” is already one of the items included in the above survey. But it was just one among many contributors to the main experience factor; it wasn’t overwhelming compared to the rest. And I’m pretty sure that if one tried to add being made of biochemicals as a predictor of this main factor, it would help but remain only one weak predictor among many. You might think that these survey participants are wrong, of course, but we are trying to estimate what typical people will think in the future, not what is philosophically correct.

I’m also pretty sure that while the “robot” in the study was rated low on experience, that was because it was rated low on capacities like for pain, pleasure, rage, desire, and personality. Ems, being more articulate and expressive than most humans, could quickly convince most biological humans that they act very much like creatures with such capacities. You might claim that humans will all insist on rating anything not made of biochemicals as all very low on all such capacities, but that is not what we see in the above survey, nor what we see in how people react to fictional robot characters, such as from Westworld or Battlestar Galactica. When such characters act very much like creatures with these key capacities, they are seen as creatures that we should avoid hurting. I offer to bet $10,000 at even odds that this is what we will see in an extended survey like the one above that includes such characters.

Bryan also says that an ability to select most ems from scans of the few best suited humans implies that ems are extremely docile. While today when we select workers we often value docility, we value many other features more, and tradeoffs between available features result in the most desired workers being far from the most docile. Bryan claims that such tradeoffs will disappear once you can select from among a billion or more humans. But today when we select the world’s best paid actors, musicians, athletes, and writers, a few workers can in fact supply the entire world in related product categories, and we can in fact select from everyone in the world to fill those roles. Yet those roles are not filled with extremely docile people. I don’t see why this tradeoff shouldn’t continue in an age of em.


Sun, 16 Jul 2017 13:25:06 EDT

Last month I talked a little bit about the Hollow Mask Illusion as a clue to the Bayesian operations going on “below the hood” in the brain. Today I want to go a little bit deeper into what the SSC survey results can tell us here. This is a list of a bunch of different variables I tested in the survey, and the percent of each group who saw the Mask Illusion and Dancer Illusion as ambiguous. “RR” is relative risk:

[Table image: illusionplan1.png – for each variable, the percent of each group who saw the Hollow Mask and Spinning Dancer illusions as ambiguous, with relative risks]
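
To be clear about the “RR” column: relative risk is the rate at which one group reports ambiguity divided by the rate in the comparison group. A minimal sketch, with made-up counts:

    # Relative risk: P(ambiguous | group A) / P(ambiguous | group B).
    # Counts below are invented for illustration only.
    def relative_risk(ambig_a, n_a, ambig_b, n_b):
        return (ambig_a / n_a) / (ambig_b / n_b)

    print(relative_risk(12, 100, 6, 100))  # 2.0: group A reports ambiguity twice as often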

I don’t have p-values listed here, but almost all the Hollow Mask results, and a few of the Spinning Dancer results, were significant at p ≤ 0.01. And beyond the individual results, a few things jump out of the data in general. The Hollow Mask results and Spinning Dancer results are always in the same direction. And it always seems to be the weirder group who see more ambiguity in the illusion. Yes, schizophrenics see more ambiguity than non-schizophrenics. But transhumanists also see more ambiguity than non-transhumanists. Polyamorous people see more than monogamous people, gay people see more than straight people, EAs see more than non-EAs, et cetera. Where there’s no clear weirder/less-weird dichotomy, it seems like it’s the lower-functioning group that has more ambiguity. High school dropouts rather than PhDs. Single people rather than married people.

So there seems to be a picture where high rates of perceptual ambiguity are linked to being weirder and (sometimes, in a very weak statistical way) lower-functioning.

But why stop there? The dream is to connect this to some sort of intuitively-meaningful cognitive variable.

For a sense of “intuitively meaningful cognitive variable”, consider something like those four-letter things you get on the Myers-Briggs test. Go ahead and interject that Myers-Briggs is unscientific, and no better than astrology, and inferior to the Five Factor Model in every way. But everyone who says that always ends up being INTJ/INTP. And a survey found that SSC readers are about ten times more likely to be INTJ/INTP than the general population, p ≤ 0.001. Without necessarily claiming that the underlying classification cleaves reality at the joints, or even that it gives you more information than you put into the personality test that generates it, differences in cognitive styles seem real. I don’t know how fundamental they are – it could just be something as silly as a freshman philosophy professor who encouraged you to think logically or something – but they seem real.

And they seem different from the variables on the Big Five. The Big Five measures personality. Myers-Briggs claims – maybe wrongly, but at least it claims – to measure how you reason about things. Maybe everyone’s had the experience of meeting someone who seems very smart, but who just reasons in a very different way than they do.

And if the Bayesian brain hypothesis is right, and perception and reason really do draw on the same fundamental processes, then I wonder if we could isolate some differences in reasoning by measuring differences in perception. Could perception of certain optical illusions predict responses to certain cognitive biases? Could that go on to predict things like whether people like analytic or continental philosophy, whether they’re early-adopters or traditionalists, whether they think people are basically good or basically evil?

I know this is an overly ambitious research program. But remember: the studies looking for the genetic underpinning of political opinions usually implicate NMDA receptors, the same receptors most likely involved in the Hollow Mask. And there was a small but highly significant correlation between Mask perception and political opinion on the survey. I agree this is crazy, but I don’t want to say it’s impossible just yet.

On the the next survey, I want to include a whole battery of illusions, including multiple examples of the same illusion asked in different ways, and different illusions that seem to be measuring the same thing. For an example of the latter, take the “saw duplicate thes” item in the table above. This was a question asking if people had noticed various duplications of the word “the” I had put in the survey (like the one in the second and third words of this paragraph). People who noticed the duplicates were more than twice as likely to see ambiguity in the Hollow Mask as others, the highest result other than schizophrenia itself. This supports my hypothesis that there’s some underlying similarity between these two illusions. If I can get enough of these, then I can eliminate noise and get a better idea of the underlying mental process that might be generating all of them.

With luck, I might end up with a couple of different factors that predict illusion perception. Then I would want to see if those factors also predict performance on reasoning problems (like cognitive biases) and high-level beliefs (like liberal versus conservative).
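A rough sketch of what that factor extraction might look like, with simulated data standing in for real survey responses (the item count and the number of factors are arbitrary):

```python
# Sketch: pull shared latent factors out of a battery of illusion items.
# The data is simulated; real input would be one row per respondent,
# one column per illusion question.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_respondents = 5000
n_items = 8                           # hypothetical illusion questions

# One underlying "perceptual ambiguity" trait, plus per-item noise.
trait = rng.normal(size=n_respondents)
items = np.column_stack([
    trait + rng.normal(scale=1.5, size=n_respondents)
    for _ in range(n_items)
])

fa = FactorAnalysis(n_components=2)
scores = fa.fit_transform(items)      # per-respondent factor scores

# Loadings show which illusions hang together on which factor; here
# everything should load on one factor, since only one trait exists.
print(fa.components_.round(2))
```

If the loadings come out clean, each respondent’s factor scores become the predictors for the follow-up questions about biases and beliefs.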

The big question is whether some non-neurological factor influences perception of illusions – like maybe just trying really hard to see them. I’m not sure how to adjust for that, except to say that the pattern here doesn’t really look like that. The Dancer illusion was the one most susceptible to increased effort, and it got the weakest results. On a quick check, it doesn’t look like this is all due to something obvious like gender or age. But maybe there’s still some confounding factor that I’ve missed.
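One standard way to probe for that kind of confounding is a regression that includes the obvious demographics alongside group membership. A minimal sketch, with simulated data and placeholder variable names:

```python
# Sketch: does a group difference in illusion ambiguity survive
# adjustment for obvious demographics? Data and effect sizes are
# placeholders, not the real survey.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5000
age = rng.normal(35, 10, n)
male = rng.integers(0, 2, n).astype(float)
group = rng.integers(0, 2, n).astype(float)  # e.g. transhumanist or not

# Simulated outcome: a real group effect plus mild demographic effects.
logit = -1.5 + 0.8 * group + 0.01 * (age - 35) + 0.1 * male
ambiguous = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

X = sm.add_constant(np.column_stack([group, age, male]))
fit = sm.Logit(ambiguous, X).fit(disp=0)
print(fit.summary(xname=["const", "group", "age", "male"]))
# If the "group" coefficient stays significant with age and sex in the
# model, the group effect isn't just those confounders in disguise.
```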


Mon, 10 Jul 2017 6:35:41 EDT

Consider the following procedure:

  1. Create unreasonably high standards that people are supposed to follow.
  2. Watch as people fail to meet them and thereby accumulate “debt”.
  3. Provide a way for people to discharge their debt by sacrificing their agency to some entity (concrete or abstract).

This is a common way to subjugate people and extract resources from them.  Some examples:

  • Christianity: Christianity defines many natural human emotions and actions as “sins” (i.e. things that accumulate debt), such that almost all Christians sin frequently.  Even those who follow all the rules have “original sin”.  Christianity allows people to discharge their debt by asking Jesus to bear their sins (thus becoming subservient to Jesus/God).
  • The Western education system: Western schools (and many non-Western schools) create unnatural standards of behavior that are hard for students to follow.  When students fail to meet these standards, they are told they deserve punishments including public humiliation and being poor as an adult.  School doesn’t give a way to fully discharge debts, leading to anxiety and depression in many students and former students, but people can partially discharge debt by admitting that they are in an important sense subservient to the education system (e.g. accepting domination from the more-educated boss in the workplace).
  • Effective altruism: The drowning child argument (promoted by effective altruists such as Peter Singer) argues that middle-class Americans have an obligation to sacrifice luxuries to save the lives of children in developing countries, or do something at least this effective (in practice, many effective altruists instead support animal welfare or existential risk organizations).  This is an unreasonably high standard; nearly no one actually sacrifices all their luxuries (living in poverty) to give away more money.  Effective altruism gives a way to discharge this debt: you can just donate 10% of your income to an effective charity (sacrificing some of your agency to it), or change your career to one that does more good.  (This doesn’t work for everyone, and many “hardcore EAs” continue to struggle with scrupulosity despite donating much more than 10% of their income or changing their career plans significantly, since they always could be doing more).
  • The rationalist community: I hesitate to write this section for a few reasons (specifically, it’s pretty close to home and is somewhat less clear given that some rationalists have usefully criticized some of the dynamics I’m complaining about).  But a subtext I see in the rationalist community says something like: “You’re biased so you’re likely to be wrong and make bad decisions that harm other people if you take actions in the world, and it’ll be your fault.  Also, the world is on fire and you’re one of the few people who knows about this, so it’s your responsibility to do something about it.  Luckily, you can discharge some of your debts by improving your own rationality, following the advice of high-level rationalists, and perhaps giving them money.”  That’s clearly an instance of this pattern; no one is unbiased, “high-level rationalists” included.  (It’s hard to say where exactly this subtext comes from, and I don’t think it’s anyone’s “fault”, but it definitely seems to exist; I’ve been affected by it myself, and I think it’s part of what causes akrasia in many rationalists.)

There are many more examples; I’m sure you can think of some.  Setting up a system like this has some effects:

  • Hypocrisy: Almost no one actually follows the standards, but they sometimes pretend they do.  Since standards are unreasonably high, they are enforced inconsistently, often against the most-vulnerable members of a group, while the less-vulnerable maintain the illusion that they are actually following the standards.
  • Self-violence: Buying into unreasonably high standards will make someone turn their mind against itself.  Their mind will split between the “righteous” part that is trying to follow and enforce the unreasonably high standards, and the “sinful” part that is covertly disobeying these standards in order to get what the mind actually wants (which is often in conflict with the standards).  Through neglect and self-violence, the “sinful” part of the mind develops into a shadow.  Self-hatred is a natural result of this process.
  • Distorted perception and cognition: The righteous part of the mind sometimes has trouble looking at ways in which the person is failing to meet standards (e.g. it will avoid looking at things that the person might be responsible for fixing).  Consciousness will dim when there’s risk of seeing that one is not meeting the standards (and sometimes also when there’s risk of seeing that others are not meeting the standards).  Concretely, one can imagine someone who gets lost surfing the internet to avoid facing some difficult work they’re supposed to do, or someone who avoids thinking about the ways in which their project is likely to fail.  Given the extent of the high standards and the debt that most people feel they are in, this will often lead to extremely distorted perception and cognition, such that coming out of it feels like waking from a dream.
  • Motivational problems: Working is one way to discharge debt, but working is less motivating if all products of your work go to debt-collectors rather than yourself.  The “sinful” part of the mind will resist work, as it expects to derive little benefit from it.
  • Fear: Accumulating lots of debt gives one the feeling that, at any time, debt-collectors could come and demand anything of you.  This causes the scrupulous to live in fear.  Sometimes, there isn’t even a concretely-identifiable entity they’re afraid of, but it’s clear that they’re afraid of something.

Systems involving unreasonably high standards could theoretically be justified if they were good coordination mechanisms.  But it seems implausible that they are.  Why not just make the de jure norms ones that people are actually likely to follow?  Surely a sufficient set of norms exists, since people are already following the de facto ones.  You can coordinate a lot without optimizing your coordination mechanism for putting everyone in debt!

I take the radical position that TAKING UNREASONABLY HIGH STANDARDS SERIOUSLY IS A REALLY BAD IDEA and ALL OF MY FRIENDS AND PERHAPS ALL HUMANS SHOULD STOP DOING IT.  Unreasonably high standards are responsible for a great deal of violence against life, epistemic problems, and horribleness in general.

(It’s important to distinguish having unreasonably high standards from having a preference ordering whose most-preferred state is impractical to attain; the second does not lead to the same problems unless there’s some way of obligating people to reach an unreasonably good state in the preference ordering.  Attaining a decent but non-maximally-preferred state should perhaps feel annoying or aesthetically displeasing, but not anxiety-inducing.)

My advice to the scrupulous: you are being scammed and you are giving your life away to scammers.  The debts that are part of this scam are fake, and you can safely ignore almost all of them since they won’t actually be enforced.  The best way to make the world better involves first refusing to be scammed, so that you can benefit from the products of your own labor (thereby developing intrinsic motivation to do useful things) instead of using them to pay imaginary debts, and so you can perceive the world accurately without fear.  You almost certainly have significant intrinsic motivation for helping others; you are more likely to successfully help them if your help comes from intrinsic motivation and abundance rather than fear and obligation.



Mon, 10 Jul 2017 6:35:30 EDT


Do you get annoyed when people repeat claims that you know aren’t true? Do you feel the urge to correct them, even when you know it’s not important? Do you feel ashamed when you realize you repeated a false claim or made a grammar error? Do you habitually add disclaimers to your statements and still […]

Mon, 10 Jul 2017 6:35:16 EDT

Daniel Lakens writes:

I was listening to a recent Radiolab episode on blame and guilt, where the guest Robert Sapolsky mentioned a famous study [by Danziger and friends] on judges handing out harsher sentences before lunch than after lunch. The idea is that their mental resources deplete over time, and they stop thinking carefully about their decision – until having a bite replenishes their resources. The study is well-known, and often (as in the Radiolab episode) used to argue how limited free will is, and how much of our behavior is caused by influences outside of our own control. I had never read the original paper, so I decided to take a look.

During the podcast, it was mentioned that the percentage of favorable decisions drops from 65% to 0% over the number of cases that are decided upon. This sounded unlikely. I looked at Figure 1 from the paper (below), and I couldn’t believe my eyes. Not only is the drop indeed as large as mentioned – it occurs three times in a row over the course of the day, and after a break, it returns to exactly 65%!

I think we should dismiss this finding, simply because it is impossible. Given how impossibly large the effect size is, anyone with even a modest understanding of psychology should be able to conclude that this data pattern cannot be caused by a psychological mechanism. As psychologists, we shouldn’t teach or cite this finding, nor use it in policy decisions as an example of psychological bias in decision making.

I was aware of one explanation for why the effect reported by Danziger and friends was so large. Andreas Glockner explored what would happen if favourable rulings take longer than unfavourable rulings, and the judge (rationally) plans ahead, stopping for a break if they believe the next case will take longer than the time left in the session. Simulating this scenario, Glockner generated an effect of similar magnitude to the original paper.
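Glockner’s mechanism is easy to reproduce in a toy simulation. The sketch below is a simplified reconstruction with invented parameters (case durations, session length, base rate), not his actual model:

```python
# Toy reconstruction of Glockner's mechanism; the durations, session
# length, and base rate are invented, not his actual parameters.
import random

FAVORABLE_TIME = 10.0     # favorable rulings take longer to process
UNFAVORABLE_TIME = 5.0
SESSION_LENGTH = 120.0    # minutes until the scheduled break
P_FAVORABLE = 0.5         # underlying favorable rate, constant all day

def run_session(cases):
    """Hear cases in order; stop for the break when the next one won't fit."""
    t, heard = 0.0, []
    for favorable in cases:
        duration = FAVORABLE_TIME if favorable else UNFAVORABLE_TIME
        if t + duration > SESSION_LENGTH:   # plans ahead, breaks early
            break
        t += duration
        heard.append(favorable)
    return heard

# Favorable rate by position within the session, over many sessions.
counts = {}  # position -> (cases heard, favorable rulings)
for _ in range(20_000):
    cases = [random.random() < P_FAVORABLE for _ in range(40)]
    for i, fav in enumerate(run_session(cases)):
        n, k = counts.get(i, (0, 0))
        counts[i] = (n + 1, k + fav)

for i in sorted(counts):
    n, k = counts[i]
    print(f"position {i:2d}: {k / n:.2f} favorable (n={n})")
```

With these numbers the favorable rate sits near 50% early in a session and then drops sharply in the final slots, because a long favorable case no longer fits before the break; each fresh session resets the pattern, matching the sawtooth Lakens describes.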

However, I was never convinced that the case ordering was random, a core assumption behind Danziger and friends’ finding. In my brief legal career I often attended preliminary court hearings where matters were listed in a long (possibly random) court list. Then an order would emerge. Those with legal representation would go first. Senior lawyers would get priority over junior lawyers. Matters for immediate adjournment would be heard early. And so on. There was no formal procedure for this to occur other than discussion with the court orderly before and during the session.

It turns out that these Israeli judges (or, I should say, a panel of a judge, a criminologist and a social worker) experienced a similar dynamic. Lakens points to a PNAS paper in which Keren Weinshall-Margela (of the Israeli Supreme Court’s research division) and John Shapard investigated whether the ordering of cases was actually random. The answer was no:

We examined data provided by the authors and obtained additional data from 12 hearing days (n = 227 decisions). We also interviewed three attorneys, a parole panel judge, and five personnel at Israeli Prison Services and Court Management, learning that case ordering is not random and that several factors contribute to the downward trend in prisoner success between meal breaks. The most important is that the board tries to complete all cases from one prison before it takes a break and to start with another prison after the break. Within each session, unrepresented prisoners usually go last and are less likely to be granted parole than prisoners with attorneys.

Danziger and friends have responded to these claims and attempted to resuscitate their article, but there is something to be said for the “effect is too large” heuristic proposed by Lakens. No amount of back and forth about the finer details of the methodology can avoid that point.

The famous story about the effect of defaults on organ donation provides another example. When I first heard the claim that 99.98% of Austrians, but only 12% of Germans, are organ donors due to the default organ donation option in their driver’s licence renewal, I simply thought the size of the effect was unrealistic. Do only 2 in 10,000 Austrians tick the box? I would assume more than 2 in 10,000 would tick it by mistake, thinking that would make them organ donors. And when you turn to the original paper or examine the actual organ donation process, you will see it has nothing to do with driver’s licences or ticking boxes. The claimed effect size and the story simply did not line up.

Andrew Gelman often makes a similar point. Much research in the social sciences reflects an attempt to find tiny effects in noisy data, and any large effects we find are likely gross overestimates of the true effect (to the extent the effect exists). Gelman and John Carlin call this a Type M error.
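The Type M point takes only a few lines to demonstrate; all numbers in this sketch are purely illustrative:

```python
# Sketch of a Type M (magnitude) error: in a noisy, underpowered study,
# the estimates that happen to reach significance must be large, and so
# exaggerate the true effect. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(2)
true_effect = 0.1          # small real effect
se = 0.5                   # noisy measurement: standard error of estimate
estimates = rng.normal(true_effect, se, 100_000)

significant = np.abs(estimates) > 1.96 * se   # two-sided p < 0.05
print(f"true effect:                       {true_effect}")
print(f"mean estimate, all studies:        {estimates.mean():.2f}")
print(f"mean |estimate|, significant only: "
      f"{np.abs(estimates[significant]).mean():.2f}")
```

With a true effect of 0.1 and a standard error of 0.5, the estimates that happen to reach significance average around 1.2 in absolute value, roughly a tenfold exaggeration of the true effect.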

Finally, I had intended to include Glockner’s paper in my critical behavioural economics and behavioural science reading list, but it slipped my mind. I have now included it, along with these other articles, for a much richer story.

