Consultants are paid for certainty
Daniel Kahneman -> Usefulness of models, and the limits of models (need to combine statistics with instinct)
- Lived in Nazi Occupied France
- Left for Israel Post-War and studied psychology
- Ended up expert on psychology for Israeli Defense Force
- Asked to develop a selection process for sorting recruits into the various branches of the armed forces
- He felt the process of interviewing had biases, for example the Halo Effect
- Distrustful of self, questioned his biases
- Paratrooper in Israeli Armed forces
- Life of the party and confident
“He told them to pose very specific questions, designed to determine not how a person thought of himself but how the person had actually behaved. The questions were not just fact-seeking but designed to disguise the facts being sought. And at the end of each section, before moving on to the next, the interviewer was to assign a rating from 1 to 5 that corresponded with choices ranging from ‘never displays this kind of behavior’ to ‘always displays this kind of behavior.’ So, for example, when evaluating a recruit’s sociability, they’d give a 5 to a person who ‘forms close social relationships and identifies completely with the whole group’ and a 1 to a person who was ‘completely isolated.’ Even Danny could see that there were all kinds of problems with his methods, but he didn’t have the time to worry too much about them.”
- he then checked these assessments against performance retrospectively
- Found that those who succeeded would have succeeded in any branch
“Later, when he was a university professor, Danny would tell students, ‘When someone says something, don’t ask yourself if it is true. Ask what it might be true of.’ That was his intellectual instinct, his natural first step to the mental hoop: to take whatever someone had just said to him and try not to tear it down but to make sense of it. The question the Israeli military had asked him—Which personalities are best suited to which military roles?—had turned out to make no sense. And so Danny had gone and answered a different, more fruitful question: How do we prevent the intuition of interviewers from screwing up their assessment of army recruits? He’d been asked to divine the character of the nation’s youth. Instead he’d found out something about people who try to divine other people’s character: Remove their gut feelings, and their judgments improved. He’d been handed a narrow problem and discovered a broad truth. ‘The difference between Danny and the next nine hundred and ninety-nine thousand nine hundred and ninety-nine psychologists is his ability to find the phenomenon and then explain it in a way that applies to other situations,’ said Dale Griffin, a psychologist at the University of British Columbia. ‘It looks like luck but he keeps doing it.’”
- Transitive property comparison:
	- Rational assumption: if a person prefers A to B, and B to C, then that person should prefer A to C
	- However, people fail this test, as shown in an example of picking mates
	- People make judgments relative to some ideal
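The transitivity test above can be made concrete: given a set of pairwise choices, check whether any cycle like A > B > C > A exists. A minimal sketch with hypothetical preference data:

```python
# Sketch: detect an intransitive preference cycle among options.
# The preference pairs below are hypothetical, for illustration only.

def is_transitive(prefs):
    """prefs: set of (winner, loser) pairs from pairwise choices.
    Returns False if any 3-cycle (a>b, b>c, c>a) exists."""
    items = {x for pair in prefs for x in pair}
    for a in items:
        for b in items:
            for c in items:
                if (a, b) in prefs and (b, c) in prefs and (c, a) in prefs:
                    return False
    return True

print(is_transitive({("A", "B"), ("B", "C"), ("A", "C")}))  # True: consistent
print(is_transitive({("A", "B"), ("B", "C"), ("C", "A")}))  # False: a cycle
```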
On “features of similarity”: “He argued that when people compared two things, and judged their similarity, they were essentially making a list of features. These features are simply what they notice about the objects. They count up the noticeable features shared by two objects: The more they share, the more similar they are; the more they don’t share, the more dissimilar they are. Not all objects have the same number of noticeable features: New York City had more of them than Tel Aviv, for instance. Amos built a mathematical model to describe what he meant—and to invite others to test his theory, and prove him wrong.”
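Tversky’s contrast model scores similarity from shared and distinctive features: sim(A, B) = θ·f(A∩B) − α·f(A−B) − β·f(B−A). A minimal sketch; the feature sets and weights here are made up for illustration, not taken from the paper:

```python
# Sketch of Tversky's contrast model of similarity:
#   sim(A, B) = theta*|A & B| - alpha*|A - B| - beta*|B - A|
# With alpha != beta the measure is asymmetric: the feature-poor object
# judged against the feature-rich one scores higher than the reverse,
# which is why Tel Aviv seems more similar to New York than New York
# does to Tel Aviv. Feature sets and weights are illustrative.

def tversky_sim(a, b, theta=1.0, alpha=0.8, beta=0.2):
    return (theta * len(a & b)
            - alpha * len(a - b)   # features of a that b lacks
            - beta * len(b - a))   # features of b that a lacks

new_york = {"coastal", "skyscrapers", "subway", "finance hub",
            "global media", "huge population"}
tel_aviv = {"coastal", "skyscrapers", "tech scene"}

print(tversky_sim(tel_aviv, new_york))  # Tel Aviv compared to New York
print(tversky_sim(new_york, tel_aviv))  # New York compared to Tel Aviv: lower
```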
“They might have both a lot in common and a lot not in common. Love and hate, and funny and sad, and serious and silly: Suddenly they could be seen—as they feel—as having more fluid relationships to each other. They weren’t simply opposites on a fixed mental continuum; they could be thought of as similar in some of their features and different in others. Amos’s theory also offered a fresh view into what might be happening when people violated transitivity and thus made seemingly irrational choices. … They were collections of features. Those features might become more or less noticeable; their prominence in the mind depended on the context in which they were perceived. And the choice created its own context: Different features might assume greater prominence in the mind when the coffee was being compared to tea (caffeine) than when it was being compared to hot chocolate (sugar). And what was true of drinks might also be true of people, and ideas, and emotions.”
- Pupil dilation as a proxy for engagement / attention
- Attentional switching
- “The successful fighter pilots were better able to switch attention than the unsuccessful ones, and both were better at it than Israeli bus drivers. Eventually one of Danny’s students discovered that you could predict, from how efficiently they switched channels, which Israeli bus drivers were more likely to have accidents.”
Education is knowing what to do when you don’t know
“The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information” was a paper, written by Harvard psychologist George Miller, which showed that people had the ability to hold in their short-term memory seven items, more or less. Any attempt to get them to hold more was futile.
“At any rate, the most effective way to teach people longer strings of information was to feed the information into their minds in smaller chunks.”
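A toy illustration of chunking (the digit string and group size are arbitrary examples, not from Miller’s paper): the same twelve digits are far easier to hold as four short groups than as one unbroken run.

```python
# Toy illustration of chunking: split a run of digits into small groups.

def chunk(s, size=3):
    """Split string s into consecutive groups of at most `size` characters."""
    return [s[i:i + size] for i in range(0, len(s), size)]

print(chunk("149219453141"))  # ['149', '219', '453', '141']
```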
- Noticed that people’s probability judgments deviated from the predictions of Bayes’ theorem. One illustrative experiment was the bookbag-and-poker-chips experiment. Some psychologists thought that people were conservative Bayesians (i.e. new information moved their predictions in the right direction, just not far enough).
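The Bayesian benchmark in that experiment can be sketched directly. The chip proportions and draw counts below are illustrative, not the experiment’s actual parameters: two bags, one mostly red and one mostly blue; Bayes’ theorem says how strongly a run of draws should shift belief about which bag is in hand.

```python
# Sketch of Bayesian updating in a bookbag-and-poker-chips setup
# (illustrative parameters). One bag is 70% red chips, the other 70%
# blue; chips are drawn with replacement from a single unknown bag.

def posterior_red_bag(draws, p_red_in_red_bag=0.7, prior=0.5):
    """P(red-majority bag | draws), where draws is a string of 'R'/'B'."""
    like_red_bag = prior          # running joint probability, red-bag branch
    like_blue_bag = 1 - prior     # running joint probability, blue-bag branch
    for chip in draws:
        if chip == "R":
            like_red_bag *= p_red_in_red_bag
            like_blue_bag *= 1 - p_red_in_red_bag
        else:
            like_red_bag *= 1 - p_red_in_red_bag
            like_blue_bag *= p_red_in_red_bag
    return like_red_bag / (like_red_bag + like_blue_bag)

# Eight reds and four blues in twelve draws:
print(round(posterior_red_bag("R" * 8 + "B" * 4), 3))  # 0.967
```

The conservatism finding was that subjects shown evidence like this tended to report a probability much closer to the prior than the Bayesian answer.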
- Belief in the Law of Small Numbers was Amos Tversky and Daniel Kahneman’s first collaboration. Based on the premise of the paper, they argued that psychologists were likely to draw broad conclusions from small samples rather than increase their sample size.
- Gambler’s Fallacy
- Paul Hoffman attempted to understand decision making by analyzing the inputs experts used to make decisions (“cues”) and inferring from those decisions the weights they had placed on the inputs.
- Lew Goldberg developed an algorithm for predicting stomach cancer from the features of an ulcer on an X-ray. He found that doctors tended to contradict their own past predictions, and also performed worse than the algorithm. These results were replicated with clinical psychologists, where it was additionally shown that experience was not a predictor of accuracy.
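The Hoffman/Goldberg result can be sketched as a simulation. The parameters and noise levels are assumed for illustration, not taken from Goldberg’s data: an expert weighs a diagnostic cue but applies it inconsistently, while a linear model fitted to the expert’s own judgments applies the same weighting every time, and so tracks the truth better than the expert does.

```python
# Illustrative simulation of a "model of the judge": a linear model
# fitted to an inconsistent expert's own judgments outperforms the
# expert. All parameters here are assumptions for the sketch.
import random

random.seed(0)

N = 2000
cue = [random.gauss(0, 1) for _ in range(N)]        # e.g., ulcer size on the X-ray
truth = [c + random.gauss(0, 0.5) for c in cue]     # outcome mostly driven by the cue
expert = [c + random.gauss(0, 1.0) for c in cue]    # expert uses the cue, plus inconsistency

def mean(xs):
    return sum(xs) / len(xs)

def corr(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Fit the "model of the judge": regress the expert's judgments on the cue,
# then apply the fitted line consistently to every case.
mx, me = mean(cue), mean(expert)
slope = (sum((x - mx) * (e - me) for x, e in zip(cue, expert))
         / sum((x - mx) ** 2 for x in cue))
model = [me + slope * (x - mx) for x in cue]

expert_r = corr(expert, truth)   # expert vs. reality
model_r = corr(model, truth)     # model-of-the-expert vs. reality
print(model_r > expert_r)        # True: the consistent model wins
```

The model wins because it keeps the expert’s policy while discarding the expert’s random inconsistency.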
Subjective Probability: A Judgment of Representativeness -> When decisions are made under uncertain odds, the mind uses heuristics (rules of thumb) to come to a decision. It judges probability by how closely whatever is being judged fits its mental model of that thing.
Amos and Danny theorized that when people made mistakes they did so in a systematic fashion.
the mind had these mechanisms for making judgments and decisions that were usually useful but also capable of generating serious error
Availability: A Heuristic for Judging Frequency and Probability -> The more easily people can call some scenario to mind – the more available it is to them – the more probable they find it to be: “the easier it is for me to retrieve from my memory, the more likely it is.” Human judgment is distorted by the memorable.
The conditionality heuristic -> In judging the degree of uncertainty in any situation, they noted, people made “unstated assumptions.” For example, people assume normal operating conditions when estimating a company’s profitability, and do not account for cases where those assumptions fail to hold.
Anchoring and Adjustment
People anchor on the first piece of information they receive, even when that information is completely irrelevant.
On the Psychology of Prediction
“In making predictions and judgments under uncertainty,” they wrote, “people do not appear to follow the calculus of chance or the statistical theory of prediction. Instead, they rely on a limited number of heuristics which sometimes yield reasonable judgments and sometimes lead to severe and systematic error.”
“People predict by making up stories
People predict very little and explain everything
People live under uncertainty whether they like it or not
People believe they can tell the future if they work hard enough
People accept any explanation as long as it fits the facts
The handwriting was on the wall, it was just the ink that was invisible
People often work hard to obtain information they already have
And avoid new knowledge
Man is a deterministic device thrown into a probabilistic Universe
In this match, surprises are expected
Everything that has already happened must have been inevitable”
“The difference between a judgment and a prediction wasn’t as obvious to everyone as it was to Amos and Danny. To their way of thinking, a judgment (“he looks like a good Israeli army officer”) implies a prediction (“he will make a good Israeli army officer”), just as a prediction implies some judgment—without a judgment, how would you predict? In their minds, there was a distinction: A prediction is a judgment that involves uncertainty. “Adolf Hitler is an eloquent speaker” is a judgment you can’t do much about. “Adolf Hitler will become chancellor of Germany” is, at least until January 30, 1933, a prediction of an uncertain event that eventually will be proven either right or wrong.”
“Evidently, people respond differently when given no specific evidence and when given worthless evidence,” wrote Danny and Amos. “When no specific evidence is given, the prior probabilities are properly utilized; when worthless specific evidence is given, prior probabilities are ignored.”
“Man’s inability to see the power of regression to the mean leaves him blind to the nature of the world around him.”
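Regression to the mean can be made concrete with a small simulation (the parameters are assumed for illustration): each performance is stable skill plus luck, so the performances that follow an extreme performance drift back toward the performer’s average with no feedback involved. This is why praise after an unusually good attempt seems to “cause” a worse one.

```python
# Illustrative simulation of regression to the mean: performance =
# skill + luck. Among the top 10% of first attempts, second attempts
# are worse on average purely by chance. Parameters are assumptions.
import random

random.seed(1)

def performance(skill):
    return skill + random.gauss(0, 1)   # stable skill plus luck

skills = [random.gauss(0, 1) for _ in range(10000)]
first = [performance(s) for s in skills]
second = [performance(s) for s in skills]

# Select the top decile of first performances and compare averages.
cutoff = sorted(first)[int(0.9 * len(first))]
top = [(f, s) for f, s in zip(first, second) if f >= cutoff]
avg_first = sum(f for f, _ in top) / len(top)
avg_second = sum(s for _, s in top) / len(top)
print(avg_second < avg_first)   # True: the stars "decline" by chance alone
```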
Biases (errors) provide a footprint of the mechanism (heuristic) by which the mind arrived at the conclusion.
“Their memories of the odds they had assigned to various outcomes were badly distorted. They all believed that they had assigned higher probabilities to what happened than they actually had. They greatly overestimated the odds that they had assigned to what had actually happened. That is, once they knew the outcome, they thought it had been far more predictable than they had found it to be before, when they had tried to predict it. A few years after Amos described the work to his Buffalo audience, Fischhoff named the phenomenon ‘hindsight bias.’ In his talk to the historians, Amos described their occupational hazard: the tendency to take whatever facts they had observed (neglecting the many facts that they did not or could not observe) and make them fit neatly into a confident-sounding story.”
“Historians imposed false order upon random events, too, probably without even realizing what they were doing. Amos had a phrase for this: ‘creeping determinism.’”