-
Consultants are paid for certainty
-
Daryl Morey: usefulness of models, and the limits of models (need to combine stats with instinct)
-
Daniel Kahneman
- Lived in Nazi Occupied France
- Left for Israel Post-War and studied psychology
- Ended up an expert on psychology for the Israel Defense Forces
- Asked to develop a selection process for sorting recruits into the various branches of the armed forces
- He felt the process of interviewing had biases, for example the halo effect
- Distrustful of self, questioned his biases
-
Amos Tversky
- Paratrooper in Israeli Armed forces
- Life of the party and confident
"He told them to pose very specific questions, designed to determine not how a person thought of himself but how the person had actually behaved. The questions were not just fact-seeking but designed to disguise the facts being sought. And at the end of each section, before moving on to the next, the interviewer was to assign a rating from 1 to 5 that corresponded with choices ranging from 'never displays this kind of behavior' to 'always displays this kind of behavior.' So, for example, when evaluating a recruit's sociability, they'd give a 5 to a person who 'forms close social relationships and identifies completely with the whole group' and a 1 to a person who was 'completely isolated.' Even Danny could see that there were all kinds of problems with his methods, but he didn't have the time to worry too much about them."
- he then checked these assessments against performance retrospectively
-
- Found that those who succeeded tended to succeed across any branch; the same traits predicted performance everywhere
"Later, when he was a university professor, Danny would tell students, 'When someone says something, don't ask yourself if it is true. Ask what it might be true of.' That was his intellectual instinct, his natural first step to the mental hoop: to take whatever someone had just said to him and try not to tear it down but to make sense of it. The question the Israeli military had asked him (Which personalities are best suited to which military roles?) had turned out to make no sense. And so Danny had gone and answered a different, more fruitful question: How do we prevent the intuition of interviewers from screwing up their assessment of army recruits? He'd been asked to divine the character of the nation's youth. Instead he'd found out something about people who try to divine other people's character: Remove their gut feelings, and their judgments improved. He'd been handed a narrow problem and discovered a broad truth. 'The difference between Danny and the next nine hundred and ninety-nine thousand nine hundred and ninety-nine psychologists is his ability to find the phenomenon and then explain it in a way that applies to other situations,' said Dale Griffin, a psychologist at the University of British Columbia. 'It looks like luck but he keeps doing it.'"
-
- Transitive property comparison:
- The rational-choice assumption: if a person prefers A to B, and B to C, then the person should prefer A to C
- However, people fail this test, as shown in an example of picking mates
- People make judgments relative to some ideal
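The transitivity assumption above can be checked mechanically over any set of pairwise preferences. A minimal sketch (the preference pairs below are invented for illustration):

```python
from itertools import permutations

def is_transitive(prefers):
    """True if the preference relation has no A > B > C chain lacking A > C.

    `prefers` is a set of (a, b) pairs meaning "a is preferred to b".
    """
    items = {x for pair in prefers for x in pair}
    for a, b, c in permutations(items, 3):
        if (a, b) in prefers and (b, c) in prefers and (a, c) not in prefers:
            return False
    return True

# Rational pattern: A > B, B > C, and A > C
assert is_transitive({("A", "B"), ("B", "C"), ("A", "C")})
# Cyclic pattern of the kind people actually produce: A > B > C > A
assert not is_transitive({("A", "B"), ("B", "C"), ("C", "A")})
```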
"features of similarity." He argued that when people compared two things, and judged their similarity, they were essentially making a list of features. These features are simply what they notice about the objects. They count up the noticeable features shared by two objects: The more they share, the more similar they are; the more they don't share, the more dissimilar they are. Not all objects have the same number of noticeable features: New York City had more of them than Tel Aviv, for instance. Amos built a mathematical model to describe what he meant, and to invite others to test his theory, and prove him wrong.
"They might have both a lot in common and a lot not in common. Love and hate, and funny and sad, and serious and silly: Suddenly they could be seen, as they feel, as having more fluid relationships to each other. They weren't simply opposites on a fixed mental continuum; they could be thought of as similar in some of their features and different in others. Amos's theory also offered a fresh view into what might be happening when people violated transitivity and thus made seemingly irrational choices. … They were collections of features. Those features might become more or less noticeable; their prominence in the mind depended on the context in which they were perceived. And the choice created its own context: Different features might assume greater prominence in the mind when the coffee was being compared to tea (caffeine) than when it was being compared to hot chocolate (sugar). And what was true of drinks might also be true of people, and ideas, and emotions."
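Amos's feature-counting idea can be sketched as a small model: similarity rises with shared features and falls with distinctive ones. The feature sets and weights below are invented for illustration, not taken from his paper:

```python
def contrast_similarity(a, b, theta=1.0, alpha=0.5, beta=0.5):
    """Similarity as weighted shared features minus distinctive features.

    theta, alpha, beta are illustrative weights on the three feature counts.
    """
    a, b = set(a), set(b)
    return theta * len(a & b) - alpha * len(a - b) - beta * len(b - a)

# Hypothetical feature lists for three drinks:
coffee = {"hot", "caffeine", "bitter"}
tea = {"hot", "caffeine", "leafy"}
cocoa = {"hot", "sugar", "sweet"}

# With these raw feature counts, coffee sits closer to tea than to cocoa,
# because caffeine is a shared feature in one comparison and not the other:
assert contrast_similarity(coffee, tea) > contrast_similarity(coffee, cocoa)
```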
- Pupil dilation as a proxy for engagement / attention
- Attentional switching
- "The successful fighter pilots were better able to switch attention than the unsuccessful ones, and both were better at it than Israeli bus drivers. Eventually one of Danny's students discovered that you could predict, from how efficiently they switched channels, which Israeli bus drivers were more likely to have accidents."
Education is knowing what to do when you donât know
-
"The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information" was a paper, written by Harvard psychologist George Miller, which showed that people had the ability to hold in their short-term memory seven items, more or less. Any attempt to get them to hold more was futile.
"At any rate, the most effective way to teach people longer strings of information was to feed the information into their minds in smaller chunks."
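The chunking point can be illustrated directly; the grouping size here is arbitrary:

```python
def chunk(s, size):
    """Split a string into consecutive fixed-size chunks."""
    return [s[i:i + size] for i in range(0, len(s), size)]

# Ten digits exceed the 7 +/- 2 limit as single items, but grouped
# phone-number style the same string becomes just four chunks:
assert chunk("8005551234", 3) == ["800", "555", "123", "4"]
```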
- Noticed that people's predictions about probabilities deviated from the expectations of Bayes' theorem. One illustrative experiment was the bookbag-and-poker-chips experiment. Some psychologists thought that people were conservative Bayesians (i.e., new information would move their prediction in the right direction, just not far enough).
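The bookbag-and-poker-chip setup can be worked through with Bayes' rule in odds form. The bag compositions below are illustrative; the point is how far the correct posterior moves compared with subjects' conservative, roughly-in-the-right-direction answers:

```python
def posterior_bag_a(prior_a, p_red_a, p_red_b, draws):
    """P(bag A | draws), assuming independent draws with replacement.

    Works in odds form: posterior odds = prior odds * likelihood ratios.
    """
    odds = prior_a / (1 - prior_a)
    for d in draws:
        like_a = p_red_a if d == "red" else 1 - p_red_a
        like_b = p_red_b if d == "red" else 1 - p_red_b
        odds *= like_a / like_b
    return odds / (1 + odds)

# Bag A is 70% red chips, bag B is 30% red, equal priors.
# After drawing eight reds and four whites in twelve draws:
p = posterior_bag_a(0.5, 0.7, 0.3, ["red"] * 8 + ["white"] * 4)
assert 0.96 < p < 0.98  # Bayes is far more decisive than typical subjects
```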
"Belief in the Law of Small Numbers" was Amos Tversky and Daniel Kahneman's first collaboration. Based on the premise of the paper, they argued that psychologists were likely to draw broad conclusions from small samples rather than increasing their sample size, a professional cousin of the gambler's fallacy.
- Paul Hoffman attempted to understand decision making by analyzing the inputs experts used to make decisions ("cues") and inferring from those decisions the weights they had placed on the inputs.
- Lew Goldberg developed an algorithm for predicting stomach cancer from ulcer features in an X-ray. He found that doctors tended to contradict their own past predictions, and also performed worse than the algorithm.
- These results were replicated with clinical psychologists, where it was additionally shown that experience was not a predictor of accuracy.
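Hoffman's cue-weighting idea amounts to a linear model of the expert. The cue names and weights below are hypothetical, not Goldberg's actual X-ray features; the point is that such a model never contradicts itself on identical inputs, which is precisely where the doctors lost to it:

```python
def linear_judgment(cues, weights):
    """Model an expert's judgment as a weighted sum of cue values."""
    return sum(w * cues[name] for name, w in weights.items())

# Hypothetical cues and inferred weights (illustration only):
weights = {"ulcer_size": 0.5, "crater_rim": 0.3, "contour": 0.2}
case = {"ulcer_size": 0.8, "crater_rim": 0.4, "contour": 0.1}

# Unlike the doctors, the model returns the same answer every time
# it sees the same case:
assert linear_judgment(case, weights) == linear_judgment(case, weights)
assert abs(linear_judgment(case, weights) - 0.54) < 1e-9
```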
"Subjective Probability: A Judgment of Representativeness": when decisions are made under uncertain odds, the mind uses heuristics (rules of thumb) to come to a decision. It compares whatever it is judging against how closely it fits the mental model of that thing.
Amos and Danny theorized that when people made mistakes they did so in a systematic fashion.
the mind had these mechanisms for making judgments and decisions that were usually useful but also capable of generating serious error
"Availability: A Heuristic for Judging Frequency and Probability": the more easily people can call some scenario to mind (the more available it is to them) the more probable they find it to be. "The easier it is for me to retrieve from my memory, the more likely it is." Human judgment was distorted by the memorable.
The conditionality heuristic: in judging the degree of uncertainty in any situation, they noted, people made "unstated assumptions." People assume normal operating conditions when estimating a company's profitability, and do not account for the cases where those assumptions fail to hold.
Anchoring and Adjustment
People make decisions based on the first piece of information they receive (the anchor), even when that information is completely irrelevant.
On the Psychology of Prediction
"In making predictions and judgments under uncertainty," they wrote, "people do not appear to follow the calculus of chance or the statistical theory of prediction. Instead, they rely on a limited number of heuristics which sometimes yield reasonable judgments and sometimes lead to severe and systematic error."
"People predict by making up stories
People predict very little and explain everything
People live under uncertainty whether they like it or not
People believe they can tell the future if they work hard enough
People accept any explanation as long as it fits the facts
The handwriting was on the wall, it was just the ink that was invisible
People often work hard to obtain information they already have
And avoid new knowledge
Man is a deterministic device thrown into a probabilistic Universe
In this match, surprises are expected
Everything that has already happened must have been inevitable"
"The difference between a judgment and a prediction wasn't as obvious to everyone as it was to Amos and Danny. To their way of thinking, a judgment ('he looks like a good Israeli army officer') implies a prediction ('he will make a good Israeli army officer'), just as a prediction implies some judgment; without a judgment, how would you predict? In their minds, there was a distinction: A prediction is a judgment that involves uncertainty. 'Adolf Hitler is an eloquent speaker' is a judgment you can't do much about. 'Adolf Hitler will become chancellor of Germany' is, at least until January 30, 1933, a prediction of an uncertain event that eventually will be proven either right or wrong."
"Evidently, people respond differently when given no specific evidence and when given worthless evidence," wrote Danny and Amos. "When no specific evidence is given, the prior probabilities are properly utilized; when worthless specific evidence is given, prior probabilities are ignored."
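The contrast in that quote can be made concrete with odds-form Bayes: genuinely worthless evidence has a likelihood ratio of 1 and should leave the base rate untouched. The 70% figure is illustrative, in the spirit of Kahneman and Tversky's engineer/lawyer studies:

```python
def bayes_update(prior, likelihood_ratio):
    """Posterior probability from a prior and an evidence likelihood ratio."""
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

# 70% of the pool are engineers. A personality sketch that fits engineers
# and lawyers equally well carries a likelihood ratio of 1, so the correct
# posterior is still the base rate; subjects instead treated it as 50/50.
assert abs(bayes_update(0.7, 1.0) - 0.7) < 1e-9
```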
"Man's inability to see the power of regression to the mean leaves him blind to the nature of the world around him."
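Regression to the mean falls straight out of a score = skill + luck decomposition; selecting extreme performers also selects for lucky draws, so their next scores fall back toward their skill level with no causal story needed. A small simulation, with all parameters arbitrary:

```python
import random

random.seed(0)

# Each observed score is a stable skill component plus transient luck.
skills = [random.gauss(0, 1) for _ in range(10_000)]
first = [s + random.gauss(0, 1) for s in skills]
second = [s + random.gauss(0, 1) for s in skills]

# Pick the top 100 performers on the first test...
top = sorted(range(len(first)), key=lambda i: first[i], reverse=True)[:100]
avg_first = sum(first[i] for i in top) / len(top)
avg_second = sum(second[i] for i in top) / len(top)

# ...and their average score drops on the retest, purely by construction:
assert avg_second < avg_first
```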
Biases (errors) provide a footprint for the mechanism (heuristic) by which the mind arrived at that conclusion.
"Their memories of the odds they had assigned to various outcomes were badly distorted. They all believed that they had assigned higher probabilities to what happened than they actually had. They greatly overestimated the odds that they had assigned to what had actually happened. That is, once they knew the outcome, they thought it had been far more predictable than they had found it to be before, when they had tried to predict it. A few years after Amos described the work to his Buffalo audience, Fischhoff named the phenomenon 'hindsight bias.' In his talk to the historians, Amos described their occupational hazard: the tendency to take whatever facts they had observed (neglecting the many facts that they did not or could not observe) and make them fit neatly into a confident-sounding story."
"Historians imposed false order upon random events, too, probably without even realizing what they were doing. Amos had a phrase for this. 'Creeping determinism.'"