In this match, Chris Voss's Never Split the Difference, a book on negotiation and bargaining, goes up against Alex Reinhart's Statistics Done Wrong.
Voss spent years as an FBI hostage negotiator. He shares techniques for better handling the negotiations and bargaining situations we all run into throughout life.
Instead of the win/win mindset of old-school negotiators, he says things that ring much truer: a "Yes" means nothing; what really matters is getting to "No", because the negotiation starts after "No".
Statistics Done Wrong, meanwhile, shows how statistics that are generally taken as scientific are riddled with errors, and explains what causes those errors. It's a good book as well.
Under normal circumstances Voss's book would win this match, but I've also started listening to it on the side and am halfway through. Since I'll have finished it by the next round, it seems better to let it go now rather than put it into a match where it would be eliminated outright.
Never Split the Difference - Chris Voss
How to Become the Smartest Person . . . in Any Room
▪ It was quite a sight to see such a brilliant man flustered by what must have seemed unsophisticated foolishness. On the contrary, though, my move was anything but foolish. I was employing what had become one of the FBI’s most potent negotiating tools: the open-ended question
▪ Mnookin, predictably, started fumbling because the frame of the conversation had shifted from how I’d respond to the threat of my son’s murder to how the professor would deal with the logistical issues involved in getting the money. How he would solve my problems. To every threat and demand he made, I continued to ask how I was supposed to pay him and how was I supposed to know that my son was alive.
▪ This mentality baffled Kahneman, who from years in psychology knew that, in his words, “[I]t is self-evident that people are neither fully rational nor completely selfish, and that their tastes are anything but stable.”
▪ There’s the Framing Effect, which demonstrates that people respond differently to the same choice depending on how it is framed (people place greater value on moving from 90 percent to 100 percent—high probability to certainty—than from 45 percent to 55 percent, even though they’re both ten percentage points). Prospect Theory explains why we take unwarranted risks in the face of uncertain losses. And the most famous is Loss Aversion, which shows how people are statistically more likely to act to avert a loss than to achieve an equal gain.
▪ But after the fatally disastrous sieges of Randy Weaver’s Ruby Ridge farm in Idaho in 1992 and David Koresh’s Branch Davidian compound in Waco, Texas, in 1993, there was no denying that most hostage negotiations were anything but rational problem-solving situations. I mean, have you ever tried to devise a mutually beneficial win-win solution with a guy who thinks he’s the messiah?
▪ if emotionally driven incidents, not rational bargaining interactions, constituted the bulk of what most police negotiators had to deal with, then our negotiating skills had to laser-focus on the animal, emotional, and irrational
▪ It all starts with the universally applicable premise that people want to be understood and accepted. Listening is the cheapest, yet most effective concession we can make to get there. By listening intensely, a negotiator demonstrates empathy and shows a sincere desire to better understand what the other side is experiencing.
▪ Contrary to popular opinion, listening is not a passive activity. It is the most active thing you can do.
▪ But allow me to let you in on a secret: Life is negotiation. The majority of the interactions we have at work and at home are negotiations that boil down to the expression of a simple, animalistic urge: I want.
▪ You’ll learn why it’s vitally important to get to “No” because “No” starts the negotiation. You’ll also discover how to step out of your ego and negotiate in your counterpart’s world, the only way to achieve an agreement the other side will implement. Finally, you’ll see how to engage your counterpart by acknowledging their right to choose, and you’ll learn an email technique that ensures that you’ll never be ignored again
Statistics Done Wrong - Alex Reinhart
▪ The situation is so bad that even the authors of surveys of statistical knowledge lack the necessary statistical knowledge to formulate survey questions—the numbers I just quoted are misleading because the survey of medical residents included a multiple-choice question asking residents to define a p value and gave four incorrect definitions as the only options.5
▪ As one prominent epidemiologist noted, “We are fast becoming a nuisance to society. People don’t take us seriously anymore, and when they do take us seriously, we may unintentionally do more harm than good.”7 Our instincts are right. In many fields, initial results tend to be contradicted by later results. It seems the pressure to publish exciting results early and often has surpassed the responsibility to publish carefully checked results supported by a surplus of evidence.
▪ Even properly done statistics can’t be trusted. The plethora of available statistical techniques and analyses grants researchers an enormous amount of freedom when analyzing their data, and it is trivially easy to “torture the data until it confesses.” Just try several different analyses offered by your statistical software until one of them turns up an interesting result, and then pretend this is the analysis you intended to do all along.
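The inflation Reinhart describes is easy to simulate: under the null hypothesis a p value is uniformly distributed on [0, 1], so trying many analyses and keeping the smallest p drives the false-positive rate far above the nominal 5%. A minimal sketch (the figure of 20 analyses is an illustrative assumption, not from the book):

```python
import random

random.seed(42)

ALPHA = 0.05      # conventional significance threshold
N_ANALYSES = 20   # hypothetical number of analyses tried on the same null data
N_TRIALS = 10_000

# Under the null, each analysis's p value is uniform on [0, 1].
# Keeping whichever analysis "turns up an interesting result"
# means reporting a false positive whenever ANY p falls below ALPHA.
false_positives = 0
for _ in range(N_TRIALS):
    p_values = [random.random() for _ in range(N_ANALYSES)]
    if min(p_values) < ALPHA:   # "torture the data until it confesses"
        false_positives += 1

rate = false_positives / N_TRIALS
print(f"Chance of at least one 'significant' result: {rate:.2f}")
# Analytically: 1 - (1 - 0.05)**20 ≈ 0.64, not the nominal 0.05
```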
▪ Hidden beneath their limitations are some subtler issues with p values. Recall that a p value is calculated under the assumption that luck (not your medication or intervention) is the only factor in your experiment, and that p is defined as the probability of obtaining a result equal to or more extreme than the one observed. This means p values force you to reason about results that never actually occurred—that is, results more extreme than yours. The probability of obtaining such results depends on your experimental design, which makes p values “psychic”: two experiments with different designs can produce identical data but different p values because the unobserved data is different.
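The "equal to or more extreme" definition, and the "psychic" dependence on design, can be made concrete with a classic coin-flip example (the specific numbers, 8 heads in 10 flips, are an illustrative assumption, not from the book):

```python
from math import comb

def binom_pmf(k, n, p=0.5):
    """Probability of exactly k heads in n flips of a fair coin."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Design 1: flip exactly 10 times; we observe 8 heads.
# One-sided p value: probability, under the null ("luck alone"),
# of a result equal to or MORE EXTREME than the one observed.
p_fixed_n = sum(binom_pmf(k, 10) for k in range(8, 11))
print(f"fixed-n design:   p = {p_fixed_n:.4f}")   # 56/1024 ≈ 0.0547

# Design 2: flip until the 2nd tail appears. The very same run of
# flips (8 heads, 2 tails) can arise, but "more extreme" now means
# 8 or more heads occurring before the 2nd tail.
def negbinom_pmf(k, p=0.5):
    """Probability of exactly k heads before the 2nd tail."""
    return (k + 1) * p ** (k + 2)   # (k+1) orderings, each of prob p^(k+2)

p_stop_rule = sum(negbinom_pmf(k) for k in range(8, 200))  # tail converges fast
print(f"stop-rule design: p = {p_stop_rule:.4f}")  # 10/512 ≈ 0.0195
```

Identical data, two different p values — exactly because the unobserved "more extreme" results differ between the two designs.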
Have Confidence in Intervals
▪ Research results, especially in the biological and social sciences, are commonly presented with p values. But p isn’t the only way to evaluate the weight of evidence. Confidence intervals can answer the same questions as p values, with the advantage that they provide more information and are more straightforward to interpret
▪ A confidence interval combines a point estimate with the uncertainty in that estimate. For instance, you might say your new experimental drug reduces the average length of a cold by 36 hours and give a 95% confidence interval between 24 and 48 hours. (The confidence interval is for the average length; individual patients may have wildly varying cold lengths.) If you run 100 identical experiments, about 95 of the confidence intervals will include the true value you’re trying to measure.
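The coverage claim in this highlight — about 95 of 100 intervals containing the true value — can be checked by simulation. A sketch assuming a normally distributed outcome with a known standard deviation (all numbers here are hypothetical, chosen to echo the cold-medication example):

```python
import random
from math import sqrt

random.seed(0)

TRUE_MEAN = 36.0   # hypothetical true average reduction in cold length (hours)
SIGMA = 12.0       # assumed known population standard deviation
N = 50             # patients per experiment
Z = 1.96           # standard normal quantile for a 95% interval
N_EXPERIMENTS = 1000

covered = 0
for _ in range(N_EXPERIMENTS):
    sample = [random.gauss(TRUE_MEAN, SIGMA) for _ in range(N)]
    mean = sum(sample) / N
    half_width = Z * SIGMA / sqrt(N)   # 95% CI: mean ± half_width
    if mean - half_width <= TRUE_MEAN <= mean + half_width:
        covered += 1

print(f"{covered} of {N_EXPERIMENTS} intervals contain the true mean")
```

Running this should show roughly 950 of 1000 intervals covering the true mean — the individual samples vary wildly, but the interval procedure succeeds at close to its advertised rate.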
▪ In experimental psychology research journals, 97% of research papers involve significance testing, but only about 10% ever report confidence intervals—and most of those don’t use the intervals as supporting evidence for their conclusions, relying instead on significance tests.8
▪ One journal editor noted that “p values are like mosquitoes” in that they “have an evolutionary niche somewhere and [unfortunately] no amount of scratching, swatting or spraying will dislodge them.”10
▪ One possible explanation is that confidence intervals go unreported because they are often embarrassingly wide.11 Another is that the peer pressure of peer-reviewed science is too strong—it’s best to do statistics the same way everyone else does, or else the reviewers might reject your paper. Or maybe the widespread confusion about p values obscures the benefits of confidence intervals.