The Language of Probability: Clarity is Key
When someone says they are “probably” having pasta for dinner but later opts for pizza, are you surprised, or do you consider them dishonest? More seriously, what does it mean when the United Nations asserts, as it did last year, that it is “very likely” global temperatures will rise by more than 1.5 degrees Celsius in the next decade? Translating between the nuances of everyday language and the precision of mathematical probability can seem daunting, but careful analysis brings scientific clarity.
Two fundamental points about probability are widely accepted: something labeled “impossible” has a 0 percent chance of occurring, while a “certain” event carries a 100 percent likelihood. The confusion arises in between these extremes. Ancient Greek thinkers, including Aristotle, distinguished between terms such as eikos, roughly “what is most likely,” and pithanon, “what is plausible or persuasive.” That distinction exposes a difficulty: persuasive rhetoric does not always track likelihood. Cicero later rendered both terms with the single Latin word probabilis, the root of our modern “probability.”
The idea of probability as a measurable mathematical quantity emerged much later, in the mid-17th century, when mathematicians such as Blaise Pascal and Pierre de Fermat tackled gambling dilemmas, notably how to divide the stakes fairly when a game is interrupted. At the same time, philosophers asked whether varying degrees of belief could be quantified at all.
In 1690, for instance, John Locke arranged degrees of probability on a spectrum, from complete certainty, to confidence grounded in personal experience, down to testimony that weakens each time it is retold. This kind of classification remains vital in legal contexts, both historically and today.
The interplay between law and probability continued to occupy philosophers. Writing in the early 19th century, Jeremy Bentham criticized ordinary language as inadequate for expressing the strength of evidence. He proposed a numerical scale for ranking strength of belief, but ultimately concluded that its subjectivity made it impractical for the courts.
A century later, the economist John Maynard Keynes rejected attempts to assign a single numerical measure of certainty in favor of a relational approach. He argued it was more meaningful to say that one probability exceeds another, paying attention to the knowledge on which each estimate rests. This established a hierarchy of probabilities, but offered no systematic way to communicate terms such as “may” or “likely.”
Interestingly, the first systematic resolution to this challenge came not from mathematicians or philosophers but from a CIA intelligence analyst named Sherman Kent. In 1964, he proposed standard terminology for expressing probability estimates in National Intelligence Estimates, the documents that guide policymakers. He described the tension between “poets,” who convey meaning through words, and “mathematicians,” who insist on exact figures. Kent’s idea was that specific words should correspond to specific probabilities, designating “almost certain,” for example, as a 93 percent probability, while allowing some leeway to accommodate differing interpretations.
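Kent's scheme can be pictured as a lookup table from phrases to probability bands. The sketch below is an approximate reconstruction of his published scale, not an official standard; the exact band widths are illustrative assumptions.

```python
# Approximate reconstruction of Sherman Kent's "words of estimative
# probability" (1964). Ranges are illustrative, not an official standard.
KENT_SCALE = {
    "certain":              (1.00, 1.00),
    "almost certain":       (0.87, 0.99),  # 93%, give or take ~6 points
    "probable":             (0.63, 0.87),  # 75%, give or take ~12 points
    "chances about even":   (0.40, 0.60),  # 50%, give or take ~10 points
    "probably not":         (0.20, 0.40),  # 30%, give or take ~10 points
    "almost certainly not": (0.02, 0.12),  #  7%, give or take ~5 points
    "impossible":           (0.00, 0.00),
}

def kent_phrase(p: float) -> str:
    """Return the first Kent phrase whose band contains probability p."""
    for phrase, (lo, hi) in KENT_SCALE.items():
        if lo <= p <= hi:
            return phrase
    return "no agreed phrase"  # the bands deliberately leave gaps
```

For example, `kent_phrase(0.93)` yields `"almost certain"`, while a probability of 0.15 falls into one of the gaps between bands, exactly the leeway Kent built in for differing interpretations.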
This framework eventually migrated from the intelligence world into science. Studies dating back to 1989 have examined how patients and medical professionals interpret terms like “may” in clinical scenarios; the findings broadly align with Kent’s framework, though with some differences.
Returning to the original question about the meaning of “very likely” regarding climate change, the Intergovernmental Panel on Climate Change (IPCC) offers clarity with explicit definitions. According to their guidance, “very likely” signifies a 90% to 100% probability of an event’s occurrence. Alarmingly, many climate scientists now assert that temperatures have already surpassed the critical threshold of 1.5 degrees Celsius.
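The IPCC's calibrated vocabulary can be expressed the same way. The bands below follow the IPCC's published guidance on likelihood language; note that, unlike Kent's scheme, the bands overlap by design, so a single probability can be consistent with several terms. This is a sketch for illustration, not IPCC-issued code.

```python
# The IPCC's calibrated likelihood scale, per its uncertainty guidance.
# Bands overlap deliberately, so we report every term consistent with p.
IPCC_TERMS = [
    ("virtually certain",      0.99, 1.00),
    ("very likely",            0.90, 1.00),
    ("likely",                 0.66, 1.00),
    ("about as likely as not", 0.33, 0.66),
    ("unlikely",               0.00, 0.33),
    ("very unlikely",          0.00, 0.10),
    ("exceptionally unlikely", 0.00, 0.01),
]

def ipcc_terms(p: float) -> list[str]:
    """Return all IPCC likelihood terms whose band contains p."""
    return [term for term, lo, hi in IPCC_TERMS if lo <= p <= hi]
```

So a 95 percent probability is both “very likely” and “likely” in IPCC language, and the communicator chooses the most informative term.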
However, matters are rarely so straightforward. Logically, “event A is likely to occur” and “event A is unlikely to be avoided” convey the same information, yet research published last year found that labeling a climate forecast “unlikely” lowers the perceived strength of evidence and of scientific consensus compared with stating it is “likely.” This bias appears to stem from a preference for positive framing over negative alternatives. In a classic demonstration, people are told a disease threatens the lives of 600 individuals and are offered two treatment options: most prefer the one described as “saving 200 lives” over the identical outcome described as “letting 400 people die.”
So, what lessons can we draw from all this? First, where numbers are available, they communicate uncertainty more effectively than words, even if announcing “there is a 75 percent chance I will have pasta for dinner” may raise eyebrows at home. Where numbers aren’t available, ensure a shared understanding of the terminology, even without a formalized framework like Kent’s. Finally, accentuating the positive tends to make predictions easier to accept. How likely is that? Well, that’s hard to quantify.
Source: www.newscientist.com
