Bayesian reasoning assigns probabilities, degrees of certainty or uncertainty, to hypotheses about parts of reality, then uses any new information to update the probabilities of those hypotheses.
Keith Stanovich brought our rationality and dysrationalia into focus in his book The Rationality Quotient. Systems rationality (my term) includes the forms of rationality congruent with systems thinking. In Stanovich’s catalogue of reasoning, lack of Bayesian reasoning accounts for the errors we routinely make about medical tests and, I would contend, for many of the errors in judgement we make every day.
Bayesian reasoning is especially helpful under uncertain circumstances, which is practically all the time. It is congruent with systems thinking and antithetical to single cause – single effect thinking. The idea that a single cause produces a single effect, or that a single effect has a single cause, is hard to find evidence for in living systems, unless you blind yourself to parts of the available evidence.
Thinking about living systems asserts, instead, that single causes have multiple effects, and that single effects are produced by multiple causes, often working together.
A major example where single cause – single effect thinking produces errors is the interpretation of medical diagnostic tests. A test is commonly touted as highly accurate, meaning, for a hypothetical test for some disease, that say 90% of people with the disease score positive on the test and say 85% of those without the disease score negative (statisticians call these two figures the test’s sensitivity and specificity). Single cause – single effect thinking will immediately reason that if I score positive on the test, I have a 90% chance of having the disease.
Bayesian reasoning concludes otherwise. Single cause – single effect reasoning produces grossly wrong conclusions about a test whenever the prevalence of the disease is low. An example: schizophrenia has a prevalence of less than 2%. Let’s construct a hypothetical, highly accurate test, the presence of a “schizophrenia gene,” and for argument’s sake use the previous numbers: 90% of those with schizophrenia have the gene, and 85% of those without schizophrenia don’t have it. Single cause – single effect thinking would contend that if you have that gene, you probably have schizophrenia.
What is wrong with this reasoning? First, there are many exceptions to the test: 10% of schizophrenics don’t have that gene, so the test isn’t perfect, and 15% of those who aren’t schizophrenic actually do have it.
Second, schizophrenia has low prevalence: over 98% of the population don’t have it, and 15% of that nonschizophrenic population will have the gene. These false positives dwarf the true positives, those with schizophrenia who have the gene.
Let’s translate that into numbers of people. Suppose we draw 1000 people from the general population. A priori we know that about 20 individuals will be schizophrenic and 980 not. Then assess everybody for the gene: 90% of the 20 schizophrenics, 18 people, will have it, and 15% of the 980 nonschizophrenics, 147 people, will also have it. So learning that you have the gene doesn’t increase your certainty about the prediction all that much. If you apply Bayes’ theorem to these numbers, the probability of schizophrenia given the gene is 18/(18 + 147), which raises the estimate from 2% to 10.9%. Having the gene increases your chances of being schizophrenic, but not all that much, a very different conclusion from the crude cause – effect over-reliance on the gene test. You need a lot more information.
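Here is a minimal sketch of that frequency count in Python (the numbers are the ones above, nothing more):

```python
# Frequency-count version of the hypothetical gene-test example:
# 1000 people, 2% prevalence, 90% of schizophrenics carry the gene,
# 15% of nonschizophrenics also carry it.
population = 1000
prevalence = 0.02
sensitivity = 0.90          # P(gene | schizophrenia)
false_positive_rate = 0.15  # P(gene | no schizophrenia)

schizophrenic = population * prevalence                   # 20 people
nonschizophrenic = population - schizophrenic             # 980 people

true_positives = schizophrenic * sensitivity              # 18 carry the gene
false_positives = nonschizophrenic * false_positive_rate  # 147 carry the gene

posterior = true_positives / (true_positives + false_positives)
print(f"P(schizophrenia | gene) = {posterior:.3f}")       # 0.109, i.e. 10.9%
```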
Be careful. Radio, TV and other media will report this as “If you have this gene, you will be 5.5 times more likely to have schizophrenia.” That factor of 5.5 is just 10.9% divided by the 2% base rate. This way of reporting appeals to the automatic thinking of single cause – single effect. The reporting is correct but neglects to point out the prevalence of schizophrenia, 2%, and 5.5 times a small amount is still fairly small.
Bayesian reasoning is a philosophy of uncertainty. There are some things that are more certain and many that are not. Bayesian reasoning gives a way to work with realistic uncertainty.
In this example, the prevalence of schizophrenia, 2%, is the prior: the information you have before the test. The test then gives you new information: does the person have this “schizophrenia gene”? Bayesian reasoning asks how much that new information should change your estimate of whether the person has schizophrenia.
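In standard notation (mine, not anything in the original discussion), this is just Bayes’ theorem, with the prevalence as the prior:

$$
P(\text{schiz} \mid \text{gene}) =
\frac{P(\text{gene} \mid \text{schiz})\,P(\text{schiz})}
     {P(\text{gene} \mid \text{schiz})\,P(\text{schiz}) + P(\text{gene} \mid \text{no schiz})\,P(\text{no schiz})}
= \frac{0.90 \times 0.02}{0.90 \times 0.02 + 0.15 \times 0.98} \approx 0.109
$$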
Bayesian reasoning requires you to start with a realistic appraisal of your ignorance and then update that appraisal with new facts. Living systems are full of phenomena about which one has only partial certainty and of new information that gives only small amounts of additional certainty. Bayesian reasoning helps deal with the endemic partial certainty of living systems.
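Since each posterior becomes the prior for the next piece of evidence, the updating can be chained. A minimal sketch; the first likelihood pair is the gene test above, and the later pairs are invented for illustration:

```python
def update(prior, p_e_given_h, p_e_given_not_h):
    """One Bayesian update: P(hypothesis | evidence) from the prior."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

# Start from realistic ignorance (the 2% base rate) and fold in
# successive pieces of evidence, each adding only partial certainty.
belief = 0.02
evidence = [(0.90, 0.15), (0.70, 0.40), (0.60, 0.50)]  # later pairs invented
for p_given_h, p_given_not_h in evidence:
    belief = update(belief, p_given_h, p_given_not_h)
    print(f"updated belief: {belief:.3f}")  # 0.109, then 0.176, then 0.205
```

Note that chaining updates this way quietly assumes the pieces of evidence are independent given the hypothesis; in living systems, where causes interact, that assumption itself deserves scrutiny.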
Hi Jim Edd,
I look forward to your posts. You bring in a new idea or angle, and this is a good example. Makes sense to relate it to systems thinking, and connects with issues others are pondering. A new way of thinking, provocative. I’m interested in where you are thinking of taking it from here.
Laurie
I am taking it in the direction of elaborating what a systems view of causality is. Causality exists in nature, but the discussion of it has been hobbled by the narrow perspective of single cause – single effect thinking.
Jim Edd,
This is great stuff. Good for you to have the brain power to think it through. Yes, as you say, causality exists in nature, but the complexity of how the effects affect the cause, etc., is what is missing. I see this area of thinking as an important contribution for anyone interested in systems thinking. For myself, it is enlivening, as I realize how much of a stretch it is for me to truly think systems. It blows my mind, in a good way.
Thank you,
Laurie
You are right on the money in targeting the complexity of how effects influence causes.
“Systems rationality (my term) includes the forms of rationality congruent with systems thinking.” … “Bayesian reasoning is especially helpful under uncertain circumstances, which is practically all the time. It is congruent with systems thinking and antithetical to single cause – single effect thinking.” Jim Edd, this is something to think about.
Here’s an article one of my colleagues here passed around. I’m not sure if this is a Bayesian analysis or not. Maybe it’s about how uncertainty is ascertained?
“Poorly presented risk statistics could misinform health decisions.” March 16th, 2011. http://www.physorg.com/news/2011-03-poorly-statistics-misinform-health-decisions.htm
“Choosing the appropriate way to present risk statistics is key to helping people make well-informed decisions. A new Cochrane Systematic Review found that health professionals and consumers may change their perceptions when the same risks and risk reductions are presented using alternative statistical formats.
Risk statistics can be used persuasively to present health interventions in different lights. The different ways of expressing risk can prove confusing and there has been much debate about how to improve the communication of health statistics.
For example, you could read that a drug cuts the risk of hip fracture over a three year period by 50%. At first sight, this would seem like an incredible breakthrough. In fact, what it might equally mean is that without taking the drug 1% of people have fractures, and with the drug only 0.5% do. Now the benefit seems to be much less. Another way of phrasing it would be that 200 people need to take the drug for three years to prevent one incidence of hip fracture. In this case, the drug could start to look a rather expensive option.
Statisticians have terms to describe each type of presentation. The statement of a 50% reduction is typically expressed as a Relative Risk Reduction (RRR). Saying that 0.5% fewer people will have broken hips is an Absolute Risk Reduction (ARR). Saying that 200 people need to be treated to prevent one occurrence is referred to as the Number Needed to Treat (NNT). Furthermore, these effects can be shown as a frequency, where the effect is expressed as 1 out of 200 people avoiding a hip fracture.
In the new study, Cochrane researchers reviewed data from 35 studies assessing understanding of risk statistics by health professionals and consumers. They found that participants in the studies understood frequencies better than probabilities. Relative risk reductions, as in “the drug cuts the risk by 50%”, were less well understood. Participants perceived risk reductions to be inappropriately greater compared to the same benefits presented using absolute risk or NNT.
“People perceive risk reductions to be larger and are more persuaded to adopt a health intervention when its effect is presented in relative terms,” said Elie Akl of the Department of Medicine, University at Buffalo, USA and first author on the review. “What we don’t know yet is whether doctors or policymakers might actually make different decisions based on the way health benefits are presented.”
Although the researchers say further studies are required to explore how different risk formats affect behaviour, they believe there are strong logical arguments for not reporting relative values alone. “Relative risk statistics do not allow a fair comparison of benefits and harms in the same way as absolute values do,” said lead researcher Holger Schünemann of the Department of Clinical Epidemiology and Biostatistics at McMaster University in Ontario, Canada. “If relative risk is to be used, then the absolute change in risk should also be given, as relative risk alone is likely to misinform decisions.”
The interdependence of individuals affecting one another in systems is something that some have tried to address statistically. It’s on my list to check out. Any leads appreciated.
Yes, same thing. And certainly ascertaining uncertainty is a continuing and repeating part of Bayesian reasoning at every step.
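To make the quoted hip-fracture arithmetic concrete, here is a minimal sketch computing all three presentations from the same two event rates (the 1% and 0.5% figures in the article):

```python
# The same drug effect expressed three ways, using the hip-fracture
# example from the quoted article.
risk_without_drug = 0.010   # 1% fracture rate over three years, untreated
risk_with_drug = 0.005      # 0.5% fracture rate over three years, treated

arr = risk_without_drug - risk_with_drug   # Absolute Risk Reduction
rrr = arr / risk_without_drug              # Relative Risk Reduction
nnt = 1 / arr                              # Number Needed to Treat

print(f"ARR: {arr:.3%}")   # 0.500% fewer fractures
print(f"RRR: {rrr:.0%}")   # 50%, the headline number
print(f"NNT: {nnt:.0f}")   # 200 people treated per fracture prevented
```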