32. The Bayesian Belief: 6. The Episteme of Aristotle

[To recap:  It has been argued that Thomas Bayes’ 1763 paper on probability may have been constructed as a direct challenge to the work of David Hume.  Evidence to support that conclusion comes from the alternate title of Bayes’ essay:  A Method of Calculating the Exact Probability of All Conclusions founded on Induction. It has been suggested that the use of the word induction points to the work of David Hume. Yet Hume did not use the word induction, and Bayes made no reference to Hume in his essay.  Newton, on the other hand, did use the word induction to describe the process of empirical discovery, and Newton was a contemporary of Bayes.  It may have been Newton, and not Hume, whom Bayes was addressing.]

In my last post [29] (before the FDA warning [30] and my foray into cultural manliness [31]) I hinted that in order to understand Bayes’ use of the word induction you need to understand Newton.  Yet we can digress even further: before Newton lies Francis Bacon, and before Bacon is Aristotle.  Each, in turn, is important for appreciating the idea and intent behind induction as it was defined during Bayes’ lifetime.

Let’s begin with Aristotle.

In the centuries prior to Bayes’ essay, science and philosophy orbited each other in close proximity. The common thread that tied these methods of understanding together was the philosophy of Aristotle.  Aristotle, student of Plato and tutor to Alexander the Great, left behind a fragmented collection of work that had a fundamental impact on medieval and Renaissance thought.

Aristotle wrote widely, and his intent was to provide a method for understanding not only the nature of physical and moral existence but also how best to reconcile a world that seems constant yet is constantly changing.  It is Aristotle’s work that provided the foundation for natural philosophy — the precursor to modern science.

Aristotle attempted to create a full treatise on all aspects of human endeavor and he divided the forms of knowledge or science (episteme) into three categories:  the productive, the practical, and the theoretical.  Consistent with Aristotle’s belief that all things move toward their ultimate purpose or cause, categories of knowledge were defined by their final product. Productive knowledge was the domain of artisans and involved the creation of things. Practical knowledge fell in the domain of politics and ethics and focused on the social and moral behavior of people.  Theoretical knowledge involved the pursuit of truth and pure knowledge — or knowledge for its own sake.

[Figure: Aristotle’s three categories of knowledge]

Before Aristotle, Plato argued that what we sense is, at best, a pale reflection of a superordinate reality consisting of pure and perfect forms or ideas.  Whereas our sensory reality is reminiscent of this higher plane, it is a debased copy that flickers and changes across time and space. For Plato, a transcendental level of perfect forms allowed us to reconcile the presence of that which is immutable and a material world in constant flux.

Aristotle, however, questioned the explanatory value of a higher layer and saw this construct as potentially unnecessary.  Plato had suggested that pure forms were superordinate and therefore could not exist within material things.  Aristotle wondered: if this were so, then how could we gain any knowledge of pure forms?  That is, Plato’s definition of forms provided us with no method of connecting material objects to the forms they seemingly reflect. To Aristotle, forms — in this context — made no logical sense.

Aristotle argued that the substance of things must rest within the object under scrutiny and not, as suggested by Plato, beyond the object.  Aristotle advocated an empirical approach to the acquisition of truth, one that begins with information gathered from our senses. From Aristotle’s perspective, knowledge was not descendant but ascendant — it moves from the particular to the universal.  We use our experience and reason to infer that which is universal from the data of our senses.

This process Aristotle defined as induction.

For Aristotle, induction was a necessary step in the scientific process.

  • [Note.  Aristotle, in Metaphysics, credited induction to Socrates: “… for two things may be fairly ascribed to Socrates – inductive arguments and universal definition, both of which are concerned with the starting-point of science.”]

28. The Bayesian Belief: 4. The Other Title of Bayes’ Essay

In my previous post (25. The Sestercentennial of Thomas Bayes and Richard Price), I had introduced Stephen Stigler’s argument that Thomas Bayes’ 1763 paper is connected to David Hume.  In particular, Stigler has suggested that Bayes’ work, as disseminated by Price, represented a mathematical challenge to Hume’s skeptical observations on matters of religion.

Stigler begins his paper with the interesting finding that Bayes’ 1763 essay may have had two titles.  The accepted title of Bayes’ essay, as published in Philosophical Transactions, was An Essay towards Solving a Problem in the Doctrine of Chances. The other title of Bayes’ work, however, may have been A Method of Calculating the Exact Probability of All Conclusions founded on Induction. Either this was the original title of Bayes’ essay prior to publication, or Price chose to rename the essay following publication.

Evidence for this alternate title comes from an offprint that followed the publication of Bayes’ essay. (In more current terms, an offprint is akin to a journal reprint but under its own cover.) This alternate title is also described in a footnote in Price’s Four Dissertations (1767) as well as in Volume 7 of Rees’ New Cyclopaedia (1807).

[Figure: the offprint bearing the alternate title]

[Figures: the footnote in Price’s Four Dissertations; the entry in Rees’ New Cyclopaedia]

For Stigler, this alternate title is meaningful and he suggests that the intent of Bayes was to address the issue of induction.  Following this assumption, Stigler then constructs the following possible sequence of events:

[One].  In 1748, Hume publishes his essay Of Miracles in his Philosophical Essays Concerning Human Understanding.  Hume’s essay, which challenged witness testimony to miracles, would have been controversial among Christian clergy and members of the church.

[Two].  Bayes attempts to address Hume’s argument in 1748 or 1749 by working on a method of applying probability to unknown causes (or induction).  Based on extant notes, Stigler argues that Bayes would most likely have completed his calculations prior to December 31, 1749.  Stigler then presumes that Bayes put this work aside because he found it insufficient or unsatisfactory.

[Three].  Sometime between 1749 and just prior to his death, Bayes discusses the above work with Richard Price.  Stigler emphasizes Bayes’ and Price’s shared religious beliefs, as well as the £100 Bayes bequeathed to Price in his will, as evidence of a likely acquaintance between the two.  Stigler notes that Bayes may have also described his method to David Hartley in 1749.  If Bayes shared his work with Hartley, Stigler argues, then it would make sense that Bayes would also have shared it with his closer acquaintance, Richard Price.

[Four].  Bayes dies on April 7, 1761.  With his prior knowledge of Bayes’ work, Price obtains Bayes’ manuscript from Bayes’ private documents.  Price then spends the next two years expanding and editing Bayes’ manuscript with the explicit purpose of challenging Hume.

[Five].  Price delivers Bayes’ work to the Royal Society on December 23, 1763 and further extends this work with a second paper read to the Society one year later.

[Six]. According to Stigler (and colorfully), in 1767, the final weapon is deployed — Price’s work Four Dissertations.  In his fourth dissertation, Price argues against Hume and attempts to show mathematically how even improbable events (miracles) can be established through accumulation of evidence from independent witnesses.  Price relies on Bayes’ essay as his source of authority.

And the circle is now complete from Hume to Bayes to Price to Hume.

Stigler acknowledges that it is speculative whether Bayes’ original intention in carrying out his calculations was to respond directly to Hume.  He accepts that evidence to support this argument is scant, given that Hume is not referred to in Bayes’ work and that few documents of Bayes have survived.  However, Stigler argues that the timing of Hume’s work and the timing of Bayes’ initial calculations on probability are compelling.

Indirect evidence that Bayes may have completed his calculations prior to 1749 comes from information contained in Hartley’s work Observations on Man.  Here, Hartley refers to an “ingenious friend” who had demonstrated a method of calculating the probability of an unknown cause given only the observation of its effects.  A number of statistical historians (although not all) have suggested that the ingenious friend mentioned by Hartley was, in fact, Bayes.  If so, suggests Stigler, then the bulk of Bayes’ work would have been completed following the publication of Hume’s essays in 1748 and prior to Hartley’s 1749 publication.

With all due respect to Stigler, I am having difficulty accepting his argument in its entirety.  In the next post, I will begin to review his thesis item by item.  We will begin at the beginning and first discuss induction.

  • [Note.  If you are a style junkie you may have noticed that when I use the possessive form of Bayes I use Bayes’.  In his article, Stigler uses Bayes’s.  You will get different opinions on which form is correct, particularly when dealing with proper nouns.  Overall, the trend seems to be moving toward Stigler’s use as the preferred method.  I personally find that style awkward but I accept that things change.  For now, however, I will stay with the older method.]

26. Thinking about Christmas: 1. Thinking about Science. Thinking about Faith

It is Christmas Eve, so I am going to take a short detour toward religion.  However, I am not completely off topic.  In my last several posts I have centered on the role of religion in the development of Bayesian statistics.  From much that I have read, this topic seems to be a source of discomfort among modern Bayesians who align themselves with atheism or a general neutrality towards issues of religious faith.  Some have been apologetic about the presence of God in the writings of early probability theorists.  Some — Bayesian atheists — have used Bayesian probability to argue against God and faith.  This is irony writ large given that Bayes’ work was initially seen as a method that might demonstrate the presence of the Divine.

Today we struggle with the notion of whether we should let science and religion touch or whether they are, by design, mutually exclusive.  Now, don’t get me wrong.  I believe in science and no one has ever accused me of being overly religious.

Like most moderates, however, I am uncomfortable with extremism.  I find extreme religiosity as repellent as extreme atheism or scientism.  All are tedious.

Deal with human suffering long enough and you eventually feel there is little that can be known with complete confidence.  Over time — as you walk in doubt — that feeling of uncertainty becomes somewhat comforting. You do your best to understand and help others within the limits of your knowledge, and you begin to accept that many things are simply unknown.

Sometimes your only tools are kindness and compassion and to offer others small slivers of hope.

Label that belief whatever you wish.

Merry Christmas.

[Figure: a Christmas Mandelbrot]

22. Sick Versus Slick: 8. The Base-Rate Fallacy

  • Probability, like time, is a concept invented by humans, and humans have to bear the responsibility for the obscurities that attend it. (John Archibald Wheeler)

We have been deconstructing the Androgen Deficiency in Aging Males (ADAM) questionnaire and measuring its worth in terms of identifying men who are possibly experiencing low testosterone.  Based on data from its source publication, the ADAM’s ability to accurately predict men with low testosterone is about 42 percent.

A positive predictive value of 42 percent suggests that the ADAM will be wrong more often than it is right.  If the ADAM predicts you have low testosterone, it is a safe bet that you do not.  However, because we know that 1 out of 4 men in the original ADAM study had low testosterone, the ADAM did outperform guessing by over 15 percent.
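
To make that comparison concrete, here is a minimal sketch in Python using only the two figures quoted above; framing “guessing” as labelling every man positive at the study’s base rate is my own simplification, not something taken from the ADAM publication.

```python
# Comparison of the ADAM's positive predictive value with simple guessing.
# Both figures come from the discussion above; the framing is illustrative only.
ppv_adam = 0.42    # probability that a positive ADAM result truly indicates low testosterone
base_rate = 0.25   # roughly 1 in 4 men in the original study had low testosterone

# "Guessing" here means calling a man positive with no test at all,
# which is correct about as often as the base rate.
improvement = ppv_adam - base_rate
print(f"Improvement over guessing: {improvement:.0%}")  # about 17 percentage points
```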

Is a 15 percent increase meaningful?

Well, that depends on what we are trying to predict, the relative costs of error in that prediction, and the original base rate from which we began.

If we are trying to improve our ability to predict a tornado, a 15 percent increase in prediction may lead to saved lives.  However, if tornados are rare in our geographical area, this method of prediction will also lead to a greater number of false alarms. False alarms may cause unnecessary panic or, worse, a dismissal of tornado warnings as usually wrong by those who live in that area.  Under these circumstances, we might prefer a more precise early-warning system and might deem a 15 percent increase not beneficial.

If we are trying to predict the presence of a specific type of rare cancer, a 15 percent increase in prediction will again lead to many false alarms.  However, if that cancer is highly treatable once detected and its treatment is not invasive, we might be willing to accept the increase in false alarms that comes with this increase in prediction.

The point here is that our capacity to make a reasonable judgement depends on the meaning of the event to us, the probability of that event occurring, and our ability to accurately detect that event once it has occurred.  Classic Bayesian probability, of course, cannot comment on the subjective relevance or moral weight of an event.  It cannot know what we hold in our mind’s eye.  Instead, its main focus is on the general probability or base rate of an event and the specific instance of that event under consideration.

As human beings, however, we seem to fail to consider the base rate that underlies all events.  We are seduced by the instance.  The example given in the last post, based on a study by Agoritsas and his colleagues (2010), illustrates both our blindness and our easy seduction.  To repeat:

  • As a school doctor, you perform a screening test for a viral disease in a primary school.
  • The properties of the test are very good:  Among 100 children who have the disease, the test is positive in 99, and negative in only 1, and among 100 children who do not have the disease, the test is negative in 99, and falsely positive in only 1.
  • On average, about 1 out of 100 children are infected without knowing it.
  • If the test for one of the children is positive, what is the probability that he or she actually has this viral disease?

As mentioned in the last post, when this problem was offered to a sample of more than 1000 physicians practicing in Switzerland, the majority estimated the probability that a child with a positive test actually has the disease at 95 percent or greater.  This remained true even when the prevalence was manipulated to range anywhere from 1 to 95 percent or was left undetermined.

In fact, the answer to this riddle is 50 percent as depicted below:

[Fig 21-7]

The high sensitivity and specificity of the diagnostic test are tempered by the low prevalence of the virus.  Although the diagnostic test is highly accurate, there is still a relatively large number of false positives because far more people are free of the virus (99 out of 100) than have it (1 out of 100).
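
For readers who want to see the arithmetic behind that 50 percent figure, here is a minimal sketch of Bayes’ theorem applied to the numbers stated in the problem; the Python variable names are mine, not the study’s.

```python
# Bayes' theorem applied to the school screening problem described above.
sensitivity = 0.99   # P(positive test | child infected): 99 of 100 infected children
specificity = 0.99   # P(negative test | child not infected): 99 of 100 healthy children
prevalence = 0.01    # P(child infected): about 1 in 100

# Overall probability of a positive test (law of total probability).
p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)

# Positive predictive value: P(child infected | positive test).
ppv = sensitivity * prevalence / p_positive
print(f"P(infected | positive test) = {ppv:.2f}")  # 0.50
```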

The authors of this study highlight that the improper use of probability may result in medical error. If the outcome of diagnostic error leads to biomedical consequences, then the tendency to ignore prevalence or base rates goes beyond a curious phenomenon of human decision making; it becomes a potentially harmful event.

In the psychology literature, the general disregard of base rate information has long been a focus of attention. Meehl and Rosen (1955) offered an early exploration of the importance of base rates or, more specifically, the lack of base rate information in most psychological tests. In the 1970s, Kahneman and Tversky conducted considerable experimental research analyzing universal flaws in reasoning during decision-making tasks.  However, one of the more comprehensive and influential articles on base rate errors was written by Bar-Hillel in 1980.

Bar-Hillel labelled the tendency to ignore information about the historical occurrence of an event as the base-rate fallacy.  Her interest was in gaining a better understanding of the circumstances under which base rate errors were most likely to occur.

Bar-Hillel did not see the base-rate fallacy as inevitable.  Instead, she demonstrated that its influence could be reduced through manipulation of how information was presented and, more importantly, by increasing the relevance of base rate information. Bar-Hillel argued that if we deem information as possessing low relevance then we tend to disregard that information.  It is not that we are unaware or ignorant of base rate information.  On the contrary, she argued, we disregard this information because we strongly feel it should be disregarded.

The results of the Agoritsas study clearly demonstrate that the majority of physicians do not attend to a disorder’s general occurrence or base rate when making a clinical decision.  They fail to do so either because of unawareness or, following Bar-Hillel, because they deem it of low importance.

In the case of the virus problem, the information provided was highly sparse and intended to focus attention on the importance of base rates.  Real diagnostic problems, however, are complex and carry an abundance of possibly relevant information.  Disregarding some information is an important step in pruning a problem to its smallest set of possible diagnoses.  Determining that the probability of a correct diagnosis is 50 percent given the outcome of a specific test makes complete sense in Bayesian logic.  Yet, when a definitive yes or no response is required, as in health care, judicial decisions, or a marriage proposal, this is not overly helpful.

When asked to decide whether a child is positive for a virus, one must decide.  You cannot 50 percent decide. You cannot treat a child with a half-measure.  In the Agoritsas study, those physicians surveyed may have intuitively moved past the question of probability and toward the final goal of clinical action.  For these physicians, if a child tests positive for a virus, they will choose to treat that child.  Therefore, while it may be true that their answer to the question, as posed, was incorrect, the course of action that stemmed from the incorrect answer may have been consistent with those physicians’ method of practice.

Another way of thinking about the physicians’ process of decision making is that they chose to disregard the existing virus base rate and, instead, inserted a prior probability of perfect uncertainty.  In Bayesian terms, a prior of perfect uncertainty looks like this:

[Fig 22-1]
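
One way to see what such a prior does to the virus problem is to replace the 1-in-100 prevalence with the assumption that infection and no infection are equally likely. The sketch below is my own illustration rather than a calculation taken from the Agoritsas paper, but it shows that, under this prior, a positive test yields a probability of about 99 percent, in line with the answers of 95 percent or greater that the surveyed physicians gave.

```python
# Same screening test as before, but the base rate is replaced by a "perfectly
# uncertain" prior: infected and not infected are treated as equally likely.
sensitivity = 0.99
specificity = 0.99
uniform_prior = 0.50   # prior probability of perfect uncertainty

p_positive = sensitivity * uniform_prior + (1 - specificity) * (1 - uniform_prior)
ppv = sensitivity * uniform_prior / p_positive
print(f"P(infected | positive test) = {ppv:.2f}")  # 0.99
```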

If you think about it, most of us approach common day-to-day decisions in this way.  Our base rates are subjective and tend to follow our personal history of exposure to certain events.  If a problem is novel to us, we may opt for a prior probability of perfect uncertainty, as did the physicians in the virus problem.  Across time, however, as we accumulate personal history of the same repeated event, we may start to adjust our prior probability rate.

A prior probability of perfect uncertainty is quite allowable in Bayesian probability.  In fact, under Bayesian inference, it is mandatory (more on this in later posts).  Bayesian probability was originally designed as a method of determining unknown events and, ironically, it was this subjective quality of the Bayesian approach that led to it being disfavored in those years following its publication.

Given the pervasive nature of the base-rate fallacy, the strong push in medical practice to treat any possible disorder, and the fact that meeting patients’ needs is correlated with patient satisfaction, it is very likely that any physician who obtains a positive test result will move toward treatment even in the face of high false positive rates.