# Accuracy and prediction… the limits of our intuition

Imagine that an infectious disease is spreading around the world, and that a test for it is available with, say, a proven accuracy of 90%. Ask yourself the following question:

What is the likelihood that I am infected, if the test is positive?

Usually, people answer that the probability is 90%, that is, equal to the accuracy of the test. But this answer is wrong and betrays our difficulty in reasoning correctly with probabilities.

**In reality, the probability in question could be any number between 0% and 100%!**

Now I’ll explain. Before doing so, however, a clarification is necessary. A test has two types of accuracy: one that measures its ability to detect infected people, called ‘sensitivity’ (the probability of a positive result when the person is infected), and one that measures its ability to detect non-infected people, called ‘specificity’ (the probability of a negative result when the person is not infected). But to simplify the discussion, we can here consider that the 90% accuracy of the test in question means that both its sensitivity and its specificity are 90%.

So, how is it possible that with a 90% accurate test, the probability that a person is infected, when the test response is positive, can be any percentage, between 0% and 100%?

It’s simple. It is because we usually forget that this probability depends on how many infected people are present in the population.

To understand this, we must first remember that the probability in question is a *conditional* probability. In fact, we are looking for the probability that the person is infected (in), conditional on the fact that the test result is positive (+). Let us denote this probability P(in|+), as is customary in probability theory.

Now we must remember that the conditional probability of the event “in”, knowing that the event “+” is realized, is given by the probability that “in” and “+” are simultaneously realized, divided by the probability that “+” is realized. With obvious notation, we therefore have the following formula:

P(in|+) = P(in & +) / P(+).

In other words, the probability we are looking for is a ratio of two probabilities. Its numerator, P(in & +), can never exceed P(in), the probability that the person is infected, since the event “in & +” is contained in the event “in”; and P(in), of course, does not depend in any way on the accuracy of the test.

If there is an insignificant number of infected people in the population, P(in) will be (practically) zero, so the numerator P(in & +) will also be (practically) zero, while the denominator P(+) will not, since an imperfect test keeps producing false positives. Therefore P(in|+) will also turn out to be (practically) equal to zero!

Moral: even a very accurate test will have no predictive power if the number of infected people in the population is too low.

What happens if, instead, the number of infected people in the population is at its maximum, that is, if P(in), the probability of being infected, tends to 1? In this case, it is evident, and there is no need for mathematics to understand it, that the probability of being infected, knowing that the test is positive, will also tend to 1, that is, 100%, regardless of the accuracy of the test!

Moral: even a very inaccurate test can have great predictive power if the number of infected people in the population is extremely high.

That said, it is clear that to determine P(in|+) in a precise way, given a certain percentage of infected in the population, a specific formula needs to be derived. By the way, in the literature the probability P(in|+) has a name: it is called the “positive predictive value”, often indicated with the acronym PPV.
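Before deriving it, the claim can be checked empirically. Below is a minimal simulation sketch in Python (the function name and parameters are my own, for illustration): it draws a large population with a given fraction of infected people, applies a 90%-accurate test to everyone, and reports what fraction of the positive results belong to people who are actually infected.

```python
import random

def simulate_ppv(prevalence, accuracy=0.9, n=100_000, seed=42):
    """Estimate P(in|+) by simulation: each of n people is infected with
    probability `prevalence`; the test comes out positive with probability
    `accuracy` if the person is infected, and 1 - `accuracy` if not."""
    rng = random.Random(seed)
    true_pos = all_pos = 0
    for _ in range(n):
        infected = rng.random() < prevalence
        positive = rng.random() < (accuracy if infected else 1 - accuracy)
        if positive:
            all_pos += 1
            true_pos += infected
    return true_pos / all_pos

for prevalence in (0.001, 0.1, 0.5, 0.99):
    print(prevalence, round(simulate_ppv(prevalence), 3))
```

With the same 90%-accurate test, the estimated predictive value climbs from nearly 0 (at 0.1% prevalence) to nearly 1 (at 99% prevalence), in line with the two “morals” above.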

Ok, let’s try to do the calculation together. The idea is to further decompose the joint probability P(in & +), noting that we can also write:

P(+|in) = P(in & +) / P(in).

So:

P(in & +) = P(+|in) * P(in),

and inserting this expression into the one above, we get:

P(in|+) = P(+|in) * P(in) / P(+),

which is nothing other than the celebrated Bayes’ theorem.

At this point, we can further break down P(+), writing:

P(+) = P(+|in) * P(in) + P(+|nin) * P(nin),

where “nin” stands for “not infected”.

This formula is called the ‘law of total probability’, but apart from its high-sounding name, it is something very intuitive. It simply expresses the fact that the probabilities of incompatible partial events (here, testing positive while infected and testing positive while not infected) add up to the total probability of the event in question (testing positive).

Of course, the probability of *not* being infected, P(nin), is simply 1 minus the probability of being infected, so:

P(nin) = 1 - P(in).

Furthermore, the probability of being positive when infected, P(+|in), is nothing more than the ‘sensitivity’, which for convenience we will denote ‘Se’. Instead, the probability P(+|nin), of being positive when not infected, is equal to 1 minus the probability of being negative when not infected, which is what we have termed ‘specificity’, and will denote ‘Sp’. Therefore:

P(+|nin) = 1 - Sp.

In other words, we have made the following steps explicit:

P(in|+) =

= P(in & +) / P(+)

= P(+|in) * P(in) / P(+)

= P(+|in) * P(in) / [P(+|in) * P(in) + P(+|nin) * P(nin)]

= Se * P(in) / [Se * P(in) + (1 - Sp) * (1 - P(in))].

So, this is the formula. Let us suppose, to simplify, that Se = Sp, i.e., that the accuracy in detecting the infected is the same as that in detecting the non-infected, and let us denote this common value S. To further simplify the notation, we write P for the probability P(in), which, in the absence of other information, is simply given by the ratio between the number of infected people and the total number of people. We can then write:

P(in|+) = S * P / [S * P + (1 - S) * (1 - P)].
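As a sanity check, this formula translates into a single line of code. Here is a minimal sketch in Python (the helper name `ppv` is mine):

```python
def ppv(s, p):
    """P(in|+) for a test with sensitivity = specificity = s,
    in a population where a fraction p is infected."""
    return s * p / (s * p + (1 - s) * (1 - p))

print(round(ppv(0.9, 0.5), 3))    # → 0.9
print(round(ppv(0.9, 0.001), 3))  # → 0.009
```

Even with 90% accuracy, at a prevalence of 0.1% fewer than one positive result in a hundred corresponds to an actual infection.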

We can easily verify that if P tends to zero, the numerator of the above formula also tends to zero, while the denominator tends to (1 - S), which is not zero; therefore P(in|+) tends to 0, in accordance with what we have already observed: (contrary to our intuition) the test is predictively useless if the number of infected in the population is too low.

If, instead, P tends to 1, the numerator tends towards S, and the denominator also tends towards S; therefore P(in|+) tends to 1, in accordance with what we have already observed: (contrary to our intuition) a test can become highly predictive, regardless of its accuracy, if the number of infected is very high.

Ok, but since we have derived a formula, we can now use it to ask: what must the percentage of infected in the population be for a 90% accurate test to give a correct positive answer at least 90% of the time? Well, it is easy to deduce from the formula we have derived that at least 50% of the population must be infected!

On the other hand, if, for example, only 10% of the population were infected, then the predictivity of the test would drop to 50%! In this situation, to reach a 90% predictivity, the test would need to increase its accuracy to about 98.8%.
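These figures follow directly from the formula derived above; the short self-contained Python sketch below (with an illustrative `ppv` helper) recomputes them.

```python
def ppv(s, p):
    """P(in|+) for a test with sensitivity = specificity = s and
    infected fraction p in the population."""
    return s * p / (s * p + (1 - s) * (1 - p))

print(round(ppv(0.90, 0.50), 3))  # 90% accurate test, 50% infected → 0.9
print(round(ppv(0.90, 0.10), 3))  # same test, 10% infected → 0.5
# Accuracy needed for 90% predictivity at 10% prevalence:
# solving s*0.1 / (s*0.1 + (1 - s)*0.9) = 0.9 gives s = 0.81/0.82
print(round(0.81 / 0.82, 4))      # → 0.9878
```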

In short, when dealing with probabilities, and with scientific reasoning in general, our intuition is of little use.

In fact, it is necessary to activate a *slow reasoning process* and carefully check all its logical steps.

And in this era when everyone seems to be losing their mind, perhaps for the first time some will begin to see an answer to that big question every mathematics teacher has been asked countless times: “What the hell is math for?”