# Video: How Reliable Is a Lie Detector Test? Introduction to Bayes' Theorem

In this video, we learn how a lie detector test that is 90 percent accurate at identifying true statements and lies may nevertheless be very unreliable at spotting lies if most people tell the truth. We need Bayes' theorem and conditional probability to work out the likelihood that someone is telling a lie, given that the lie detector says they are.

03:43

### Video Transcript

Imagine if you had a lie detector that was able to detect truth or lies nine times out of 10. You’d nearly always be able to tell if someone was telling the truth or telling a lie! How brilliant would that be for solving crimes or finding out who ate the last cookie without telling anyone!

Let’s put aside the question of whether or not people can learn to beat lie detectors by pinching themselves or by imagining relaxing on a beach while answering your questions, and assume that the lie detector is going to be equally accurate with everyone who takes a test. That means all you have to do is wire someone up to the lie detector. And if the big red lie light flashes, then you’re 90 percent sure they’re lying. Fantastic!

But, wait a minute! We’ve forgotten some basic conditional probability here. A test being 90 percent accurate is not the same thing as being 90 percent certain that someone is lying when the machine says they are. If the lie detector is 90 percent accurate, then it correctly identifies nine out of 10 lies as lies and incorrectly says that one out of 10 lies is true. It’ll also correctly identify that someone is telling the truth nine times out of 10 and incorrectly accuse them of lying one time in 10.

This is all well and good, if everyone tells the truth or lies in equal measure. But in reality, people tell the truth far more than they tell lies! For the sake of argument, let’s say 90 percent of statements are true and only 10 percent are lies. Now, let’s imagine an experiment in which we do 100,000 tests and see what happens when we sum up the results in a table.

In 100,000 tests, 90 percent of people tell the truth. That’s 90,000 actual true statements and 10,000 actual lies. The lie detector says 90 percent of the actually true statements are true, that’s 81,000. But it wrongly identifies 10 percent of those true statements as lies, that’s 9,000 false accusations of lying. It correctly identifies 90 percent of the 10,000 lies as lies, that’s 9,000, and wrongly says 10 percent of the lies are true, 1,000 lies that slip through the net and are believed to be true.
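The tally above can be sketched in a few lines of Python, using the transcript's numbers (100,000 tests, 90 percent of statements true, 90 percent detector accuracy); the variable names are just for illustration:

```python
# Tally the four outcomes of 100,000 lie detector tests,
# assuming 90% of statements are true and the detector is 90% accurate.
tests = 100_000
true_statements = int(tests * 0.9)   # 90,000 actual truths
lies = tests - true_statements       # 10,000 actual lies

accuracy = 0.9
true_called_true = int(true_statements * accuracy)    # truths labelled true: 81,000
true_called_lie = true_statements - true_called_true  # false accusations: 9,000
lie_called_lie = int(lies * accuracy)                 # lies caught: 9,000
lie_called_true = lies - lie_called_lie               # lies believed: 1,000

print(true_called_true, true_called_lie, lie_called_lie, lie_called_true)
```

Summing the columns reproduces the video's table: 82,000 statements labelled true and 18,000 labelled lies.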

So the lie detector thinks there are 82,000 true statements and 18,000 lies. It’s underestimated the number of true statements and overestimated the number of lies. Overall, it correctly identified 90,000 of the statements, which makes it 90 percent reliable. But, here’s the all-important thing. The lie detector accused 18,000 statements of being lies, even though only 9,000 of them actually were lies.

If it accuses you of lying, there is only a 9,000 out of 18,000 probability that you actually are lying. That’s a half, 50 percent! Even though it correctly labels 90 percent of statements as truth or lie, it’s no better than flipping a coin at predicting whether you’re telling a lie. The problem is that, with so many people actually telling the truth, incorrectly labelling 10 percent of a large number of true statements as lies completely messes up the results. We need to be quite subtle about how we use the test.

If it says you’re lying, then the 90 percent reliability rating is not the relevant number. We need to consider the whole story, including the prevalence of lying in the whole population and the number of true statements that the test has mistaken for lies. In this case, that leads to the conclusion that it’s only 50 percent likely you’re lying if the test says you are.

Back in the 1700s, the Reverend Thomas Bayes proposed a formula, which we now call Bayes’ theorem, that helps us to work out conditional probabilities. It applies in all sorts of situations like lie detector tests, medical screening tests, criminal investigations, email spam filtering, crash wreckage location, and many more. The probability of A given B equals the probability of B given A times the probability of A, all over the probability of B.
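As a sketch, we can plug the transcript's numbers straight into that formula, taking A to be "the statement is a lie" and B to be "the detector flags a lie":

```python
# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)
# A = the person is lying, B = the detector flags the statement as a lie.
p_lie = 0.1               # prior: 10% of statements are lies
p_flag_given_lie = 0.9    # detector catches 90% of lies
p_flag_given_truth = 0.1  # detector wrongly flags 10% of truths

# P(B) by total probability: flagged lies plus flagged truths.
p_flag = p_flag_given_lie * p_lie + p_flag_given_truth * (1 - p_lie)

p_lie_given_flag = p_flag_given_lie * p_lie / p_flag
print(p_lie_given_flag)  # 0.5
```

This is the same 50 percent answer as the counting argument: the 9,000 caught lies and the 9,000 false accusations contribute equally to the flagged statements.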

It’s well worth learning more about so you can make more informed decisions the next time you want to know who ate the last cookie, or your test results seem to say it’s 90 percent likely you’ve got lurgy disease, or you want to know why your spam filter is making bad decisions about your emails.