Video Transcript
In this video, we’re gonna explain the meaning of some of the vocabulary and notation commonly used in probability.
An experiment is an activity with an identifiable result. For example, if we have a six-sided dice and roll it on the table so that it lands with one of the numbers facing upwards, we could say rolling the dice is an experiment. A scientific experiment is a procedure which aims to make a discovery or test a hypothesis. But a probability experiment is a bit different to that. It’s a specific procedure which we can exactly repeat as often as we like, and the set of possible results, one of which occurs at random each time, is always the same. Some other examples of probability experiments could be flipping a coin to see if it lands heads or tails side up, or picking a disc at random out of a bag containing a variety of coloured discs.
An outcome is a specific result of an experiment, and the set of all possible outcomes is called the sample space. For example, when we roll a regular six-sided dice, one outcome will be that it lands with the number one face up. Another outcome would be landing two face up, and so on through all the different possibilities. As we said, the sample space is the set of all possible outcomes, so we write it using set notation. In this case, it’s all the numbers listed: one, two, three, four, five, six. And this is an exhaustive set of outcomes because it covers every possible outcome.
An event is a particular subset of the sample space. So, for example, with rolling a dice, a simple event might be rolling a one, or rolling a three. But we can define more complex events which make up larger subsets of the sample space, like getting an even number, or getting a prime number, or getting a multiple of three, and so on.
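Here’s a minimal sketch of that idea in Python (this isn’t from the video; the variable names are just illustrative choices): the sample space for rolling the dice is a set, and the events we’ve just described are subsets of it.

sample_space = {1, 2, 3, 4, 5, 6}

even = {2, 4, 6}            # event: the result is an even number
prime = {2, 3, 5}           # event: the result is a prime number
multiple_of_three = {3, 6}  # event: the result is a multiple of three

# Every event is a subset of the sample space.
assert even <= sample_space
assert prime <= sample_space
assert multiple_of_three <= sample_space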
We measure the likelihood of an outcome or event occurring using the probability scale, which is a continuous scale from zero, which represents an impossible situation, up to one, which represents something that’s certain to occur. So, for example, a probability of a half is something that will occur, on average, half of the time we conduct the experiment. We can use either fractions or decimals to represent these numbers between zero and one. It’s also okay to use percentages to represent the probability scale, so zero percent up to a hundred percent instead of zero to one. But you do need to include the percentage sign in there as well.
So one important result is, if 𝑃 represents the probability of an event occurring, then it must be between zero and one. So we can represent that using this inequality: zero is less than or equal to 𝑃, which is less than or equal to one.
There’s an important bit of notation that we commonly use to represent probabilities, and this saves us a lot of writing. So, for example, instead of writing the probability of getting a five when I roll a fair six-sided dice is a sixth, I can just write 𝑃 brackets five equals one-sixth, which is a shorthand way of writing the same thing that all mathematicians will understand.
A probability model is a mathematical description of an experiment which lists all of the possible outcomes along with their probabilities. This can be summarised in a table. For example, suppose a bag contains nine similar discs, two of which are red, three of which are blue, and four of which are green. If we draw out one at random, each individual disc is equally likely to be selected. So there are nine discs in there in total, and I’m just gonna pick one disc out of that. So how many ways are there of getting a red disc? Well, there are two ways out of nine of getting a red disc. So the probability of a red disc is two over nine. There are three blue discs out of the nine, so the probability of getting a blue disc is three out of nine. And for a green disc, there are four ways of getting a green disc out of the nine possibles. So the probability of drawing a green disc is four-ninths. So these are the probabilities of these different outcomes.
Notice how we wrote the blue disc probability as three over nine, three-ninths. We could’ve simplified this to the equivalent fraction of a third, but we don’t have to. In fact, three over nine tells us both how many ways of selecting a blue disc there are and how many discs there are in total in the bag. So it’s actually more informative than simplifying the fraction to a third. Also notice that the table lists all of the possible outcomes for the experiment, so the sum of the probabilities must be one, because one of those outcomes is certain to occur. So this table represents a probability model.
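If we wanted to sketch this probability model in code (again, not from the video; just a minimal Python illustration using the counts above), it could look like this. Note that Python’s Fraction type simplifies automatically, so three-ninths is stored as one-third, but the value is the same.

from fractions import Fraction

# Counts of each colour of disc in the bag (from the example above).
counts = {"red": 2, "blue": 3, "green": 4}
total = sum(counts.values())  # nine discs in total

# Probability of each outcome: number of ways of getting that colour
# out of the total number of discs.
model = {colour: Fraction(n, total) for colour, n in counts.items()}

# The model covers every possible outcome, so the probabilities sum to one.
assert sum(model.values()) == 1
print(model)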
When we roll a regular fair dice, the possible outcomes are one, two, three, four, five, or six. And all of these outcomes are equally likely to occur. This is the definition of fair in probability.
When one or more outcomes are more or less likely to occur than others, we call the experiment biased. For example, when you buy a lottery ticket, it’ll either be a winning ticket or it’ll be a losing ticket. There are two possible outcomes. But in most lotteries, the probability that the ticket will lose is much greater than the probability that it will win. So playing the lottery is biased; you’re more likely to lose than you are to win.
For some experiments, we may not know the probability of each outcome occurring before we try them out. For example, if we drop a drawing pin on the floor from a height of, say, one metre, when it lands and settles, the pin will either be pointing upwards or downwards. But we don’t know the theoretical likelihood of either scenario occurring. In this case, we can carry out the experiment lots of times and record the outcomes. We can then use the proportion of occasions on which each outcome occurred as an estimate of the probability of that outcome. We call this proportion the relative frequency, or sometimes the experimental probability, of each outcome. The more times we repeat the experiment, the more confident we get that our relative frequencies are a reliable estimate of the actual probabilities of each outcome. So in this case, we’ve done a thousand trials of the experiment and the pin landed up on six hundred and thirty-two occasions and down on three hundred and sixty-eight occasions. So our estimate of the probability of the pin landing up is six three two over a thousand, and the estimate of the probability of the pin landing down is three six eight over a thousand. Those relative frequencies, those proportions of the occasions on which those things occur, are our estimates of the actual probabilities of those outcomes happening.
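Here’s a minimal Python sketch of that idea (not from the video). In a real experiment we’d record the actual outcomes of the thousand drops; here we simulate them with an assumed underlying probability of 0.632 for landing up, chosen purely so the numbers resemble the example, since the true value is exactly what we wouldn’t know in advance.

import random

trials = 1000

# Simulate the drops: each trial lands "up" with the assumed probability.
up_count = sum(random.random() < 0.632 for _ in range(trials))

# The relative frequencies are our estimates of the actual probabilities.
relative_frequency_up = up_count / trials
relative_frequency_down = (trials - up_count) / trials

print(relative_frequency_up, relative_frequency_down)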
Independent Probability. Two events are said to be independent if the outcome of one has no effect at all on the outcome of the other. For example, if we flip a coin, it can land heads or tails, and the probability of each of those is a half. If we roll a fair dice, the outcomes are one, two, three, four, five, or six. And again, they’ve all got equal probabilities of a sixth. These two things, flipping a coin and rolling a fair dice, are independent of each other. The probability of getting one, two, three, four, five, or six on the dice is not affected by whether the coin lands heads or tails side up in the other experiment. So another way of putting it: these two experiments are independent because if we know the result of one, it doesn’t change the probabilities of the outcomes of the other.
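As a small sketch of that (not from the video; just exhaustive counting over the joint outcomes in Python), we can check that knowing the dice result doesn’t change the probability of heads.

from fractions import Fraction

# Joint sample space of the two experiments: every (coin, dice) pair.
coin = ["heads", "tails"]
dice = [1, 2, 3, 4, 5, 6]
joint = [(c, d) for c in coin for d in dice]  # 12 equally likely outcomes

# Probability of heads overall.
p_heads = Fraction(sum(c == "heads" for c, _ in joint), len(joint))

# Probability of heads given that the dice shows a five.
given_five = [(c, d) for c, d in joint if d == 5]
p_heads_given_five = Fraction(sum(c == "heads" for c, _ in given_five), len(given_five))

# Knowing the dice result does not change the probability: both are a half.
assert p_heads == p_heads_given_five == Fraction(1, 2)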
Okay. So in an experiment where we roll a fair dice, this is our probability model. There are six possible outcomes, one, two, three, four, five, and six, and each has a probability of a sixth. And we’ve seen that many times before. Now let’s define two events. Event one is where the result was an even number, so it could’ve been a two or a four or a six. And event number two is where the result was a prime number, so it could’ve been a two or a three or a five. Now these two events are not independent events. Remember, we said that two events are independent if knowing the result of one doesn’t affect the probabilities of the outcomes of the other. That’s not the case here, because if we know that the result was even, we know that the number that came up was either two, four, or six. So if we know that we’ve got two or four or six, what’s the probability that the result is a prime number? Well, only one of those three is a prime number. So the probability of the result being prime would only be a third. If we didn’t know that event one had occurred, that is, we didn’t know whether the result was odd or even, then there are three prime numbers among the six possible outcomes, so the probability of getting a prime would be three out of six, which is a half. So knowing the result of one event affects the probability of the other event. Now that didn’t occur in our last example, because if we knew the result on the dice, that didn’t tell us anything about the result of flipping the coin. Those two things were completely independent. So dependent and independent probabilities will become very important when you start tackling more complicated questions.
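Here’s a minimal sketch of that calculation in Python (not from the video; just counting over the six equally likely outcomes).

from fractions import Fraction

outcomes = [1, 2, 3, 4, 5, 6]
even = {2, 4, 6}   # event one: the result is even
prime = {2, 3, 5}  # event two: the result is prime

# Probability of a prime with no other information: three out of six, a half.
p_prime = Fraction(len(prime), len(outcomes))

# Probability of a prime given that the result was even: only two is both
# even and prime, so one out of the three even outcomes, a third.
p_prime_given_even = Fraction(len(prime & even), len(even))

assert p_prime == Fraction(1, 2)
assert p_prime_given_even == Fraction(1, 3)  # knowing event one changes it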
When two events can’t occur at the same time, we call them mutually exclusive events. For example, let’s say we randomly generate an integer from one to ten, inclusive. The events “it’s a multiple of two” and “it’s a factor of nine” are mutually exclusive because there are no multiples of two that are also factors of nine. If you know that the number generated is a multiple of two, then it can’t possibly be a factor of nine, and vice versa. So events one and two are mutually exclusive because if it’s a multiple of two, then the probability of it being a factor of nine is zero. And if it’s a factor of nine, then the probability of it being a multiple of two is also zero.
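As a quick Python sketch of that check (not from the video), we can generate the integers from one to ten and confirm that the two events share no outcomes.

numbers = range(1, 11)  # the integers from one to ten, inclusive

multiples_of_two = {n for n in numbers if n % 2 == 0}  # {2, 4, 6, 8, 10}
factors_of_nine = {n for n in numbers if 9 % n == 0}   # {1, 3, 9}

# The two events share no outcomes, so they are mutually exclusive:
# if one occurs, the probability of the other is zero.
assert multiples_of_two & factors_of_nine == set()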
One more example of that might be when you roll a dice, thinking about odd numbers or even numbers. If it’s an odd number, it can’t possibly be an even number. If it’s an even number, it can’t possibly be an odd number. So those two things, even and odd, are mutually exclusive outcomes of that experiment.
So here’s a list of the probability vocabulary that you should hopefully now understand as a result of watching this video. We hope you find it useful.