**1. Introduction**

In engineering, probability is ‘chance’. It is the chance of an uncertain event (‘hazard’ or ‘threat’) occurring. Reliability is a probability… the probability that a system will perform its intended function during a specified period of time under stated conditions.

The Gambler…

Have you ever heard of the ‘Gambler’s Fallacy’? This fallacy is the belief, held by a gambler on a ‘losing streak’, that their chances of winning increase as they lose more and more.

Imagine you are playing a game in which all outcomes are equally likely (e.g., rolling a die), and you are on a losing streak. The ‘Gambler’s Fallacy’ leads you to believe that the next roll is more likely to give you the numbers you want (because you are ‘due’ some good luck).

Sadly, the truth is that your odds do not change; each roll is independent, and you start again every time.
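This independence can be checked by simulation. The sketch below (an illustration, not from the original text) treats ‘six’ as the winning number and asks: after three losing rolls in a row, how often does the next roll win? The answer stays at 1/6.

```python
import random

random.seed(42)

# Simulate a long sequence of rolls of a fair six-sided die.
# We look at rolls that immediately follow a 'losing streak' of
# three non-sixes and check how often they come up six.
trials = 200_000
streak = 0          # current run of non-sixes
after_streak = 0    # rolls observed right after a 3-roll losing streak
sixes_after = 0     # of those, how many were sixes

for _ in range(trials):
    roll = random.randint(1, 6)
    if streak >= 3:
        after_streak += 1
        if roll == 6:
            sixes_after += 1
    streak = 0 if roll == 6 else streak + 1

print(sixes_after / after_streak)  # close to 1/6, streak or no streak
```

The relative frequency after a losing streak matches the unconditional 1/6, which is exactly what the fallacy denies.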

Probability varies between impossible and certain. Mathematically, this is between 0 and 1; for example:

- Probability = 1… the probability that I will obtain a number from 1 to 6 when I roll a ‘fair’ die.
- Probability = 0… the probability that I will obtain a number greater than 6 when I roll a ‘fair’ die.

Probabilities can be written as fractions, decimals, or percentages (e.g., ¼, 0.25, or 25%).

Probability can be:

- an event in the future (time), e.g., the probability that it will rain tomorrow; or,
- a physical quantity (space), e.g., the probability that someone is above average height.

**2. Probability Theory**

Probability is about estimating or calculating how likely or ‘probable’ something is to happen, and probability theory helps us deal with the uncertainties that are inevitably present. For example, if you toss a coin, the probability of obtaining a head (P_{h}) is ½ and the probability of obtaining a tail (P_{t}) is also ½. The sum of the probabilities of all outcomes must equal 1; therefore:

P_{h} + P_{t} = ½ + ½ = 1

Modern probability theory originates in the gambling world. Gamblers wanted to know what chance they had of winning at cards, dice, racing, etc. In the 16th century, gamblers would bet on dice games. They reasoned, correctly, that their chance of getting a six with a fair die was 1/6. But they also reasoned, incorrectly, that their chance of getting a six in four rolls of the die would be 4/6. The gamblers knew they were wrong, as they kept losing money… they needed help from mathematicians…
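The correct answer to the gamblers’ problem follows from the complement rule: the chance of at least one six is one minus the chance of no six in any of the four rolls. A short calculation (my illustration, not part of the original text) shows the true figure is about 0.52, not 4/6 ≈ 0.67:

```python
# Probability of at least one six in four rolls of a fair die,
# via the complement: 1 - P(no six in any of the four rolls).
p_no_six_per_roll = 5 / 6
p_at_least_one_six = 1 - p_no_six_per_roll ** 4
print(round(p_at_least_one_six, 4))  # 0.5177
```

So the bet was only slightly better than even, which is why gamblers who expected 4/6 kept losing money.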

**3. Two Ways to View Probability**

Probability can be thought of in two ways: ‘frequentist’ or ‘Bayesian’.

3.1 Frequentist

The ‘frequentist’ view is concerned only with the data, based on long-term averages under repetition of an experiment. It is:

- an ‘objective’ view, as probability is treated as a property of the event itself (e.g., the probability of rain tomorrow);
- ‘a priori’ when based on logic and exact knowledge (e.g., the roll of a die); and,
- ‘empirical’ when based on historical data or experience (e.g., statistics collected on annual rainfall).

A Frequentist View [1]

A frequentist will repeatedly toss a coin to see what the long-run frequency (‘probability’) is of obtaining heads or tails. Eventually, he/she will arrive at the ‘truth’.
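The long-run frequency idea can be sketched in a few lines. This simulation (an illustration, assuming a fair coin and identical, independent tosses) shows the relative frequency of heads settling near ½ as the number of tosses grows:

```python
import random

random.seed(1)

# Long-run relative frequency of heads over repeated fair-coin tosses.
# Each random.random() < 0.5 models one toss coming up heads.
tosses = 1_000_000
heads = sum(random.random() < 0.5 for _ in range(tosses))
print(heads / tosses)  # close to 0.5 for a large number of tosses
```

For a frequentist, this limiting frequency *is* the probability; the caveats in the next paragraphs concern whether the identical-tosses assumption ever truly holds.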

But… this assumes that all coin tosses happen under the same circumstances, so that each toss samples the same set of possible outcomes in the same way. Yet the tosses must also differ in some respect, otherwise every outcome would be the same.

So we see that the long-run frequency for a real coin-tossing experiment is as much a function of the way the experiment is performed as it is a property of the coin; we need to allow for some variability, some uncertainty, in the experimental set-up. This is the frequentist trap: suppose we obtained 12 heads and 8 tails from 20 tosses. This result is reasonable for a frequentist expecting a 50/50 chance of heads or tails (there is roughly a 12% chance of obtaining exactly this ratio), but is the coin fair?
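The quoted 12% figure is a binomial probability, and it can be verified directly. The sketch below (assuming a fair coin, p = ½) computes the chance of exactly 12 heads in 20 tosses:

```python
from math import comb

# Binomial probability of exactly k heads in n tosses of a fair coin:
# P(k) = C(n, k) * 0.5**n
n, k = 20, 12
p = comb(n, k) * 0.5 ** n
print(round(p, 3))  # 0.120, i.e. roughly 12%
```

Because this outcome is quite plausible under a fair coin, the data alone cannot settle whether the coin is fair, which is where the Bayesian view comes in.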

3.2 Bayesian

The ‘Bayesian’ view is one of ‘subjective probability’: a ‘degree of belief’, where probability is a property of our knowledge of the event. The Bayesian approach provides formulae for judging the probability of future events based on the past, and for revising that probability as new information arrives.

‘Bayesian’ methods are named after Thomas Bayes (1701–1761), who established a mathematical basis for probability ‘inference’ (a means of calculating, from the frequency with which an event has occurred in prior trials, the probability that it will occur in future trials).

The difference between frequentist and Bayesian can be illustrated by asking the question ‘what is the probability that it will rain later today?’ A frequentist would look at this month’s rainfall statistics, whereas a Bayesian would look at this month’s rainfall statistics but also at the weather now, and at weather-forecasting models.
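A Bayesian revision of belief can be sketched with the coin example from earlier. The hypotheses and priors below are my illustrative assumptions, not from the text: we compare a fair coin (p = 0.5) against a biased one (p = 0.7), start with equal prior belief in each, and update on observing 12 heads in 20 tosses using Bayes’ theorem:

```python
from math import comb

# Illustrative Bayesian update (hypotheses and priors are assumptions):
#   H_fair: coin lands heads with p = 0.5
#   H_bias: coin lands heads with p = 0.7
# Prior: both hypotheses equally likely. Data: 12 heads in 20 tosses.
def likelihood(p, heads=12, n=20):
    """Binomial likelihood of the observed data given heads-probability p."""
    return comb(n, heads) * p ** heads * (1 - p) ** (n - heads)

prior_fair, prior_bias = 0.5, 0.5
l_fair = likelihood(0.5)
l_bias = likelihood(0.7)

# Bayes' theorem: posterior ∝ prior × likelihood, normalised over hypotheses.
post_fair = prior_fair * l_fair / (prior_fair * l_fair + prior_bias * l_bias)
print(round(post_fair, 3))  # ~0.512: the data barely shift our belief
```

The posterior stays close to 50/50, formalising the earlier point that 12 heads in 20 tosses provides little evidence either way; more tosses (more data) would sharpen the belief.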

**4. References**

- [1] M. Ambaum, ‘Frequentist vs Bayesian statistics—a non-statisticians view’, Department of Meteorology, University of Reading, UK, July 2012. http://www.met.reading.ac.uk/~sws97mha/Publications/Bayesvsfreq.pdf