
Chapter 17: Statistics and Probability

     Statistics and probability are two interrelated fields of mathematics that deal with collecting, analysing, and interpreting data. Statistics is the study of data and involves techniques for collecting, organising, summarising, analysing, interpreting, and drawing conclusions from data. It provides a set of tools for making decisions based on data, as well as for assessing the reliability and uncertainty of these decisions.

     Probability, on the other hand, is the study of chance and randomness. It deals with the likelihood or chance of an event occurring and is expressed as a number between 0 and 1, with 0 indicating that an event is impossible, and 1 indicating that an event is certain to occur. Probability is a crucial component of statistics, as it is used to make inferences and predictions about populations based on samples of data.

     Together, statistics and probability provide a powerful set of tools for understanding and making decisions about the world around us based on data and evidence.

     Many people think of statistics as a scary subject involving lots of numbers and equations. While it is true that statistics can be quite technical, the subject remains highly relevant to modern life. For example, before booking a hotel, we might check the number of good and bad online reviews. When we make investment decisions, we might also review the relevant financial and statistical data first.

     Many statistical studies investigate samples. We study a sample to infer (draw) general conclusions about a population, the set of things we are interested in. Suppose we want to find out the average height of adult males in the United States. The population in this case would be all adult males in the United States. However, it is not feasible to measure the height of every single adult male in the country. Instead, we can select a statistical sample of adult males and measure their heights to estimate the average height of the population.

     For example, we can randomly select 100 adult males from different cities and states across the country, measure their heights, and calculate the average. This average height of the 100 males in the sample would be a good estimate of the average height of all adult males in the United States.

     In this example, the 100 selected adult males constitute the statistical sample, and all adult males in the United States constitute the population.
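The logic of estimating a population average from a random sample can be sketched in a short simulation; the heights below are made-up numbers, not real survey data:

```python
import random

random.seed(42)

# Hypothetical population: 1,000,000 heights in cm, for illustration only.
population = [random.gauss(175, 7) for _ in range(1_000_000)]

# Measure only a random sample of 100 instead of everyone.
sample = random.sample(population, 100)
sample_mean = sum(sample) / len(sample)

true_mean = sum(population) / len(population)
print(f"sample mean: {sample_mean:.1f} cm, population mean: {true_mean:.1f} cm")
```

Even with only 100 measurements out of a million, the sample mean typically lands within a couple of centimetres of the population mean.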

1.      EVALUATING SURVEYS AND SAMPLING STUDIES

   Evaluating surveys and sampling studies is an important step in understanding the accuracy and reliability of the data collected. It's important to keep in mind that no survey or sampling study is perfect and each has its own strengths and limitations. 

   Evaluating surveys and sampling studies can help to identify these strengths and limitations and to make informed decisions about the use and interpretation of the results. Here are some key considerations for evaluating surveys and sampling studies:

i)       What Exactly is the Finding? What do the Keywords Mean?

    Focus on the main conclusion. What is the significance? What is the most important finding? How is it formulated? How are the keywords defined?

    Distinguish between the actual results and their interpretation. For example, many reports confuse correlation (connection) with causation (relation between cause and effect). A blog article might say that sleeping more makes your life shorter, which is a claim about causation. But the actual statistics might tell us only that adults who sleep 8 or more hours a day have a higher death rate than those who sleep 6 to 7 hours. These correlational data are not about causation at all. (Perhaps less healthy people sleep more, so sleeping more does not directly cause death.) 

    Check the definitions of key concepts. A survey might say that 27% of university students are Hindu. But what does being a Hindu mean? Is it just a matter of saying that you are? Or does it involve regular temple attendance? Statistics are more informative when it is clear how the main variables are actually measured and defined. 

ii)      How Large is the Sample?

    When we extrapolate from a sample to the population, a larger sample is likely to give a more accurate conclusion. If a restaurant wants to find out whether its customers enjoy the food and service, it would not be enough to solicit the view of just one customer. On the other hand, spending extra money and time on a larger sample size might not be worth it if a smaller one would do just as well. 

    It is not easy to determine the optimum sample size. Partly it depends on the size of the population and the level of precision required for the results.

iii)    How is the Sample Chosen?

    How the sample is selected greatly affects the reliability of the conclusions. A sample should be representative of the population, in the sense that the features being studied are distributed in the same way in both the sample and the population. If you want to find out how often people exercise, it would be wrong to interview only people at the local gym, since they probably exercise a lot more. This is called a biased sample.

    We should check carefully how a sample is chosen to see if there are hidden biases. For example, some online surveys allow people to submit their opinions more than once. They might also attract people who are more computer savvy, have more free time and are more willing to give their opinion.

    A good way to minimise biased sampling is through random sampling, by which each member of the sample is selected randomly from the population. Given an adequate sample size, this method is highly likely to result in a representative sample. But even with random sampling, we should be careful of potential biases in the results due to the fact that some selected individuals might not be reachable (for example, in a telephone survey) or are not willing to participate. 
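The gym example above can be turned into a small simulation showing how a biased sample inflates the estimate; all the figures (10% gym members, the exercise hours) are invented for illustration:

```python
import random

random.seed(0)

# Hypothetical population of 10,000 people: weekly exercise hours.
# 10% are gym members who exercise much more than everyone else.
population = ([random.uniform(5, 10) for _ in range(1_000)] +   # gym members
              [random.uniform(0, 3) for _ in range(9_000)])     # everyone else

gym_members = population[:1_000]

# Interviewing only at the gym vs. drawing a simple random sample.
biased_mean = sum(random.sample(gym_members, 100)) / 100
random_mean = sum(random.sample(population, 100)) / 100

print(f"biased estimate: {biased_mean:.1f} h/week")
print(f"random-sample estimate: {random_mean:.1f} h/week")
```

The biased estimate is several times larger than the random-sample estimate, even though both are based on 100 interviews.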

iv)    What Method is used to Investigate the Sample?

    If a sample is investigated using a biased method, the statistical results can be unreliable even if the sample is representative. There are various ways this might come about: 

    Social desirability: Suppose a teacher selects some students randomly and asks whether they have cheated in exams. This survey will underestimate the extent of cheating since students are unlikely to admit to cheating to their teacher. We generally want to portray ourselves positively and are reluctant (hesitant) to confess to undesirable attitudes or activities. This is especially true when we are questioned directly or when we have doubts about the confidentiality of the results.

    Leading questions: These are questions that are formulated in such a way that answers are likely to be skewed (slanted) in a certain direction. For example, "Do you want to give vitamin pills to your children to improve their health?" is likely to solicit more positive answers than the more neutral "Do you intend to give vitamin pills to your children?"

    Observer effect: It is often difficult to conduct a statistical study without affecting the results in some way. People might change their answers depending on who is asking them. Animals change their behaviour when they realise they are being observed. Even measuring instruments can introduce errors. We just have to be careful when we interpret statistical results.

v)      What about the Margin of Error?

    Many statistical surveys include a number known as the margin of error. This number is very important for interpreting the results. 

    The margin of error arises in any sampling study because the sample is smaller than the whole population, so the results might not reflect reality. Suppose you want to find the average weight of a Korean by weighing a random sample of Koreans. The average weight of your sample would be the statistical result, which might or might not be the true result—the average that is calculated from the whole Korean population. If you do manage to weigh the whole population, then your statistical result will be the same as the true result, and your margin of error will indeed be zero (assuming there are no other sources of error, such as faulty weighing machines). 

    When the sample is smaller than the population, the margin of error will be larger than zero. The number reflects the extent to which the true result might deviate from the estimate. The margin of error is defined with respect to a confidence interval. In statistics, we usually speak of either the 99% confidence interval, the 95% confidence interval, or the 90% confidence interval. If the confidence interval is not specified, it is usually (but not always!) 95%.

    Suppose an opinion poll about an upcoming election says that 64% of the people support Anson, with a margin of error of 3%. Since the confidence interval is not mentioned, we can assume that the margin of error is associated with the 95% confidence interval. In that case, what the poll tells us is that the 95% confidence interval is 64 ± 3%. What this means is that if you repeat the poll 100 times, you can expect that about 95 times the true result will be within the range specified. In other words, in 95% of the polls that are done in exactly the same way, the true level of support for Anson should be between 61% and 67%.

    There are at least two reasons why it is important to consider the margin of error. First, if the margin of error is unknown, we do not know how much trust we should place in the result. With a small sample size and a large margin of error, the true result might be very different from the number given. The other reason for considering the margin of error is especially important when interpreting changes in repeated statistical studies over a period of time, especially opinion polls.

    The margin of error is often represented as a plus or minus value and is added and subtracted from a point estimate to create a confidence interval, which is the range within which the true value of a population parameter is likely to fall with a certain degree of certainty.

    Suppose 64% of the people support Anson this month, but the number drops to 60% the next month. How seriously should we take this to indicate that support for Anson is slipping away? If the margin of error is say 5%, then the new finding is within the 95% confidence interval of the earlier result (64 ± 5%). It is therefore quite possible that there is actually no change in the opinion of the general public, and that the decrease is due only to limited sampling.

    The formula for the margin of error depends on the sample size, the standard deviation of the population, and the desired level of confidence. Larger samples and lower levels of variability generally result in smaller margins of error. The margin of error is important to consider when interpreting the results of a survey or poll, as it gives an indication of the level of uncertainty associated with the estimate.
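As a rough sketch, the standard margin-of-error formula for a sample proportion is z · √(p(1 − p)/n), assuming a simple random sample; the poll numbers below are hypothetical:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate margin of error for a sample proportion.

    p: observed proportion, n: sample size,
    z: z-score for the confidence level (1.96 for roughly 95%).
    """
    return z * math.sqrt(p * (1 - p) / n)

# 64% support in a hypothetical poll of 1,000 people.
moe = margin_of_error(0.64, 1000)
print(f"margin of error: about ±{moe * 100:.1f} percentage points")

# Quadrupling the sample size halves the margin of error.
print(f"with n = 4000: about ±{margin_of_error(0.64, 4000) * 100:.1f} points")
```

This illustrates the point in the text: larger samples shrink the margin of error, but only in proportion to the square root of the sample size.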

    Finally, it should be emphasised that the margin of error does not take into account biases or methodological errors in the design and execution of the study. So these problems can still be present in a result with a low margin of error!

2.       ABSOLUTE VS. RELATIVE QUANTITY

    When we interpret statistics, it is important to distinguish between absolute and relative quantity. Absolute quantity refers to the actual number of items of a certain kind. Here are some examples: 

    The number of female professors at Beijing University. 

    The number of computer programmers in India. 

    On the other hand, a relative quantity is a number that represents a comparison between two quantities, usually a ratio or a fraction, or a number that measures a rate comparing different variables: 

    The ratio between female and male professors at Beijing University.

    The percentage of computer programmers among workers in India.

    This distinction is important because meaningful comparisons often require information about the right kind of quantity. Suppose the number of violent crimes this year is a lot higher than that of 10 years ago. Does it mean our city has become more dangerous? Not necessarily because the higher number could be due to the increase in the population. We need to look at the relative quantity, such as the number of violent crimes per 1,000 people. If this number has actually dropped over the same period, the city has probably become safer despite the higher number of crimes! Similarly, drivers between the age of 20 and 30 are involved in more car accidents than drivers who are between 60 and 70. Is this because older people drive more safely? Again not necessarily. There might be more younger drivers, and they might also drive a lot more. We should compare the number of car accidents per distance travelled rather than the absolute number of accidents.
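A tiny worked example of the crime-rate point, with invented figures: the absolute count rises while the rate per 1,000 people falls.

```python
# Hypothetical figures: crimes rose, but the population rose faster.
crimes_then, pop_then = 800, 1_000_000
crimes_now, pop_now = 1_000, 1_500_000

rate_then = crimes_then / pop_then * 1_000   # crimes per 1,000 people
rate_now = crimes_now / pop_now * 1_000

print(f"absolute change: +{crimes_now - crimes_then} crimes")
print(f"rate then: {rate_then:.2f} per 1,000; rate now: {rate_now:.2f} per 1,000")
```

The absolute number of crimes went up by 200, yet the rate per 1,000 people went down, so the city is arguably safer.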

    The absolute vs. relative distinction is particularly important in healthcare. The risks associated with illnesses, drugs and medical treatments can often be specified in absolute or relative terms. Take these two headlines:

    New miracle drug lowers liver cancer risk by 50%!

    New drug results in 1% drop in liver cancer risk!

    The first headline is presumably a lot more impressive, but both can be correct in describing the result of a clinical trial. Imagine two groups of healthy people, 100 in each group. The first group took the drug to see if it reduced the incidence of liver cancer. After 10 years, 1 out of 100 developed liver cancer. The other control group took a placebo pill, and 2 out of 100 had liver cancer after 10 years. The absolute risk of getting liver cancer is 2% for those without the drug, and 1% for those taking the drug. So the second headline correctly describes the reduction in absolute risk. But reducing 2% to 1% amounts to a relative risk reduction of 50%. So the first headline is correct as well. 
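Both headlines can be reproduced directly from the trial numbers in the text:

```python
# Trial numbers from the text: 2 of 100 on placebo, 1 of 100 on the drug.
risk_placebo = 2 / 100
risk_drug = 1 / 100

absolute_reduction = risk_placebo - risk_drug                    # the "1% drop"
relative_reduction = (risk_placebo - risk_drug) / risk_placebo   # the "50% lower"

print(f"absolute risk reduction: {absolute_reduction:.0%}")
print(f"relative risk reduction: {relative_reduction:.0%}")
```

The same pair of numbers yields both a modest-sounding 1% and a dramatic-sounding 50%, which is exactly why the framing matters.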

    Why should we care about whether risk information is presented in absolute or relative terms? First of all, note that information about relative risk tells you nothing about absolute risk. If eating farmed salmon increases your chance of getting a certain disease by 100%, this sounds very scary. But this relative increase tells you nothing about the absolute likelihood of getting the disease. If the disease is extremely rare, the chance of getting it can remain negligible even after it has been doubled.

    So we need to be careful of advertisements for drugs and medical treatments. The two headlines above give very different impressions. Hasty decisions based on incomplete data can be dangerous, especially because drugs and treatments can have undesirable side effects. If taking the new liver cancer drug causes more headaches and other health problems, the 1% reduction in absolute risk might not be worth it. 

3.      MISLEADING STATISTICAL DIAGRAMS

    Diagrams can often make it easier to understand and summarise statistical data. Trends and patterns can become more prominent (important). But the suggestive power of diagrams can also be abused when data are presented in a misleading way. Here are some common tricks that we should be aware of. 

    First, when a chart has horizontal and vertical axes, check the origin of the axes carefully to see whether they start from zero. Take these two diagrams below:

    The two diagrams present the same data regarding the profit earned by a certain company from 2006 to 2009, the only difference being that the vertical axis of the diagram on the left does not start from zero. A careless person taking a quick glance at this diagram might get the impression that profit has increased dramatically over a few years, when in fact it has only increased slightly. Obviously, if a chart fails to label the axis at all, that is even worse! 

    Apart from the origin of an axis, we should also check its scale. Consider this diagram showing the number of cakes sold by a shop from 2000 to 2008:

    On the face of it, the picture seems to indicate that there was a sudden surge in cake sales. But this is actually an illusion because the horizontal time axis is not evenly scaled. The period 2002 to 2007 is compressed compared to the other periods, giving the incorrect impression that the rate of growth has abruptly increased when in fact the growth in sales might have been rather steady.


    The diagram above shows that the number of emergency calls received by a hospital increased about fivefold between 1980 and 2000. This is represented pictorially by the fact that the diagram on the right is five times taller than the left one. But the problem is that subjectively, our perception of the relative difference depends on the area instead of just the height. Since the diagram on the right is also five times wider, its area is actually 5 x 5 = 25 times larger than the diagram on the left. The result is that looking at the diagram, the readers have the impression that the increase is a lot more than just five times. 

4.      PROBABILITY

    It is no exaggeration to say that probability is the very guide to life. Life is full of uncertainty, but we have to plan ahead based on assumptions about what is likely or unlikely to happen. In all kinds of professions, assessments of probability and risks are of critical importance—forecasting sales, calculating insurance needs and premiums, determining safety standards in engineering, and so on. 

    In this section, we are not going to discuss the mathematics of probability. We shall focus on some of the main reasoning mistakes about probability. 

i)       Gambler’s Fallacy: 

    The gambler's fallacy is the mistaken belief that the probability of an event might increase or decrease depending on the pattern of its recent occurrences, even though these events are independent of each other. The name comes from the fact that lots of people make this mistake when they gamble. 

    For example, the probability of a fair coin landing on heads is ½. But suppose you toss the coin four times and it lands on heads each time. Someone who thinks it is more likely to be tails next time so that things will "balance out" is committing the gambler's fallacy. This is because the probability of a fair coin landing heads is just the same as the probability of landing tails, whatever the past results might have been. Similarly, it is also a fallacy to think that a series of four tails is more likely to be followed by yet another tail because the tail side is "hot." 
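A quick simulation illustrates why the gambler's fallacy fails: even right after a run of four heads, the next flip of a fair coin still lands heads about half the time.

```python
import random

random.seed(1)

# Simulate a long sequence of fair coin flips.
flips = [random.choice("HT") for _ in range(200_000)]

# Find every run of four heads and record what the NEXT flip was.
next_after_streak = [flips[i + 4] for i in range(len(flips) - 4)
                     if flips[i:i + 4] == list("HHHH")]

p_heads = next_after_streak.count("H") / len(next_after_streak)
print(f"P(heads after four heads) is about {p_heads:.3f}")
```

The streak gives no information about the next flip: the estimated probability stays close to 0.5, neither "due for tails" nor "hot for heads".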

    A good real-life example of the gambler's fallacy might be when people choose numbers for a lottery. Let's say the winning numbers of the last lottery are 2, 4, 18, 27, 29, and 36. Most people would choose a different set of numbers when they play the lottery, thinking that their combination is more likely to win than the previous winning combination. But this is a fallacy because if the lottery is a fair one, all combinations are equally likely, or better, equally unlikely! 

    A very dangerous related mistake is the hot hand fallacy. This happens when a gambler wins a few times in a row and thinks he is on a lucky streak. As a result, he thinks he is more likely to win than lose if he continues to gamble. But this is a fallacy because the probability of winning the next round is independent of his past record. It is a dangerous fallacy because very often these gamblers start to feel they are invincible (invulnerable), and so they increase their wager (bet) and end up losing all their money. 

ii)      Regression Fallacy

    Regression fallacy is a mistake of causal reasoning due to the failure to consider how things fluctuate randomly, typically around some average condition. Intense pain, exceptional sports performance, and high stock prices are likely to be followed by more subdued conditions eventually due to natural fluctuation. Failure to recognise this fact can lead to wrong conclusions about causation. 

    For example, someone might suffer from back pain now and then, but nothing seems to solve the problem completely. During a period of very intense pain, the patient decided to try an alternative therapy, such as putting a magnetic patch on his back. He felt less pain afterwards and concluded that the patch worked. But this could just be the result of regression. If he sought treatment when the pain was very intense, it is quite possible that the pain had already reached its peak and would lessen in any case as part of the natural cycle. Inferring that the patch was effective ignores a relevant alternative explanation. 
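Regression to the mean can be illustrated with a toy model in which pain fluctuates randomly around a stable baseline and no treatment is given at all; all the numbers are invented:

```python
import random

random.seed(7)

# Toy model: daily pain score = stable baseline + random fluctuation.
baseline = 5.0
pain = [baseline + random.gauss(0, 2) for _ in range(10_000)]

# Look at the day AFTER each very painful day (score > 8), with no treatment.
after_bad_day = [pain[i + 1] for i in range(len(pain) - 1) if pain[i] > 8]

avg_after = sum(after_bad_day) / len(after_bad_day)
print(f"average pain the day after a bad day: {avg_after:.1f} (vs. peak > 8)")
```

The day after an extreme reading, pain falls back toward the baseline purely by chance, which is exactly the pattern a patient might misattribute to a patch applied on the worst day.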

    Similarly, sometimes we are lucky in the sense that things go very well, and other times we are unlucky and everything seems to go wrong. This is just an inevitable fact of life. But if we read too much into it, we might think we need to do something to improve our luck and look for solutions where none is needed, such as using crystals to boost our karma. Of course, it is important to reflect on ourselves when things are not working well since it could be due to personal failings such as not working hard enough. What is needed is a careful and objective evaluation of the situation. 

iii)    Amazing Coincidences

    Here is a story about an amazing coincidence:

    In 1975, a man was riding a moped in Bermuda and was killed by a taxi. A year later his brother was riding the same moped and died in the same way. In fact, he was hit by the same taxi driver, and carrying the same passenger!

    There are many stories about similar coincidences. Some are quite creepy and make you wonder whether there might be any hidden meaning to them. This is a normal reaction since human beings naturally seek patterns and explanations. But we should not ignore the fact that improbable things do happen simply as a matter of probability. Otherwise, we might end up accepting rather implausible theories. Here is one more example:

    It is claimed that some photos of the explosion of the World Trade Center Towers during the terrorist attack on September 11, 2001, seem to show the face of the devil in the smoke. But given the amount of video footage and photos that were taken, it is not surprising that some parts of the smoke might be seen to resemble something else.

    A useful reminder relevant to this topic is Littlewood's law, named after J. E. Littlewood (1885-1977), a Cambridge mathematician. According to this so-called law, miracles happen quite frequently, around once a month.

    Littlewood's argument starts with the definition of a miracle as an exceptional event with an extremely low probability of one in a million. But suppose a person is awake for eight hours a day, seven days a week, and experiencing about one event per second (watching a particular scene on TV or hearing a sound). Such a person will experience about one million events in 35 days, and so we expect this person to encounter a miracle about once a month! Of course, you might protest that a miracle must be some kind of meaningful event, or perhaps an event with an even lower probability. But whatever the details might be, Littlewood's point is that seemingly miraculous events are bound to happen given lots of random events. This is a fact about statistics, even if it might be difficult (or disappointing) to believe otherwise. 
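Littlewood's arithmetic can be checked directly:

```python
# One "event" per waking second, eight waking hours a day.
events_per_day = 8 * 60 * 60          # 28,800 events per day
days_to_a_million = 1_000_000 / events_per_day

print(f"{events_per_day} events/day means a one-in-a-million event "
      f"is expected about every {days_to_a_million:.0f} days")
```

At 28,800 events a day, a million events accumulate in about 35 days, so a "one-in-a-million" coincidence is expected roughly monthly.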

END OF THE PART

