In statistical computations, intuition can be very misleading
Guess Again
Even hardened scientists can make mistakes when interpreting statistics. Mathematical experiments can give you the right ideas to prevent this from happening, and quick simulations in Perl nicely illustrate and support the learning process.
If you hand somebody a die in a game of Ludo [1], and they throw a one on each of their first three turns, they are likely to become suspicious and check the sides of the die. That's just relying on intuition – but when can you scientifically demonstrate that the die is loaded (Figure 1)? After five throws that all come up as ones? After ten throws?
Each experiment with dice is a game of probabilities: what exactly happens is a product of chance. It is not so much the result of a single throw that matters, but the tendency. A player could throw a one three times in succession out of pure bad luck. Although the odds are pretty low, it still happens, and you would be ill advised to jump to conclusions about the die based on such a small number of attempts.
The Value of p
For this experiment, a scientist would start by defining a so-called null hypothesis (e.g., "The die is fair" or "The medication shows no effect in patients"). On the basis of the test results, this hypothesis would be either confirmed or rejected later on. The mistake of rejecting a correct null hypothesis is known by statisticians as a "Type I error" or an "Error of the first kind." Experiments define up front the maximum acceptable probability of this event happening; this value is known as the significance level of the experiment.
Another statistical tool, the so-called "p-value" [2], is a probability between 0 and 1 that can be computed during the experiment and that states how likely it is – assuming the null hypothesis is true – to see the result you just found, or one that is even more extreme (Figure 2). The smaller the p-value, the more significant the result and the stronger the case for rejecting the null hypothesis.
For example, if you toss a coin 20 times and it comes up heads 14 times (10 is what you would expect), the p-value of 0.115 is still well above the threshold of 5 percent (0.05) that is typical for scientific experiments [3]. The scientist can thus accept the null hypothesis ("The coin is fair") with a clear conscience; the risk of wrongly calling a fair coin loaded stays below the 5 percent error probability chosen up front. If the coin came up heads 15 times out of 20 tosses instead of 14, the p-value would drop to 0.041 – below the 5 percent threshold – and both the null hypothesis and the quality of the coin would begin to look questionable.
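For a quick plausibility check of these two numbers, a few lines of Perl will do; the sketch below is not taken from the article's listing (the helper names binomial() and p_value_fair() are made up for this illustration) and uses plain floating point, which is accurate enough for n = 20:

#!/usr/bin/perl
use strict;
use warnings;

# Quick check of the two-sided p-values quoted above for 20 tosses of a
# fair coin; plain floating point is precise enough at this small size.
sub binomial {
    my ( $n, $k ) = @_;
    my $c = 1;
    $c = $c * ( $n - $_ + 1 ) / $_ for 1 .. $k;
    return $c;
}

# Assumes the observed number of heads lies above the expected n/2.
sub p_value_fair {
    my ( $n, $heads ) = @_;
    my $tail = 0;
    $tail += binomial( $n, $_ ) for $heads .. $n;   # P(X >= heads)
    return 2 * $tail / 2**$n;                       # double for symmetry
}

printf "14 of 20: p = %.3f\n", p_value_fair( 20, 14 );   # prints 0.115
printf "15 of 20: p = %.3f\n", p_value_fair( 20, 15 );   # prints 0.041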
To Err is Human
The Perl script in Listing 1 [4] throws a fair coin with the sides H (for heads) and T (for tails) a total of 1,000 times and then adds up the number of times that it came up tails. The p_value() function starting in line 23 then computes the p-value from the observed result. The script's output helps you decide whether the coin tosses are regular or whether there is an anomaly:
$ ./coin-toss
Rounds:  1000
Tails:   507
p-value: 0.182979
Listing 1: coin-toss
In this example, out of 1,000 tosses, the coin came up tails 507 times; the p-value is thus 0.18 and well above the threshold value of 5 percent. This means that there is no good reason to reject the null hypothesis.
The script randomly chooses the H or T symbol from the @sides array in each of the 1,000 rounds, thereby deciding whether heads or tails was thrown. In the latter case, the $tails counter in line 13 is incremented by 1 for the tally output and the p-value calculation later on.
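The original listing is not reprinted here, but a minimal sketch of such a coin-toss loop, along the lines just described, might look like this (an illustration only, so its line numbers will not match those referenced in the text):

#!/usr/bin/perl
use strict;
use warnings;

# Minimal sketch of the coin-toss simulation described above
# (an approximation for illustration, not the original Listing 1).
my @sides  = qw( H T );
my $rounds = 1000;
my $tails  = 0;

for ( 1 .. $rounds ) {
    my $side = $sides[ rand @sides ];   # pick H or T at random
    $tails++ if $side eq "T";           # tally the tails
}

print "Rounds: $rounds\n";
print "Tails:  $tails\n";
# A p_value() calculation along the lines sketched below would follow here.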
Looking for Extremes
So, how do you compute the value of p? If the coin comes up heads 7 times in 10 throws in the experiment, then 8, 9, or 10 times heads would be an even more extreme result. Because the coin is symmetrical, 8, 9, or 10 times tails is just as extreme, so the value of p includes these outcomes as well. The probability of k tosses coming up heads in an experiment with a series of n binomially distributed tosses is the binomial coefficient divided by the total number of 2^n combinations:

P(X = k) = \binom{n}{k} / 2^{n}
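For the 7-of-10 case, a quick worked example shows what this means in numbers:

p = 2 \cdot \frac{\binom{10}{7} + \binom{10}{8} + \binom{10}{9} + \binom{10}{10}}{2^{10}} = 2 \cdot \frac{120 + 45 + 10 + 1}{1024} \approx 0.344

A p-value of about 0.34 is far above any common significance level, so seven heads in ten throws gives no reason to distrust the coin.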
The p_value() function in lines 23-45 uses the CPAN Math::BigFloat module to compute the binomial coefficient and to perform the ensuing division. In longer experiments, the intermediate values far exceed the floating-point capacity of most computers, whereas Math::BigFloat computes with an arbitrary degree of accuracy, even with values featuring a couple of thousand digits.
The bnok() method in line 36 computes the binomial coefficient, and line 37 accumulates a subtotal with badd(). If the observed value is below the expected average (e.g., 10 out of 20 tosses), the algorithm looks for more extreme values to the left (i.e., in increasing order from 1, 2, …, up to the value shown in the experiment).
However, if the experimental value is on the right-hand side of the bell curve, it counts from this value up to the maximum value. In both cases, line 43 multiplies the result by 2 because of the symmetry of the experiment (heads and tails are interchangeable).
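Putting these pieces together, a p_value() function in this spirit might look like the following sketch; the function name and the use of Math::BigFloat, bnok(), and badd() follow the article's description, but the concrete layout is an approximation, not the original listing:

use strict;
use warnings;
use Math::BigFloat;

# Sketch: two-sided p-value for $k tails in $n tosses of a fair coin.
# Math::BigFloat keeps the huge binomial coefficients exact for large $n.
sub p_value {
    my ( $n, $k ) = @_;

    # Sum over the nearer tail, i.e., every outcome at least as extreme.
    my ( $from, $to ) = $k <= $n / 2 ? ( 0, $k ) : ( $k, $n );

    my $sum = Math::BigFloat->new(0);
    for my $i ( $from .. $to ) {
        my $nok = Math::BigFloat->new($n)->bnok($i);   # binomial coefficient
        $sum->badd($nok);                              # accumulate subtotal
    }

    # Divide by the 2**$n possible toss sequences and double the result,
    # because heads and tails are interchangeable.
    return $sum->bmul(2)->bdiv( Math::BigFloat->new(2)->bpow($n) );
}

print "p-value: ", p_value( 20, 14 )->bround(3), "\n";   # 0.115, as above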