The coronavirus pandemic has made statisticians of us all. We are constantly checking the numbers, making our own assumptions about how the pandemic will play out, and generating hypotheses on when the “peak” will happen. And it’s not just us building hypotheses – the media is thriving on it. Amidst this statistical exploration, understanding the difference between the z-test and the t-test becomes crucial. These tests are valuable tools for statisticians, allowing them to draw meaningful conclusions and make informed decisions based on the data at hand. In this article, you will learn what the z-test and t-test are, how they differ, and how to choose between them.

A few days back, I was reading a news article that mentioned this outbreak “could potentially be seasonal” and relent in warmer conditions:

So I started wondering – what else can we hypothesize about the coronavirus? Are adults more likely to be affected by the outbreak of coronavirus? How does relative humidity impact the spread of the virus? What is the evidence to support these claims? How can we test these hypotheses? As a statistics enthusiast, all these questions dug up my old knowledge about the fundamentals of Hypothesis Testing. In this article, we will discuss the concept of Hypothesis Testing and the difference between the z-test and t-test. We will then conclude our Hypothesis Testing learning using a COVID-19 case study.

Along the way, you will also get a brief overview of the z-test, examples of how it works, and a direct comparison of the z-test vs the t-test.

**Learning Objectives**

- Understand the fundamentals of hypothesis testing
- Learn how hypothesis testing works
- Be able to differentiate between z-test, t-test, and other statistics concepts

Hypothesis testing in data analysis enables inferences about a population from sample data, guiding decisions on whether to accept or reject assumptions about that population. In machine learning, it assesses model performance, tests parameter significance, and aids feature selection, often using tests like t-tests or z-tests to determine meaningful differences between groups. This approach also verifies the statistical validity of models like linear and logistic regression, supporting further model development and deployment decisions.

This extensive tutorial on hypothesis testing is what you need to get started with the topic.

Let’s take an example to understand the concept of Hypothesis Testing. A person is on trial for a criminal offense, and the judge needs to provide a verdict on his case. Now, there are four possible combinations in such a case:

- First Case: The person is innocent, and the judge identifies the person as innocent
- Second Case: The person is innocent, and the judge identifies the person as guilty
- Third Case: The person is guilty, and the judge identifies the person as innocent
- Fourth Case: The person is guilty, and the judge identifies the person as guilty

As you can clearly see, there can be two types of error in the judgment – Type 1 error, when the verdict is against the person while he was innocent, and Type 2 error, when the verdict is in favor of the person while he was guilty.

According to the Presumption of Innocence, the person is considered innocent until proven guilty. That means the judge must find the evidence which convinces him “beyond a reasonable doubt.” This phenomenon of **“Beyond a reasonable doubt”** can be understood as **Probability (Judge Decided Guilty | Person is Innocent) should be small.**

We consider **the Null Hypothesis** to be true until we find strong evidence against it. Then we accept the **Alternate Hypothesis**. We also determine the **Significance Level (⍺)**, which can be understood as the probability of (Judge Decided Guilty | Person is Innocent) in the previous example. Thus, if ⍺ is smaller, it will require more evidence to reject the Null Hypothesis. Don’t worry; we’ll cover all this using a case study later.

There are four steps to performing Hypothesis Testing:

- Set the Null and Alternate Hypotheses
- Set the Significance Level, Criteria for a decision
- Compute the test statistic
- Make a decision

It must be noted that z-tests and t-tests are Parametric Tests, which means that the Null Hypothesis is about a population parameter being less than, greater than, or equal to some value. Steps 1 to 3 are quite self-explanatory, but on what basis do we make a decision in step 4? This is where the p-value comes in. What does the p-value indicate?

We can understand the p-value as a measure of the strength of the Defense Attorney’s argument. If the p-value is less than ⍺, we reject the Null Hypothesis; if the p-value is greater than ⍺, we fail to reject the Null Hypothesis.

Let’s understand the logic of Hypothesis Testing with the graphical representation for Normal Distribution.

Visualizing the normal distribution in this way helps us understand the z-value and its relation to the critical value. Typically, we set the Significance level at 10%, 5%, or 1%. If our test score lies in the Acceptance Zone, we fail to reject the Null Hypothesis. If our test score lies in the Critical Zone, we reject the Null Hypothesis and accept the Alternate Hypothesis.

The Critical Value is the cut-off between the Acceptance Zone and the Rejection Zone. We compare our test score to the critical value: if the test score is greater than the critical value, the test score lies in the Rejection Zone and we reject the Null Hypothesis. On the other hand, if the test score is less than the critical value, it lies in the Acceptance Zone and we fail to reject the Null Hypothesis.
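As a quick sketch of this decision rule (a right-tailed z-test is assumed here, and the test score is a hypothetical value for illustration), scipy’s `stats` module can compute the critical value directly:

```python
from scipy import stats

# Critical values for a right-tailed z-test at common significance levels
for alpha in (0.10, 0.05, 0.01):
    print(f"alpha={alpha}: critical value = {stats.norm.ppf(1 - alpha):.3f}")

# Decision rule at alpha = 0.05: reject H0 if the test score exceeds the cut-off
test_score = 1.83                      # hypothetical test score, for illustration
critical_value = stats.norm.ppf(0.95)  # ≈ 1.645
print("Reject H0" if test_score > critical_value else "Fail to reject H0")
```

`stats.norm.ppf` is the inverse CDF of the standard normal distribution, so it maps a probability (the area in the Acceptance Zone) back to the z-value that bounds it.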

*But why do we need a p-value when we can reject/accept hypotheses based on test scores and critical values?*

P-value has the benefit that we **only need one value** to make a decision about the hypothesis. We don’t need to compute two different values such as critical value and test scores. Another benefit of using the p-value is that we can test at **any desired level of significance** by comparing this directly with the significance level.

This way, we don’t need to compute test scores and critical values for each significance level. We can get the p-value and directly compare it with the significance level of our interest.

In the Directional Hypothesis, the null hypothesis is rejected if the test score is too large (for right-tailed) or too small (for left-tailed). Thus, the rejection region for such a test consists of one part, which is on the right side for a right-tailed test; or the rejection region is on the left side from the center in the case of a left-tailed test.

In a Non-Directional Hypothesis test, the Null Hypothesis is rejected if the test score is either too small or too large. Thus, the rejection region for such a test consists of two parts: one on the left and one on the right. This is a case of a two-tailed test.
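The two cases differ only in where the rejection region sits. A minimal sketch comparing the critical values of a directional and a non-directional z-test at ⍺ = 0.05, using scipy:

```python
from scipy import stats

alpha = 0.05

# Right-tailed (directional) test: all of alpha sits in the right tail
right_tail_cutoff = stats.norm.ppf(1 - alpha)    # ≈ 1.645

# Two-tailed (non-directional) test: alpha is split between both tails
two_tail_cutoff = stats.norm.ppf(1 - alpha / 2)  # ≈ 1.960

print(f"One-tailed:  reject H0 if z > {right_tail_cutoff:.3f}")
print(f"Two-tailed: reject H0 if |z| > {two_tail_cutoff:.3f}")
```

Because a two-tailed test splits ⍺ between both tails, its cut-off is further from the center, so a given test score has to be more extreme to reject the Null Hypothesis.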

A z-test is a statistical way of testing a Null Hypothesis when either:

- We know the population variance, or
- We do not know the population variance, but our sample size is large (n ≥ 30)

If we have a sample size of less than 30 and do not know the population variance, we must use a t-test. This is how we judge when to use the z-test vs the t-test. Further, it is assumed that the z-statistic follows a standard normal distribution. In contrast, the t-statistic follows the t-distribution with degrees of freedom equal to n − 1, where n is the sample size.

It must be noted that the samples used for a z-test or t-test must be independent samples, and must have a distribution identical to the population distribution. This ensures that the sample is not “biased” towards/against the Null Hypothesis that we want to validate/invalidate.

*Check out this article on performing hypothesis testing with the t-test in Python.*

We perform the One-Sample z-Test when we want to compare **a sample mean with the population mean.**

**Here’s an Example to Understand a One Sample z-Test**

Let’s say we need to determine if girls on average score higher than 600 in the exam. We have the information that the standard deviation for girls’ scores is 100. So, we collect the data of 20 girls by using random samples and record their marks. Finally, we also set our ⍺ value (significance level) to be 0.05.

In this example:

- Mean Score for Girls is 641
- The number of data points in the sample is 20
- The population mean is 600
- Standard Deviation for Population is 100

**Since the p-value is less than 0.05, we can reject the null hypothesis** and conclude, based on our result, that girls on average score higher than 600.
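The calculation behind this conclusion can be sketched with scipy’s `stats` module, using the numbers listed above (a right-tailed test is assumed, since the alternate hypothesis is that girls score *more* than 600):

```python
import math
from scipy import stats

sample_mean = 641  # mean score of the 20 sampled girls
pop_mean = 600     # hypothesized population mean
pop_sd = 100       # known population standard deviation
n = 20             # sample size

# z-statistic: (sample mean - population mean) / (population SD / sqrt(n))
z = (sample_mean - pop_mean) / (pop_sd / math.sqrt(n))

# Right-tailed test: H1 is that girls score MORE than 600 on average
p_value = stats.norm.sf(z)  # survival function = 1 - CDF

print(f"z = {z:.3f}, p-value = {p_value:.4f}")  # z ≈ 1.834, p ≈ 0.033
```

Since 0.033 < 0.05 (equivalently, z ≈ 1.83 exceeds the critical value of about 1.645), the null hypothesis is rejected.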

We perform a Two Sample z-test when we want to compare **the mean of two samples.**

**Here’s an Example to Understand a Two Sample Z-Test**

Here, let’s say we want to know if girls on average score 10 marks more than boys. We have the information that the standard deviation for girls’ scores is 100 and for boys’ scores is 90. Then we collect the data of 20 girls and 20 boys by using random samples and record their marks. Finally, we also set our ⍺ value (significance level) to be 0.05.

In this example:

- Mean Score for Girls (Sample Mean) is 641
- Mean Score for Boys (Sample Mean) is 613.3
- Standard Deviation for the Population of Girls is 100
- Standard Deviation for the Population of Boys is 90
- Sample Size is 20 for both Girls and Boys
- Hypothesized difference between population means is 10

Thus, we can **conclude based on the p-value that we fail to reject the Null Hypothesis**. We don’t have enough evidence to conclude that girls on average score 10 marks more than the boys. Pretty simple, right?
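Again, a minimal sketch of this two-sample z-test using scipy and the numbers above (a right-tailed test is assumed, since the alternate hypothesis is that girls score more than 10 marks above boys):

```python
import math
from scipy import stats

girls_mean, boys_mean = 641, 613.3
girls_sd, boys_sd = 100, 90  # known population standard deviations
n_girls = n_boys = 20
hypothesized_diff = 10       # H1: girls score more than 10 marks above boys

# Standard error of the difference between two independent sample means
se = math.sqrt(girls_sd**2 / n_girls + boys_sd**2 / n_boys)

z = ((girls_mean - boys_mean) - hypothesized_diff) / se
p_value = stats.norm.sf(z)  # right-tailed

print(f"z = {z:.3f}, p-value = {p_value:.4f}")  # z ≈ 0.588, p ≈ 0.278
```

With a p-value of roughly 0.28, well above 0.05, we fail to reject the Null Hypothesis, matching the conclusion above.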

T-tests are a statistical way of testing a hypothesis when:

- We do not know the population variance
- Our sample size is small, n < 30

*Also, you can check out this article on how to use and interpret t-tests.*

We perform a One-Sample t-test when we want to **compare a sample mean with the population mean**. The difference from the z-Test is that we do **not have the information on Population Variance** here. We use the **sample standard deviation** instead of population standard deviation in this case.

**Here’s an Example to Understand a One Sample T-Test**

Let’s say we want to determine if, on average, girls score more than 600 in the exam. We do not have information about the variance (or standard deviation) of girls’ scores. To perform a t-test, we randomly collect the data of 10 girls with their marks and choose our ⍺ value (significance level) to be 0.05 for Hypothesis Testing.

In this example:

- Mean Score for Girls is 606.8
- The size of the sample is 10
- The population mean is 600
- Standard Deviation for the sample is 13.14

Our **p-value is greater than 0.05, thus we fail to reject the null hypothesis**, and we don’t have enough evidence to support the hypothesis that, on average, girls score more than 600 in the exam.
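The same calculation, sketched with scipy: the only change from the z-test is that the sample standard deviation replaces the population standard deviation, and the p-value comes from the t-distribution with n − 1 degrees of freedom (a right-tailed test is assumed, as before).

```python
import math
from scipy import stats

sample_mean = 606.8
pop_mean = 600
sample_sd = 13.14  # sample standard deviation (population SD unknown)
n = 10

# t-statistic uses the sample SD in place of the population SD
t = (sample_mean - pop_mean) / (sample_sd / math.sqrt(n))
df = n - 1

p_value = stats.t.sf(t, df)           # right-tailed
critical = stats.t.ppf(1 - 0.05, df)  # ≈ 1.833 for df = 9

print(f"t = {t:.3f}, critical = {critical:.3f}, p = {p_value:.4f}")
```

The t-score of about 1.64 falls below the critical value of about 1.833, so the test score lies in the Acceptance Zone and we fail to reject the null hypothesis.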

We perform a Two-Sample t-test when we want to compare the mean of two samples.

**Here’s an Example to Understand a Two-Sample T-Test**

Here, let’s say we want to determine if, on average, boys score 15 marks more than girls in the exam. We do not have information about the variance (or standard deviation) of either girls’ or boys’ scores. To perform a t-test, we randomly collect the data of 10 girls and 10 boys with their marks. We choose our ⍺ value (significance level) to be 0.05 as the criteria for Hypothesis Testing.

In this example:

- Mean Score for Boys is 630.1
- Mean Score for Girls is 606.8
- Hypothesized difference between population means is 15
- Standard Deviation for Boys’ score is 13.42
- Standard Deviation for Girls’ score is 13.14

Here, the t-score works out to about 1.40, which falls below the critical value of about 1.734 for 18 degrees of freedom. Thus, the **p-value is greater than 0.05, so we fail to reject the null hypothesis**: we don’t have enough evidence to conclude that, on average, boys score 15 marks more than girls in the exam.
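Running the numbers confirms this. Below is a pooled-variance sketch of the two-sample t-test (it assumes equal population variances, which is reasonable here since the two sample standard deviations are close):

```python
import math
from scipy import stats

boys_mean, girls_mean = 630.1, 606.8
boys_sd, girls_sd = 13.42, 13.14  # sample standard deviations
n_boys = n_girls = 10
hypothesized_diff = 15            # H1: boys score more than 15 marks above girls

# Pooled variance (assumes equal population variances)
df = n_boys + n_girls - 2
pooled_var = ((n_boys - 1) * boys_sd**2 + (n_girls - 1) * girls_sd**2) / df
se = math.sqrt(pooled_var * (1 / n_boys + 1 / n_girls))

t = ((boys_mean - girls_mean) - hypothesized_diff) / se
p_value = stats.t.sf(t, df)           # right-tailed
critical = stats.t.ppf(1 - 0.05, df)  # ≈ 1.734 for df = 18

print(f"t = {t:.3f}, critical = {critical:.3f}, p = {p_value:.4f}")
```

Since t ≈ 1.40 is below the critical value and the p-value is roughly 0.09 (> 0.05), the data do not support rejecting the null hypothesis at ⍺ = 0.05.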

So when should we perform the z-test, and when should we perform the t-Test? It’s a critical question we need to answer if we want to master statistics.

If the sample size is large enough, the z-test and t-test will reach the same conclusion. For a **large sample size**, the **sample variance is a better estimate** of the population variance, so even if the population variance is unknown, we can **use the z-test with the sample variance**.

Similarly, for a **large sample**, we have a high degree of freedom. And since the **t-distribution approaches the normal distribution**, the difference between the z-score and the t-score is negligible.
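This convergence is easy to check numerically: as the degrees of freedom grow, the t-distribution’s critical values approach the z critical value.

```python
from scipy import stats

# Two-tailed critical value at alpha = 0.05 under the standard normal
z_critical = stats.norm.ppf(0.975)  # ≈ 1.960

# The t critical value shrinks toward z_critical as df increases
for df in (5, 10, 30, 100, 1000):
    t_critical = stats.t.ppf(0.975, df)
    print(f"df={df:5d}: t critical = {t_critical:.4f} (z = {z_critical:.4f})")
```

For small df the t critical value is noticeably larger (heavier tails), but by df in the hundreds it is essentially indistinguishable from 1.96.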

| | Z Test | T Test |
|---|---|---|
| Assumption | Population standard deviation is known | Population standard deviation is unknown |
| Sample Size | Large sample size (n > 30) | Small sample size (n < 30) |
| Distribution | Standard normal (z) distribution | t-distribution |
| Test Statistic | (Sample mean – Population mean) / (Population SD / √n) | (Sample mean – Population mean) / (Sample SD / √n) |
| Hypothesis Testing | Test for a population mean or proportion | Test for a population mean |
| Degrees of Freedom | Not applicable | n – 1 |
| Application | Used when the population standard deviation is known and the sample size is large | Used when the population standard deviation is unknown or the sample size is small |
| Example | Testing whether the average height of male adults is significantly different from a known value | Testing whether a new teaching method improves student test scores compared to the old method |

We have used the “stats” module of the “scipy” package to calculate the critical value of the test statistic, as well as the p-value. From these values, we conclude that we do not have evidence to reject our Null Hypothesis that temperature doesn’t affect the COVID-19 outbreak. Although we cannot find the temperature’s impact on COVID-19, this problem was chosen purely for a conceptual understanding of what we have learned in this article. There are certain limitations of the z-test for COVID-19 datasets:

- Sample data may not be well representative of population data
- Sample variance may not be a good estimator of the population variance
- Variability in a state’s capacity to deal with this pandemic
- Socio-Economic Reasons
- Early breakout in certain places
- Some states could be hiding the data for geopolitical reasons

So, we need to be more cautious and research more to identify the pattern of this pandemic.

In this article, we followed a step-by-step procedure to understand the fundamentals of Hypothesis Testing, Type 1 Error, Type 2 Error, Significance Level, Critical Value, p-Value, Non-Directional Hypothesis, Directional Hypothesis, z-Test, and t-Test. Finally, we implemented a Two Sample z-Test for a coronavirus case study. With this, you should have a clear understanding of the t-test vs the z-test.

Hope you liked the article! Understanding the difference between the t-test and the z-test is crucial for effective statistical analysis. While both tests assess sample means, they apply in different contexts. Knowing when to use each, and working through examples of both, will clarify their applications and enhance your research skills.

**Key Takeaways**

- Z-Tests & t-Tests are Parametric Tests, where the Null Hypothesis states that a population parameter is less than, greater than, or equal to some value.
- A z-test is used if the population variance is known, or if the sample size is larger than 30, for an unknown population variance.
- If the sample size is less than 30 and the population variance is unknown, we must use a t-test.

For more details you can also read these articles:

- Your Guide to Master Hypothesis Testing in Statistics
- Statistics for Data Science: Introduction to t-test and its Different Types (with Implementation in R)
- Introduction to Business Analytics
- Introduction to Data Science

**Q1. When should you use a z-test vs a t-test?**

A. A z-test is used to test a Null Hypothesis if the population variance is known, or if the sample size is larger than 30, for an unknown population variance. A t-test is used when the sample size is less than 30 and the population variance is unknown.

**Q2. What is the difference between a one-tailed and a two-tailed z-test?**

A. A one-tailed z-test allows for the possibility of rejecting the Null Hypothesis in only one direction, whereas a two-tailed z-test tests the possibility of rejection in both directions (left and right).

**Q3. What distributions do the z-statistic and t-statistic follow?**

A. It is assumed that the z-statistic follows a standard normal distribution, whereas the t-statistic follows the t-distribution with degrees of freedom equal to n − 1, where n is the sample size.

**Q4. What does the t-statistic tell us?**

A. The t-statistic measures the difference between two group means relative to their variability. A larger t-statistic indicates a more significant difference.

