Research and Null Hypotheses
It is necessary to distinguish between two kinds of hypotheses: research and null. A research hypothesis is an assertion about a particular phenomenon, typically derived from a theory. We develop theories about phenomena in order to make predictions about observable events. For example, we might assert that a particular independent variable will have an effect on a dependent measure. This prediction is called the research hypothesis (usually indicated by the symbol H1) because it is the statement of scientific interest, the one the investigator hopes the tests will support. The null hypothesis (usually indicated by the symbol H0) is the statement that the treatment manipulation will have no effect, that the effect of the independent variable is zero. If the null hypothesis is true, then any observed differences in performance on the dependent measure are due solely to chance fluctuations. If the research hypothesis is true, then the differences are due to chance fluctuations plus the effect of the independent variable.

The major task is to determine which hypothesis is true. Only the null hypothesis can be tested; the research hypothesis cannot be tested directly. Therefore, the two hypotheses must be formulated so that they cannot both be correct: if the null hypothesis is false, then the research hypothesis must be correct, provided the experiment is methodologically sound.
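As a concrete illustration (an example added here, not drawn from the original text), suppose a two-group experiment compares a treatment group with a control group on some dependent measure, and let the subscripts T and C denote the population means of those two groups. The pair of hypotheses can then be written

$$H_0:\ \mu_T = \mu_C \qquad\qquad H_1:\ \mu_T \neq \mu_C$$

Formulated this way, the two statements cannot both be correct: rejecting H0 leaves H1 as the only remaining alternative, provided the experiment is methodologically sound.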

To determine whether obtained differences are due to the effect of the independent variable or only to chance, we need to perform statistical tests. The obtained differences between groups are said to be significant if the results are unlikely to occur on the basis of chance alone. If a particular experimental outcome would be a rare event when only chance is operating, it is reasonable to assert that more than chance is operating, to reject the null hypothesis, and to accept the research hypothesis. In this case the investigator asserts that the differences between the treatments were due not solely to chance fluctuations but to chance plus the effect of the independent variable. When the results allow the null hypothesis to be rejected, the obtained differences are said to be statistically significant.
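As a minimal sketch of what such a test looks like in practice, the following Python fragment runs an independent-samples t test on two groups and compares the resulting p value to the conventional .05 cutoff. The data are generated at random purely for illustration, and scipy.stats.ttest_ind is only one of several standard implementations; none of this comes from the original text.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated scores for a control group and a treatment group
# (illustrative data only; a real effect of 6 points is built in).
control = rng.normal(loc=50.0, scale=10.0, size=30)
treatment = rng.normal(loc=56.0, scale=10.0, size=30)

# Independent-samples t test of H0: equal population means.
t_stat, p_value = stats.ttest_ind(treatment, control)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Outcome rare under chance alone: reject H0, accept H1.")
else:
    print("Outcome not rare enough: fail to reject H0.")
```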

Significance Level 

How rare must an event be to be called significant?  That is, how rare does an experimental outcome have to be, assuming that only chance is operating, before the investigator will reject the null hypothesis and accept the research hypothesis? The actual level varies somewhat across research areas, but most investigators will treat an outcome with a probability of .05 or less as a rare event. If the probability of an event is .05, it can be expected to occur five times in every 100 occasions when only chance is operating. The level used to define a rare event is called the significance level (or alpha level).
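The meaning of "five times in every 100" can be checked by simulation. In the sketch below (an illustration added here, not part of the original text), both groups are drawn from the same population, so the null hypothesis is true by construction; across many repeated experiments, the proportion of p values falling below .05 settles near the alpha level.

```python
import numpy as np
from scipy import stats

def rejection_rate(alpha, n_experiments=10_000, n_per_group=30, seed=0):
    """Proportion of experiments rejecting H0 when H0 is true (chance only)."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_experiments):
        a = rng.normal(size=n_per_group)  # both groups drawn from the
        b = rng.normal(size=n_per_group)  # same population, so H0 is true
        _, p = stats.ttest_ind(a, b)
        if p < alpha:
            rejections += 1
    return rejections / n_experiments

print(rejection_rate(alpha=0.05))  # about 0.05: five rejections per 100
```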

If only chance is operating, it is highly likely that "small" mean differences will be obtained on the dependent measure after the introduction of the independent variable, so the investigator will probably be unable to reject the null hypothesis. However, since it is possible to obtain "large" mean differences solely on the basis of chance, an investigator will occasionally reject the null hypothesis when the null hypothesis is in fact true. Yet most of the time when the null hypothesis is rejected it will, in fact, be false; that is, in most cases "large" mean differences between groups are due to chance fluctuations plus the effect of the independent variable.

Statistical tests are performed to assess whether obtained differences between conditions are rare or unlikely if the null hypothesis (H0) is true. The investigator has some freedom to decide how "unlikely" the results have to be before they are labeled significant. If he wants to be very careful and not claim that the manipulation has had an effect unless there is almost no question about it, he can adopt a very stringent significance level such as .001. (This can be represented as p < .001; the p stands for probability.) If the obtained differences are so large that the results are significant at the .001 level, then differences this large would occur less than one time in a thousand if only chance were operating. If the investigator attributes the obtained differences to the effect of the independent variable, there is still some possibility that he is wrong: perhaps only chance is operating, that is, H0 is true. The point is that the possibility of being wrong when rejecting the null hypothesis cannot be eliminated, but its probability can be controlled. The probability of wrongly rejecting a true null hypothesis is equal to the significance level adopted; in the present case it is one in a thousand.
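Adopting the more stringent .001 level shrinks the long-run proportion of false rejections accordingly; this can be seen by rerunning the rejection_rate function from the simulation sketch above with the smaller alpha (again purely illustrative):

```python
print(rejection_rate(alpha=0.001))  # about 0.001: one false rejection per 1,000
```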

Type 1 and Type 2 Errors 

There are four possible situations that need to be considered in order to discuss the two kinds of errors an investigator can make when using statistical decision making. These four situations are represented in the table below.

                         H0 is true               H0 is false (H1 is true)
Reject H0                Type 1 error (alpha)     Correct decision
Fail to reject H0        Correct decision         Type 2 error (beta)

If we assert that the independent variable has an effect (we reject the null hypothesis and accept the research hypothesis) when in fact it does not, we commit an error. This is called a Type 1 error, and the probability of committing it is equal to the alpha level. However, we can make another kind of error. If we assert that the independent variable has no effect (i.e., we fail to reject the null hypothesis) when in fact it does have an effect (i.e., H1 is true), we commit a Type 2 error. The Greek letter β (beta) is used to indicate the probability of making a Type 2 error. The reason investigators are reluctant to conclude, when they fail to reject the null hypothesis (H0), that the independent variable has no effect is that the probability of making a Type 2 error cannot be determined. That probability depends on multiple factors: it increases as alpha is made smaller, decreases as sample size (N) increases, and increases with the complexity of the experimental design. Because the true size of the independent variable's effect is unknown, there is no precise way to determine the exact probability of making a Type 2 error.
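Although beta cannot be known in a real experiment, the way it behaves can be illustrated by simulation. In the sketch below (added here for illustration, not part of the original text), a true effect is built into the data, so H1 is true by construction; beta is estimated as the proportion of experiments that fail to reject H0, and the printed values show it falling as N grows and rising as alpha is made more stringent. The function name and parameter values are all illustrative assumptions.

```python
import numpy as np
from scipy import stats

def type2_rate(effect=0.5, n_per_group=30, alpha=0.05,
               n_experiments=10_000, seed=0):
    """Proportion of experiments failing to reject H0 when H1 is true."""
    rng = np.random.default_rng(seed)
    misses = 0
    for _ in range(n_experiments):
        control = rng.normal(loc=0.0, size=n_per_group)
        treatment = rng.normal(loc=effect, size=n_per_group)  # real effect: H1 true
        _, p = stats.ttest_ind(treatment, control)
        if p >= alpha:  # failing to reject H0 here is a Type 2 error
            misses += 1
    return misses / n_experiments

print(type2_rate(n_per_group=30))               # beta with N = 30 per group
print(type2_rate(n_per_group=120))              # larger N: beta decreases
print(type2_rate(n_per_group=30, alpha=0.001))  # smaller alpha: beta increases
```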