When testing the null hypothesis beta_1 = 0 in linear regression, the variance of the response (around the subpopulation means) is estimated by first fitting a line by least squares and then computing the variance of the residuals around that fitted line.
This seems to run counter to the idea that all calculations should be carried out under the assumption that beta_1 = 0 (the hypothesis we wish to refute at some level of confidence).
Clearly the estimated variance would be larger if we instead fit under beta_1 = 0, i.e. measured variance around the grand mean. How does this make sense?
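To make the question concrete, here is a small simulation with made-up data (both the data and the code are only for illustration) comparing the two variance estimates I have in mind:

```python
import numpy as np

rng = np.random.default_rng(0)  # hypothetical data, just for illustration
n = 50
x = rng.uniform(0, 10, n)
y = 2.0 + 0.8 * x + rng.normal(0, 1.0, n)  # true beta_1 = 0.8

# Least-squares fit of y on x
b1 = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
b0 = y.mean() - b1 * x.mean()
resid = y - (b0 + b1 * x)

# Variance around the fitted line (what the t-test for beta_1 uses)
s2_fit = np.sum(resid**2) / (n - 2)

# Variance around the grand mean (what fitting under beta_1 = 0 would give)
s2_null = np.sum((y - y.mean())**2) / (n - 1)

print(s2_fit, s2_null)  # s2_null exceeds s2_fit whenever b1 != 0
```

So `s2_null` is the estimate I would have expected the test to use, yet the standard t-test uses `s2_fit`.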
This is a very basic question, but I have not found an answer to it anywhere online.