TOPIC REVIEW
Niko4
Posted - 07/06/2017 : 02:19:22 AM

When testing the null hypothesis beta_1 = 0 in linear regression, one estimates the variance of the response variable (around the subpopulation means) by first fitting a line by least squares and then looking at the variance of the residuals around that line.
This, it seems to me, runs counter to the idea that we should base all calculations on the assumption that beta_1 = 0 (the hypothesis we wish to refute with some level of confidence).
Clearly the estimated variance would be larger in that case. How does this make sense?
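To make my question concrete, here is a rough sketch (Python with numpy, made-up data, all variable names my own) of the two variance estimates I have in mind:

import numpy as np

# Made-up illustrative data, purely to show the two estimates in question.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=x.size)
n = x.size

# Ordinary least-squares fit of y = beta_0 + beta_1 * x.
beta_1_hat = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
beta_0_hat = y.mean() - beta_1_hat * x.mean()
fitted = beta_0_hat + beta_1_hat * x

# (1) Variance estimated around the fitted line:
#     residual sum of squares divided by n - 2 degrees of freedom.
var_around_line = np.sum((y - fitted) ** 2) / (n - 2)

# (2) Variance estimated as if beta_1 = 0 were true,
#     i.e. around the overall mean of y.
var_under_null = np.sum((y - y.mean()) ** 2) / (n - 1)

print(var_around_line, var_under_null)  # (2) comes out larger whenever the slope matters

The test of beta_1 = 0 is built from estimate (1), whereas I would have expected it to use (2), since (2) is what the null hypothesis implies.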
A very basic question that I have not found an answer to anywhere online.