Contents
In what sense are OLS estimates the best among the class of linear unbiased estimators?
There is of course one standard result: OLS itself is the best unbiased estimator if, in addition to the Gauss-Markov assumptions, we also assume that the errors are normally distributed. For some other particular distribution of the errors, one could instead compute the corresponding maximum-likelihood estimator, which in general differs from OLS.
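As a concrete illustration of that last point: if the errors follow a Laplace distribution rather than a normal one, the maximum-likelihood estimator minimizes the sum of absolute residuals (least absolute deviations, LAD) instead of the sum of squared residuals. The sketch below, with an assumed data-generating process and an iteratively reweighted least squares approximation to LAD, compares the two:

```python
import numpy as np

# Hypothetical setup: simulate a regression with Laplace errors, then fit
# OLS (minimizes squared residuals) and an approximate LAD/MLE fit
# (minimizes absolute residuals) via iteratively reweighted least squares.
rng = np.random.default_rng(3)
n = 200
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.laplace(scale=1.0, size=n)

X = np.column_stack([np.ones(n), x])

# OLS has a closed form.
ols_beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# LAD approximated by IRLS: weight each observation by 1/|residual|,
# so large residuals are penalized linearly rather than quadratically.
lad_beta = ols_beta.copy()
for _ in range(50):
    r = y - X @ lad_beta
    w = 1.0 / np.maximum(np.abs(r), 1e-6)   # guard against division by zero
    XtW = X.T * w                            # X^T W
    lad_beta = np.linalg.solve(XtW @ X, XtW @ y)

print(ols_beta[1], lad_beta[1])  # both slopes should be near the true 2.0
```

Both estimators are unbiased here; the point is only that the likelihood principle picks a different loss function depending on the assumed error distribution.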
Are OLS estimators unbiased?
OLS estimators are BLUE (i.e. they are linear, unbiased, and have the least variance among the class of all linear unbiased estimators). So, whenever you plan to fit a linear regression model by OLS, always check the OLS assumptions first.
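The unbiasedness part of BLUE is easy to see in a Monte Carlo experiment: under the Gauss-Markov assumptions, the OLS slope estimates scatter around the true coefficient and average out to it. A minimal sketch, with an assumed true slope of 2.0:

```python
import numpy as np

# Monte Carlo sketch (hypothetical parameters): repeatedly draw samples
# satisfying the Gauss-Markov assumptions and record the OLS slope.
rng = np.random.default_rng(0)
true_beta = 2.0
n, reps = 50, 2000

estimates = []
for _ in range(reps):
    x = rng.normal(size=n)
    y = 1.0 + true_beta * x + rng.normal(size=n)  # iid, homoscedastic errors
    # OLS slope in simple regression: cov(x, y) / var(x)
    b = np.cov(x, y, bias=True)[0, 1] / np.var(x)
    estimates.append(b)

print(np.mean(estimates))  # averages out close to true_beta
```

Individual estimates vary from sample to sample; unbiasedness is a statement about their expectation, which the average over many replications approximates.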
Why are consistency and unbiasedness not the same?
As essentially discussed in the comments, unbiasedness is a finite-sample property, and if it held it would be expressed as E(β̂) = β. The OP shows that even though OLS in this context is biased, it is still consistent. There is no contradiction here: @Alecos nicely explains why a correct plim and unbiasedness are not the same.
What is the assumption of no autocorrelation in OLS?
If this variance is not constant (i.e. it depends on the X's), then the linear regression model has heteroscedastic errors, and the usual standard errors and inference are likely to be incorrect. The separate OLS assumption of no autocorrelation says that the error terms of different observations should not be correlated with each other.
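A quick way to check the no-autocorrelation assumption is to look at the correlation between consecutive residuals, or equivalently the Durbin-Watson statistic (roughly 2(1 − ρ), so values near 2 indicate no lag-1 autocorrelation). A sketch with an assumed data-generating process where the assumption is deliberately violated:

```python
import numpy as np

# Hypothetical setup: build AR(1) errors so consecutive error terms are
# correlated, fit OLS, and inspect the residuals for autocorrelation.
rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)

e = np.zeros(n)
for t in range(1, n):
    e[t] = 0.7 * e[t - 1] + rng.normal()  # lag-1 correlated errors
y = 1.0 + 2.0 * x + e

# Fit OLS and compute residuals.
X = np.column_stack([np.ones(n), x])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat

lag1_corr = np.corrcoef(resid[:-1], resid[1:])[0, 1]
dw = np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)  # Durbin-Watson
print(lag1_corr, dw)  # lag1_corr well above 0, dw well below 2
```

With uncorrelated errors, `lag1_corr` would hover near 0 and `dw` near 2; here both flag the violation.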
What happens when the assumption of OLS is violated?
If this OLS assumption is violated, it will be difficult to trust the standard errors of the OLS estimates; hence, the confidence intervals will be either too narrow or too wide. Violation of this assumption also tends to give too much weight to some portion (subsection) of the data.
Why is the OLS estimator of the AR(1) coefficient biased?
I am trying to understand why OLS gives a biased estimator of an AR(1) process. Consider y_t = α + β·y_{t−1} + ε_t, with ε_t ~ iid N(0, 1). In this model, strict exogeneity is violated: y_t and ε_t are correlated, even though the regressor y_{t−1} and ε_t are uncorrelated.
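This is exactly the bias-versus-consistency distinction from earlier: OLS on the AR(1) model is biased downward in small samples but consistent, so the bias shrinks as the sample grows. A simulation sketch with assumed parameters (α = 0, β = 0.5):

```python
import numpy as np

# Monte Carlo sketch (hypothetical parameters): the average OLS estimate
# of the AR(1) coefficient falls below the true value in small samples,
# and the gap shrinks as the sample size grows (consistency).
rng = np.random.default_rng(2)
alpha, true_beta = 0.0, 0.5

def mean_ols_beta(n, reps=2000):
    ests = []
    for _ in range(reps):
        eps = rng.normal(size=n)
        y = np.zeros(n)
        for t in range(1, n):
            y[t] = alpha + true_beta * y[t - 1] + eps[t]
        lag, cur = y[:-1], y[1:]  # regress y_t on y_{t-1}
        ests.append(np.cov(lag, cur, bias=True)[0, 1] / np.var(lag))
    return float(np.mean(ests))

small, large = mean_ols_beta(20), mean_ols_beta(200)
print(small, large)  # both below 0.5; the large-sample mean is much closer
```

The downward small-sample bias is a well-known feature of dynamic models; no Gauss-Markov result applies here because the regressor is a lagged dependent variable.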