In statistics, the linear model can be expressed by saying
- Y = Xβ + ε,
where Y is a vector of observed responses, X is a known design matrix, β is a vector of unknown parameters, and ε is a vector of random errors.
If, rather than taking the variance of ε to be σ²I, where I is the n×n identity matrix, one assumes the variance is σ²M, where M is a known matrix other than the identity matrix, then one estimates β by the method of "generalized least squares". Instead of minimizing the sum of squares of the residuals, one minimizes a different quadratic form in the residuals, namely the quadratic form given by the matrix M⁻¹. If all of the off-diagonal entries of M are 0, then one normally estimates β by the method of "weighted least squares", with weights proportional to the reciprocals of the diagonal entries of M.
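A minimal numerical sketch of the generalized least squares estimator β̂ = (XᵀM⁻¹X)⁻¹XᵀM⁻¹Y, assuming NumPy; the simulated data, the diagonal form of M, and all variable names are illustrative assumptions rather than anything from the text:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50

# Illustrative design matrix (intercept plus two predictors) and true beta.
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta_true = np.array([2.0, -1.0, 0.5])

# Known covariance structure: Var(eps) = sigma^2 * M, here with M diagonal.
m_diag = rng.uniform(0.5, 3.0, size=n)
M = np.diag(m_diag)
y = X @ beta_true + rng.normal(scale=np.sqrt(m_diag))

# GLS estimate: minimize the quadratic form (y - X b)^T M^{-1} (y - X b).
Minv = np.linalg.inv(M)
beta_gls = np.linalg.solve(X.T @ Minv @ X, X.T @ Minv @ y)

# Because M is diagonal here, this coincides with weighted least squares
# using weights proportional to 1 / M_ii.
w = 1.0 / m_diag
beta_wls = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))

print(beta_gls)
print(beta_wls)  # identical to beta_gls up to rounding
```

The diagonal case is chosen only to show the GLS/WLS equivalence mentioned above; for a general known M the `beta_gls` line is unchanged.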
Ordinary linear regression is a very closely related topic.
"Generalized linear models", rather than saying
- E(Y)=Xβ,
- f(E(Y))=Xβ,
- Yi has a Poisson distribution with expected value eγ+δxi.
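A minimal sketch of fitting this Poisson model by maximum likelihood via Newton's method (equivalently, iteratively reweighted least squares), assuming NumPy; the simulated data, the true values of γ and δ, and all variable names are illustrative assumptions, not part of the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: Y_i ~ Poisson(e^(gamma + delta * x_i)).
x = rng.uniform(0.0, 2.0, size=200)
gamma_true, delta_true = 0.5, 1.2
y = rng.poisson(np.exp(gamma_true + delta_true * x))

X = np.column_stack([np.ones_like(x), x])  # design matrix: intercept, x
beta = np.zeros(2)                         # starting values for (gamma, delta)

# Newton's method on the Poisson log-likelihood with log link:
# gradient = X^T (y - mu), Hessian = -X^T diag(mu) X, where mu = e^(X beta).
for _ in range(25):
    mu = np.exp(X @ beta)
    step = np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (y - mu))
    beta += step
    if np.max(np.abs(step)) < 1e-10:
        break

print("estimated (gamma, delta):", beta)  # close to (0.5, 1.2)
```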