Regression 9 - Maximum Likelihood Estimation (MLE) Approach for Regression - Derivation

 

MLE Approach:

When performing linear regression using MLE under the assumption that the residuals (errors) are independently and identically distributed (i.i.d.) with a normal distribution, we estimate the regression coefficients that maximize the likelihood of observing the given data.

In this post we see how to fit a regression model to a dataset using MLE. The mathematical assumptions and the derivation are given below.
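As a sketch of the derivation under the stated assumptions: the model is $y_i = \mathbf{x}_i^\top \boldsymbol{\beta} + \varepsilon_i$ with $\varepsilon_i \sim \mathcal{N}(0, \sigma^2)$ i.i.d., so the likelihood of the observed data is

$$L(\boldsymbol{\beta}, \sigma^2) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left(-\frac{(y_i - \mathbf{x}_i^\top \boldsymbol{\beta})^2}{2\sigma^2}\right)$$

and the log-likelihood is

$$\ell(\boldsymbol{\beta}, \sigma^2) = -\frac{n}{2}\log(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}(y_i - \mathbf{x}_i^\top \boldsymbol{\beta})^2.$$

Only the second term depends on $\boldsymbol{\beta}$, so maximizing $\ell$ over $\boldsymbol{\beta}$ is equivalent to minimizing the residual sum of squares. Setting the gradient to zero gives the normal equations $\mathbf{X}^\top\mathbf{X}\boldsymbol{\beta} = \mathbf{X}^\top\mathbf{y}$, where $\mathbf{X}$ is the $n \times p$ design matrix, with solution

$$\hat{\boldsymbol{\beta}} = (\mathbf{X}^\top\mathbf{X})^{-1}\mathbf{X}^\top\mathbf{y}, \qquad \hat{\sigma}^2_{\text{MLE}} = \frac{1}{n}\sum_{i=1}^{n}(y_i - \mathbf{x}_i^\top\hat{\boldsymbol{\beta}})^2.$$

Note the $1/n$ divisor in $\hat{\sigma}^2_{\text{MLE}}$; the unbiased estimator used in OLS divides by $n - p$ instead.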

 

The MLE approach gives you point estimates for the coefficients (the peak of the likelihood, which under normal errors coincides with its mean), and you can also compute the variance-covariance matrix of these estimates. Its diagonal entries (the variances) measure the uncertainty, or spread, of each parameter estimate, and its off-diagonal entries capture how the estimates co-vary.
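To make this concrete, here is a minimal NumPy sketch on synthetic data (the dataset, seed, and true coefficients are hypothetical, purely for illustration): it computes the MLE point estimates via the pseudo-inverse and then the variance-covariance matrix $\hat{\sigma}^2(\mathbf{X}^\top\mathbf{X})^{-1}$ of those estimates.

```python
import numpy as np

# Hypothetical synthetic data: y = 2 + 3x + Gaussian noise
rng = np.random.default_rng(0)
n = 100
x = rng.uniform(0, 10, n)
X = np.column_stack([np.ones(n), x])        # design matrix with intercept column
y = 2.0 + 3.0 * x + rng.normal(0, 1.5, n)

# MLE point estimates of the coefficients (same as OLS under i.i.d. normal errors)
beta_hat = np.linalg.pinv(X) @ y

# MLE of the error variance (1/n divisor; OLS uses 1/(n - p) for unbiasedness)
residuals = y - X @ beta_hat
sigma2_mle = residuals @ residuals / n

# Variance-covariance matrix of the coefficient estimates: sigma^2 (X^T X)^{-1}
cov_beta = sigma2_mle * np.linalg.inv(X.T @ X)

print("beta_hat:", beta_hat)
print("std errors:", np.sqrt(np.diag(cov_beta)))
```

The square roots of the diagonal of `cov_beta` are the standard errors you would report alongside the point estimates.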

For simple linear regression, the MLE estimates of the coefficients are the same as the Ordinary Least Squares (OLS) estimates, which in turn match what the pseudo-inverse matrix method gives, provided the errors are i.i.d. normal.
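As a quick check of this equivalence, the following sketch (again with hypothetical synthetic data) maximizes the normal log-likelihood numerically with scipy.optimize.minimize and compares the result against the pseudo-inverse solution; the two should agree up to optimizer tolerance.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical synthetic data, as in the previous sketch
rng = np.random.default_rng(1)
n = 100
x = rng.uniform(0, 10, n)
X = np.column_stack([np.ones(n), x])
y = 2.0 + 3.0 * x + rng.normal(0, 1.5, n)

# Negative log-likelihood under i.i.d. normal errors;
# params = [beta_0, beta_1, log_sigma] (log-parameterised so sigma stays positive)
def neg_log_likelihood(params):
    beta, log_sigma = params[:-1], params[-1]
    sigma2 = np.exp(2 * log_sigma)
    r = y - X @ beta
    return 0.5 * n * np.log(2 * np.pi * sigma2) + (r @ r) / (2 * sigma2)

result = minimize(neg_log_likelihood, x0=np.zeros(3), method="BFGS")
beta_mle = result.x[:-1]

# Pseudo-inverse (equivalently OLS) solution
beta_pinv = np.linalg.pinv(X) @ y

print("MLE  :", beta_mle)
print("pinv :", beta_pinv)   # the two agree up to optimizer tolerance
```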

Good Read:

https://people.missouristate.edu/songfengzheng/Teaching/MTH541/Lecture%20notes/MLE.pdf
