Understanding the Ordinary Least Squares (OLS) Summary of a Linear Regression

DS - VRP
Jan 21, 2023


While learning linear regression, I couldn't make sense of the Ordinary Least Squares summary table. Below is that summary, and we will walk through its entries one by one.

Summary of the OLS method

Dependent variable: The dependent variable is the one that depends on other variables. In this regression analysis, Y is our dependent variable because we want to analyze the effect of X on Y.

Model: The method of Ordinary Least Squares (OLS) is the most widely used estimation method due to its efficiency. It gives the best approximation of the true population regression line. The principle of OLS is to minimize the sum of squared errors (∑ei²).
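
The article does not name its dataset, but N = 150 and the coefficients quoted below are consistent with regressing petal width on sepal length in the iris data, so the sketches in this post assume that setup purely for illustration. A minimal statsmodels fit would look like this:

```python
# A minimal sketch, assuming the iris data (petal width regressed on sepal length);
# the article's actual dataset is not named, so treat this purely as an illustration.
import statsmodels.api as sm
from sklearn.datasets import load_iris

iris = load_iris(as_frame=True).frame
X = sm.add_constant(iris["sepal length (cm)"])   # add the intercept column
y = iris["petal width (cm)"]

results = sm.OLS(y, X).fit()   # OLS minimizes the sum of squared residuals
print(results.summary())       # the summary table discussed in this article
```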

Number of observations: The number of observations is the size of our sample, i.e. N = 150.

Degrees of freedom (df) of residuals:

Degrees of freedom is the number of independent observations on the basis of which the sum of squares is calculated. It is calculated as

Degrees of freedom, d.f. = N - K

where N = sample size (number of observations) and K = number of independent variables + 1 (for the intercept).

D.f. of residuals = 150 - (1 + 1) = 148

Df of model:

Df of model = K - 1 = 2 - 1 = 1,

where, as above, K = number of independent variables + 1.
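
Both degrees of freedom are available directly on the fitted results object; a quick check, continuing the hypothetical iris setup from the sketch above:

```python
# Sketch: degrees of freedom as statsmodels reports them (hypothetical iris setup).
import statsmodels.api as sm
from sklearn.datasets import load_iris

iris = load_iris(as_frame=True).frame
results = sm.OLS(iris["petal width (cm)"],
                 sm.add_constant(iris["sepal length (cm)"])).fit()

print(results.nobs)      # N = 150 observations
print(results.df_resid)  # N - K = 150 - 2 = 148
print(results.df_model)  # K - 1 = 1 regressor besides the intercept
```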

Covariance type:

The covariance type is typically nonrobust, meaning the classical covariance estimate is used, with no correction for heteroscedasticity. Covariance shows how two variables move with respect to each other: if the value is greater than 0, both move in the same direction; if it is less than 0, they move in opposite directions. Covariance is different from correlation. Covariance gives only the direction of the relationship, not its strength, whereas correlation is normalized to the range -1 to +1 and does measure strength. If we want a robust covariance estimate, we can pass cov_type="HC0", "HC1", "HC2" or "HC3" when fitting. The statsmodels documentation does not explain these options in much depth; HC stands for heteroscedasticity-consistent, and HC0 is the simplest version.
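
A robust covariance can be requested at fit time; a sketch, again on the hypothetical iris setup:

```python
# Sketch: requesting a heteroscedasticity-consistent (HC) covariance estimate.
import statsmodels.api as sm
from sklearn.datasets import load_iris

iris = load_iris(as_frame=True).frame
X = sm.add_constant(iris["sepal length (cm)"])
y = iris["petal width (cm)"]

robust = sm.OLS(y, X).fit(cov_type="HC1")   # HC0/HC1/HC2/HC3 are all accepted
print(robust.cov_type)                      # 'HC1' instead of the default 'nonrobust'
print(robust.bse)                           # standard errors under the robust covariance
```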

Constant term: The constant term is the intercept of the regression line. From the regression line (eq. 1) the intercept is -3.2002. In regression we omit some independent variables that do not have much impact on the dependent variable; the intercept captures the average effect of these omitted variables and of the noise present in the model.

Coefficient term: The coefficient tells us the change in Y for a unit change in X, i.e. if X rises by 1 unit then Y rises by 0.7529 units. If you are familiar with derivatives, you can think of it as the rate of change of Y with respect to X.

Standard error of parameters: The standard error of a parameter is the standard deviation of its sampling distribution; it shows how much the estimate would vary across repeated samples. The standard errors are calculated as follows:

Standard error of the intercept term (b1):

s.e.(b1) = sqrt( σ² · ∑Xi² / ( N · ∑(Xi - X̄)² ) )

Standard error of the coefficient term (b2):

s.e.(b2) = sqrt( σ² / ∑(Xi - X̄)² )

Here σ² is the estimated variance of the error term, computed from the residual sum of squares (RSS = ∑ei²) as σ² = RSS / (N - 2); its square root is the standard error of the regression (SER).
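
These formulas can be checked by hand against the standard errors statsmodels reports in results.bse; a sketch on the hypothetical iris setup:

```python
# Sketch: textbook standard-error formulas versus results.bse (hypothetical iris setup).
import numpy as np
import statsmodels.api as sm
from sklearn.datasets import load_iris

iris = load_iris(as_frame=True).frame
x = iris["sepal length (cm)"].to_numpy()
y = iris["petal width (cm)"].to_numpy()
results = sm.OLS(y, sm.add_constant(x)).fit()

n = len(x)
rss = np.sum(results.resid ** 2)    # residual sum of squares, sum(e_i^2)
sigma2 = rss / (n - 2)              # estimated error variance
sxx = np.sum((x - x.mean()) ** 2)

se_b1 = np.sqrt(sigma2 * np.sum(x ** 2) / (n * sxx))   # intercept, ~0.257
se_b2 = np.sqrt(sigma2 / sxx)                          # slope, ~0.044
print(se_b1, se_b2)
print(results.bse)                  # should agree
```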

t-statistics:

In theory, we assume that the error term follows a normal distribution, and because of this the parameters b1 and b2 also follow normal distributions, with the variances calculated in the section above.

That is,

  • b1 ∼ N(B1, σ²b1)
  • b2 ∼ N(B2, σ²b2)

Here B1 and B2 are the true (population) values of the parameters, i.e. the means of the sampling distributions of b1 and b2.

t-statistics are calculated under the following hypotheses:

  • H0 : B2 = 0 (variable X has no influence on Y)
  • Ha : B2 ≠ 0 (X has a significant impact on Y)

Calculations for t-statistics:

t = (b1 - B1) / s.e.(b1)

From the summary table, b1 = -3.2002 and s.e.(b1) = 0.257, so

t = (-3.2002 - 0) / 0.257 = -12.458

Similarly, b2 = 0.7529, s.e.(b2) = 0.044

t = (0.7529 - 0) / 0.044 = 17.296
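
In statsmodels the same numbers come straight from the parameter estimates and their standard errors; a sketch on the hypothetical iris setup:

```python
# Sketch: a t statistic is just coefficient / standard error (hypothetical iris setup).
import statsmodels.api as sm
from sklearn.datasets import load_iris

iris = load_iris(as_frame=True).frame
results = sm.OLS(iris["petal width (cm)"],
                 sm.add_constant(iris["sepal length (cm)"])).fit()

print(results.params / results.bse)  # by hand: ~ -12.46 and ~ 17.30
print(results.tvalues)               # what the summary table reports
```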

p-values:

In theory, the p-value is the probability of obtaining a t statistic at least as extreme as the one observed, assuming that the null hypothesis is true. In the summary table, the p-value for both parameters is shown as 0.000. It is not exactly 0, but since the t statistics are very large in magnitude (-12.458 and 17.296), the p-values are approximately 0.

If you are familiar with significance levels, you can see that we can reject the null hypothesis at almost any significance level.
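
The rounded-to-zero p-values can be reproduced from the t statistics and the residual degrees of freedom; a small sketch:

```python
# Sketch: two-sided p-value implied by a t statistic with 148 residual df.
from scipy import stats

t_stat = 17.296                                   # slope t statistic from the summary
df_resid = 148
p_value = 2 * stats.t.sf(abs(t_stat), df_resid)   # P(|T| > |t|) under H0
print(p_value)                                    # extremely small; the summary shows 0.000
```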

Confidence intervals:

There are many approaches to testing a hypothesis, including the p-value approach mentioned above. The confidence interval approach is another. 5% is the standard significance level (α) at which confidence intervals are constructed.

The C.I. for B1 is ( b1 - tα/2 · s.e.(b1) , b1 + tα/2 · s.e.(b1) )

Since α = 5%, b1 = -3.2002, s.e.(b1) = 0.257 and, from the t table, t0.025,148 ≈ 1.976,

putting in the values gives a C.I. for B1 of approximately (-3.708, -2.693). The same can be done for b2.

While calculating p-values we rejected the null hypothesis, and we can see the same from the confidence intervals: since 0 does not lie in either interval, we reject the null hypothesis.
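
The same intervals can be built by hand or read off the fitted model; a sketch on the hypothetical iris setup:

```python
# Sketch: 95% confidence intervals by hand and via conf_int() (hypothetical iris setup).
import statsmodels.api as sm
from scipy import stats
from sklearn.datasets import load_iris

iris = load_iris(as_frame=True).frame
results = sm.OLS(iris["petal width (cm)"],
                 sm.add_constant(iris["sepal length (cm)"])).fit()

t_crit = stats.t.ppf(0.975, results.df_resid)   # ~1.976 for 148 df
print(results.params - t_crit * results.bse)    # lower bounds
print(results.params + t_crit * results.bse)    # upper bounds
print(results.conf_int(alpha=0.05))             # same intervals from statsmodels
```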

R-squared value:

R² is the coefficient of determination; it tells us what percentage of the variation in the dependent variable is explained by the independent variable. Here, 66.9% of the variation in Y can be explained by X. The maximum possible value of R² is 1, and the larger the R² value, the better the regression fits the data.
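
R² is simply one minus the ratio of unexplained to total variation; a sketch on the hypothetical iris setup:

```python
# Sketch: R-squared as 1 - RSS/TSS (hypothetical iris setup).
import numpy as np
import statsmodels.api as sm
from sklearn.datasets import load_iris

iris = load_iris(as_frame=True).frame
y = iris["petal width (cm)"].to_numpy()
results = sm.OLS(y, sm.add_constant(iris["sepal length (cm)"])).fit()

rss = np.sum(results.resid ** 2)      # unexplained variation
tss = np.sum((y - y.mean()) ** 2)     # total variation
print(1 - rss / tss)                  # ~0.669
print(results.rsquared)               # matches the summary table
```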

F-statistic:

The F test tells us the goodness of fit of the regression as a whole. The test is similar to the t-test and the other hypothesis tests above. The F-statistic is calculated as

F = ( R² / (K - 1) ) / ( (1 - R²) / (N - K) )

Inserting the values of R², N and K, F = (0.669 / 1) / (0.331 / 148) ≈ 299.1.

You can calculate the probability of F > 299.1 for 1 and 148 df, which comes out to approximately 0. From this, we again reject the null hypothesis stated above.
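
The F statistic and its p-value are also exposed on the results object; a sketch on the hypothetical iris setup (note that in a simple regression F equals the slope t statistic squared):

```python
# Sketch: the overall F statistic from R-squared, and as statsmodels reports it.
import statsmodels.api as sm
from sklearn.datasets import load_iris

iris = load_iris(as_frame=True).frame
results = sm.OLS(iris["petal width (cm)"],
                 sm.add_constant(iris["sepal length (cm)"])).fit()

r2 = results.rsquared
f_by_hand = (r2 / results.df_model) / ((1 - r2) / results.df_resid)
print(f_by_hand)          # ~299, equal to the slope t statistic squared
print(results.fvalue)     # F statistic from the summary table
print(results.f_pvalue)   # its p-value, approximately 0
```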

Log-Likelihood:

The log-likelihood value is a measure of how well the model fits the given data. It is useful when comparing two or more models: the higher the log-likelihood, the better the fit. It can range from negative infinity to positive infinity.
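
As an illustration of using it for comparison, here is a sketch that fits a second, purely hypothetical model with one extra iris column and compares log-likelihoods:

```python
# Sketch: comparing log-likelihoods of two candidate models (hypothetical iris setup;
# the second regressor is added purely for illustration).
import statsmodels.api as sm
from sklearn.datasets import load_iris

iris = load_iris(as_frame=True).frame
y = iris["petal width (cm)"]

m1 = sm.OLS(y, sm.add_constant(iris[["sepal length (cm)"]])).fit()
m2 = sm.OLS(y, sm.add_constant(iris[["sepal length (cm)", "sepal width (cm)"]])).fit()

print(m1.llf, m2.llf)   # the model with the higher log-likelihood fits the data better
```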

Omnibus and Prob(Omnibus)

The Omnibus test checks the normality of the residuals once the model is fitted. An Omnibus statistic of zero means the skewness and kurtosis of the residuals match those of a normal distribution exactly. Here, Prob(Omnibus) is 0.357, so at the usual significance levels we cannot reject the hypothesis that the residuals are normally distributed. For a model to be robust, besides checking R-squared and other rubrics, the residuals should ideally be normally distributed; in other words, the residuals should not follow any pattern when plotted against the fitted values.
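
The same test can be run directly on the residuals using the helper statsmodels itself uses for the summary; a sketch on the hypothetical iris setup:

```python
# Sketch: the omnibus normality test on the residuals (hypothetical iris setup).
import statsmodels.api as sm
from statsmodels.stats.stattools import omni_normtest
from sklearn.datasets import load_iris

iris = load_iris(as_frame=True).frame
results = sm.OLS(iris["petal width (cm)"],
                 sm.add_constant(iris["sepal length (cm)"])).fit()

stat, p = omni_normtest(results.resid)
print(stat, p)   # the Omnibus and Prob(Omnibus) entries of the summary table
```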

Skew and Kurtosis

The skew value tells us the skewness of the residual distribution; normally distributed residuals have a skew of 0. Kurtosis measures how light-tailed or heavy-tailed the distribution is compared with the normal distribution: high kurtosis indicates a sharp peak with heavy tails, low kurtosis a flatter shape. An excess kurtosis between -2 and +2 is generally taken as acceptable for normality (note that the statsmodels summary reports plain, non-excess kurtosis, for which the normal benchmark is 3).
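
Both quantities can be computed from the residuals with scipy; a sketch on the hypothetical iris setup:

```python
# Sketch: skewness and kurtosis of the residuals (hypothetical iris setup).
import statsmodels.api as sm
from scipy import stats
from sklearn.datasets import load_iris

iris = load_iris(as_frame=True).frame
results = sm.OLS(iris["petal width (cm)"],
                 sm.add_constant(iris["sepal length (cm)"])).fit()

print(stats.skew(results.resid))                    # ~0 for symmetric residuals
print(stats.kurtosis(results.resid, fisher=False))  # non-excess kurtosis; normal benchmark is 3
```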

Durbin-Watson

The Durbin-Watson statistic provides a measure of autocorrelation in the residuals. If the residuals are autocorrelated, the model's standard errors become unreliable, which is not what we want; put simply, one residual should not depend on the previous ones. The statistic ranges from 0 to 4: values near 2 indicate little or no autocorrelation, values toward 0 indicate positive autocorrelation, and values toward 4 indicate negative autocorrelation.
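
statsmodels exposes the same statistic as a helper; a sketch on the hypothetical iris setup:

```python
# Sketch: the Durbin-Watson statistic on the residuals (hypothetical iris setup).
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson
from sklearn.datasets import load_iris

iris = load_iris(as_frame=True).frame
results = sm.OLS(iris["petal width (cm)"],
                 sm.add_constant(iris["sepal length (cm)"])).fit()

print(durbin_watson(results.resid))   # ~2 means little autocorrelation
```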

Jarque-Bera (JB) and Prob(JB)

The Jarque-Bera (JB) test and Prob(JB) are similar to the Omnibus test: they measure the normality of the residuals using their skewness and kurtosis.
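
It, too, can be computed directly from the residuals; a sketch on the hypothetical iris setup:

```python
# Sketch: the Jarque-Bera normality test on the residuals (hypothetical iris setup).
import statsmodels.api as sm
from statsmodels.stats.stattools import jarque_bera
from sklearn.datasets import load_iris

iris = load_iris(as_frame=True).frame
results = sm.OLS(iris["petal width (cm)"],
                 sm.add_constant(iris["sepal length (cm)"])).fit()

jb, jb_pvalue, skew, kurtosis = jarque_bera(results.resid)
print(jb, jb_pvalue)   # the JB and Prob(JB) entries of the summary table
```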

Condition Number

A high condition number indicates possible multicollinearity in the dataset. If only one variable is used as a predictor, a large condition number usually reflects the scale of the data rather than multicollinearity and can often be ignored. We can proceed in a stepwise fashion and check whether multicollinearity appears as additional variables are included.
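
The condition number reported in the summary is essentially that of the design matrix; a sketch on the hypothetical iris setup:

```python
# Sketch: the condition number of the design matrix (hypothetical iris setup).
import numpy as np
import statsmodels.api as sm
from sklearn.datasets import load_iris

iris = load_iris(as_frame=True).frame
X = sm.add_constant(iris["sepal length (cm)"])
results = sm.OLS(iris["petal width (cm)"], X).fit()

print(np.linalg.cond(results.model.exog))   # condition number of the design matrix
print(results.condition_number)             # the value quoted in the summary notes
```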

Conclusion

We have discussed all the summary parameters from the statsmodels output. This should be useful for readers who want to check all the rubrics for a robust model. Most of the time we look only at the R-squared value to make sure the model explains most of the variability, but we have seen that there is much more to it than that.

Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
[2] The condition number is large, 1.51e+04. This might indicate that there are strong multicollinearity or other numerical problems.

If you would like to connect, find me on LinkedIn.
