Concept of Linear Regression

SAS Analytics Linear Regression
7 minutes

Transcript

In this video we'll be starting with the concept of linear regression. Now, what is linear regression? In linear regression you have a set of independent variables and one dependent variable. There are two types of linear regression: simple linear regression and multiple linear regression. You have to remember that in the classical linear regression model, the dependent variable and the independent variables are linearly related to each other. In simple linear regression I have one dependent variable and one independent variable, and in multiple linear regression I have one dependent variable and multiple independent variables. These independent variables are also called predictor variables, or predictors. Using the independent variables we will be estimating the value of the dependent variable.
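To make that distinction concrete, here is a minimal SAS sketch; the video does not show code at this point, and the dataset work.sales and the variables y, x1, x2, x3 are hypothetical names used only for illustration. The only difference between the two models is how many predictors appear on the MODEL statement.

/* Simple linear regression: one predictor */
proc reg data=work.sales;
   model y = x1;
run;
quit;

/* Multiple linear regression: several predictors */
proc reg data=work.sales;
   model y = x1 x2 x3;
run;
quit;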

So, in simple linear regression there is one outcome variable with one independent variable, and in multiple linear regression there is one outcome variable with multiple independent variables. In linear regression analysis, we fit a predictive model to our data and use that model to predict values of the dependent variable from one or more independent variables. Here the dependent variable is linearly related to all the independent variables. As I told you, simple linear regression seeks to predict an outcome variable from a single predictor or independent variable, whereas multiple linear regression seeks to predict an outcome variable from several predictors or independent variables. Since the dependent and independent variables are linearly related to each other, the line of best fit for this model is a straight line. So the model that we fit here is a linear model; a linear model just means a model based on a straight line, that is, the line of best fit.

So, I have represented the concept of the classical linear regression model graphically. You can see that the x axis consists of the independent variables and the y axis consists of the dependent variable, and this is the line of best fit. Now, what is the standard form of the linear regression equation? The standard form of the linear regression equation is y = a + b1x1 + b2x2 + b3x3 + ... + bnxn + e, where y is the value of the dependent variable for the i-th observation, a is my intercept or constant, b1, b2, b3, ..., bn are the regression coefficients or slopes of my linear regression equation, x1, x2, x3, ..., xn are the values of my independent variables, and e is the error term. The error term means the part of the dependent variable that remains unexplained: the part from a through bnxn is my explained variation, and e is my unexplained variation.
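Here is a short, self-contained SAS sketch of how such an equation is estimated; the data values and the names work.demo, y, x1, x2 are invented for illustration and are not from this course. The Intercept row of PROC REG's Parameter Estimates table corresponds to a, the x1 and x2 rows correspond to b1 and b2, and the residuals saved by the OUTPUT statement play the role of the error term e.

data work.demo;                          /* made-up values, purely for illustration */
   input y x1 x2;
   datalines;
10 1 2
12 2 1
15 3 3
19 4 2
22 5 4
25 6 3
;
run;

proc reg data=work.demo;
   model y = x1 x2;                      /* fits y = a + b1*x1 + b2*x2 + e */
   output out=work.pred p=yhat r=resid;  /* predicted values and residuals (the estimated e's) */
run;
quit;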

Now, we need to know the features of a straight line. This straight line is the line of best fit for my classical linear regression model. There are two features: the first is the slope, or gradient, of the line. The next is the point where the straight line cuts the vertical axis, that is, the y axis, and that we call the intercept of the line. Now, what is the method of least squares? The method of least squares is a way of finding the line that best fits the data out of all the possible lines that could be drawn. The line of best fit is the one which results in the least amount of difference between the observed data points and the line.
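In symbols, "least amount of difference" means the intercept and slopes are chosen so that the sum of squared vertical differences (residuals) is as small as possible. This is the standard textbook way of writing the criterion, not something written out in the video:

$$\min_{a,\,b_1,\dots,b_n}\ \sum_{i=1}^{N}\Big(y_i - \big(a + b_1 x_{1i} + \cdots + b_n x_{ni}\big)\Big)^2$$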

The figure shows that when any line is fitted to a set of data, there will be small differences between the line and the actual data. We are interested in the vertical differences between the line and the actual data, because we are using the line to predict the values of y from the values of x. Some of those differences are positive, that is, the data points lie above the line, indicating that the model underestimates their value, and some are negative, that is, the data points lie below the line, indicating that the model overestimates their value. So the ones above the line are the positive differences, and the ones below the line are the negative differences. Now, what is the goodness of fit for the linear regression model? You need to remember that the goodness of fit of the classical linear regression model is measured by R square, and R square is the ratio of explained variation to total variation. Technically, if the value of R square increases, the explained variation of my model increases. But if the number of independent variables of my classical linear regression model increases, the value of my R square will automatically increase, and that will lead to inefficiency of the model if the independent variables are redundant in nature, because R square will not consider the redundancy of the model, or which of the independent variables are redundant. Therefore, R square is not taken as a good measure of goodness of fit for the model.
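For reference, the ratio described here is usually written as follows (standard textbook notation, not shown on the slide):

$$R^2 = \frac{\text{explained variation}}{\text{total variation}} = \frac{SSR}{SST} = 1 - \frac{SSE}{SST}$$

where SSR is the regression (explained) sum of squares, SSE is the error (unexplained) sum of squares, and SST = SSR + SSE is the total sum of squares.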

So, R square is the ratio of explained variation to the total variation. The problem with R square is that if the number of independent variables in the linear regression model is increased, the value of R square will increase gradually even if redundant variables are added, and these redundant variables do not increase the efficiency of the model. Therefore, R square is not a good measure of goodness of fit for the model. In this case we move to the concept of adjusted R square. Adjusted R square is taken as a more accurate measure of the goodness of fit of the model, because adjusted R square is adjusted for the degrees of freedom, which effectively rewards only the important and significant variables in the model, and this helps to increase the model efficiency. Now, what is the test of significance of the estimated parameters? There are two types of tests which are done to check the significance of the estimated parameters.
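The usual formula behind this adjustment (again, standard notation rather than something derived in the video) is:

$$R^2_{\text{adj}} = 1 - (1 - R^2)\,\frac{N - 1}{N - k - 1}$$

where N is the number of observations and k is the number of independent variables. Because the penalty grows with k, adding a redundant predictor that barely raises R square can actually lower adjusted R square.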

One is the global test and the other is the local test. The global test tests for overall significance, and the local test tests for individual significance. In the global test, my H naught is that all the parameters are equal to zero simultaneously, which means they are insignificant; equal to zero means they are insignificant. H one is that at least one parameter is nonzero, so the model is significant. This test is conducted by using an F statistic. Next comes the concept of the local test, which deals with the individual significance of a parameter. We use a t test for the individual significance of the parameter, where H naught is that the parameter value is zero, that is, it is insignificant, and H one is that the value is nonzero, that is, it is significant. When the parameter is equal to zero, we call it insignificant; when it is nonzero, we call it significant. This test is conducted by using a t statistic.
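If you run a model like the hypothetical PROC REG calls sketched earlier, both tests are printed automatically: the global F statistic and its p-value appear in the Analysis of Variance table (the F Value and Pr > F columns), and the local t statistics appear in the Parameter Estimates table (the t Value and Pr > |t| columns, one row per parameter). In the usual notation, the hypotheses are:

Global test:   H0: b1 = b2 = ... = bn = 0   versus   H1: at least one bj is nonzero
Local test (for each j):   H0: bj = 0   versus   H1: bj is nonzero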

So that is all we will be doing in this video. Let's end this video over here. So goodbye, have a nice day, and see you all in the next video.
