Python implementation of Polynomial Regression

9 minutes

Transcript

Hello everyone, welcome to the course on machine learning with Python. In this video, we shall learn how to implement polynomial regression in Python. In the last video, we saw how to use the scikit-learn library to solve the linear regression problem; we shall be using scikit-learn here as well to solve the polynomial regression problem. So first, we'll import the dataset. We are already familiar with this dataset: the height-weight-gender data.

It is under the data folder, so we go ahead and run this particular cell. Our data is stored inside a data frame, which we have obtained by reading the CSV file using the function read_csv, defined under the pandas library. Here we have the first five rows of the data. Now we want to predict weight based on the person's height, so y is nothing but the weight values and x is nothing but the height values.
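The loading step described above can be sketched as follows. The actual file path and column names in the course's data folder are not visible in this transcript, so a small inline CSV stands in for the real file and the column names Gender/Height/Weight are assumptions:

```python
import io
import pandas as pd

# In the lesson the data is read from the course's "data" folder, e.g.
#   df = pd.read_csv("data/height_weight_gender.csv")   # hypothetical path
# A tiny inline CSV stands in here so the snippet runs anywhere.
csv_text = """Gender,Height,Weight
Male,174.0,71.5
Female,161.2,52.3
Male,180.5,80.1
Female,158.9,49.8
Male,169.7,66.0
"""
df = pd.read_csv(io.StringIO(csv_text))
print(df.head())            # first five rows, as shown in the lesson

x = df["Height"].values     # predictor: height
y = df["Weight"].values     # target: weight
```
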

So we shall use the scikit-learn library, as I have already mentioned. From sklearn.linear_model we will import LinearRegression — note that the L and the R are capital — and we'll also import NumPy. Our first model will be the linear model of the form ŷ = θ̂₀ + θ̂₁x. Here we have only two model parameters, because it's bivariate data, so the parameters we want to estimate are θ̂₀ and θ̂₁. We define our model as an instance of the LinearRegression class and we fit the model. Note that as it is bivariate data, x has only one column.

So that is why, in order to make sure that x is a column vector, we have reshaped it with (-1, 1), which means it contains only one column. I have then fitted x and y to our linear regression model. After the model is fitted, we can find out its intercept and coefficients. The intercept is θ₀, and I have denoted the parameters with the suffix _lin to indicate that they belong to the linear model. The intercept of the linear model is -33.756 and the θ₁ of the linear model is 0.5017.
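The fit-and-inspect step above can be sketched like this. Synthetic data stands in for the course CSV (an assumption), so the printed numbers will differ from the lesson's -33.756 and 0.5017:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic stand-in for the height/weight data (assumption: in the
# lesson, x and y come from the CSV, not from this generator).
rng = np.random.default_rng(0)
x = rng.uniform(140, 200, size=544)             # heights
y = -33.0 + 0.5 * x + rng.normal(0, 5, 544)     # noisy weights

model = LinearRegression()
# reshape(-1, 1) turns the 1-D array into a column vector, as in the lesson
model.fit(x.reshape(-1, 1), y)

theta0_lin = model.intercept_    # estimated theta_0
theta1_lin = model.coef_[0]      # estimated theta_1
print(theta0_lin, theta1_lin)
```
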

Now we can plot the regression line of the linear model along with the data. For that we have created some synthetic data, x_data_linear and y_data_linear. How have we obtained y_data_linear? It is a linear function: y_data_linear equals θ₀ of the linear model plus θ₁ of the linear model multiplied with x_data_linear, and x_data_linear is 100 data points between the minimum and the maximum values of x. Now we plot x versus y in a scatter plot, and we plot x_data_linear against y_data_linear as a red line. As you can see, this is our regression line, and these dots are basically our original data points. This is the best-fit straight line. Next, we shall predict with the linear model on the entire data set.
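The plotting step can be sketched as below. The fitted parameters are hard-coded to the lesson's reported values and the data is synthetic, both assumptions; a non-interactive backend is used so the snippet runs headless:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")            # non-interactive backend (runs without a display)
import matplotlib.pyplot as plt

# Stand-in data and the lesson's reported linear-model parameters
theta0_lin, theta1_lin = -33.756, 0.5017
rng = np.random.default_rng(1)
x = rng.uniform(140, 200, 544)
y = theta0_lin + theta1_lin * x + rng.normal(0, 5, 544)

# 100 evenly spaced points between min(x) and max(x), as in the lesson
x_data_linear = np.linspace(x.min(), x.max(), 100)
y_data_linear = theta0_lin + theta1_lin * x_data_linear

plt.scatter(x, y, label="data")                                  # original points
plt.plot(x_data_linear, y_data_linear, color="red", label="linear fit")
plt.xlabel("Height")
plt.ylabel("Weight")
plt.legend(loc="best")
plt.savefig("linear_fit.png")
```
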

So, in order to predict on the entire data set, y_pred_linear will be equal to model.predict of x.reshape(-1, 1), reshaping again to make sure that the x I am passing is a column vector. Now we shall evaluate the performance of the linear model. As I have already mentioned, to measure the performance we can use the R² score as well as the mean squared error. The R² score should be higher and the mean squared error should be lower in order to have a good model. So I am importing r2_score and mean_squared_error, which are already defined under sklearn.metrics. The mean squared error of the linear model is the mean squared error of y and y_pred_linear.

And the R² score for the linear model is r2_score, where y_true equals the original y values and y_pred equals y_pred_linear. Now we run these cells to see the mean squared error and the R² score for the linear model. For the linear model, the mean squared error is 24.8377 and the R² score is 0.885. It's not bad, but let's see if we can improve on this using the quadratic model. So our model two is the quadratic model.
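The evaluation step uses these two metric functions from sklearn.metrics. A tiny toy example (the true and predicted values here are made up for illustration, not the lesson's data) shows the call pattern:

```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

# Toy values for illustration; in the lesson y_pred_linear comes from
# model.predict(x.reshape(-1, 1)) on the real data.
y_true = np.array([50.0, 60.0, 70.0, 80.0])
y_pred_linear = np.array([52.0, 58.0, 71.0, 79.0])

mse_lin = mean_squared_error(y_true, y_pred_linear)   # lower is better
r2_lin = r2_score(y_true, y_pred_linear)              # higher is better
print(mse_lin, r2_lin)
```
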

So the quadratic model is ŷ = θ̂₀ + θ̂₁x + θ̂₂x². Now, as I have mentioned, I will replace x with x₁ and x² with x₂, so my final model will be ŷ = θ̂₀ + θ̂₁x₁ + θ̂₂x₂. Note that x₁ is nothing but x and x₂ is nothing but x². Now we have to prepare the data. As I have mentioned, x₁ equals x and x₂ is nothing but the square of the data x, and for that I have used the np.square function. Next I will combine these two variables x₁ and x₂ into a matrix, column-wise. So I am doing a vertical stack, np.vstack of (x1, x2), and I am taking the transpose of it, which means x₁ and x₂ will be arranged column by column.

So, what is the shape now? It is (544, 2), which means it has 544 rows and two columns. Again I am fitting the values x and y to the linear regression model. Now, what are the intercept and the coefficients of the quadratic model? θ₀ of the quadratic model stores the intercept. θ₁ of the quadratic model is model.coef_[0] and θ₂ of the quadratic model is model.coef_[1]. Why? Because model.coef_ is an array: the first element is θ₁, the second element is θ₂, and so on. As the model has only two coefficients in this case, we have θ₁ and θ₂. You can go ahead and print model.coef_ in order to see the coefficients in list format.
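The feature-construction and fitting steps just described can be sketched as follows (again on synthetic stand-in data with 544 points, matching the lesson's row count; the generating coefficients are assumptions):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic stand-in data with a genuine quadratic trend (assumption)
rng = np.random.default_rng(3)
x = rng.uniform(140, 200, 544)
y = 5.0 - 0.2 * x + 0.002 * x**2 + rng.normal(0, 2, 544)

x1 = x                      # x1 = x
x2 = np.square(x)           # x2 = x^2
X = np.vstack((x1, x2)).T   # stack as rows, then transpose -> shape (544, 2)
print(X.shape)

model = LinearRegression()
model.fit(X, y)                # same LinearRegression class, now two features

theta0_quad = model.intercept_   # intercept of the quadratic model
theta1_quad = model.coef_[0]     # coefficient of x
theta2_quad = model.coef_[1]     # coefficient of x^2
```
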

So you can see this is basically nothing but an array where the first element is θ₁ and the second element is θ₂. Now we can plot the regression curve of the quadratic model. Again we are obtaining x₁ data and x₂ data; note that x₁ data is 100 linearly spaced points between the minimum and maximum values of x, x₂ data is nothing but the square of the x₁ data, and the y data is nothing but θ₀ of the quadratic model plus θ₁ of the quadratic model multiplied with the x₁ data plus θ₂ of the quadratic model multiplied with the x₂ data. These are the synthetic data points. Now we plot x versus y in a scatter plot — that is the original data — then we plot the x₁ data against the quadratic y data as a red line, which is nothing but the quadratic regression curve. And again, we have plotted the linear line as well.

We use the color black for it and label it as the linear regression line, and I have set the legend location to "best", which means matplotlib will place the legend at the best location inside the plot. See how the plot looks: the black line is basically the linear model and the red curve is basically the quadratic model, and we can see that the quadratic model fits the data better than the linear model. Now we shall predict with the quadratic model on the entire data set. y_pred_quad stores the values of all the predictions for the quadratic model. If we go ahead and run this particular cell, we can see that it has generated a vector of dimension 544. The mean squared error of this quadratic model can then be obtained, where y_true is nothing but y, the original values, and y_pred is nothing but y_pred_quad. Similarly, the R² score for the quadratic model is r2_score where y_true equals y and y_pred equals y_pred_quad. So for the quadratic model we print the mean squared error and the R² score: the mean squared error is 15.45.

And the R² score is 0.9285. In order to compare with the linear model, we again print the mean squared error and the R² score for the linear model here, and we can see that for the linear model the mean squared error is 24.8377 and the R² score is 0.885. So in all respects, we can say that the quadratic polynomial is a better fit for the given data set than the simple linear model. That is how we do polynomial regression. One thing I have not done over here is feature scaling. It is recommended that you do feature scaling before applying any machine learning model. If you do that, you will get the same kind of relative model performance: the quadratic model will still perform better than the linear model.
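The feature-scaling step mentioned above is not shown in the lesson; one common way to do it, sketched here as an assumption rather than the instructor's code, is scikit-learn's StandardScaler. For ordinary least squares, scaling changes the coefficient values but not the quality of the fit:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

# Synthetic stand-in data with a quadratic trend (assumption)
rng = np.random.default_rng(4)
x = rng.uniform(140, 200, 544)
y = 5.0 - 0.2 * x + 0.002 * x**2 + rng.normal(0, 2, 544)

X = np.vstack((x, np.square(x))).T    # quadratic features, as before

# Standardise each column to zero mean and unit variance before fitting
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)

model = LinearRegression().fit(X_scaled, y)
r2 = r2_score(y, model.predict(X_scaled))
print(r2)
```

Scaling matters more for gradient-based or regularised models; for plain LinearRegression it mainly makes the coefficients comparable in magnitude.
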

So, in the next video we shall move towards a new topic, which is classification techniques. See you in the next lecture. Thank you.
