Regression ppt.pptx



Regression
By D.S. Kaushal

Linear Regression Linear regression models the relationship between a continuous dependent variable and one or more independent variables by fitting a straight line to the data. In the simple case the model is y = b_0 + b_1X, where the intercept b_0 and the slope b_1 are estimated by minimizing the sum of squared errors (the least-squares criterion).
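
A minimal sketch of fitting a simple linear regression, using scikit-learn; the library choice and the synthetic data are illustrative assumptions, not from the slides:

import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic data: y = 2.5*X + 1.0 plus noise (made up for illustration).
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 1))               # one independent variable
y = 2.5 * X.ravel() + 1.0 + rng.normal(0, 1, 100)   # continuous target

model = LinearRegression().fit(X, y)
print(model.intercept_, model.coef_)                # estimates of b_0 and b_1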

Logistic Regression Logistic regression is a type of regression analysis used when the dependent variable is discrete, e.g. 0 or 1, true or false. The target variable can take only two values, and a sigmoid curve describes the relationship between the target variable and the independent variables. The logit function is used in logistic regression to link the target variable to the independent variables. The equation below denotes logistic regression, where p is the probability that the event of interest occurs:

logit(p) = ln(p/(1 − p)) = b_0 + b_1X_1 + b_2X_2 + … + b_kX_k
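
A minimal sketch of logistic regression with scikit-learn; the synthetic data and library choice are illustrative assumptions:

import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic binary-classification data (made up for illustration).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                          # independent variables X_1..X_3
y = (X @ np.array([1.5, -2.0, 0.5]) > 0).astype(int)   # discrete target: 0 or 1

clf = LogisticRegression().fit(X, y)
print(clf.intercept_, clf.coef_)        # b_0 and b_1..b_k from the logit equation
print(clf.predict_proba(X[:1])[0, 1])   # sigmoid output: estimated P(y = 1 | x)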

Polynomial Regression Polynomial regression is used to represent a non-linear relationship between the dependent and independent variables. It is a variant of the multiple linear regression model, except that the best-fit line is curved rather than straight: powers of the inputs (X^2, X^3, …) are added as extra features, so the model remains linear in its coefficients.

Important points: While there might be a temptation to fit a higher-degree polynomial to get a lower error, this can result in over-fitting. Always plot the relationship to see the fit, and focus on making sure that the curve fits the nature of the problem. The sketch below shows how plotting can help:
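
A minimal sketch, assuming scikit-learn, matplotlib, and synthetic quadratic data (all illustrative choices), that plots a modest-degree fit next to a high-degree over-fit:

import numpy as np
import matplotlib.pyplot as plt
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# Synthetic data from a noisy quadratic (made up for illustration).
rng = np.random.default_rng(0)
X = np.sort(rng.uniform(-3, 3, size=(40, 1)), axis=0)
y = 0.5 * X.ravel() ** 2 - X.ravel() + rng.normal(0, 1, 40)

grid = np.linspace(-3, 3, 200).reshape(-1, 1)
for degree in (2, 15):  # a reasonable degree vs. one likely to over-fit
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression()).fit(X, y)
    plt.plot(grid.ravel(), model.predict(grid), label=f"degree {degree}")

plt.scatter(X, y, s=15, color="k")
plt.legend()
plt.show()  # the degree-15 curve chases the noise, which the plot makes obvious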

Ridge Regression Ridge regression is another type of regression in machine learning, usually used when there is high correlation between the input parameters (multicollinearity). With correlated inputs the least-squares estimates remain unbiased, but their variance becomes very large, so the fitted coefficients are unstable. Ridge regression therefore introduces a bias matrix λI into the normal equations, accepting a small bias in exchange for a large reduction in variance, which makes the model less susceptible to overfitting. The equation below denotes ridge regression, where λ (lambda) resolves the multicollinearity issue:

β̂ = (X^{T}X + λI)^{-1}X^{T}y
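
A minimal sketch computing the ridge closed form β̂ = (X^{T}X + λI)^{-1}X^{T}y in NumPy and checking it against scikit-learn's Ridge; the data, the correlated columns, and λ = 1 are illustrative assumptions:

import numpy as np
from sklearn.linear_model import Ridge

# Synthetic data with two nearly identical (highly correlated) columns.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
X[:, 3] = X[:, 2] + rng.normal(0, 0.01, 100)
y = X @ np.array([1.0, -2.0, 3.0, 0.0]) + rng.normal(0, 0.5, 100)

lam = 1.0
beta = np.linalg.solve(X.T @ X + lam * np.eye(4), X.T @ y)  # bias matrix λI added
print(beta)
print(Ridge(alpha=lam, fit_intercept=False).fit(X, y).coef_)  # should match beta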

Lasso Regression Lasso regression performs regularization along with feature selection. It penalizes the absolute size of the regression coefficients (an L1 penalty). This pushes coefficient values toward zero and can set them exactly to zero, a property that differs from ridge regression, which shrinks coefficients but does not eliminate them.

This is why lasso regression performs feature selection: only the required parameters keep non-zero coefficients, and the rest are made zero, which helps avoid overfitting in the model. But if independent variables are highly collinear, lasso regression tends to choose only one of them and reduce the other variables to zero. The equation below represents the lasso objective, minimized over the coefficients β:

min_β (1/2N) Σ^{N}_{i=1} (y_i − x_i^{T}β)^2 + λ Σ^{p}_{j=1} |β_j|
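
A minimal sketch of lasso with scikit-learn, showing the L1 penalty driving irrelevant coefficients exactly to zero; the synthetic data and α = 0.1 are illustrative assumptions:

import numpy as np
from sklearn.linear_model import Lasso

# Synthetic data where only features 0, 2, and 5 actually matter.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))
y = X @ np.array([3.0, 0.0, -2.0, 0.0, 0.0, 1.5]) + rng.normal(0, 0.5, 100)

lasso = Lasso(alpha=0.1).fit(X, y)
print(lasso.coef_)  # the irrelevant features should come out as exactly 0.0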

Bayesian Linear Regression Bayesian regression is used to estimate the values of the regression coefficients. In Bayesian linear regression, the posterior distribution of the coefficients is determined instead of a single least-squares point estimate. Bayesian linear regression combines ideas from linear regression and ridge regression (the prior on the coefficients acts like a ridge penalty) and is more stable than simple linear regression.
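
A minimal sketch using scikit-learn's BayesianRidge, one implementation of Bayesian linear regression; the synthetic data is an illustrative assumption:

import numpy as np
from sklearn.linear_model import BayesianRidge

# Synthetic linear data (made up for illustration).
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(0, 0.3, 80)

bayes = BayesianRidge().fit(X, y)
mean, std = bayes.predict(X[:1], return_std=True)  # predictive mean and uncertainty
print(bayes.coef_, mean, std)  # posterior mean coefficients, not least-squares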

Decision Tree Regression The decision tree, as the name suggests, works on the principle of conditions. It is an efficient and widely used algorithm for predictive analysis. Its main components are internal nodes, branches, and terminal (leaf) nodes. Every internal node holds a "test" on an attribute, branches hold the outcomes of the test, and every leaf node holds a class label (or, for regression, a predicted value). Decision trees are used for both classification and regression, both of which are supervised learning tasks. Decision trees are extremely sensitive to the data they are trained on: small changes to the training set can result in substantially different tree structures.
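
A minimal sketch of decision tree regression with scikit-learn; the synthetic data and max_depth=3 are illustrative assumptions (a shallow depth limits the tree's sensitivity to the training data):

import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Synthetic data: a noisy sine wave (made up for illustration).
rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 5, size=(80, 1)), axis=0)
y = np.sin(X).ravel() + rng.normal(0, 0.1, 80)

tree = DecisionTreeRegressor(max_depth=3).fit(X, y)  # internal nodes test X <= threshold
print(tree.predict([[2.5]]))  # the mean target value of the matching leaf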

Random Forest Regression Random forest, as its name suggests, comprises a large number of individual decision trees that operate as a group, i.e. an ensemble. For classification, every individual tree outputs a class prediction and the class with the most votes becomes the model's prediction; for regression, the trees' predictions are averaged. Random forest builds varied trees by permitting every individual tree to randomly sample from the dataset with replacement. This is known as bagging (bootstrap aggregating).
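
A minimal sketch of random forest regression with scikit-learn; the synthetic data and hyperparameters are illustrative assumptions:

import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic data: a noisy sine wave (made up for illustration).
rng = np.random.default_rng(0)
X = rng.uniform(0, 5, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(0, 0.1, 200)

# bootstrap=True resamples the data with replacement for each tree (bagging).
forest = RandomForestRegressor(n_estimators=100, bootstrap=True).fit(X, y)
print(forest.predict([[2.5]]))  # average of the 100 trees' predictions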