Supervised Learning: Classification, Part 1.ppt

HarishBellamkonda4 · 11 views · 32 slides · Sep 09, 2024

About This Presentation

Supervised Learning: Classification


Slide Content

Supervised Learning: Classification
Nearest Neighbor, Part 1

Machine Learning
•The field of study concerned with developing computer algorithms that transform data into intelligent action is known as machine learning.

Uses and abuses of machine
learning
•A computer may be more capable than a human
of finding patterns in large databases, but it still
needs a human to motivate the analysis and turn
the result into meaningful action.
•Machines are not good at asking questions, or
even knowing what questions to ask.
•They are much better at answering them,
provided the question is stated in a way the
computer can comprehend.

•There are various places where machine learning is
used.
•Identification of unwanted spam messages in e-
mail
• Segmentation of customer behavior for targeted
advertising
•Forecasts of weather behavior and long-term
climate changes
•Prediction of popular election outcomes

•Development of algorithms for auto-piloting
drones and self-driving cars
•Optimization of energy use in homes and
office buildings
•Discovery of genetic sequences linked to
diseases

•Although machine learning is used widely and
has tremendous potential, it is important to
understand its limits.
•Machine learning, at this time, is not in any
way a substitute for a human brain.
•It has very little flexibility to extrapolate
outside of the strict parameters it learned and
knows no common sense.

•With this in mind, one should be extremely careful to understand exactly what an algorithm has learned before setting it loose in real-world settings.

How machines learn
•Regardless of whether the learner is a human or
machine, the basic learning process is similar. It
can be divided into four interrelated components:
•Data storage utilizes observation, memory, and
recall to provide a factual basis for further
reasoning.
•Abstraction involves the translation of stored
data into broader representations and concepts.

•Generalization uses abstracted data to create
knowledge and inferences that drive action in
new contexts.
•Evaluation provides a feedback mechanism
to measure the utility of learned knowledge
and inform potential improvements.

Machine learning in practice
•Data collection
•Data exploration and preparation
•Model training
•Model evaluation
•Model improvement

Classification
•A classification algorithm is a supervised learning technique used to identify the category of new observations on the basis of training data. In classification, a program learns from a given dataset of labeled observations and then assigns each new observation to one of a number of classes or groups, such as Yes or No, 0 or 1, Spam or Not Spam, cat or dog. Classes are also called targets, labels, or categories.

•Classification: here the target variable consists of discrete categories.
•Regression: here the target variable is continuous, and we usually try to fit a line or curve to the data.

Lazy learning: Nearest neighbors
•Nearest neighbor classifiers are defined by
their characteristic of classifying unlabeled
examples by assigning them the class of
similar labeled examples.
•They have been used successfully for:
•Computer vision applications, including optical
character recognition and facial recognition in
both still images and video

•Predicting whether a person will enjoy a
movie or music recommendation
•Identifying patterns in genetic data, perhaps
to use them in detecting specific proteins or
diseases

LAZY LEARNER
•k-NN is also called a lazy learner algorithm because it does not learn from the training set immediately; instead, it stores the dataset and performs its work at classification time.
•During the training phase, the k-NN algorithm simply stores the dataset; when it receives new data, it classifies that data into the category most similar to it.

The k-NN algorithm
•The nearest neighbors approach to classification is
exemplified by the k-nearest neighbors algorithm
(k-NN).
•The k-NN algorithm gets its name from the fact
that it uses information about an example's k-
nearest neighbors to classify unlabeled examples.
•The letter k is a variable term implying that any
number of nearest neighbors could be used.

How does K-NN work?
•The working of K-NN can be explained with the following algorithm:
•Step-1: Select the number K of neighbors.
•Step-2: Calculate the Euclidean distance from the new data point to every point in the training data.
•Step-3: Take the K nearest neighbors according to the calculated Euclidean distances.
•Step-4: Among these K neighbors, count the number of data points in each category.
•Step-5: Assign the new data point to the category with the maximum number of neighbors.
•Step-6: Our model is ready.
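The steps above can be sketched in a few lines of plain Python. This is a minimal from-scratch sketch, not a production implementation, and the (sweetness, crunchiness) training examples are made up for illustration:

```python
import math
from collections import Counter

def euclidean(p, q):
    # Step-2: straight-line distance between two feature vectors
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def knn_classify(train, labels, query, k=3):
    # Step-3: indices of the k training points closest to the query
    nearest = sorted(range(len(train)),
                     key=lambda i: euclidean(train[i], query))[:k]
    # Step-4 and Step-5: majority vote among the k neighbors
    votes = Counter(labels[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical (sweetness, crunchiness) examples
train = [(8, 5), (3, 6), (7, 3), (3, 7)]
labels = ["fruit", "protein", "fruit", "vegetable"]
print(knn_classify(train, labels, query=(6, 4), k=3))  # fruit
```

Note that `fit`-style training never happens: all the distance work is deferred until `knn_classify` is called, which is exactly what makes the learner "lazy".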

Measuring similarity with distance
•Locating the tomato's nearest neighbors requires a distance function: a formula that measures the similarity between two instances.
•The distance formula compares the values of each feature. In general, the Euclidean distance between points p and q with n features is:
dist(p, q) = sqrt((p1 - q1)^2 + (p2 - q2)^2 + ... + (pn - qn)^2)
•For example, the distance between the tomato (sweetness = 6, crunchiness = 4) and the green bean (sweetness = 3, crunchiness = 7) is:
dist(tomato, green bean) = sqrt((6 - 3)^2 + (4 - 7)^2) = sqrt(18) ≈ 4.2
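The tomato-versus-green-bean arithmetic can be checked directly in Python; the feature values come from the slide, and the function is just the standard Euclidean formula:

```python
import math

def euclidean(p, q):
    # Sum the squared differences over every feature, then take the square root
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

tomato = (6, 4)      # (sweetness, crunchiness)
green_bean = (3, 7)
print(round(euclidean(tomato, green_bean), 2))  # 4.24
```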

Choosing an appropriate k

•For example, if a test input's three nearest neighbors mostly belong to class B, but its seven nearest neighbors mostly belong to class A, then with K=3 we would predict class B and with K=7 we would predict class A.
•This illustrates that the K value has a powerful effect on KNN performance.
•There are no pre-defined statistical methods for finding the most favorable value of K.

•A very small K value is sensitive to noise in the data and usually isn't suitable for classification.
•The optimal K value is often taken to be the square root of N, where N is the total number of training samples.
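The square-root-of-N rule of thumb is easy to code. The odd-K adjustment below is a common extra convention (not stated on the slide) that prevents tied votes in two-class problems:

```python
import math

def suggest_k(n_samples):
    # Rule of thumb: K is roughly the square root of N...
    k = max(1, round(math.sqrt(n_samples)))
    # ...nudged to an odd number so a two-class vote cannot tie
    return k if k % 2 == 1 else k + 1

print(suggest_k(25))   # 5
print(suggest_k(100))  # 11
```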

Why KNN is Lazy?
•K-NN is a lazy learner. A lazy learner does not really learn anything during training.
•An eager learner has a model-fitting or training step; a lazy learner has no training phase.
•As a result, K-NN requires very little training time.
•The training phase, which does not actually train anything, therefore occurs very rapidly.