Non-Hierarchical Clustering


About This Presentation

This presentation covers Non-Hierarchical Clustering, the difference between Hierarchical and Non-Hierarchical Clustering, K-means clustering, the K-means clustering algorithm, and the steps for applying K-means clustering.

For more information stay tuned with Learnbay.


Slide Content

Non-Hierarchical Clustering

Non-hierarchical clustering forms new clusters by merging or splitting existing clusters, but it does not follow a tree-like structure the way hierarchical clustering does. The technique groups the data so as to maximize or minimize some evaluation criterion; K-means clustering is an effective way of performing non-hierarchical clustering. In this method the partitions are made such that the resulting groups are non-overlapping and have no hierarchical relationships between themselves.
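
To make this concrete, here is a minimal illustrative sketch (not taken from the slides) that partitions a small made-up 2-D dataset into K = 2 non-overlapping groups using scikit-learn's KMeans; the data and the choice of K are assumptions for the example only.

import numpy as np
from sklearn.cluster import KMeans

# Made-up toy data: six 2-D points
X = np.array([[1.0, 2.0], [1.5, 1.8], [5.0, 8.0],
              [8.0, 8.0], [1.0, 0.6], [9.0, 11.0]])

# Fit K-means with K = 2 and read back one cluster label per point
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)
print(labels)   # e.g. [0 0 1 1 0 1] -- each point belongs to exactly one group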
Difference between Hierarchical Clustering and Non-Hierarchical Clustering:

Hierarchical Clustering creates clusters in a predefined order, from top to bottom, while Non-Hierarchical Clustering forms new clusters by merging or splitting clusters instead of following a hierarchical order.

Hierarchical Clustering is considered less reliable, while Non-Hierarchical Clustering is comparatively more reliable.

Hierarchical Clustering is considered slower, while Non-Hierarchical Clustering is comparatively faster.

Hierarchical Clustering is very problematic to apply when the data has a high level of error, while Non-Hierarchical Clustering can work better even when error is present.

The clusters from Hierarchical Clustering are more difficult to read and understand, while the clusters from Non-Hierarchical Clustering are comparatively easier to read and understand.

Hierarchical Clustering is relatively unstable, while Non-Hierarchical Clustering is a relatively stable technique.
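
As a hedged illustration of this contrast (my own example, not the slides' code), the snippet below fits a hierarchical (agglomerative) model and a non-hierarchical K-means model on the same made-up data; the cluster numbering may differ between the two methods even when the groupings agree.

import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans

X = np.array([[1.0, 2.0], [1.5, 1.8], [5.0, 8.0],
              [8.0, 8.0], [1.0, 0.6], [9.0, 11.0]])

# Hierarchical: builds a tree bottom-up, then cuts it into 2 clusters
hier_labels = AgglomerativeClustering(n_clusters=2).fit_predict(X)

# Non-hierarchical: directly partitions the points into 2 flat groups
flat_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

print(hier_labels)
print(flat_labels)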

K-means clustering

K-means clustering is a type of unsupervised learning, which is used when you have unlabeled data (i.e., data without defined categories or groups). The goal of the algorithm is to find groups in the data, with the number of groups represented by the variable K.

K-means clustering algorithm

The algorithm works iteratively to assign each data point to one of K groups based on the features that are provided; data points are clustered according to feature similarity.
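
The loop below is a from-scratch sketch of that iteration (an illustration under my own assumptions, not the presentation's code): assign every point to its nearest centroid, move each centroid to the mean of the points assigned to it, and repeat.

import numpy as np

def kmeans_iterate(X, k, n_iters=10, seed=0):
    rng = np.random.default_rng(seed)
    # Start from k randomly chosen data points as initial centroids
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # Assignment step: distance from every point to every centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each centroid becomes the mean of its assigned points
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

# e.g. centroids, labels = kmeans_iterate(X, k=2) with the toy X used earlier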

The results of the algorithm are the centroids of the K clusters, which can be used to label new data, and labels for the training data (each data point is assigned to a single cluster). Rather than defining groups before looking at the data, clustering allows you to find and analyze the groups that have formed organically.
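
A short sketch of those two outputs (again an illustrative example with made-up data, using scikit-learn's KMeans rather than any code from the slides):

import numpy as np
from sklearn.cluster import KMeans

X = np.array([[1.0, 2.0], [1.5, 1.8], [5.0, 8.0],
              [8.0, 8.0], [1.0, 0.6], [9.0, 11.0]])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.cluster_centers_)       # centroids of the K clusters
print(kmeans.labels_)                # one cluster label per training point
print(kmeans.predict([[0.5, 1.0]]))  # label a new data point via the nearest centroid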

The "Choosing K" section below describes how the
number of groups can be determined.
Each centroid of a cluster is a collection of feature
values which define the resulting groups.
Examining the centroid feature weights can be
used to qualitatively interpret what kind of group
each cluster represents.
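
For example, the hedged snippet below (the feature names and values are invented purely for illustration) prints each centroid's feature values so the clusters can be interpreted qualitatively, e.g. a low-spend group versus a high-spend group.

import numpy as np
from sklearn.cluster import KMeans

feature_names = ["annual_spend", "visits_per_month"]   # hypothetical features
X = np.array([[200.0, 1.0], [250.0, 2.0], [2200.0, 12.0],
              [2500.0, 15.0], [180.0, 1.0], [2400.0, 14.0]])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
for i, centroid in enumerate(kmeans.cluster_centers_):
    summary = ", ".join(f"{name}={value:.1f}" for name, value in zip(feature_names, centroid))
    print(f"cluster {i}: {summary}")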

Steps for Applying K-Means Clustering

Step 1: Clean and Transform Your Data
Step 2: Choose K and Run the Algorithm
Step 3: Review the Results
Step 4: Iterate Over Several Values of K
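
The sketch below walks through these four steps on made-up data under my own assumptions (standardizing the features for Step 1, and using inertia and silhouette score as the review criteria for Steps 3 and 4):

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

X = np.array([[1.0, 200.0], [1.2, 250.0], [9.0, 2200.0],
              [8.5, 2500.0], [1.1, 180.0], [9.3, 2400.0]])

# Step 1: clean/transform -- here, put the features on a common scale
X_scaled = StandardScaler().fit_transform(X)

# Steps 2-4: run K-means for several values of K and review each result
for k in (2, 3, 4):
    kmeans = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X_scaled)
    score = silhouette_score(X_scaled, kmeans.labels_)
    print(f"K={k}: inertia={kmeans.inertia_:.2f}, silhouette={score:.2f}")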

Topics for next Post:
T-test
Chi-square Test

Stay Tuned with Learnbay