Presentation on the topic: Naïve Bayesian Classifier

shivangisingh564490 17 views 12 slides Aug 27, 2025

Slide Content

Naïve Bayesian Classifier Your Name | Course | Date

Introduction

- Naïve Bayes is a probabilistic classifier based on Bayes' Theorem.
- Assumes conditional independence between predictors given the class.
- Simple, fast, and effective for classification tasks.

Bayes' Theorem

P(H|X) = [P(X|H) * P(H)] / P(X)

- H: hypothesis (class)
- X: data (features)
- P(H): prior probability of the hypothesis
- P(X|H): likelihood of the data given the hypothesis
- P(H|X): posterior probability of the hypothesis given the data
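The formula can be checked directly in a few lines of Python. The numbers below are made up purely for illustration; they do not come from the slides.

```python
def posterior(p_x_given_h, p_h, p_x):
    """Bayes' theorem: P(H|X) = P(X|H) * P(H) / P(X)."""
    return p_x_given_h * p_h / p_x

# Hypothetical values: P(X|H) = 0.5, P(H) = 0.4, P(X) = 0.26
print(posterior(0.5, 0.4, 0.26))  # 0.5 * 0.4 / 0.26 ~ 0.769
```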

Naïve Assumption

- Predictors are conditionally independent given the class:
  P(X1, X2, ..., Xn | C) = Π P(Xi | C)
- This makes computation simple and scalable.
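Under the naive assumption, the joint likelihood is just a product of per-feature likelihoods. A minimal sketch, with hypothetical likelihood values:

```python
import math

def class_conditional(feature_likelihoods):
    """Naive assumption: P(X1, ..., Xn | C) = product of P(Xi | C)."""
    return math.prod(feature_likelihoods)

# Three hypothetical per-feature likelihoods P(Xi | C) for one class
print(class_conditional([0.8, 0.5, 0.25]))  # 0.8 * 0.5 * 0.25 = 0.1
```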

Types of Naïve Bayes Classifiers

- Gaussian Naïve Bayes: assumes each continuous feature follows a normal distribution within each class
- Multinomial Naïve Bayes: commonly used for text classification with word counts
- Bernoulli Naïve Bayes: for binary (present/absent) features
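For the Gaussian variant, each per-feature likelihood is a normal density evaluated with the class-conditional mean and variance of that feature. A minimal sketch of that single building block:

```python
import math

def gaussian_likelihood(x, mean, var):
    """Per-feature likelihood P(x | C) under Gaussian Naive Bayes,
    using the feature's class-conditional mean and variance."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# Standard normal density at its mean: 1 / sqrt(2*pi) ~ 0.3989
print(round(gaussian_likelihood(0.0, 0.0, 1.0), 4))
```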

Example: Email Classification

- Features: presence of words like 'offer', 'buy', 'free'
- Classes: Spam / Not Spam
- Naïve Bayes calculates the probability of each class and selects the maximum.
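The spam example can be sketched with made-up priors and per-word likelihoods (none of these probabilities come from the slides; they are invented for illustration):

```python
# Made-up priors and per-word likelihoods -- purely illustrative
prior = {"spam": 0.4, "not_spam": 0.6}
p_word = {
    "spam":     {"offer": 0.6, "buy": 0.4, "free": 0.7},
    "not_spam": {"offer": 0.1, "buy": 0.2, "free": 0.1},
}

def classify(words):
    """Score each class by prior * product of word likelihoods,
    then pick the class with the largest (unnormalized) posterior."""
    scores = {}
    for c in prior:
        score = prior[c]
        for w in words:
            score *= p_word[c][w]
        scores[c] = score
    return max(scores, key=scores.get)

print(classify(["offer", "free"]))  # "spam" with these numbers
```

With these numbers the spam score is 0.4 * 0.6 * 0.7 = 0.168 versus 0.6 * 0.1 * 0.1 = 0.006 for not-spam, so the email is labeled spam.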

Steps in Naïve Bayes Classification

1. Convert the dataset into a frequency table.
2. Calculate the prior probability of each class.
3. Calculate the likelihood of each attribute value given each class.
4. Use Bayes' theorem to compute the posterior probability of each class.
5. Assign the class with the maximum posterior probability.
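The five steps can be sketched end-to-end on a tiny categorical dataset. All values here are invented for illustration, not taken from the slides:

```python
from collections import Counter, defaultdict

# Tiny made-up categorical dataset: (outlook, windy) -> play
data = [
    (("sunny", "no"),  "yes"),
    (("sunny", "yes"), "no"),
    (("rain",  "no"),  "yes"),
    (("rain",  "yes"), "no"),
    (("sunny", "no"),  "yes"),
]

# Steps 1-2: frequency table and prior probability of each class
class_counts = Counter(label for _, label in data)
priors = {c: n / len(data) for c, n in class_counts.items()}

# Step 3: frequency counts keyed by (attribute index, class)
freq = defaultdict(Counter)
for features, label in data:
    for i, value in enumerate(features):
        freq[(i, label)][value] += 1

def likelihood(i, value, c):
    """P(attribute_i = value | class c) estimated from frequencies."""
    return freq[(i, c)][value] / class_counts[c]

# Steps 4-5: posterior (up to the constant P(X)) and argmax over classes
def predict(features):
    scores = {}
    for c in priors:
        score = priors[c]
        for i, value in enumerate(features):
            score *= likelihood(i, value, c)
        scores[c] = score
    return max(scores, key=scores.get)

print(predict(("sunny", "no")))  # "yes" for this toy data
```

Since P(X) is the same for every class, it can be dropped when only the argmax is needed.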

Advantages

- Easy to implement
- Works well with small datasets
- Handles high-dimensional data
- Effective for text classification

Limitations

- Assumes independence of features, which rarely holds in practice
- Performs poorly when features are strongly correlated
- Requires enough data per class for reliable probability estimates

Applications

- Spam filtering
- Sentiment analysis
- Document categorization
- Medical diagnosis

Conclusion

- Naïve Bayes is a simple, efficient, widely used classifier
- Based on probability and the independence assumption
- Popular in text mining, NLP, and classification tasks

References

- Han, J., Kamber, M., & Pei, J. (2012). Data Mining: Concepts and Techniques. Morgan Kaufmann.
- Mitchell, T. M. (1997). Machine Learning. McGraw-Hill.