Detection of Sign Language (minorppt1.pptx)

vigocib930 · 21 slides · May 01, 2024

Slide Content

# Sign Language Detection Using Machine Learning (Python Project)

# Presented By:
Harsh Vardhan Singh Sisodia (20/BCS/183)
Rishabh Shakya (20/BCS/171)
Yash Anand (20/BCS/203)

# Content
- Introduction
- Objective
- Scope
- Literature Review
- Methodology
- Gesture Classification
- Challenges Faced
- Result
- Project Requirements

# Introduction
1) American Sign Language (ASL) is crucial for communication among Deaf and Dumb (D&M) individuals who cannot use spoken languages. It consists of nonverbal gestures understood through vision, enabling meaningful exchange.
2) The project's goal is to create a model that recognizes fingerspelling-based hand gestures and combines them to form complete words. Specific gestures are targeted for training, as shown in the provided image.

# Objective
To create computer software and train a CNN model that takes an image of an American Sign Language hand gesture, shows the recognized sign in text format, and converts it into audio.
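A minimal sketch of the kind of CNN described in the objective, assuming 26 output classes (letters A to Z) and 128x128 grayscale input images; the layer sizes and hyperparameters are illustrative assumptions, not the project's exact architecture.

```python
# Hypothetical CNN for ASL letter classification
# (assumed 128x128 grayscale input and 26 classes; not the project's exact model).
from tensorflow import keras
from tensorflow.keras import layers

NUM_CLASSES = 26          # letters A-Z (assumption)
IMG_SHAPE = (128, 128, 1) # grayscale input (assumption)

model = keras.Sequential([
    layers.Input(shape=IMG_SHAPE),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=10, validation_split=0.2)
```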

# Scope
This system will be beneficial both for Deaf/Dumb people and for people who do not understand sign language. The user only needs to communicate with sign language gestures; the system identifies what he/she is trying to say and gives the output in both text and speech format.
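A minimal sketch of the speech-output step mentioned above. The slides do not name a text-to-speech library, so `pyttsx3` is an assumed choice here.

```python
# Hypothetical text-to-speech step: pyttsx3 is an assumed library choice,
# not named anywhere in the slides.
import pyttsx3

def speak(text: str) -> None:
    """Read the recognized sign aloud."""
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()

speak("A")  # e.g. after the model recognizes the letter A
```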

# Literature Review
Data Acquisition:
1. Electromechanical devices provide precise hand data but are costly and unfriendly to users; glove-based approaches are expensive.
2. Vision-based methods use webcams for natural human-computer interaction and reduce costs, but face challenges such as variability.
3. Challenges in vision-based hand detection include coping with variations in hand appearance, skin color, viewpoint, scale, and camera speed.

Data Acquisition:
We have collected images of different signs from different angles for sign letters A to Z.

Data Pre-processing:
Hand detection is performed on webcam-acquired images using the MediaPipe library. The process involves detecting the hand, obtaining the region of interest, cropping it, converting it to a gray image with OpenCV, applying Gaussian blur, and converting it to a binary image using threshold methods. Key tools: MediaPipe and OpenCV.
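A minimal OpenCV sketch of the pre-processing chain described above (crop, grayscale, Gaussian blur, binary threshold); the kernel size and the Otsu thresholding choice are illustrative assumptions.

```python
# Pre-processing sketch: crop the hand ROI, convert to gray, blur, then threshold.
# Kernel size and threshold method below are assumptions, not the project's exact settings.
import cv2

def preprocess(frame, x, y, w, h):
    """Return a binary image of the hand region found at (x, y, w, h)."""
    roi = frame[y:y + h, x:x + w]                 # crop region of interest
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)  # convert to gray image
    blur = cv2.GaussianBlur(gray, (5, 5), 0)      # apply Gaussian blur
    _, binary = cv2.threshold(blur, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # binary image
    return binary
```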

Data Pre-processing:
Similarly, we have collected images of the sign for B (training data).

Data Pre-processing:
Similarly, we have collected images of the sign for C (training data).

# Methodology

# Methodology
MediaPipe Library: The MediaPipe Hand Landmarker task lets you detect the landmarks of the hands in an image. The task localizes key points of the hands and can render visual effects over them.
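A minimal sketch of hand landmark detection with MediaPipe. The slide refers to the Hand Landmarker task; this sketch uses the older `mp.solutions.hands` interface as an equivalent, and the hand count and confidence threshold are assumptions.

```python
# Hand landmark detection sketch using MediaPipe's classic solutions API.
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands
mp_draw = mp.solutions.drawing_utils

cap = cv2.VideoCapture(0)  # webcam
with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.7) as hands:
    ok, frame = cap.read()
    if ok:
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            for hand in results.multi_hand_landmarks:
                # 21 (x, y, z) key points per detected hand
                mp_draw.draw_landmarks(frame, hand, mp_hands.HAND_CONNECTIONS)
cap.release()
```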

# Gesture Classification
The system now recognizes the gestures / sign language alphabets.
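A minimal classification sketch tying the pieces together, assuming a trained Keras model saved as `sign_model.h5`, the `preprocess` helper sketched earlier, and 128x128 inputs with classes ordered A to Z; all of these names and sizes are assumptions.

```python
# Gesture classification sketch: run the trained CNN on a pre-processed hand image.
# The model file name, input size, and class ordering are assumptions.
import string
import cv2
import numpy as np
from tensorflow import keras

LETTERS = list(string.ascii_uppercase)             # class index -> letter A-Z
model = keras.models.load_model("sign_model.h5")   # hypothetical saved model

def classify(binary_hand_image):
    """Return the predicted letter for a binary hand image."""
    img = cv2.resize(binary_hand_image, (128, 128)).astype("float32") / 255.0
    img = img.reshape(1, 128, 128, 1)              # batch of one grayscale image
    probs = model.predict(img, verbose=0)[0]
    return LETTERS[int(np.argmax(probs))]
```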

# Challenges Faced
1. Dataset challenge: An initial struggle to find suitable square images for the CNN led to creating a custom dataset for the project.
2. Filter selection: Experimentation with filters, including binary threshold and Canny edge detection, concluded with choosing the Gaussian blur filter.
3. Model improvement: Accuracy issues were overcome by enlarging the input image size and refining the dataset in later training phases.

# Result
Finally, we achieved 97% accuracy with our method (across both clean and cluttered backgrounds and varying lighting conditions). When the background is clear and the lighting is good, results reach up to 99% accuracy.

# Result
This is the output for the letter A, with 100% accuracy:

# Result
This is the output for the letter B, with 100% accuracy:

# Result
This is the output for the letter C, with 99.99% accuracy:

# Project Requirements
Software Requirements:
- Operating System: Windows 8 and above
- IDE: VS Code
- Programming Language: Python 3.9.5
- Python libraries: OpenCV, NumPy, Keras, MediaPipe, TensorFlow
Hardware Requirements:
- Webcam

# Thanks!