Final Year Project: Sign Language Recognition Using Machine Learning

DebabrataMondal42 · 17 slides · Jul 10, 2024



Slide Content

Title | Course Code | Course Name
PRESENTED BY: ABCD (BWU/MTS/20/ )
Guide Name:

AGENDA: SIGN LANGUAGE RECOGNITION USING MACHINE LEARNING
- INTRODUCTION
- PROJECT OBJECTIVE
- LITERATURE REVIEW
- PROPOSED SOLUTION
- IMPLEMENTATION
- SYSTEM IMPLEMENTATION
- FUTURE WORK
- CONCLUSION
- REFERENCES

INTRODUCTION
What is sign language? Why do we need sign language?

PROJECT OBJECTIVE
The objective of this project is to develop a sign language recognition system using computer vision and machine learning techniques.

LITERATURE REVIEW
In paper [1], the authors aim to bridge the communication gap by introducing an inexpensive computer into the communication path, so that sign language can be automatically captured, recognized, and translated to speech for the benefit of blind people. In the other direction, speech must be analyzed and converted to either sign or textual display on the screen for the benefit of the hearing impaired.
In paper [2], the project is designed to improve the recognition rate of the alphabet gestures (A-Z) over previously published work. The letter A was chosen because it is already recognized at a 100% rate, showing that the approach not only improves the recognition rates of the weaker letters but also maintains the 100% rate of the others. It was observed that the recognition rate was fairly improved and the recognition time was reduced significantly.

LITERATURE REVIEW (CONTD.)
In paper [3], the authors propose a simple human optical-flow representation for videos based on pose estimation to perform a binary classification per frame: is the person signing or not. They compare various possible inputs, such as full-body pose estimation, partial pose estimation, and bounding boxes, and contrast their acquisition time in light of the targeted real-time videoconferencing sign language detection application. They demonstrate the approach on the Public DGS Corpus (German Sign Language) and show 87%-91% prediction accuracy depending on the input, with a per-frame inference time of 350-3500 µs.
In paper [4], the authors propose a system that can recognize and convert ISL gestures from a video feed into English voice and text. They segment the shapes in the video stream using several image processing techniques such as edge detection, wavelet transform, and image fusion. Elliptical Fourier descriptors were used to extract shape features, while PCA was used to optimize and reduce the feature set. A fuzzy inference system was used to train the system, which resulted in 91% accuracy.

PROPOSED SOLUTION
INPUT SIGN -> PRE-PROCESSING -> FEATURE EXTRACTION -> PATTERN MATCHING/RECOGNITION -> OUTPUT SIGN/TEXT
(The database entries pass through the same pre-processing and feature-extraction stages before matching; recognition stops once a match is found.)
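The pipeline above can be sketched as plain functions. This is a toy illustration only: the function names are hypothetical, and real stages would operate on image arrays and landmark features rather than strings.

```python
# Toy sketch of the proposed pipeline: pre-process -> extract features ->
# match against a gesture database. All names and the string-based
# "features" are illustrative stand-ins for real image processing.

def preprocess(sign):
    # stand-in for cropping/resizing/normalizing the input frame
    return sign.strip().lower()

def extract_features(sign):
    # stand-in for real features (e.g. hand landmarks); here a character set
    return frozenset(sign)

# pattern-matching database: feature vector -> recognized gesture label
DATABASE = {
    frozenset("stop"): "STOP",
    frozenset("victory"): "VICTORY",
}

def recognize(sign):
    # run the full pipeline and look the result up in the database
    return DATABASE.get(extract_features(preprocess(sign)), "UNKNOWN")
```

The key design point the diagram makes is that the database side and the live input side share the same pre-processing and feature-extraction stages, so features are directly comparable.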

IMPLEMENTATION
Steps to develop sign language recognition:
1. Creating the dataset
2. Training a CNN on the captured dataset
3. Expected outcome

IMPLEMENTATION: CREATING THE DATASET
Using a hand-tracking module, I track the hand movement. Then I label all the photos in a folder to create the dataset.
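The folder-based labeling step can be sketched as follows. This is a minimal stand-alone example: the helper name and file layout are assumptions, chosen to match the common one-folder-per-gesture convention that tools like Teachable Machine expect.

```python
# Sketch: store each captured frame under a folder named after its gesture
# label, so the folder structure itself carries the labels for training.
import tempfile
from pathlib import Path

def save_labeled_frame(root, label, frame_bytes, index):
    """Write one captured frame into <root>/<label>/<label>_<index>.jpg."""
    folder = Path(root) / label
    folder.mkdir(parents=True, exist_ok=True)
    path = folder / f"{label}_{index:04d}.jpg"
    path.write_bytes(frame_bytes)
    return path

# usage: in the real capture loop, frame_bytes would be the encoded camera
# frame (e.g. from cv2.imencode); here we use placeholder bytes
root = tempfile.mkdtemp()
p = save_labeled_frame(root, "stop", b"\xff\xd8fake-jpeg", 1)
```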

IMPLEMENTATION: TRAINING A CNN ON THE CAPTURED DATASET
I am using the Teachable Machine website for training. Using my dataset, I create a Keras model.
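One detail worth noting when using the exported Keras model: Teachable Machine's exported inference snippet typically scales uint8 pixel values from [0, 255] into [-1, 1] before prediction. A plain-Python version of that arithmetic (the real code uses NumPy arrays):

```python
def normalize(pixels):
    # same scaling as Teachable Machine's exported Keras snippet:
    # uint8 [0, 255] -> float [-1, 1]
    return [p / 127.5 - 1.0 for p in pixels]

normalize([0, 255])  # -> [-1.0, 1.0]
```

Feeding the model raw 0-255 pixels instead of scaled ones is a common cause of poor accuracy at inference time even when training accuracy looked good.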

IMPLEMENTATION: EXPECTED OUTCOME
I use the camera to capture real-time hand movement, then compare it against the trained model to get the output. Example gestures: STOP, VICTORY, POWER, GOOD JOB.
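Turning the model's per-class confidences into one of these gesture labels is an argmax with a confidence cutoff. A minimal sketch, where the label order and the 0.5 threshold are assumptions:

```python
# Map a model's per-class confidence scores to a gesture label.
LABELS = ["STOP", "VICTORY", "POWER", "GOOD JOB"]

def best_gesture(confidences, labels=LABELS, threshold=0.5):
    # pick the highest-confidence class; below the threshold, report nothing
    i = max(range(len(confidences)), key=confidences.__getitem__)
    return labels[i] if confidences[i] >= threshold else None

best_gesture([0.1, 0.8, 0.05, 0.05])  # -> "VICTORY"
```

Rejecting low-confidence frames avoids flickering between labels when no clear gesture is shown.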

SYSTEM IMPLEMENTATION
1. First, I installed Python and PyCharm on my system.
2. I created a new project in PyCharm and set up a virtual environment for it.
3. I installed opencv-python and mediapipe for the project.
4. I use the HandTrackingModule for hand detection.
5. In the first part, I create the image datasets of different hand gestures using my code.

6. I use the Teachable Machine website to train on the datasets.
7. This produces the classification model.
8. At testing time, I use the ClassificationModule to run the model.
9. This gives the classified output.
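The detect-then-classify loop from steps 4-9 can be sketched with stub stand-ins. FakeDetector and FakeClassifier below are hypothetical placeholders for the real HandTrackingModule and ClassificationModule (which need a camera and a trained Keras model); the method names are simplified and do not mirror any library's actual API.

```python
# Per-frame logic: detect a hand, and only if one is found, classify the crop.

class FakeDetector:
    """Stand-in for a hand detector; returns the hand crop, or None."""
    def find_hands(self, frame):
        return frame.get("hand")

class FakeClassifier:
    """Stand-in for the trained model; returns (label, confidence)."""
    def get_prediction(self, crop):
        return ("STOP", 0.97)

def run_frame(frame, detector, classifier):
    crop = detector.find_hands(frame)
    if crop is None:
        return None            # no hand in this frame, nothing to classify
    label, conf = classifier.get_prediction(crop)
    return label

result = run_frame({"hand": "crop"}, FakeDetector(), FakeClassifier())
```

In the real system this function body sits inside a `while True` loop that reads frames from `cv2.VideoCapture` and draws the label on screen.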

FUTURE WORK
I plan to expand the dataset to include more sign language words and phrases to improve the model's accuracy. I aim to develop a real-time sign language translation system to enhance communication between the hearing-impaired and hearing communities. I also aim to add this technology to group video-calling applications (Google Meet, Teams).

CONCLUSION
This project detects real-time hand gestures using deep learning. Sign language recognition is a crucial tool for improving communication between the deaf and hearing-impaired community and the hearing community. The project successfully developed a model that accurately recognizes sign language gestures. I hope to continue this research to improve the accuracy and expand the system's capabilities.

REFERENCES
[1] IOSR Journal of Engineering (IOSRJEN), e-ISSN: 2250-3021, p-ISSN: 2278-8719, Vol. 3, Issue 2 (Feb. 2013), V2, pp. 45-51.
[2] Panwar, M.: Hand Gesture Recognition System Based on Shape Parameters. In: Proc. International Conference, Feb 2012.
[3] Isaacs, J., Foo, S.: Hand Pose Estimation for American Sign Language Recognition. In: Proceedings of the Thirty-Sixth Southeastern Symposium on System Theory, pp. 132-136. IEEE (2004).
[4] Kishore, P.V.V., Kumar, P. Rajesh, Kumar, E. Kiran, Kishore, S.R.C.: Video Audio Interface for Recognizing Gestures of Indian Sign Language. International Journal of Image Processing (IJIP), Volume 5, Issue 4, 2011, pp. 479-503.
https://data-flair.training/blogs/sign-language-recognition-python-ml-opencv/
https://www.youtube.com/watch?v=wa2ARoUUdU8&ab_channel=Murtaza%27sWorkshop-RoboticsandAI
https://teachablemachine.withgoogle.com/

THANK YOU