6.User Emotion Acquisition Using Deep Learning.pptx

venkataramanathappet 19 views 34 slides Sep 15, 2024

About This Presentation

User Emotion Acquisition Using Deep Learning


Slide Content

Chaitanya Bharathi Institute of Technology, Vidya Nagar, Proddatur-516360, Kadapa (Dt.). Accredited by NAAC & NBA (ECE, EEE), approved by AICTE, New Delhi, and affiliated to JNTUA, Anantapuramu. An ISO 9001:2008 certified institution. Ph: 08564-278000, Fax: 08564-278444. USER EMOTION ACQUISITION USING DEEP LEARNING. In partial fulfilment of the requirements for the award of the degree of Bachelor of Technology in the Department of Computer Science and Engineering. Under the esteemed guidance of I. Sravani, M.Tech (Internal Guide); N. Srinivasan, M.Tech (Ph.D) (Project Coordinator); G. Sreenivasa Reddy, M.Tech (Head of Department). Done by Y. Aswini (172P1A0592), T. Sirisha (172P1A0581), T. Vineeth Kumar Reddy (172P1A0578), Sangeetha Chandran (172P1A0567).

CONTENTS: Abstract, Introduction, Existing System, Disadvantages, Proposed System, Advantages, Requirements, Modules, Block Diagram, UML Diagrams, Testing, Outputs, Conclusion.

ABSTRACT Human emotions can be classified into six types: neutral, surprise, fear, anger, happiness, and sadness. This project focuses on a system that recognizes a human's emotion from a detected face. The work describes the development of an Emotion Based Music Player, a computer application that minimizes the user's effort in generating playlists and selecting songs.

INTRODUCTION In this project, we use deep learning to predict emotions. The project uses the OpenCV module for processing images and analyzing the objects detected by the camera. The emotion is detected from the recognized face using the emotion detection module. Once the emotion is detected, music is selected according to the mood of the user.

EXISTING SYSTEM Several approaches have been proposed to recognize human emotions from facial expressions or speech, and various algorithms have been developed to automate playlist generation. However, these systems take considerable time to detect a face.

DISADVANTAGES Face detection takes more time. Slow and less accurate. Earlier systems require a large amount of code.

PROPOSED SYSTEM Our proposed system requires much less code. It uses a Convolutional Neural Network (CNN) algorithm for image processing, applying AI and deep learning technology, and it detects the emotion quickly.
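
As a rough sketch of the kind of CNN this could involve (the slides do not give the exact architecture, so the layer sizes, the 48x48 grayscale input, and the four emotion classes below are assumptions), a minimal Keras model might look like this:

```python
# Minimal CNN sketch for emotion classification.
# Assumes TensorFlow/Keras is installed; the architecture is illustrative, not the project's exact model.
from tensorflow.keras import layers, models

def build_emotion_cnn(input_shape=(48, 48, 1), num_classes=4):
    # Two convolution/pooling blocks followed by a small dense classifier.
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),  # e.g. anger, happy, sad, neutral
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```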

ADVANTAGES Requires much less code. Detects the emotion accurately. Plays music based on the detected user emotion.

REQUIREMENTS Software Requirements: Operating System: Windows or Ubuntu; Programming Language: Python; Tools Used: PyCharm.

Hardware Requirements: Processor: Intel i3 or above; Processor type: 64-bit; RAM: 4 GB or above; Webcam.

MODULES We use five modules in this project: 1. Image dataset, 2. Accessing the webcam, 3. Pre-processing the image of the face, 4. Detecting the face and its emotional expressions, 5. Playing music based on the emotion.

1. Image dataset First, we collect images representing the emotions anger, happy, sad, and neutral.
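
One way to load such a dataset (assuming, purely for illustration, that the images are organised into one folder per emotion such as dataset/happy/*.jpg; the folder layout and the 48x48 size are assumptions, not details from the slides):

```python
# Sketch of loading a labelled emotion-image dataset with OpenCV.
import os
import cv2

EMOTIONS = ["anger", "happy", "sad", "neutral"]

def load_dataset(root="dataset", size=(48, 48)):
    images, labels = [], []
    for label, emotion in enumerate(EMOTIONS):
        folder = os.path.join(root, emotion)       # e.g. dataset/happy (hypothetical layout)
        for name in os.listdir(folder):
            img = cv2.imread(os.path.join(folder, name), cv2.IMREAD_GRAYSCALE)
            if img is None:
                continue                           # skip unreadable files
            images.append(cv2.resize(img, size))   # standardise image size
            labels.append(label)
    return images, labels
```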

2. Accessing the webcam OpenCV supports reading data from a webcam. It is used to load images and real-time video. By importing OpenCV, the user can access the computer's webcam. The steps for accessing the webcam through OpenCV are: install it with pip install opencv-python, and once it is installed, import it in Python with import cv2.
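
A minimal sketch of reading webcam frames with OpenCV, following the steps above (the window name and the choice of 'q' as the quit key are arbitrary):

```python
# Sketch of grabbing frames from the default webcam with OpenCV.
import cv2

cap = cv2.VideoCapture(0)                       # 0 selects the default webcam
while True:
    ret, frame = cap.read()                     # ret is False if no frame was read
    if not ret:
        break
    cv2.imshow("Webcam", frame)                 # show the live feed
    if cv2.waitKey(1) & 0xFF == ord("q"):       # press 'q' to quit
        break
cap.release()
cv2.destroyAllWindows()
```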

3. Pre-process the image of the face Before asking what emotion the face is displaying, we need to crop and standardize it. The cropped window updates only when a face is successfully detected in the webcam stream.
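
A sketch of this cropping and standardisation step, assuming OpenCV's bundled Haar cascade is used for face detection and a 48x48 grayscale input size (both assumptions, since the slides do not name the detector or the size):

```python
# Sketch of detecting, cropping, and standardising a face from a frame.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def preprocess_face(frame, size=(48, 48)):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) == 0:
        return None                        # no face detected: keep the previous crop
    x, y, w, h = faces[0]                  # take the first detected face
    face = gray[y:y + h, x:x + w]          # crop the face region
    face = cv2.resize(face, size)          # standardise the size for the classifier
    return face / 255.0                    # scale pixel values to [0, 1]
```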

4. Detecting the face and its emotional expressions Here we extract facial features such as the eyes, nose, and mouth, and verify whether these parts actually form a face. The detector then classifies the facial expression as an emotion.
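
A sketch of how a trained CNN could classify the preprocessed face (the model file name emotion_model.h5 and the label order are hypothetical placeholders):

```python
# Sketch of classifying a preprocessed face with a trained CNN.
import numpy as np
from tensorflow.keras.models import load_model

EMOTIONS = ["anger", "happy", "sad", "neutral"]
model = load_model("emotion_model.h5")        # hypothetical saved model file

def detect_emotion(face):
    # face: 48x48 grayscale array scaled to [0, 1], as produced by preprocess_face()
    batch = face.reshape(1, 48, 48, 1)        # add batch and channel dimensions
    probabilities = model.predict(batch)[0]
    return EMOTIONS[int(np.argmax(probabilities))]
```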

5. Playing music based on the emotion When the emotion is detected successfully, background music is played. The winsound module provides access to the basic sound-playing machinery provided by Windows platforms.
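
A sketch of mapping the detected emotion to a song with winsound (Windows-only standard library; the .wav file names are hypothetical placeholders):

```python
# Sketch of playing a song for the detected emotion using winsound (Windows only).
import winsound

SONGS = {
    "happy": "happy_song.wav",
    "anger": "calm_song.wav",
    "sad": "cheerful_song.wav",
    "neutral": "neutral_song.wav",
}

def play_for_emotion(emotion):
    path = SONGS.get(emotion)
    if path:
        # SND_ASYNC returns immediately so the webcam loop keeps running.
        winsound.PlaySound(path, winsound.SND_FILENAME | winsound.SND_ASYNC)
```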

BLOCK DIAGRAM

UML DIAGRAMS Class Diagram:

Use case Diagram:

Sequence Diagram:

Activity Diagram:

TESTING In our project we use two types of testing. Unit testing: a type of software testing in which individual components of the software are tested. Integration testing: a type of software testing in which individual software modules are combined and tested as a group.
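
For illustration, a unit test for the emotion-recognition step could look like the sketch below; the module name emotion_app, the functions it imports, and the test image path are hypothetical stand-ins for the project's actual code and data:

```python
# Minimal unit-test sketch using Python's unittest module.
import unittest
import cv2
from emotion_app import preprocess_face, detect_emotion  # hypothetical module

class TestEmotionRecognition(unittest.TestCase):
    def test_happy_face(self):
        frame = cv2.imread("test_images/happy.jpg")       # hypothetical test image
        face = preprocess_face(frame)
        self.assertIsNotNone(face, "a face should be detected in the test image")
        self.assertEqual(detect_emotion(face), "happy")

if __name__ == "__main__":
    unittest.main()
```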

POSITIVE TEST CASES
Testcase ID | Testcase description | Input given | Expected Output | Actual Output | Result
1 | Emotion recognition | Image representing a happy face | Displays happy emoji and plays the song | Displays happy emoji and plays the song | Success
2 | Emotion recognition | Image representing an angry face | Displays angry emoji and plays the song | Displays angry emoji and plays the song | Success
3 | Emotion recognition | Image representing a sad face | Displays sad emoji and plays the song | Displays sad emoji and plays the song | Success
4 | Emotion recognition | Image representing a neutral face | Displays neutral emoji and plays the song | Displays neutral emoji and plays the song | Success

NEGATIVE TEST CASES
Testcase ID | Testcase description | Input given | Expected Output | Actual Output | Result
1 | Emotion recognition | Image representing a happy face | Displays happy emoji and plays the song | Displays angry emoji and plays the song | Fail
2 | Emotion recognition | Image representing an angry face | Displays angry emoji and plays the song | Displays neutral emoji and plays the song | Fail
3 | Emotion recognition | Image representing a neutral face | Displays neutral emoji and plays the song | Displays angry emoji and plays the song | Fail
4 | Emotion recognition | Image representing a sad face | Displays sad emoji and plays the song | Displays neutral emoji and plays the song | Fail

TESTCASE SCREENSHOTS Fig-1: Image with happy face

Fig-2: Image with neutral face

Fig-3: Image with angry face

Fig-4: Image with sad face

OUTPUTS Fig-1.1: Happy Emotion Detection

Fig-2.1: Neutral Emotion Detection

Fig-3.1: Angry Emotion Detection

Fig-4.1: Sad Emotion Detection

CONCLUSION The main aim of this project is to recognize a human's emotion from a detected face and play music according to the detected emotion. The proposed system uses less code and yields better accuracy, performance, and computational time. It is simple and fast. Future Enhancement: The present system detects only four emotions, but there is scope to detect many more, and to recognize emotion from voice.

THANK YOU…