Artificial Intelligence And Face Detection

Muhammad Adnan Bashir · May 30, 2024

About This Presentation

An AI project-based application.


Slide Content

Thesis Presentation Outline
01 Title Slide
02 Introduction
03 Literature Review
04 Related Work
05 Concept of Facial Expression and Emotion
06 Theory of FER
07 Proposed Research
08 Implementation
09 Experimental Results
Future Directions

MONITORING STUDENTS' ENGAGEMENT IN CLASS USING FACIAL EMOTION RECOGNITION
Name: Muhammad Adnan Bashir
Advisor: Dr. Shaista Habib
School of System and Technology, University of Management and Technology

INTRODUCTION

The automatic identification of facial expressions has long been a vital area of research. Facial expressions are the most meaningful way humans exhibit emotions, yet automatic emotion recognition remains a contested problem. When students in a classroom are exposed to circumstances drawn from real life, they experience a wide spectrum of emotions at any given time. We propose a mechanism in which students' facial expressions are used to automatically determine whether they are paying attention and how they are feeling, i.e., whether they are active or not active.

LITERATURE REVIEW

RELATED WORK

Divjak and Bischof evaluated three factors (eye tracking, head movement, and eye distance); when inattention was detected, an alarm would sound as a notification. Kamath created an automated gaze system to find out how interested the students were: using video from the classroom, they devised a way to measure student engagement, employing a face-tracking system to determine whether students were looking at the screen. Bidwell used the histogram of oriented gradients (HOG) to analyze the input images.

Turabzadeh investigated the Local Binary Pattern (LBP) algorithm for recognizing facial emotions in real time. Kithira proposed a real-time system based on how a student's face shows emotion during a lesson: based on the student's emotions, the system would automatically adjust the content to the student's level of concentration, distinguishing three levels of focus (high, medium, and low). Sharma monitored students' eye and head movements to check whether they were paying attention and sounded an alarm when they were not.

Title | Methods | Results
Face Recognition and FER using PCA | PCA | Better face recognition rate with a low error rate
A Real-Time Facial Expression Classification System Using LBP | Haar classifier | High face detection rate; produces more efficient features
Automated Facial Expression Recognition System | Constraint model with SVM |
FER using Optical Flow without Complex Feature Extraction | Uniform grids, MLP | 95.1%
A neural-AdaBoost based facial expression recognition system | Viola-Jones, Gabor feature extraction, AdaBoost, MNN-NP | 96.83% and 92.22%, respectively
Facial Expression Recognition 2D appearance-based Model | Radial symmetry transform | Accuracy of 81%
Facial action unit recognition using multi-class classification | MLBP, FCBF, ECOC | Good results achieved by the multi-class classifier
Recognition of facial expressions and measurement of levels of interest from video | Grid points, HMMs | 90%
A multi-task model for simultaneous face identification and FER | MT-FIM | 92.57%
An SOM-based Automatic FER System | Viola-Jones, Gabor wavelet filters, landmark point tracking | Recognition rate over 90%

MOTIVATION & CONTRIBUTION

Motivation: In a competitive environment, it is critical to use technology and learning strategies to help students adapt. Many schools have implemented e-learning systems in recent decades to improve learning attention and efficiency. Contextual face perception provides relevant context for processing facial expressions.

Contribution: We present a new way to recognize facial expressions together with a measure of attention. A synopsis of the facial expressions of emotion categorized by various researchers over time is included. Each study's weaknesses have been identified and addressed, offering a framework for future research. To give new researchers unified knowledge on a single platform, the literature on facial expression detection using image and video classification has been surveyed and collected.

Concept of Facial Expression and Emotion

Explainable Facial Expressions and Emotions
Expressions: a form of nonverbal signaling using the movement of facial muscles.
Emotions: in neurobiological terms, complex action plans that are triggered by external or internal inputs.

Facial Expression Analysis Techniques

Facial electromyography (FEMG): measures the electrical activity of the facial muscles.

The Facial Action Coding System (FACS): based on facial anatomical features; facial expressions are broken down into their component parts, referred to as "Action Units". Facial action unit coding provides the knowledge required to differentiate between three categories of facial expressions:
- Macro expressions: occur in everyday situations, last between 0.5 and 4 seconds, and can be seen with the naked eye.
- Micro expressions: short facial movements that last less than half a second; they happen when a person is trying to hide or repress their current emotional state, whether consciously or not.
- Subtle expressions: the first sign of a facial expression, when the emotion it shows is still mild.

Facial Emotions Analysis
01 Behavior patterns: such as "fight or flight," designed either to immediately remove oneself from a potentially harmful situation or to prepare for a physical confrontation with an adversary.
02 Bodily symptoms: emotions and physical and mental arousal are inextricably linked, and different levels of arousal are associated with different emotions. Most of these symptoms occur unconsciously and cannot be consciously controlled.
03 Cognitive evaluations: appraisals of the activities, stimuli, or things being considered.
04 Facial expressions: for example, showing one's teeth and frowning are both signs of displeasure.

Classification of Facial Emotions

Humans can undoubtedly make thousands of slightly different combinations of facial expressions. There are six basic emotions: happiness, sadness, fear, anger, surprise, and disgust.

Emotion | Definition | Motion of facial parts
Happiness | Joy, pride, relief, hope, pleasure, and excitement. | Open eyes, mouth corners pulled up, open mouth, elevated cheeks, and wrinkles around the eyes.
Sadness | The opposite emotion of joy; secondary emotions are suffering, pain, sadness, sympathy, and despair. | Outer brow lowered, inner brow corner lifted, mouth edge lowered, eyes closed, lip corners pushed down.
Fear | Danger is the source of fear, whether due to the possibility of bodily or mental injury; secondary emotions are horror, anxiety, panic, and dread. |
Surprise | Arises when the unexpected occurs; secondary emotions include astonishment and amazement. | Raised eyebrows, open eyes, open mouth, and a lowered jaw.
Disgust | A feeling of revulsion; it can be triggered by any taste, smell, sound, or texture. | Lip corner depressor, nose wrinkle, lower lip depressor, and lowered eyebrows.
Anger | Among the most perilous emotions; it can be detrimental, so humans attempt to avoid it. Secondary emotions are irritation, annoyance, exasperation, hatred, and aversion. | Eyebrows pulled down, mouth closed, lips tightened, upper and lower lids pulled up.

Theory of FER

In this context, "Facial Emotion Recognition" is a type of software that can decipher emotional states in still images and moving video.

The three phases of FER analysis are as follows:
(a) identification of the face;
(b) identification of the facial expression;
(c) classification of the facial expression.
Emotional states can then be inferred by analyzing the positions of key facial landmarks (e.g., the tip of the nose, the eyebrows).
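As a rough illustration (not taken from the slides), the three phases can be arranged into a single pipeline. OpenCV's bundled Haar cascade covers phase (a), and `classify_expression` is a hypothetical stand-in for whatever expression classifier is used in phase (c):

```python
import cv2

# OpenCV's bundled frontal-face Haar cascade, used for phase (a).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def fer_pipeline(image_bgr, classify_expression):
    """Run the three FER phases on one BGR image.

    classify_expression is any callable mapping a grayscale face
    crop to an emotion label (stand-in for phase c).
    """
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # (a) identification of the face
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    results = []
    for (x, y, w, h) in faces:
        # (b) isolate the region holding the facial expression
        face_roi = gray[y:y + h, x:x + w]
        # (c) classification of the facial expression
        results.append(((x, y, w, h), classify_expression(face_roi)))
    return results
```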

METHODOLOGY

Facial Recognition Methods
1. Point-based display
2. Stick-figure models
3. 3D XFace animations

Proposed Research

We propose a system that uses deep convolutional neural network techniques to analyze the emotions present in a classroom setting. Real-time webcam footage from the classroom is subjected to key frame extraction, which enables analysis of the students' emotional expressions; a sketch of this step is shown below. Using cameras to monitor classrooms is a non-intrusive way to digitize student behavior. The purpose of the proposed system is to determine the students' level of concentration in the classroom during a lecture. This is done by continuously monitoring head rotation and eye movement. Once facial characteristics have been identified, the system determines whether the student is attentive. The system can be used during a live lecture on a laptop with an integrated web camera. The first check is then whether the student's eyes can be detected.
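A minimal sketch of the key-frame extraction step, assuming a simple mean-difference heuristic (the slides do not specify how key frames are chosen, so the threshold and frame cap below are illustrative assumptions):

```python
import cv2

def extract_key_frames(source=0, diff_threshold=30.0, max_frames=500):
    """Keep a frame only when it differs enough from the last kept frame."""
    cap = cv2.VideoCapture(source)  # 0 = default webcam, or a video file path
    prev_gray, key_frames = None, []
    for _ in range(max_frames):
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Mean absolute pixel difference serves as a crude change detector,
        # so near-duplicate frames are skipped.
        if prev_gray is None or cv2.absdiff(gray, prev_gray).mean() > diff_threshold:
            key_frames.append(frame)
            prev_gray = gray
    cap.release()
    return key_frames
```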

Workflow of the proposed system (Methodology): the system consists of two models, one for detecting engagement and one for detecting emotions. For engagement detection, only the eye region of the face is examined, with the aid of a Haar-cascade classifier. Once the eye region has been segmented, the image is manually labeled as "engaged" or "not engaged". The CNN is then trained on these manually annotated datasets, and the engagement probability is computed from the CNN model.
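A hedged sketch of this workflow: OpenCV's Haar eye cascade segments the eye region, and a small binary CNN outputs the engagement probability. The CNN architecture and the 48x48 input size are assumptions for illustration; the slide does not specify them.

```python
import cv2
from tensorflow.keras import layers, models

# OpenCV's bundled eye cascade for segmenting the eye region.
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def segment_eye_regions(gray_face):
    """Crop detected eye patches and resize them to the CNN input size."""
    eyes = eye_cascade.detectMultiScale(gray_face)
    return [cv2.resize(gray_face[y:y + h, x:x + w], (48, 48))
            for (x, y, w, h) in eyes]

def build_engagement_cnn(input_shape=(48, 48, 1)):
    """Small binary CNN whose sigmoid output is read as P(engaged)."""
    model = models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # engagement probability
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```

The model would be trained on the manually annotated "engaged"/"not engaged" eye crops described above.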

Proposed Approach for Engagement Detection

In our model, we consider students' engagement levels during the learning process; we do not, however, consider whether the student actually learned. We divide the level of engagement into two groups, Active and Not Active:

Active
- Positions: neutral emotion (front); smile (front); frown (front); neutral emotion, eyes to the left (front); neutral emotion, eyes to the right (front); smile, eyes to the left (front); smile, eyes to the right (front); teeth smile (front); teeth smile, eyes to the left (front); teeth smile, eyes to the right (front).
- Perceived expressions/emotions: wide eyes; dilated pupils; slight frown; staring with half-lidded eyes; nodding; pursing one's lips; making eye contact when listening or conversing.

Not Active
- Positions: looking down (towards the ground); looking down (right side); looking down (left side); neutral emotion, looking forward (front); smile, looking forward (right side); smile, looking forward (left side); hand over the mouth, looking forward (front).
- Perceived expressions/emotions: blank stare; glazed eyes; minimal eye contact; yawning; closing or half-closing one's eyes; propping one's head in hands; unfocused gaze; rapid blinking; squinting.
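As a toy illustration of the two groups, the cues above can be encoded as a lookup. The tag names below are hypothetical labels a detector might emit, not identifiers from the project:

```python
# Hypothetical cue tags, drawn loosely from the two lists above.
ACTIVE_CUES = {"eye_contact", "nodding", "wide_eyes", "smile_front",
               "slight_frown"}
NOT_ACTIVE_CUES = {"looking_down", "yawning", "blank_stare",
                   "rapid_blinking", "head_in_hands"}

def engagement_label(detected_cues: set) -> str:
    """Vote between the two groups by counting matching cues."""
    active = len(detected_cues & ACTIVE_CUES)
    inactive = len(detected_cues & NOT_ACTIVE_CUES)
    return "Active" if active >= inactive else "Not Active"
```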

IMPLEMENTATION

Three cases are defined to detect matching: a match exists if the emotion is found in the proposed affective model and is compatible with the engagement level; no match exists if the emotion is not compatible with the engagement level; and the emotion is treated as new if the volunteer reports an emotion not found in the proposed affective model. A small sketch of this dispatch follows.
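The three cases translate directly into a small dispatch function. The affective-model contents below are assumptions for illustration only, not data from the thesis:

```python
# Illustrative affective model: emotion -> compatible engagement levels.
AFFECTIVE_MODEL = {
    "happiness": {"Active"},
    "surprise": {"Active"},
    "sadness": {"Not Active"},
}

def match_case(emotion: str, engagement_level: str) -> str:
    if emotion not in AFFECTIVE_MODEL:
        return "new emotion"   # case 3: not in the proposed affective model
    if engagement_level in AFFECTIVE_MODEL[emotion]:
        return "match"         # case 1: found and compatible
    return "no match"          # case 2: incompatible with engagement level
```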

Application Work

This application, based entirely on the theory of compound emotions, seeks to classify human emotions into two distinct groups, active and inactive, using deep convolutional neural networks. When processing videos in Python, we use the Haar cascade module together with OpenCV, an open-source computer vision library. To begin, we capture footage from the main computer's camera, or from any other video in which a person's face can be identified.
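A minimal sketch of this capture-and-detect loop with OpenCV's Haar cascade module (the detection parameters are common defaults, not values from the slides):

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # 0 = main computer's camera; pass a path for a file
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Draw a box around each detected face in the frame.
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```

Each detected face crop would then be passed to the CNN classifiers described above.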

RESULTS

Future Directions

All the engagement levels and feelings included in this model were chosen based on similar prior work. Active and inactive participation are the two available levels of engagement. We offer a few recommendations aimed at enhancing the affective model; the disengagement level in particular needs more investigation. One of our upcoming projects will therefore analyze students' physical and psychological activities while they experience various emotions, in order to define a comprehensive behavior profile for each level of engagement.

THANKS

Questions?