Keynote speaker .pptx

researchcollabarator · 17 views · 36 slides · May 04, 2024

About This Presentation

With a rapidly changing regulatory landscape on AI, and in light of continued media hype and hope surrounding chatbots, knowing how the field of psychiatry currently understands the potential of these tools and seeks to use them in the future can help guide the development of LLMs and ensure they ar...


Slide Content

Improving Face Detection Techniques and Applications for Image Processing
By Dr. Gururaj H L, Associate Professor, Dept. of Information Technology, Manipal Institute of Technology, Bengaluru

Introduction to Biometrics
A biometric is a unique, measurable characteristic of a human being used to automatically recognize an individual's identity.

Introduction to Biometrics
It includes both physiological and behavioral characteristics. Behavioral characteristics are referred to as soft biometrics.
Physiological: face, fingerprint
Behavioral: signatures, speech

Biometric Modalities
Periocular Recognition: Utilizing features around the eyes for authentication, periocular recognition offers robustness against varying lighting conditions and facial expressions.
Ear Recognition: Leveraging the unique characteristics of the ear, ear recognition systems are gaining traction for their accuracy and resistance to spoofing attacks.
Vein Recognition: By capturing the vein patterns beneath the skin's surface, vein recognition provides a highly secure biometric modality suitable for applications requiring a high level of security.
EEG-Based Authentication: Harnessing brainwave patterns measured through electroencephalography (EEG), EEG-based authentication offers a novel approach to biometric recognition with potential applications in mental state authentication and neuroadaptive systems.

Emerging Technologies
Odor Recognition: Recent advancements in sensor technology have enabled the development of odor recognition systems, which identify individuals based on their unique scent signatures.
Heartbeat Biometrics: Utilizing the unique characteristics of an individual's heartbeat, heartbeat biometrics offer a non-intrusive and continuous authentication solution with applications in wearable devices and healthcare.
Brainwave-Based Authentication: By analyzing patterns in brainwave signals, brainwave-based authentication systems provide a highly secure and user-friendly biometric modality, albeit with challenges related to signal acquisition and processing.

Face Recognition
Face recognition is the process of identifying and verifying faces. It uses unique facial features such as face shape, nose tip, eyes, and lips to identify a person.
Face recognition techniques are categorized into 2D and 3D techniques.
Image-based systems recognize individuals based only on their physical appearance, while video-based systems also incorporate changes in appearance over time and dynamic facial movements.

Face Recognition
In face recognition there are two types of comparison:

Identification: answers "Who is X?" via a one-to-many (1:M) search.

Verification (matching): answers the question "Is this X?" via a one-to-one (1:1) search.
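
To make the 1:1 vs. 1:M distinction concrete, here is a minimal sketch, assuming faces have already been converted into fixed-length embedding vectors by some feature extractor; the similarity threshold is an illustrative value, not taken from the slides.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe, enrolled, threshold=0.6):
    # Verification, 1:1 comparison: "Is this X?"
    return cosine_similarity(probe, enrolled) >= threshold

def identify(probe, gallery):
    # Identification, 1:M comparison: "Who is X?"
    # gallery: dict mapping identity name -> enrolled embedding.
    scores = {name: cosine_similarity(probe, emb) for name, emb in gallery.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]
```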

Applications

How Face Recognition Works

Techniques
2D: Analyzing facial features in two-dimensional images
3D: Utilizing depth information for more accurate recognition
Deep Learning: Training neural networks to recognize faces

Methodology
The method that focuses on the entire face is called the global method; the method that focuses on a specific region of the face is called the local method.

Feature Extraction Methods
There are three feature extraction methods:
Generic method
Feature template-based method
Structural matching

Feature Extraction Methods
Generic method: relies on identifying edges, lines, and curves.

Feature Extraction Methods
Feature template-based method: detects specific facial features such as the eyes, lips, and nose.

Feature Extraction Methods
Structural matching: considers geometrical constraints on facial features, ensuring they match specific structural patterns.
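
As a rough illustration of structural matching, the sketch below compares scale-invariant distance ratios between a few named landmarks; the landmark names, the chosen ratios, and the tolerance are assumptions made for the example, not the slides' method.

```python
import numpy as np

def geometric_signature(landmarks):
    # landmarks: dict of named 2D points, e.g. {"left_eye": (x, y), ...}
    # Build a small set of distance ratios that are invariant to image scale.
    le = np.array(landmarks["left_eye"], dtype=float)
    re = np.array(landmarks["right_eye"], dtype=float)
    nose = np.array(landmarks["nose_tip"], dtype=float)
    mouth = np.array(landmarks["mouth_center"], dtype=float)
    eye_dist = np.linalg.norm(le - re)
    return np.array([
        np.linalg.norm(nose - (le + re) / 2) / eye_dist,  # eye line to nose tip
        np.linalg.norm(mouth - nose) / eye_dist,          # nose tip to mouth
    ])

def structurally_matches(landmarks_a, landmarks_b, tol=0.15):
    # Two faces "match structurally" if their ratio signatures are close.
    diff = geometric_signature(landmarks_a) - geometric_signature(landmarks_b)
    return float(np.linalg.norm(diff)) < tol
```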

Challenges
Lighting Conditions: Variations in lighting affecting image quality
Pose Variations: Different angles and orientations of faces
Occlusions: Obstructions such as glasses, hats, or facial hair
Aging and Appearance Changes: Changes in appearance over time

Challenges
Ethnic and Gender Biases: Biases in algorithms affecting recognition accuracy
Privacy Concerns: Risks associated with the collection and storage of biometric data
Deepfake Detection

Recent Advancements in Face Recognition
Deep Learning Breakthroughs: Significant improvements in accuracy and performance
Convolutional Neural Networks (CNNs): Adoption of CNNs for feature extraction
Real-Time Systems: Development of systems capable of processing faces in real time
Improved Accuracy and Robustness: Enhanced models capable of handling various challenges
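
The point about CNNs as feature extractors can be sketched as follows; an ImageNet-pretrained ResNet-18 from torchvision stands in here for a face-specific network, which in practice would be trained or fine-tuned on face data rather than used off the shelf.

```python
import torch
import torchvision

# A generic ImageNet-pretrained backbone as a stand-in for a face-specific CNN.
weights = torchvision.models.ResNet18_Weights.DEFAULT
model = torchvision.models.resnet18(weights=weights)
model.fc = torch.nn.Identity()      # drop the classifier; keep the 512-d feature vector
model.eval()

preprocess = weights.transforms()   # resize, crop, and normalize as the backbone expects

def embed(face_image):
    # face_image: a PIL image of an already-detected, cropped face.
    with torch.no_grad():
        return model(preprocess(face_image).unsqueeze(0)).squeeze(0)  # shape: (512,)
```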

Pose Estimation and Correction
1. Face identification
2. Cropping and resizing
3. Facial feature points
4. Alignment with 3D models
5. Frontal view rendering

Pose Estimation and Correction
A pre-existing face detector is utilized to identify and locate a face in the given image. The identified face is then cropped and resized to conform to a standardized coordinate system, and facial feature points are identified on the standardized face. To create a frontal view of the face, the appearance of the query photo is projected onto the reference coordinate system. In cases where facial features are not adequately visible due to the pose of the original image, the algorithm generates the final frontalized face by incorporating visual features from the corresponding symmetric side of the face.
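
A simplified 2D stand-in for the alignment step described above: landmarks detected in the query image are mapped onto a canonical template with a similarity transform. The template coordinates and output size are illustrative assumptions, and this omits the 3D model and symmetry-based fill-in mentioned in the slide.

```python
import cv2
import numpy as np

# Canonical positions of five landmarks (eye centers, nose tip, mouth corners)
# in a 112x112 reference frame; the exact coordinates are illustrative.
TEMPLATE = np.float32([[38, 45], [74, 45], [56, 65], [42, 88], [70, 88]])

def align_face(image, landmarks, size=(112, 112)):
    """2D similarity alignment of a detected face onto the reference frame.

    landmarks: np.float32 array of the same five points detected in `image`.
    """
    matrix, _ = cv2.estimateAffinePartial2D(np.float32(landmarks), TEMPLATE)
    return cv2.warpAffine(image, matrix, size)
```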

Pose Estimation and Correction

Blur Measure and Deblurring
Utilize two distinct measures to evaluate facial attributes:
Edge density measurement: calculate the average magnitude of the gradient across the person's face
Sharpness measure using a low-pass filter
This combined approach enhances the ability to evaluate and understand the quality of facial images.

Edge Density Measurement
Compute the gradient magnitude at each pixel; this represents the change in pixel value. Define the ROI and compute the average magnitude over it.
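
A possible implementation of this measure, averaging Sobel gradient magnitudes over a region of interest; the ROI convention and kernel size are assumptions for the example.

```python
import cv2
import numpy as np

def edge_density(gray_face, roi=None):
    """Average gradient magnitude over a region of interest of a grayscale face.

    roi: optional (x, y, w, h) box; defaults to the whole image.
    """
    if roi is not None:
        x, y, w, h = roi
        gray_face = gray_face[y:y + h, x:x + w]
    gx = cv2.Sobel(gray_face, cv2.CV_64F, 1, 0, ksize=3)  # horizontal change in pixel value
    gy = cv2.Sobel(gray_face, cv2.CV_64F, 0, 1, ksize=3)  # vertical change in pixel value
    return float(np.mean(np.sqrt(gx ** 2 + gy ** 2)))
```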

Sharpness Measurement
The sharpness measure is employed by applying a low-pass filter to the image. After applying the low-pass filter, sharpness is measured by evaluating the pixel values in the filtered image. This step captures the overall clarity and fine details present in the facial structures.
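
One way to turn this description into a number: blur the face with a Gaussian low-pass filter and measure how much detail the blur removes; the kernel size and the energy-of-residual scoring are illustrative choices rather than the slides' exact formulation.

```python
import cv2
import numpy as np

def sharpness(gray_face, ksize=9):
    """Sharpness score from a low-pass filter.

    The Gaussian blur keeps only low frequencies; the residual between the
    original and the blurred image is the high-frequency detail, so its mean
    energy serves as a sharpness measure.
    """
    low_pass = cv2.GaussianBlur(gray_face, (ksize, ksize), 0)
    residual = gray_face.astype(np.float64) - low_pass.astype(np.float64)
    return float(np.mean(residual ** 2))
```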

Illumination Measure
The Weber face serves as a unique representation that effectively captures local salient patterns within the input image. The Weber face demonstrates robustness to variations in illumination, making it a valuable tool for extracting meaningful features from facial data.
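
A sketch of the standard Weber-face computation over a 3x3 neighborhood; the alpha and eps constants are illustrative tuning choices.

```python
import cv2
import numpy as np

def weber_face(gray_face, alpha=2.0, eps=1.0):
    """Weber-face illumination-insensitive representation.

    WF(x, y) = arctan(alpha * sum over 3x3 neighbors of (I(x, y) - I_neighbor) / I(x, y))
    """
    img = gray_face.astype(np.float64)
    # 8*center - sum(neighbors) computes the summed local differences in one pass.
    kernel = np.array([[-1, -1, -1],
                       [-1,  8, -1],
                       [-1, -1, -1]], dtype=np.float64)
    local_diff = cv2.filter2D(img, -1, kernel)
    return np.arctan(alpha * local_diff / (img + eps))  # eps avoids division by zero
```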

Future Directions
Multi-Modal Fusion
Explainable AI and Interpretability
Edge Computing and Privacy-Preserving Technologies
Human-Centric Design and Inclusive Technologies

Deepfake Detection
Deepfakes are manipulated videos or images created using deep learning techniques. They have the potential to spread misinformation, threaten privacy, and manipulate public opinion. GAN models play a major role in generating deepfake images.

Deepfake Manipulation
Entire face synthesis
Attribute manipulation
Identity swap
Expression swap

Audio-Visual Deepfake Detection

Deepfake Challenges
Generalization: The characteristics of generated deepfakes depend on the training dataset, necessitating large datasets to achieve specific characteristics.
Temporal coherence: Lack of consistency between frames leads to visible abnormalities such as flickering and jittering in deepfake videos.
Illumination conditions: Changes in lighting conditions, especially in indoor/outdoor scenarios, result in color discrepancies and odd abnormalities in deepfake output.
Lack of realism in eyes and lips: Difficulty in achieving natural emotions, interruptions, and synchronization of eye and lip movements, affecting the realism of deepfake videos.

Deepfake Challenges
Identity leakage: Preserving the target identity is challenging, especially in face reenactment tasks, due to discrepancies between target and driving identities during training.
Multi-tasking: Simultaneously performing forgery localization and deepfake detection has been identified as a practical approach to enhance accuracy in deepfake detection tasks.
Triplet training: A method aimed at minimizing the distance between samples of the same category while simultaneously maximizing the distance between features of different categories in the feature space.
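
Triplet training, as described in the last point, can be sketched with the usual triplet margin loss over batches of embeddings; the margin value below is illustrative.

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Pull same-category embeddings together, push different-category ones apart.

    anchor/positive share a category; negative comes from a different category.
    Equivalent in spirit to torch.nn.TripletMarginLoss.
    """
    d_pos = F.pairwise_distance(anchor, positive)  # distance within the same category
    d_neg = F.pairwise_distance(anchor, negative)  # distance across categories
    return torch.clamp(d_pos - d_neg + margin, min=0).mean()
```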

Mental Health Monitoring
Implementation of facial analysis tools in mental health clinics, leading to improved patient outcomes and treatment adherence.
Successful use of facial recognition in cybercrime investigations.

Deepfake Prediction in Elections
By implementing proactive measures, stakeholders can enhance resilience against deepfake manipulation in elections, safeguard democratic processes, and uphold the integrity of electoral systems.

Thank You