About This Presentation
With a rapidly changing regulatory landscape on AI, and in light of continued media hype and hope surrounding chatbots, knowing how the field of psychiatry currently understands the potential of these tools and seeks to use them in the future can help guide the development of LLMs and ensure they are implemented safely and in alignment with the field's needs. Several studies revealed that 44% of psychiatrists had used OpenAI's ChatGPT-3.5 and 33% had used GPT-4.0 to assist with answering clinical questions. Therefore, the aim of this study was to gauge the views of psychiatrists about these tools.
Slide Content
Improving Face Detection Techniques and Applications for Image Processing
By Dr. Gururaj H L, Associate Professor, Dept. of Information Technology, Manipal Institute of Technology, Bengaluru
Introduction to Biometrics
A biometric is a unique, measurable characteristic of a human being used to automatically recognize an individual's identity.
Introduction to Biometrics
Biometrics include both physiological and behavioral characteristics; behavioral characteristics are also referred to as soft biometrics.
Physiological: face, fingerprint. Behavioral: signature, speech.
Biometric Modalities
Periocular Recognition: Utilizing features around the eyes for authentication, periocular recognition offers robustness against varying lighting conditions and facial expressions.
Ear Recognition: Leveraging the unique characteristics of the ear, ear recognition systems are gaining traction for their accuracy and resistance to spoofing attacks.
Vein Recognition: By capturing the vein patterns beneath the skin's surface, vein recognition provides a highly secure biometric modality suitable for applications requiring a high level of security.
EEG-Based Authentication: Harnessing brainwave patterns measured through electroencephalography (EEG), EEG-based authentication offers a novel approach to biometric recognition with potential applications in mental state authentication and neuroadaptive systems.
Emerging Technologies
Odor Recognition: Recent advancements in sensor technology have enabled the development of odor recognition systems, which identify individuals based on their unique scent signatures.
Heartbeat Biometrics: Utilizing the unique characteristics of an individual's heartbeat, heartbeat biometrics offer a non-intrusive and continuous authentication solution with applications in wearable devices and healthcare.
Brainwave-Based Authentication: By analyzing patterns in brainwave signals, brainwave-based authentication systems provide a highly secure and user-friendly biometric modality, albeit with challenges related to signal acquisition and processing.
Face Recognition
Face recognition is the process of identifying and verifying faces. It uses unique facial features such as face shape, nose tip, eyes, and lips to identify a person.
Face recognition techniques are categorized into 2D and 3D techniques.
Image-based systems recognize individuals based only on their physical appearance, while video-based systems also incorporate changes in appearance over time and dynamic facial movements.
Face Recognition
In face recognition there are two types of comparison: identification and verification.
Face Recognition: Identification
Figure out "Who is X?" A one-to-many (1:M) search.
Face Recognition: Verification
Answer the question "Is this X?" A one-to-one (1:1) matching comparison (see the embedding-based sketch below).
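To make the 1:1 versus 1:M distinction concrete, here is a minimal sketch, assuming face images have already been converted to fixed-length embedding vectors by some recognition model; the function names, the cosine-similarity score, and the 0.6 threshold are illustrative assumptions, not taken from the presentation.

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity between two face embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe_emb, claimed_emb, threshold=0.6):
    """1:1 verification -- 'Is this X?': compare against one enrolled person."""
    return cosine_similarity(probe_emb, claimed_emb) >= threshold

def identify(probe_emb, gallery, threshold=0.6):
    """1:M identification -- 'Who is X?': search the whole enrolled gallery.
    `gallery` maps person IDs to their enrolled embeddings."""
    best_id, best_score = None, -1.0
    for person_id, emb in gallery.items():
        score = cosine_similarity(probe_emb, emb)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id if best_score >= threshold else None

# Toy usage with random 128-D embeddings standing in for model output.
rng = np.random.default_rng(0)
gallery = {"alice": rng.normal(size=128), "bob": rng.normal(size=128)}
probe = gallery["alice"] + 0.05 * rng.normal(size=128)   # noisy re-capture
print(verify(probe, gallery["alice"]))   # expected: True
print(identify(probe, gallery))          # expected: "alice"
```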
Applications
How Face Recognition Works
Techniques
2D: analyzing facial features in two-dimensional images.
3D: utilizing depth information for more accurate recognition.
Deep learning: training neural networks to recognize faces.
Methodology
A method that analyzes the entire face is called a global method; a method that focuses on specific regions of the face is called a local method.
Feature Extraction Methods
There are three feature extraction methods: the generic method, the feature template-based method, and structural matching.
Feature Extraction Methods
Generic method: relies on identifying edges, lines, and curves (see the OpenCV sketch after the structural matching slide).
Feature Extraction Methods
Feature template-based method: detects specific facial features such as the eyes, lips, and nose.
Feature Extraction Methods
Structural matching: considers geometrical constraints on facial features, ensuring they match specific structural patterns.
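As a rough illustration of the first two methods, here is a minimal OpenCV sketch: Canny edges plus Hough line segments stand in for the generic method, and a stock Haar eye cascade stands in for a feature template-based detector. The input file name and all parameter values are illustrative assumptions.

```python
import cv2
import numpy as np

def generic_features(gray):
    """Generic method: low-level edges and line segments from the face image."""
    edges = cv2.Canny(gray, 100, 200)                        # edge map
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=50,
                            minLineLength=20, maxLineGap=5)   # straight segments
    return edges, lines

def template_features(gray):
    """Feature template-based method: locate a specific feature (the eyes)
    with a pre-trained Haar cascade shipped with OpenCV."""
    eye_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")
    return eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

gray = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical input image
if gray is not None:
    edges, lines = generic_features(gray)
    print("eye boxes (x, y, w, h):", template_features(gray))
```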
Challenges
Lighting conditions: variations in lighting affecting image quality.
Pose variations: different angles and orientations of faces.
Occlusions: obstructions such as glasses, hats, or facial hair.
Aging and appearance changes: changes in appearance over time.
Challenges
Ethnic and gender biases: biases in algorithms affecting recognition accuracy.
Privacy concerns: risks associated with the collection and storage of biometric data.
Deepfake detection: distinguishing genuine faces from synthetically generated ones.
Recent Advancements in Face Recognition
Deep learning breakthroughs: significant improvements in accuracy and performance.
Convolutional neural networks (CNNs): adoption of CNNs for feature extraction (a toy CNN feature extractor is sketched below).
Real-time systems: development of systems capable of processing faces in real time.
Improved accuracy and robustness: enhanced models capable of handling various challenges.
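As an illustration of CNN-based feature extraction, here is a minimal PyTorch sketch; the layer sizes, the 112x112 input resolution, and the 128-dimensional embedding are arbitrary demonstration choices and do not correspond to any model discussed in the slides.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FaceEmbeddingNet(nn.Module):
    """Toy CNN mapping a 3x112x112 face crop to a 128-D embedding."""
    def __init__(self, embedding_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # 112 -> 56
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # 56 -> 28
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                      # global average pool
        )
        self.fc = nn.Linear(128, embedding_dim)

    def forward(self, x):
        x = self.features(x).flatten(1)
        return F.normalize(self.fc(x), dim=1)             # unit-length embedding

# Usage: a batch of 4 aligned face crops (random tensors stand in for images).
embeddings = FaceEmbeddingNet()(torch.randn(4, 3, 112, 112))
print(embeddings.shape)   # torch.Size([4, 128])
```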
Pose Estimation and Correction
1. Face identification
2. Cropping and resizing
3. Facial feature points
4. Alignment with 3D models
5. Frontal-view rendering
Pose Estimation and Correction
A pre-existing face detector is used to identify and locate a face in the given image. The detected face is then cropped and resized to conform to a standardized coordinate system, and facial feature points are identified on the standardized face.
To create a frontal view of the face, the appearance of the query photo is projected onto the reference coordinate system. Where facial features are not adequately visible because of the pose of the original image, the algorithm generates the final frontalized face by borrowing visual features from the corresponding symmetric side of the face.
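A minimal sketch of the first alignment steps, assuming OpenCV: a stock Haar-cascade face detector stands in for the "pre-existing face detector", and a two-point similarity transform maps the eye centres to fixed positions in a standard crop. The target eye positions and crop size are illustrative, and the rest of the pipeline on the slide (3D model alignment, frontal rendering, symmetric in-filling) is not reproduced here.

```python
import cv2
import numpy as np

def detect_face(gray):
    """Step 1: locate the face with a pre-trained detector shipped with OpenCV."""
    det = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    boxes = det.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return boxes[0] if len(boxes) else None          # (x, y, w, h) of first face

def align_face(image, left_eye, right_eye, out_size=(112, 112)):
    """Steps 2-3 (simplified): map the detected eye centres to fixed points
    in a standardized coordinate system via a similarity transform."""
    dst = np.float32([[0.3 * out_size[0], 0.35 * out_size[1]],
                      [0.7 * out_size[0], 0.35 * out_size[1]]])
    src = np.float32([left_eye, right_eye])
    matrix, _ = cv2.estimateAffinePartial2D(src, dst)  # rotation + scale + shift
    return cv2.warpAffine(image, matrix, out_size)
```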
Pose Estimation and Correction (illustrative figure)
Blur Measure and Deblurring
Two distinct measures are used to evaluate facial image quality:
Edge density measurement: calculate the average magnitude of the gradient across the person's face.
Sharpness measure using a low-pass filter.
This combined approach enhances the ability to evaluate and understand the quality of facial images.
Edge Density Measurement
Compute the gradient magnitude at each pixel, define a region of interest (ROI), and compute the average magnitude over that region; this represents the change in pixel values (a combined sketch of both blur measures follows the sharpness slide).
Sharpness Measurement
The sharpness measure is obtained by applying a low-pass filter to the image. After filtering, sharpness is measured by evaluating the pixel values of the filtered image relative to the original, which captures the overall clarity and fine detail present in the facial structures.
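A minimal sketch of both quality measures, assuming OpenCV and NumPy: edge density as the mean Sobel gradient magnitude over an optional ROI, and sharpness as the mean absolute difference between the image and a Gaussian low-pass copy. The kernel sizes and exact definitions are reasonable interpretations of the slides, not taken verbatim from them.

```python
import cv2
import numpy as np

def edge_density(gray, roi=None):
    """Average gradient magnitude over the face region -- higher values
    indicate stronger edges, i.e. less blur."""
    if roi is not None:
        x, y, w, h = roi
        gray = gray[y:y + h, x:x + w]
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    return float(np.mean(np.sqrt(gx ** 2 + gy ** 2)))

def sharpness_lowpass(gray, ksize=9):
    """Low-pass-filter-based sharpness: compare the image to a heavily blurred
    copy; a sharp image differs strongly from its low-pass version."""
    lowpass = cv2.GaussianBlur(gray, (ksize, ksize), 0)
    return float(np.mean(np.abs(gray.astype(np.float64) - lowpass)))
```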
Illumination Measure
The Weber face serves as a unique representation that effectively captures local salient patterns within the input image. The Weber face is robust to variations in illumination, making it a valuable tool for extracting meaningful features from facial data.
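A sketch of one common Weber-face formulation, assuming OpenCV and NumPy: each pixel accumulates the relative intensity differences to its 8 neighbours (the Weber ratio) and the sum is passed through arctan. The smoothing sigma and the alpha scaling factor are illustrative defaults, not values specified in the slides.

```python
import cv2
import numpy as np

def weber_face(gray, alpha=2.0, sigma=0.75):
    """Weber-face illumination-insensitive representation (sketch)."""
    img = gray.astype(np.float64)
    img = cv2.GaussianBlur(img, (0, 0), sigma)     # mild smoothing first
    img += 1.0                                     # avoid division by zero
    acc = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            acc += (img - shifted) / img           # Weber ratio per neighbour
    return np.arctan(alpha * acc)                  # robust to illumination scale
```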
Future Directions
Multi-modal fusion
Explainable AI and interpretability
Edge computing and privacy-preserving technologies
Human-centric design and inclusive technologies
Deepfake Detection
Deepfakes are manipulated videos or images created using deep learning techniques. They have the potential to spread misinformation, threaten privacy, and manipulate public opinion. GAN models play a major role in generating deepfake images.
Deepfake Challenges
Generalization: the characteristics of generated deepfakes depend on the training dataset, so large datasets are needed to achieve specific characteristics.
Temporal coherence: lack of consistency between frames leads to visible abnormalities such as flickering and jittering in deepfake videos.
Illumination conditions: changes in lighting, especially between indoor and outdoor scenarios, result in color discrepancies and odd abnormalities in deepfake output.
Lack of realism in eyes and lips: difficulty in achieving natural emotions, interruptions, and synchronization of eye and lip movements, affecting the realism of deepfake videos.
Deepfake Challenges
Identity leakage: preserving the target identity is challenging, especially in face reenactment tasks, because of discrepancies between target and driving identities during training.
Multi-tasking: simultaneously performing forgery localization and deepfake detection has been identified as a practical way to enhance accuracy in deepfake detection tasks.
Triplet training: a method aimed at minimizing the distance between samples of the same category while maximizing the distance between features of different categories in the feature space (see the sketch below).
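To make the triplet objective concrete, here is a minimal NumPy sketch of the standard triplet loss; the squared-Euclidean distance and the 0.2 margin are conventional illustrative choices, not values specified in the slides.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Pull the anchor towards a same-class sample (positive) and push it away
    from a different-class sample (negative) by at least `margin`."""
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)   # squared distances
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)
    return float(np.mean(np.maximum(d_pos - d_neg + margin, 0.0)))

# Toy usage with random embeddings standing in for real/fake face features.
rng = np.random.default_rng(1)
a, p, n = (rng.normal(size=(8, 128)) for _ in range(3))
print(triplet_loss(a, p, n))
```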
Mental Health Monitoring
Implementation of facial analysis tools in mental health clinics has led to improved patient outcomes and treatment adherence.
Facial recognition has also been used successfully in cybercrime investigations.
Deepfake Prediction in Elections
By implementing proactive measures, stakeholders can enhance resilience against deepfake manipulation in elections, safeguard democratic processes, and uphold the integrity of electoral systems.