Selective Feature Extraction for Masked Face

yosuaalvin · 15 slides · Sep 30, 2024

About This Presentation

Selective Feature Extraction


Slide Content

Selective Feature Exclusion for Masked Face Recognition in ResNet-34 Deep Learning Architecture

Ir. Yosua Alvin Adi Soetrisno, S.T., M.Eng., IPM.; Ir. Sumardi, S.T., M.T., IPM., ASEAN Eng.; Ir. Denis, S.T., M.Eng., IPM., ASEAN Eng.; Ir. Aghus Sofwan, S.T., M.T., Ph.D., IPU.; Ir. M. Arfan, S.Kom., M.Eng.

BACKGROUND
- Recognizing faces while they are masked is a challenge because only a limited number of facial features remain available for comparison.
- This research proposes enhancing masked-face recognition by retraining the face-landmark model on fewer features and varying the tree depth during training.
- Existing research inadequately explores the correlation between selective features and a proper face-detection method under masked conditions.

ARCHITECTURE
- Three convolutional layers:
  - Layer 1: 16 filters, kernel size 5x5, stride 2x2.
  - Layer 2: 32 filters, kernel size 5x5, stride 1x1.
  - Layer 3: 32 filters, kernel size 5x5, stride 1x1.
- Activation function: each layer is followed by a ReLU activation.
- Affine transformation: an affine transformation is applied after the activation functions.
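As a sanity check on the layer arithmetic, the spatial sizes produced by this stack can be traced with the standard "valid" convolution formula. This is only an illustrative sketch: it assumes no padding and a 150x150 input chip, neither of which is stated on the slide.

```cpp
#include <cassert>

// Output spatial size of an unpadded ("valid") convolution:
// floor((in - kernel) / stride) + 1.
int conv_out(int in, int kernel, int stride) {
    return (in - kernel) / stride + 1;
}

// The three-layer stack from the slide: 5x5 kernels with strides 2, 1, 1.
// (The filter counts 16/32/32 affect channel depth, not spatial size.)
int stack_out(int in) {
    int l1 = conv_out(in, 5, 2);  // Layer 1: 16 filters, stride 2
    int l2 = conv_out(l1, 5, 1);  // Layer 2: 32 filters, stride 1
    int l3 = conv_out(l2, 5, 1);  // Layer 3: 32 filters, stride 1
    return l3;
}
```

With the assumed 150x150 input, the stride-2 first layer roughly halves the spatial extent, and the two stride-1 layers each trim 4 pixels per side-pair.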

ARCHITECTURE
- Three convolutional layers:
  - Layer 1: 45 filters, downsampler, reduces spatial dimensions by a factor of 2.
  - Layer 2: 45 filters, kernel size 5x5, stride 1x1.
  - Layer 3: 45 filters, kernel size 5x5, stride 1x1.
- Activation function: each layer is followed by a ReLU activation.
- Additional layers:
  - Pooling layers for downsampling.
  - Fully connected layers to reduce overfitting.
  - Normalization layers to improve performance.

ARCHITECTURE
- Fully connected layers with 128 hidden units.
- L2 normalization layer applied on top of the convolutional base.
- Embedding layer: produces 128-dimensional feature vectors.
- Support vector machine (SVM): uses the 128-dimensional feature vectors as support vectors, helping to differentiate individuals even when the face is masked.
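The L2 normalization step projects each embedding onto the unit hypersphere before it reaches the SVM, so that comparisons depend on direction rather than magnitude. A minimal stand-alone sketch of that step (illustrative only, not the project's actual code):

```cpp
#include <cmath>
#include <vector>

// L2-normalize an embedding in place: divide every component by the
// vector's Euclidean norm, so the result has unit length. Applied here
// to the 128-dimensional feature vectors the slide describes (shown
// with a short vector for clarity).
void l2_normalize(std::vector<double>& v) {
    double norm = 0.0;
    for (double x : v) norm += x * x;
    norm = std::sqrt(norm);
    if (norm > 0.0) {          // leave the zero vector untouched
        for (double& x : v) x /= norm;
    }
}
```

After this step the dot product of two embeddings equals their cosine similarity, which is what makes a margin-based classifier such as an SVM a natural drop-in replacement for a raw distance threshold.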

OBJECTIVE AND RELATED WORK

| Researcher(s) | Methodology | Dataset Used | Key Contributions |
| --- | --- | --- | --- |
| Yiyang Ma [8] | Deep learning-based approach with data augmentation techniques | RWMFD dataset | Focuses on training models with simulated masks and data augmentation to enhance precision. |
| Saroj Mishra [9] | Real-time Masked Facial Recognition (MFR) system | CASIA dataset and Microsoft celebrity face images | Achieves accurate recognition across diverse demographics. |
| Libo Zhang [10] | MaskDUL: two-stream convolutional network addressing sampling uncertainty and intra-class distribution learning | Not specified | Tackles Masked Face Recognition (MFR) challenges by focusing on intra-class distribution learning. |
| Hong Lin [12] | Combines human posture recognition with a CNN for automatic identification of masked individuals | OpenPose (for human body skeletons) | Integrates posture recognition and OpenPose with supervised learning for effective face-mask detection. |

Methodology


Algorithm

Training

68 vs 31 keypoints

// x-coordinates of the mean face shape used for face-chip alignment
// (as in dlib): the 68-landmark model stores 51 inner-face values,
// since the 17 jaw points (indices 0-16) are not used for alignment.
const double mean_face_shape_x_68[] = {
    0.000213256, 0.0752622, 0.18113, 0.29077, 0.393397, 0.586856,
    0.689483, 0.799124, 0.904991, 0.98004, 0.490127, 0.490127,
    0.490127, 0.490127, 0.36688, 0.426036, 0.490127, 0.554217,
    0.613373, 0.121737, 0.187122, 0.265825, 0.334606, 0.260918,
    0.182743, 0.645647, 0.714428, 0.793132, 0.858516, 0.79751,
    0.719335, 0.254149, 0.340985, 0.428858, 0.490127, 0.551395,
    0.639268, 0.726104, 0.642159, 0.556721, 0.490127, 0.423532,
    0.338094, 0.290379, 0.428096, 0.490127, 0.552157, 0.689874,
    0.553364, 0.490127, 0.42689};

// Reduced 31-keypoint variant: keeps only the eyebrow (10), nose (9),
// and eye (12) coordinates; the mouth points, hidden by a mask, are
// dropped. Given a distinct name to avoid redefining the array above.
const double mean_face_shape_x_31[] = {
    0.000213256, 0.0752622, 0.18113, 0.29077, 0.393397, 0.586856,
    0.689483, 0.799124, 0.904991, 0.98004, 0.490127, 0.490127,
    0.490127, 0.490127, 0.36688, 0.426036, 0.490127, 0.554217,
    0.613373, 0.121737, 0.187122, 0.265825, 0.334606, 0.260918,
    0.182743, 0.645647, 0.714428, 0.793132, 0.858516, 0.79751,
    0.719335};
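The selection itself can be sketched as an index filter. This is a hedged illustration assuming the standard 68-point landmark indexing (jaw 0-16, eyebrows 17-26, nose 27-35, eyes 36-47, mouth 48-67); under that convention the 31 retained keypoints are exactly indices 17-47.

```cpp
// Illustrative filter: in the 68-point landmark model, indices 17-47
// cover the eyebrows (17-26), nose (27-35), and eyes (36-47) -- the
// regions a mask leaves visible. The jaw (0-16) and mouth (48-67)
// landmarks are excluded.
bool keep_when_masked(int landmark_index) {
    return landmark_index >= 17 && landmark_index <= 47;
}

// Number of keypoints retained out of the full 68.
int kept_count() {
    int n = 0;
    for (int i = 0; i < 68; ++i) {
        if (keep_when_masked(i)) ++n;
    }
    return n;
}
```

Counting the retained indices recovers the 31 keypoints of the reduced model, matching the length of the shorter coordinate table.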

Result

Result

Conclusion
- Improvement in accuracy: implementing SVM and selective features with a proper tree depth resulted in an accuracy improvement of 83.28%.
- Class separation with SVM: replacing the Euclidean distance or cosine similarity measurement with an SVM effectively separated each class representing a person.
- Tree depth and accuracy: adjusting the tree depth (deeper or shallower) did not increase accuracy; the default setting of the ensemble learning algorithm proved to be the optimal configuration.
- Impact of keypoints: altering the keypoints significantly impacted accuracy, suggesting that the choice of keypoints is crucial.
- Library modification: the main library was modified to accept variations of keypoints, enabling different keypoint configurations to be used in the model.

Welcome to Semarang, Indonesia We are very happy and excited to see you in ICITACEE 2024! Take care and stay safe!