IEEE Research Paper: Multimodal Biometric Authentication
Slide Content
Multimodal Behavioral Biometric Authentication Using Typing Rhythm and Live Voice with AI-Generated Random Prompts
Authors: Abhigna M N & Hanumantha S
Affiliation: Global Academy of Technology, Bengaluru, India
Emails: [email protected], [email protected]
Conference/Journal, Date
Abstract
A novel multimodal behavioral biometric authentication (MBBA) system fusing keystroke dynamics and live voice biometrics. AI-generated challenge prompts provide liveness, and fusion drives a drastic reduction in equal error rate (EER < 1%). Secures cyber-physical power system (CPPS) and electric vehicle charging station (EVCS) environments.
Motivation / Problem Statement
• Static passwords/tokens are easily breached.
• Unimodal biometrics lack liveness; replay/deepfake threats.
• High-stakes environments (power systems, EV charging) need dynamic, unforgeable authentication.
Related Work / Unimodal Limitations
• Keystroke-only: easy to mimic.
• Voice-only: vulnerable to replay/deepfake.
• Multi-layered security lacking in current approaches.
Proposed MBBA Solution
• Fuses involuntary typing rhythm with a random-phrase voice challenge.
• Score-level fusion: an attacker must bypass both modalities simultaneously (a fusion sketch follows this list).
• Liveness ensured through the AI-generated random prompt.
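A minimal sketch of the score-level fusion idea, assuming each modality matcher outputs a similarity score; the min-max bounds, equal weights, and 0.7 acceptance threshold are illustrative placeholders, not values from the paper.

```python
# Illustrative score-level fusion of the two modality scores.
# Bounds, weights, and the acceptance threshold are assumptions.

def normalize(score: float, lo: float, hi: float) -> float:
    """Min-max normalize a raw matcher score into [0, 1]."""
    return max(0.0, min(1.0, (score - lo) / (hi - lo)))

def fuse_scores(typing_score: float, voice_score: float,
                w_typing: float = 0.5, w_voice: float = 0.5) -> float:
    """Weighted sum of normalized keystroke and voice scores."""
    t = normalize(typing_score, lo=0.0, hi=1.0)   # assumed SNN similarity range
    v = normalize(voice_score, lo=-1.0, hi=1.0)   # assumed x-vector cosine range
    return w_typing * t + w_voice * v

fused = fuse_scores(typing_score=0.82, voice_score=0.64)
accepted = fused >= 0.7   # placeholder; the paper describes an adaptive threshold
print(fused, accepted)
```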
System Architecture
• Parallel processing: typing stream and voice stream (a concurrency sketch follows this list).
• Feature extraction; SNN for typing, x-vector for voice.
• Fusion engine computes the final score.
(Include block diagram if allowed.)
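A hedged sketch of the two parallel streams feeding the fusion engine, with hypothetical `score_typing` and `score_voice` matchers standing in for the SNN and x-vector components; Python's `concurrent.futures` is only a stand-in for whatever concurrency mechanism the actual system uses.

```python
from concurrent.futures import ThreadPoolExecutor

# Placeholder matchers standing in for the paper's SNN and x-vector components.
def score_typing(key_events):
    """Stand-in: dwell/flight features -> SNN similarity score."""
    return 0.82

def score_voice(audio, expected_phrase):
    """Stand-in: MFCC -> x-vector similarity, plus STT check against the prompt."""
    return 0.64

def authenticate(key_events, audio, expected_phrase, threshold=0.7):
    """Score both streams in parallel, then fuse with equal weights (assumed)."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        typing = pool.submit(score_typing, key_events)
        voice = pool.submit(score_voice, audio, expected_phrase)
        fused = 0.5 * typing.result() + 0.5 * voice.result()
    return fused >= threshold

print(authenticate(key_events=[], audio=b"", expected_phrase="blue falcon sunrise"))
```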
Methodology
• Typing: dwell and flight time features → SNN classifier (feature extraction sketched after this list).
• Voice: TTS random prompt → MFCC extraction → x-vector model + STT text match against the prompt.
• Score fusion with an adaptive threshold tied to risk level.
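A minimal sketch of dwell and flight time extraction; the (key, press_ms, release_ms) event format is an assumption about how the capture layer reports key events, and the timestamps are invented for illustration.

```python
def keystroke_features(events):
    """Compute dwell and flight times from (key, press_ms, release_ms) tuples.

    Dwell  = how long a key is held (release - press).
    Flight = gap between releasing one key and pressing the next.
    Events are ordered by press time; the tuple format is an assumption.
    """
    dwell = [release - press for _, press, release in events]
    flight = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return dwell, flight

# Example: typing "cat" with illustrative millisecond timestamps.
events = [("c", 0, 95), ("a", 150, 240), ("t", 310, 400)]
dwell, flight = keystroke_features(events)
# dwell -> [95, 90, 90]; flight -> [55, 70]
```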
Results & Evaluation

Method           EER (%)   Resilience
Keystroke Only   ~5-6      Low
Voice Only       ~3-4      Low
MBBA Hybrid      <1        High

Reliability: 95%+ across environments. Processing time: end-to-end <2 s.
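For context on the EER column above, a brief sketch of how an equal error rate can be estimated by sweeping a threshold over genuine and impostor score distributions; the Gaussian scores below are synthetic placeholders, not the paper's data.

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Estimate EER: the operating point where FAR and FRR are closest."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    best_gap, eer = np.inf, 1.0
    for thr in thresholds:
        far = np.mean(impostor >= thr)   # false accept rate
        frr = np.mean(genuine < thr)     # false reject rate
        gap = abs(far - frr)
        if gap < best_gap:
            best_gap, eer = gap, (far + frr) / 2
    return eer

# Synthetic score distributions for illustration only.
rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.1, 1000)
impostor = rng.normal(0.4, 0.1, 1000)
print(f"EER ~ {equal_error_rate(genuine, impostor):.3%}")
```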
Security Analysis
• Prevents replay, deepfake, and MITM attacks.
• Encrypted templates; anonymized mathematical representations (a storage sketch follows this list).
• High resilience against spoofing and impersonation.
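A hedged sketch of the encrypted-template idea, using the `cryptography` package's Fernet recipe as a stand-in for whatever protection scheme the paper employs; the key handling and template fields shown are assumptions.

```python
import json
from cryptography.fernet import Fernet

# Stand-in for the paper's scheme: templates are serialized and encrypted at
# rest so raw behavioral features are never stored in the clear.
key = Fernet.generate_key()        # in practice, held in a key-management service
cipher = Fernet(key)

# Hypothetical anonymized template (illustrative fields, not the paper's schema).
template = {"user": "anon-7f3a", "dwell_mean": 92.4, "flight_mean": 63.1}
blob = cipher.encrypt(json.dumps(template).encode())    # store this ciphertext

restored = json.loads(cipher.decrypt(blob).decode())    # decrypt only at match time
print(restored)
```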