Brain-computer technology can be used in many IT sectors.
Slide Content
Classification of Selective Attention to Auditory Stimuli: Toward Vision-Free Brain Computer Interfacing and EEG Based Classification of Imagined Vowel Sounds
Classification of Selective Attention to Auditory Stimuli: Toward Vision-Free Brain Computer Interfacing and EEG Based Classification of Imagined Vowel Sounds. Submitted to Dr. Md. Sujan Ali, Professor. Presented by Smita Moni Roy Suchi, Reg: 7751, Session: MS 2022-2023, Dept. of Computer Science and Engineering, Jatiya Kabi Kazi Nazrul Islam University.
Brain Computer Interface (BCI) Translates brain signals into simple commands that control external devices, or into messages with which one can communicate. Major targets have been disabled individuals who cannot control specific parts of the body due to serious neurological diseases. As such patients do not have cognitive impairment, the brain can be a source for communication. Human brain mapping techniques such as fMRI, near-infrared spectroscopy, and EEG have been widely used. Both papers use an EEG-based BCI system for the selection of an appropriate mental task.
Classification of Selective Attention to Auditory Stimuli: Toward Vision-Free Brain Computer Interfacing. Do-Won Kim, Han-Jeong Hwang, Jeong-Hwan Lim, Yong-Ho Lee, Ki-Young Jung, Chang-Hwan Im.
Abstract Current BCI systems are mostly based on visual stimuli or visual feedback, and may not be applicable to severely locked-in patients who have lost eyesight or control of eye movements. Here, the authors investigated the feasibility of using the auditory steady-state response (ASSR), elicited by selective attention to a specific sound source, as an EEG-based BCI paradigm. They implemented the first online ASSR-based BCI system, demonstrating the possibility of materializing a totally vision-free BCI system.
Background Study Lopez et al. (2009) investigated whether the ASSR is modulated by auditory selective attention (ASA) to a specific sound stream and discussed the possibility of using the ASSR as a new BCI paradigm. They provided eight participants with two amplitude-modulated (AM) sound streams (1 kHz and 2.5 kHz) with different modulation frequencies (38 Hz and 42 Hz) to both ears simultaneously. Participants were then asked either to concentrate on the stimulus from the left ear or to ignore both stimuli, according to instructions on a monitor. Using the SOM method, the attended and ignored conditions separated clearly into two clusters. In six out of eight participants, the spectral density of the alpha rhythm was inversely proportional to the modulation frequency for the left ear.
Present Study Similarly to the previous study, six participants were presented with two pure-tone burst trains with different beat frequencies in both sound fields simultaneously: a 2.5 kHz tone with a 37 Hz beat frequency in the left sound field and a 1 kHz tone with a 43 Hz beat frequency in the right. In the modified paradigm, participants were asked to close their eyes and to concentrate on either auditory stimulus according to instructions delivered through speakers during the inter-stimulus interval (ISI). The proposed paradigm supplied the methods needed to implement an online ASSR-based BCI system.
Methods… Participants Six healthy volunteers (one female and five male). Auditory Stimuli Two pure-tone burst trains were used; each was generated using MATLAB at a sampling rate of 44,100 Hz. Experimental Protocols Fig. 1: Overall experimental environment. Fig. 2: Experimental paradigm used in the present study.
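The stimulus generation described above (pure-tone burst trains with beat frequencies, sampled at 44,100 Hz) can be sketched in Python rather than MATLAB. The sinusoidal AM envelope below is an illustrative assumption, since the slide does not specify the exact burst shape:

```python
import numpy as np

FS = 44_100  # sampling rate reported on the slide (Hz)

def tone_burst_train(carrier_hz, beat_hz, duration_s, fs=FS):
    """Pure tone amplitude-modulated at the given beat frequency.

    A sinusoidal AM envelope is one simple way to impose the beat;
    the exact burst shape used in the paper is not given on the slide.
    """
    t = np.arange(int(duration_s * fs)) / fs
    carrier = np.sin(2 * np.pi * carrier_hz * t)
    envelope = 0.5 * (1.0 + np.sin(2 * np.pi * beat_hz * t))  # 0..1 envelope
    return envelope * carrier

# Left sound field: 2.5 kHz tone with a 37 Hz beat;
# right sound field: 1 kHz tone with a 43 Hz beat.
left = tone_burst_train(2500, 37, 1.0)
right = tone_burst_train(1000, 43, 1.0)
```

The two signals would then be played through the left and right speakers simultaneously, as in the experimental environment of Fig. 1.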
Methods Feature Selection EEG spectral densities of each electrode averaged over 37±1 Hz (Cz37, Oz37, T737, T837) and 43±1 Hz (Cz43, Oz43, T743, T843). Ratios between all possible pairs of spectral densities evaluated at the same modulation frequency (Cz37/T737, Cz37/T837, Cz37/Oz37, T737/T837, T837/Oz37, Cz43/T743, Cz43/T843, Cz43/Oz43, T743/T843, T743/Oz43, T843/Oz43). Ratios between the spectral powers of each electrode evaluated at the two different modulation frequencies (Cz37/Cz43, T737/T743, T837/T843, Oz37/Oz43). Classification For classification, the 10-fold cross-validation method was used.
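The feature-selection and classification steps above can be sketched as follows. The 512 Hz sampling rate, the epoch length, and the nearest-centroid classifier are assumptions for illustration only; the slide names just the electrodes, the frequency bands, and 10-fold cross-validation:

```python
import numpy as np

FS = 512  # assumed EEG sampling rate; not stated on the slide

def band_power(x, centre, fs=FS):
    """Mean spectral density of x within centre ± 1 Hz."""
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    band = (freqs >= centre - 1) & (freqs <= centre + 1)
    return psd[band].mean()

def assr_features(epoch):
    """epoch: electrode name -> 1-D EEG array for Cz, Oz, T7, T8.
    Builds the 15 power-ratio features listed on the slide."""
    p = {f"{ch}{f}": band_power(sig, f)
         for ch, sig in epoch.items() for f in (37, 43)}
    return np.array([
        p["Cz37"] / p["T737"], p["Cz37"] / p["T837"], p["Cz37"] / p["Oz37"],
        p["T737"] / p["T837"], p["T837"] / p["Oz37"],
        p["Cz43"] / p["T743"], p["Cz43"] / p["T843"], p["Cz43"] / p["Oz43"],
        p["T743"] / p["T843"], p["T743"] / p["Oz43"], p["T843"] / p["Oz43"],
        p["Cz37"] / p["Cz43"], p["T737"] / p["T743"],
        p["T837"] / p["T843"], p["Oz37"] / p["Oz43"],
    ])

def cross_validate(X, y, k=10, seed=0):
    """k-fold cross-validation with a nearest-centroid classifier
    (a simple stand-in; the slide does not name the classifier)."""
    idx = np.random.default_rng(seed).permutation(len(X))
    folds = np.array_split(idx, k)
    accs = []
    for i, test in enumerate(folds):
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        cents = {c: X[train][y[train] == c].mean(axis=0)
                 for c in np.unique(y[train])}
        preds = np.array([min(cents, key=lambda c: np.linalg.norm(v - cents[c]))
                          for v in X[test]])
        accs.append(np.mean(preds == y[test]))
    return float(np.mean(accs))

# Demo on synthetic data (random noise epoch; well-separated 2-D blobs).
rng = np.random.default_rng(1)
epoch = {ch: rng.standard_normal(2 * FS) for ch in ("Cz", "Oz", "T7", "T8")}
feats = assr_features(epoch)
acc = cross_validate(np.vstack([rng.normal(0, 1, (30, 2)),
                                rng.normal(6, 1, (30, 2))]),
                     np.repeat([0, 1], 30))
```

In the attended condition the ASSR power at the attended beat frequency rises relative to the ignored one, which is what these ratio features are designed to expose.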
Results Fig. 3: Classification accuracy averaged over six participants with respect to different analysis window sizes and different numbers of feature vectors (1, 2, and 3). Fig. 4: Classification accuracy for each participant with respect to analysis window size when three features were selected. Fig. 5: EEG spectral density at the Cz electrode averaged across participants for the two conditions.
Advantages Did not use any visual information, considering the main targets of an auditory BCI system. Investigated whether the ASSR is modulated by selective attention to a specific sound stream. Did not use any complex preprocessing procedure. Can classify the intentions of individuals who have difficulty controlling their vision. Overcomes drawbacks of mental-task-based BCI paradigms.
Drawbacks Relatively low information transfer rate. Multi-class classification was not possible. Used a synchronous BCI system, restricted to a predefined time frame. Forced participants to perform the task only when the execution cue was given.
EEG Based Classification of Imagined Vowel Sounds. Sadaf Iqbal, Yusuf Uzzaman Khan, and Omar Farooq, Aligarh Muslim University, Aligarh, Uttar Pradesh, India.
Abstract Speech is an important way to convey ideas. But in severe communication impairments, where individuals are unable to talk and normal speech is not possible, BCI interfaces may be used. Research indicates that EEG can be used to classify data of imagined speech, which can be further utilized to develop speech prostheses and synthetic telepathy systems. The objective is to improve classification performance in imagined speech by selecting features that extract maximum discriminatory information from the data.
Background Study In "Single-trial classification of vowel speech imagery using common spatial patterns", DaSalla proposed a control scheme for brain-computer interfaces using vowel speech imagery. EEG data were recorded while subjects imagined mouthing and speaking the vowels /a/ and /u/. Features were extracted by the Common Spatial Patterns (CSP) method. Classification was done using a nonlinear SVM, between /a/ and a control state, /u/ and the control state, and /a/ and /u/, with accuracies of 56-82%.
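The CSP step named above can be sketched with the standard whitening construction. This is a generic CSP implementation on synthetic data, not the authors' code:

```python
import numpy as np

def csp_filters(trials_a, trials_b, n_pairs=1):
    """Common Spatial Patterns via the whitening construction.

    trials_a, trials_b: arrays of shape (n_trials, n_channels, n_samples).
    Returns (2 * n_pairs, n_channels) spatial filters: the first rows
    maximize class-a variance, the last rows maximize class-b variance.
    """
    def mean_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)

    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)

    d, U = np.linalg.eigh(Ca + Cb)          # whiten the composite covariance
    W = U @ np.diag(d ** -0.5) @ U.T
    _, V = np.linalg.eigh(W @ Ca @ W.T)     # eigenvalues in ascending order
    filters = V.T @ W
    return np.vstack([filters[-n_pairs:], filters[:n_pairs]])

def log_var_features(trial, filters):
    """Log-variance of each spatially filtered signal (the usual CSP feature)."""
    return np.log((filters @ trial).var(axis=1))

# Synthetic 4-channel demo: class a is strong on channel 0, class b on channel 3.
rng = np.random.default_rng(0)
a = rng.standard_normal((20, 4, 256)); a[:, 0] *= 5
b = rng.standard_normal((20, 4, 256)); b[:, 3] *= 5
F = csp_filters(a, b)
fa = log_var_features(a[0], F)
fb = log_var_features(b[0], F)
```

The resulting log-variance features feed directly into a classifier such as the nonlinear SVM used by DaSalla.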
Present Study Used the data provided by DaSalla as open source. Increased the classification accuracy to the range 77.5-100%. Results indicate significant potential for the use of vowel speech imagery as a speech prosthesis controller. Signal variance, entropy, and signal energy in the normalized frequency range 0.5-0.9 were used as features.
Methods… Participants Three healthy subjects S1, S2, and S3 (two male and one female). Experimental Protocols Fig. 6: Experimental setup used by DaSalla. Data Processing Data was processed using MATLAB.
Methods Feature Selection Fig. 7: Variance of channel 1 in S1. Fig. 8: Entropy of channel 1 in S1. Fig. 9: Energy of channel 1 in S1. Note: /a/ is shown by a thin line and /u/ by a thick line. Classification Three types of classifiers, namely linear, quadratic, and SVM, were trained.
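The three per-channel features named on these slides (signal variance, entropy, and signal energy in the normalized frequency range 0.5-0.9) can be sketched as follows. The amplitude-histogram entropy is one common choice and an assumption here, as the slide does not specify which entropy definition the authors used:

```python
import numpy as np

def shannon_entropy(x, bins=32):
    """Shannon entropy (bits) of the amplitude histogram; one common
    definition, assumed here since the slide does not specify one."""
    hist, _ = np.histogram(x, bins=bins)
    p = hist[hist > 0] / len(x)
    return float(-(p * np.log2(p)).sum())

def band_energy(x, lo=0.5, hi=0.9):
    """Spectral energy in the normalized frequency band [lo, hi],
    where 1.0 corresponds to the Nyquist frequency."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    nyq = len(spec) - 1                     # index of the Nyquist bin
    return float(spec[int(lo * nyq): int(hi * nyq) + 1].sum())

def feature_vector(x):
    """Per-channel features: [variance, entropy, 0.5-0.9 band energy]."""
    return np.array([float(np.var(x)), shannon_entropy(x), band_energy(x)])

# Demo on a synthetic single-channel epoch.
x = np.random.default_rng(0).standard_normal(512)
fv = feature_vector(x)
```

One such three-element vector per channel would then be concatenated and passed to the linear, quadratic, or SVM classifier.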
Results Fig. 10: Table for correct rate. Fig. 11: Table for sensitivity. Fig. 12: Table for specificity. Tables for positive predictive value and negative predictive value.
Advantages The higher accuracy rate is quite encouraging. Potential to be used in the development of accurate speech prostheses for patients. Can be used in telepathy systems, where most of the information from imagined speech can be extracted. Capturing and deciphering this internal thought becomes possible.
Drawbacks Only two vowels were classified. Data preprocessing was required. Time-consuming and complex.