Decoding Brain oscillations during naturalistic scenarios

KrishnaPrasad194459, 72 slides, Jun 09, 2024

About This Presentation

Machine learning approaches for EEG signal decoding


Slide Content

Decoding Brain Oscillations while Listening to Songs, Watching Movies, and Meditating. Krishna Prasad Miyapuram, Associate Professor, Centre for Cognitive and Brain Sciences, Indian Institute of Technology Gandhinagar. [email protected]

Naturalistic Scenarios Brain Rhythms

Attributes of Naturalistic Music. Repetitive musical patterns (beat, timbre); patterns enable effortless song recognition. Subjectivity of musical listening: training, culture, familiarity, attention, enjoyment. Complexity of music: two brains listen to one song differently! https://www.inc.com/andrew-griffiths/do-you-want-to-capture-every-audience-you-stand-in-front-of.html https://www.ncpamumbai.com/soi/

Paradigm: Silence (2 min), then Songs 1-12, each followed by Enjoyment (1-5) and Familiarity (1-5) ratings. 128 channels; 20 participants; multiple genres; instrumental except 3 songs. IIT Gandhinagar

Stimuli

EEG

Magnitude spectra of stimulus envelopes at low frequencies. Vertical dashed lines denote the frequencies of the musical beat hierarchy. Three peaks are marked at 1/4x, 1x, and 2x the beat frequency for songs 7, 5, and 11.
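As a rough sketch (not the authors' code), the envelope magnitude spectrum described above can be computed with an FFT. The envelope sampling rate, duration, and beat frequency below are invented for illustration; a real stimulus envelope would replace the toy signal.

```python
import numpy as np

fs = 125.0                      # envelope sampling rate (Hz), assumed
t = np.arange(0, 60, 1 / fs)    # 60 s of envelope
beat_hz = 2.0                   # hypothetical beat frequency
# toy envelope: energy at the beat rate plus a weaker 2x harmonic
env = 1.0 + 0.5 * np.sin(2 * np.pi * beat_hz * t) \
          + 0.2 * np.sin(2 * np.pi * 2 * beat_hz * t)

# magnitude spectrum (DC removed by mean subtraction)
spec = np.abs(np.fft.rfft(env - env.mean()))
freqs = np.fft.rfftfreq(env.size, d=1 / fs)

# restrict to the low-frequency range shown on the slide
low = freqs < 8
peak_hz = freqs[low][np.argmax(spec[low])]
```

With a real envelope, peaks would appear at the beat frequency and its sub/super-harmonics rather than at the synthetic 2 Hz used here.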

MUSIC IDENTIFICATION USING BRAIN RESPONSES TO INITIAL SNIPPETS Is there a significant correlation among a person's neural responses across the duration of a song? Are the neural signatures embedded in the initial segments retained throughout the song? Are neural signatures associated with a song listener specific or independent? Intra-Subject Inter-Subject

EEG Datasets NMED-T 20 Participants (Mean Age 23 Years) 125 Hz 125 Channels 10 Naturalistic Songs Range : 4.5 - 5 Minutes Musin-G 20 Participants (Mean Age 23.5 Years) 250 Hz 128 Channels 12 Naturalistic Songs Range: 1.5 - 2 Minutes

Proposed Approach

Mean Accuracy across Participants for Four Training Windows

Subject-wise Performance on 20s of Training Data.

ML-based Intra-Subject Song Prediction

Performance of Frequency Bands

ML-based Intra-subject Song Prediction

Subject-independent Song Identification

Visualizing Individual Differences in EEG responses to music Effective approach to visualize patterns of neural activity Unsupervised learning - No predefined labels while transformation Brain responses from naturalistic music listening generate neural signatures for individual song vs person identification.

Electroencephalography (EEG) Characteristics: Temporal, Spectral, Spatial. Feature Extraction. Visualization. Bartošová, V., Vyšata, O., & Procházka, A. (2007)

Analysis Pipeline : Acquisition - Preprocessing - Window Extraction

Analysis Pipeline : Feature Extraction - Wavelet. Applying the Fourier transform to the EEG signal does not yield the best result because EEG signals exhibit non-stationary time behavior. The wavelet transform is an effective time-frequency analysis tool for analyzing such transient signals. Wavelets shown: (a) Haar, (b) db8, (c) bior2.2, (d) coif5, (e) sym2

Analysis Pipeline : Feature Extraction - Wavelet Decomposition
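A minimal sketch of one level of wavelet decomposition, using the Haar wavelet implemented by hand (the deck's actual pipeline used families such as db8, coif5, and sym2, typically via a wavelet library). The EEG window here is random placeholder data.

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar DWT: approximation (trend) and detail coefficients."""
    x = np.asarray(x, dtype=float)
    even, odd = x[0::2], x[1::2]
    approx = (even + odd) / np.sqrt(2)   # low-pass sub-band
    detail = (even - odd) / np.sqrt(2)   # high-pass sub-band
    return approx, detail

rng = np.random.default_rng(0)
eeg_window = rng.standard_normal(256)    # hypothetical single-channel window
a, d = haar_dwt(eeg_window)

# a common wavelet feature: energy per sub-band
features = [np.sum(a**2), np.sum(d**2)]
```

Because the Haar transform is orthonormal, the two sub-band energies sum to the energy of the original window, which makes energy-based features easy to sanity-check.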

Analysis Pipeline : Visualization High Dimensional Features Limit Visualization in 2-D Space

Analysis Pipeline : Dimension Reduction Technique Linear projection of the data Miss the Non-Linear structure of the data Manifold learning is an approach to non-linear dimensionality reduction. Unsupervised Way - No predefined labels on the data Perplexity : 5, 50, 100, 200, 500 Isomap Locally Linear Embedding (LLE) t-distributed Stochastic Neighbor Embedding (t-SNE) https://scikit-learn.org/stable/auto_examples/manifold/plot_compare_methods.html
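The slide's unsupervised 2-D embeddings can be sketched with scikit-learn as below. The feature matrix is a random stand-in (the real inputs were wavelet features of EEG windows), and the neighbor/perplexity values are illustrative choices from the ranges the slide lists.

```python
import numpy as np
from sklearn.manifold import TSNE, LocallyLinearEmbedding

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 50))   # 100 windows x 50 features (placeholder)

# Locally Linear Embedding: unsupervised non-linear projection to 2-D
lle = LocallyLinearEmbedding(n_components=2, n_neighbors=10)
X_lle = lle.fit_transform(X)

# t-SNE with one of the perplexity values listed on the slide
tsne = TSNE(n_components=2, perplexity=5, random_state=0)
X_tsne = tsne.fit_transform(X)
```

No labels are used at any point; cluster structure in the 2-D plots (per song, or per participant) emerges only from the features themselves.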

Results through Locally Linear Embedding (LLE); axes show the two embedding dimensions (2d-one vs 2d-two).

Results : LLE -d5 -d6

Results : t-SNE

Classification of Enjoyment and Familiarity

Audio Feature Extraction V. Alluri et al. Large-scale brain networks emerge from dynamic processing of musical timbre, key and rhythm,” Neuroimage , 2012

Mapping between Audio Features and Brain Responses. N. Gang et al., "Decoding neurally relevant musical features using canonical correlation analysis," ISMIR, 2017. J. R. Katthi and S. Ganapathy, "Deep Correlation Analysis for Audio-EEG Decoding," IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2021. How far can ML predictive power go beyond chance level (0.5) for binary classification (low/high) of familiarity and enjoyment in different brain areas? Which feature is most prominent for classification? What is the best delay to capture the relationship between stimulus and brain response? Canonical Correlation Analysis (CCA) projects two data sets onto subspaces such that the projections are maximally correlated across time; it determines a set of orthogonal directions on which the two signals are highly correlated.

Predictive Power of Classifier: How much above chance level? Finding: Right frontal and right parietal contributed the most to the accuracy. The maximum prediction above the chance level reached nearly 26% and 23% in familiarity and enjoyment, respectively.

Best Feature for Prediction: Which feature provides the maximum classification? Finding: RMS features were more predictive of familiarity, whereas PC1 features were for enjoyment. ST : Sampling Technique

EEG pattern identification from Convolutional Neural Networks Can we predict song ID given its corresponding EEG response using Deep Learning? Can I apply DL?

Song classification task Can we predict song ID given its corresponding EEG response using Deep Learning? Can I apply DL? We have EEG Time series Data!

Data Augmentation. Original dataset: data size for one participant is 129 x 27500 x 12 (#electrodes x #samples per song x #songs). (Figure axes: electrodes x time.)

Data Augmentation. Song image size: 129 x 250 (#electrodes x #samples per second). New dataset size: 11772 x 129 x 250. (Figure axes: electrodes x time; 250 samples per image.)
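The augmentation above amounts to slicing each continuous song recording into one-second windows. A minimal numpy sketch (placeholder data, one participant, one song; the 11772 total on the slide comes from stacking across songs and participants):

```python
import numpy as np

fs = 250                                   # sampling rate (Hz)
song = np.zeros((129, 27500))              # placeholder: electrodes x samples
n_sec = song.shape[1] // fs                # 110 one-second windows per song

# split each electrode's time series into contiguous 1-s chunks,
# then reorder to (window, electrode, sample)
images = song[:, :n_sec * fs].reshape(129, n_sec, fs).transpose(1, 0, 2)
```

Each 129 x 250 slice becomes one "song image" training example for the CNN, multiplying the number of labeled examples per song by the song duration in seconds.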

Song image. Song images of the 26th second for participant ID 1902 in the time domain: (a) Song ID 6, (b) Song ID 7

CNN Architecture

Result on time domain data

Frequency domain conversion. Song image size is independent of sampling frequency. Using the spectopo function (EEGLAB, MATLAB), we converted the time-series data to the frequency domain, with the maximum frequency component given by the Nyquist criterion.
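An illustrative Python substitute for the EEGLAB spectopo step, using Welch's method from SciPy (a comparable, not identical, spectral estimator). The window is random placeholder data; frequencies run from 0 up to fs/2, the Nyquist limit mentioned on the slide.

```python
import numpy as np
from scipy.signal import welch

fs = 250
rng = np.random.default_rng(0)
window = rng.standard_normal((129, fs))    # one 1-s song image: electrodes x samples

# power spectral density per electrode; one fs-sample segment per window
freqs, psd = welch(window, fs=fs, nperseg=fs, axis=-1)
```

The resulting per-electrode spectrum becomes the frequency-domain "song image", whose width (number of frequency bins) no longer depends on the recording's sampling rate once a common frequency grid is chosen.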

Song image in frequency domain. Song images of the 26th second for participant ID 1902 in the frequency domain: (a) Song ID 6, (b) Song ID 7

Result on frequency domain data

Effect of Train-test split ratio (a) Change in the test accuracy for different train-test split values (b) Training and validation curve
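A minimal sketch of how a train-test split sweep could be set up with scikit-learn, mirroring the ratios examined on the slide. The data, label count, and classifier-free setup are placeholders, not the deck's CNN experiment.

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.standard_normal((600, 20))          # placeholder feature matrix
y = rng.integers(0, 12, size=600)           # 12 song labels

# sweep the fraction of data held out for testing
splits = {}
for test_size in (0.3, 0.5, 0.9):
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=test_size, random_state=0, stratify=y)
    splits[test_size] = (len(X_tr), len(X_te))
```

Plotting test accuracy against these ratios (as in the slide's panel (a)) shows how quickly performance degrades as training data shrinks.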

Extension to Movie Classification Task

The 9 Emotional States in Indian Rasa Theory

Study Design. 20 participants viewed film clips chosen from Bollywood movies to evoke 9 different emotional states. EEG was recorded with a 128-channel high-density Geodesic EEG system. Indian Institute of Technology Gandhinagar

Result for Movie Classification Task

Effect of Train-test split ratio

Confusion Matrix for train-test split 0.3 train-test split 0.5 train-test split 0.9 train-test split

Output of CNN Layer (b) Participant ID : 5, Movie ID - 9 (a) Participant ID : 1, Movie ID - 9

Indistinguishable Pairs

Distinguishable Pairs

Higher number of cross-connections in delta and theta. Stronger connections over the parietal and occipital sites in beta. Intra-hemispheric connectivity in gamma.

Brain connectivity based classification of Meditation expertise Pankaj Pandey 1 , Pragati Gupta 2 and Krishna Prasad Miyapuram 1 1 Indian Institute of Technology Gandhinagar, India 2 National Forensic Sciences University, Gandhinagar, India

Various Types of Meditation Tradition.
Focused Attention: maintaining sustained selective attention on a chosen concept or object, such as breathing, a physical sensation, or a visual image.
Loving-Kindness Meditation: developing love and compassion for oneself and toward all other beings.
Transcendental Meditation: using a sound or mantra to be aware of the present without an object of thought.
Open Monitoring: acceptance of internal and external cues with the goal of non-judgmental awareness.
Traditions include Himalayan Yoga, Mantra, Metta meditation, etc., and Vipassana, Shoonya Yoga, Isha Yoga, Zen, etc. Source: (Lee et al., 2018)

Previous Studies
Meditation Type | Brain Waves Elicited | Methods | Brain Regions | Reference
Himalayan Yoga Tradition | Theta (4-7 Hz) and Alpha (9-11 Hz) | EEG time-frequency analysis | Fronto-midline region (theta) and somatosensory cortex (alpha rhythm) | (Brandmeyer & Delorme, 2016)
Himalayan Yoga, Isha Shoonya, Vipassana | Higher gamma amplitude (60-110 Hz) for all 3 meditator groups; higher 7-11 Hz alpha activity in Vipassana; lower 10-11 Hz alpha in the Himalayan group | Spectral analysis | Parieto-occipital regions (gamma) | (Braboszcz et al., 2017)
Focused Attention, Loving Kindness, Open Monitoring | Distributed delta networks, left-hemispheric theta networks, right-hemisphere alpha networks for all 3 meditation types | Connectivity analysis (imaginary coherence) and integrating connectivity, hemispheric asymmetry | Various intra-/inter-hemispheric regions (posterior/anterior) based on the frequency bands elicited | (Yordanova et al., 2020)

Operational Definition: Himalayan Yoga tradition (HYT), 12 expert HYT practitioners. All practitioners began with an initial body scan as they relaxed into their seated posture and then started to mentally recite their mantra. When deeper levels of meditation or stillness are obtained, mantra repetitions gradually cease. Source: (Brandmeyer & Delorme, 2016)

Experimental Design Timeline. Source: (Brandmeyer & Delorme, 2016). Experience-sampling probes were presented at random intervals ranging from 30 to 90 s throughout the experiment. Q1: "Please rate the depth of your meditation," from 0 (not meditating at all) to 3 (deep meditative state). Q2: "Please rate the depth of your mind wandering" automatically followed, from 0 (not mind-wandering at all) to 1 (immersed in their thoughts). Q3: "Please rate how tired you are," from 0 (not drowsy at all) to 3 (very drowsy). 12 expert meditators: 14.8 mean hours weekly; 12 non-expert meditators: 3.2 mean hours weekly. * Our work used twenty-second epochs (trials) spanning the 20 s immediately prior to the onset of question Q1.
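The pre-probe epoching described above can be sketched in a few lines of numpy. The channel count, session length, and probe onset times below are invented placeholders; only the "take the 20 s before each Q1 onset" logic follows the slide.

```python
import numpy as np

fs = 250
recording = np.zeros((64, 600 * fs))       # hypothetical 10-min continuous EEG
probe_onsets_s = [120.0, 300.5, 455.2]     # invented Q1 onset times (seconds)

epochs = []
for onset in probe_onsets_s:
    stop = int(onset * fs)                 # sample index of the probe onset
    start = stop - 20 * fs                 # 20 s earlier
    if start >= 0:                         # skip probes too early to epoch
        epochs.append(recording[:, start:stop])
epochs = np.stack(epochs)                  # (n_epochs, channels, 20*fs)
```

Each resulting trial can then be labeled with the participant's expertise group (or their Q1 rating) for the classification analyses that follow.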

Pipeline: data acquisition; extracting the optimum number of trials; discriminatory bands; significance of regions; best-performing classifier.

Rationale for using PLV PLV is an effective technique to estimate the instantaneous phase relationship between two neural signals and robust to amplitude variations Illustration of Five primary scalp regions based on the electrode placement in different positions, and the table defines its four combinations.
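A standard way to compute the phase-locking value (PLV) named above is via the Hilbert transform: take each channel's instantaneous phase and measure how consistent the phase difference is over time. The signals below are synthetic (a phase-lagged pair vs. pure noise), chosen only to show PLV's amplitude-insensitive behavior.

```python
import numpy as np
from scipy.signal import hilbert

def plv(x, y):
    """Phase-locking value: consistency of the instantaneous phase difference."""
    phase_x = np.angle(hilbert(x))
    phase_y = np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

fs = 250
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(0)

# two 10 Hz signals with a fixed phase lag -> PLV near 1
locked_a = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
locked_b = np.sin(2 * np.pi * 10 * t + 0.8) + 0.1 * rng.standard_normal(t.size)
unlocked = rng.standard_normal(t.size)     # white noise -> PLV near 0

high = plv(locked_a, locked_b)
low = plv(locked_a, unlocked)
```

In practice the channels are band-pass filtered into a frequency band first, and the channel-pair PLVs are aggregated into the region-pair connectivity features used for classification.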


Fig: [Left] Maximum classification accuracy obtained for each set of epochs, including all bands, regions, correlation thresholds, and classifiers. [Right] Performance of each frequency band during QDA classification, including all regions, two epochs, and correlation threshold of 80

Results and Discussion. Slow deep breathing had lower High Beta and Low Beta spectral power than rapid deep breathing; beta power was lower in the fronto-parietal and central regions of the cortex. The fronto-central region was a significant discriminator, with accuracy greater than 80% across 3 validation techniques. Region 'ffc_pcp_p' showed accuracy greater than 90% in 10-fold and leave-one-session-out validation. Quadratic Discriminant Analysis and Gaussian Process outperformed the other classifiers, with the highest accuracy in 10-fold and leave-one-session-out validation.

Future Implications. Source: (Brandmeyer & Delorme, 2020). Rationale: digital distraction, information overload, less functional inhibition, increased mind wandering, rumination, decreased attention. Real-time neurofeedback of meditation and mind wandering based on expert meditators' neurophysiological data. Individuals can explore various meditative traditions, with cognitive benefits from each type. Using state-of-the-art deep learning and machine learning models for real-time classification.

Thank you for Listening, Watching, and Meditating. BRain And INformatics Lab. [email protected]