Decoding Brain Oscillations during Naturalistic Scenarios
About This Presentation
Machine learning approaches for EEG signal decoding
Slide Content
Decoding Brain Oscillations while Listening to Songs, Watching Movies, and Meditating. Krishna Prasad Miyapuram, Associate Professor, Centre for Cognitive and Brain Sciences, Indian Institute of Technology Gandhinagar. [email protected]
Naturalistic Scenarios Brain Rhythms
Attributes of Naturalistic Music. Repetitive musical patterns (beat, timbre); patterns enable effortless song recognition. Subjectivity of musical listening: training, culture, familiarity, attention, enjoyment. Complexity of music. Two brains listen to one song differently! https://www.inc.com/andrew-griffiths/do-you-want-to-capture-every-audience-you-stand-in-front-of.html https://www.ncpamumbai.com/soi/
Magnitude spectra of the stimulus envelopes at low frequencies. Vertical dashed lines denote the frequencies of the musical beat hierarchy. Three peaks are marked at the 1/4X, X, and 2X frequencies for songs 7, 5, and 11.
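The beat-hierarchy peaks above can be reproduced in spirit with a short envelope-spectrum computation. This is a minimal sketch, assuming a stand-in amplitude-modulated tone rather than the actual songs; the 1 Hz modulation rate, sampling rate, and variable names are illustrative only.

```python
# Minimal sketch: magnitude spectrum of a stimulus amplitude envelope.
# The 220 Hz carrier with a 1 Hz "beat" is a stand-in, not a study stimulus.
import numpy as np
from scipy.signal import hilbert

fs = 22050                                   # audio sampling rate (assumed)
t = np.arange(0, 10, 1 / fs)                 # 10 s of stand-in audio
audio = np.sin(2 * np.pi * 220 * t) * (1 + 0.5 * np.sin(2 * np.pi * 1.0 * t))

envelope = np.abs(hilbert(audio))            # amplitude envelope
envelope -= envelope.mean()                  # remove DC before the FFT
spectrum = np.abs(np.fft.rfft(envelope)) / envelope.size
freqs = np.fft.rfftfreq(envelope.size, d=1 / fs)

low = freqs <= 8                             # keep only the low-frequency (beat) range
peak = freqs[low][np.argmax(spectrum[low])]
print(f"envelope spectral peak near {peak:.2f} Hz")   # ~1 Hz, the beat rate
```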
MUSIC IDENTIFICATION USING BRAIN RESPONSES TO INITIAL SNIPPETS. Is there a significant correlation among a person's neural responses across the duration of a song? Are the neural signatures embedded in the initial segments retained throughout the song? Are the neural signatures associated with a song listener-specific or listener-independent? Intra-Subject vs Inter-Subject.
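As a rough illustration of the identification idea, the sketch below matches a later EEG segment against per-song templates built from initial snippets. It is not the study's pipeline: the spectral features, template-correlation rule, array shapes, and random stand-in data are all assumptions for illustration.

```python
import numpy as np

def features(eeg, n_fft=256):
    # Split each channel into non-overlapping windows, average the log power
    # spectra over windows and channels -> one feature vector per segment.
    n_win = eeg.shape[-1] // n_fft
    segs = eeg[:, :n_win * n_fft].reshape(eeg.shape[0], n_win, n_fft)
    spec = np.abs(np.fft.rfft(segs, axis=-1)) ** 2
    return np.log(spec + 1e-12).mean(axis=(0, 1))

def identify_song(segment, templates):
    # Pick the song whose snippet-based template correlates best with the segment.
    f = features(segment)
    corrs = {sid: np.corrcoef(f, t)[0, 1] for sid, t in templates.items()}
    return max(corrs, key=corrs.get)

# Random stand-in data: 12 songs, 129 channels, 250 Hz.
rng = np.random.default_rng(0)
train = {sid: rng.standard_normal((129, 250 * 20)) for sid in range(12)}  # initial 20 s
test = {sid: rng.standard_normal((129, 250 * 5)) for sid in range(12)}    # later 5 s
templates = {sid: features(x) for sid, x in train.items()}
acc = np.mean([identify_song(x, templates) == sid for sid, x in test.items()])
print(f"identification accuracy on stand-in data: {acc:.2f}")  # ~chance for random data
```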
Mean Accuracy across Participants for Four Training Windows
Subject-wise Performance on 20s of Training Data.
ML-based Intra-Subject Song Prediction
Performance of Frequency Bands
ML-based Intra-subject Song Prediction
Subject-independent Song Identification
Visualizing Individual Differences in EEG responses to music. An effective approach to visualize patterns of neural activity. Unsupervised learning: no predefined labels are used during the transformation. Brain responses from naturalistic music listening generate neural signatures for both song identification and person identification.
Analysis Pipeline: Feature Extraction - Wavelet. Applying the Fourier transform to the EEG signal does not yield the best results because EEG is non-stationary, with frequency characteristics that change over time. The wavelet transform has been an effective time-frequency analysis tool for analyzing such transient signals. Wavelets compared: (a) Haar, (b) db8, (c) bior2.2, (d) coif5, (e) sym2.
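A minimal sketch of wavelet feature extraction, assuming the PyWavelets package; the wavelet names follow the slide (haar, db8, bior2.2, coif5, sym2), while the decomposition level, summary statistics, and one-second input are illustrative choices, not the study's settings.

```python
import numpy as np
import pywt

def wavelet_features(signal, wavelet="db8", level=3):
    """Discrete wavelet decomposition; simple summary stats per coefficient band."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    feats = []
    for c in coeffs:                                  # approximation + detail bands
        feats += [c.mean(), c.std(), np.sum(c ** 2)]  # mean, spread, energy
    return np.array(feats)

# One channel of one second of EEG at 250 Hz (random stand-in data).
x = np.random.default_rng(0).standard_normal(250)
for w in ["haar", "db8", "bior2.2", "coif5", "sym2"]:
    print(w, wavelet_features(x, wavelet=w).shape)
```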
Analysis Pipeline: Visualization. High-dimensional features limit visualization in 2-D space.
Analysis Pipeline: Dimension Reduction Techniques. Linear projections of the data miss its non-linear structure. Manifold learning is an approach to non-linear dimensionality reduction, applied in an unsupervised way (no predefined labels on the data). Perplexity values: 5, 50, 100, 200, 500. Methods: Isomap, Locally Linear Embedding (LLE), t-distributed Stochastic Neighbor Embedding (t-SNE). https://scikit-learn.org/stable/auto_examples/manifold/plot_compare_methods.html
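A minimal sketch of the three manifold-learning methods listed above, using scikit-learn on stand-in high-dimensional features; the feature matrix, neighbor counts, and the single perplexity value shown are illustrative, not the study's settings.

```python
import numpy as np
from sklearn.manifold import Isomap, LocallyLinearEmbedding, TSNE

rng = np.random.default_rng(0)
X = rng.standard_normal((600, 128))      # stand-in: 600 epochs x 128-dim wavelet features

reducers = {
    "Isomap": Isomap(n_components=2, n_neighbors=10),
    "LLE":    LocallyLinearEmbedding(n_components=2, n_neighbors=10),
    "t-SNE":  TSNE(n_components=2, perplexity=50, init="pca", random_state=0),
}
for name, reducer in reducers.items():
    Y = reducer.fit_transform(X)         # unsupervised: no labels used
    print(name, Y.shape)                 # (600, 2) points, ready to scatter-plot
```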
Results through Locally Linear Embedding (LLE); panels plot the first two embedding dimensions (2d-one vs 2d-two).
Results: LLE (embedding dimensions d5 vs d6)
Results : t-SNE
Classification of Enjoyment and Familiarity
Audio Feature Extraction. V. Alluri et al., "Large-scale brain networks emerge from dynamic processing of musical timbre, key and rhythm," NeuroImage, 2012.
Mapping between Audio Features and Brain Responses. N. Gang et al., "Decoding neurally relevant musical features using canonical correlation analysis," ISMIR, 2017. J. R. Katthi and S. Ganapathy, "Deep Correlation Analysis for Audio-EEG Decoding," IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2021. How far can the ML predictive power go beyond chance level (0.5) for binary (low/high) classification of familiarity and enjoyment in different brain areas? Which feature is most prominent for classification? What is the best delay to capture the relationship between stimulus and brain response? Canonical Correlation Analysis (CCA) projects two data sets onto subspaces such that the projections are maximally correlated across time; it determines a set of orthogonal directions on which the two signals are highly correlated.
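A minimal sketch of CCA between audio features and EEG, assuming time-aligned feature matrices; the variable names, matrix sizes, and the 200 ms candidate delay are illustrative assumptions, not the settings used in the cited papers.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
fs, T = 250, 5000                        # 20 s of stand-in data at 250 Hz
audio = rng.standard_normal((T, 6))      # e.g. RMS and spectral features per frame
eeg = rng.standard_normal((T, 129))      # 129-channel EEG

lag = int(0.2 * fs)                      # candidate stimulus-response delay (200 ms)
X, Y = audio[:-lag], eeg[lag:]           # shift EEG relative to the audio features

cca = CCA(n_components=3)
Xc, Yc = cca.fit_transform(X, Y)         # projections maximally correlated over time
for k in range(3):
    r = np.corrcoef(Xc[:, k], Yc[:, k])[0, 1]
    print(f"canonical correlation {k}: {r:.3f}")
```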
Predictive Power of the Classifier: How much above chance level? Finding: the right frontal and right parietal regions contributed the most to the accuracy. The maximum prediction above chance level reached nearly 26% and 23% for familiarity and enjoyment, respectively.
Best Feature for Prediction: Which feature provides the maximum classification accuracy? Finding: RMS features were more predictive of familiarity, whereas PC1 features were more predictive of enjoyment. ST: Sampling Technique.
EEG pattern identification using Convolutional Neural Networks. Can we predict the song ID from its corresponding EEG response using deep learning? Can I apply DL?
Song classification task. Can we predict the song ID from its corresponding EEG response using deep learning? Can I apply DL? We have EEG time-series data!
Data Augmentation. Original dataset size for one participant: 129 x 27500 x 12 (#electrodes x #samples per song x #songs); each song is an electrodes x time matrix.
Data Augmentation. Each song is sliced into one-second segments of 250 samples. Song image size: 129 x 250 (#electrodes x #samples per second). New dataset size: 11772 x 129 x 250.
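A minimal sketch of this slicing-based augmentation: each song's continuous EEG is cut into one-second 129 x 250 "song images". The array names and random stand-in data are assumptions; the dimensions follow the slides.

```python
import numpy as np

fs = 250                                     # samples per second
rng = np.random.default_rng(0)
eeg = rng.standard_normal((129, 27500, 12))  # channels x samples x songs (one participant)

images, labels = [], []
for song_id in range(eeg.shape[2]):
    song = eeg[:, :, song_id]
    n_seconds = song.shape[1] // fs          # 27500 // 250 = 110 one-second slices
    for s in range(n_seconds):
        images.append(song[:, s * fs:(s + 1) * fs])   # one 129 x 250 song image
        labels.append(song_id)

images = np.stack(images)                    # (1320, 129, 250) for this participant
labels = np.array(labels)
print(images.shape, labels.shape)
```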
Song image. Song images for the 26th second for participant ID 1902 in the time domain: (a) Song ID 6, (b) Song ID 7.
CNN Architecture
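A hypothetical CNN of this kind is sketched below in PyTorch, classifying 129 x 250 song images into 12 songs; the layer sizes and depth are illustrative assumptions, not the architecture reported on the slides.

```python
import torch
import torch.nn as nn

class SongCNN(nn.Module):
    def __init__(self, n_channels=129, n_samples=250, n_songs=12):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),                                    # 129x250 -> 64x125
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                    # 64x125 -> 32x62
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (n_channels // 4) * (n_samples // 4), 128), nn.ReLU(),
            nn.Linear(128, n_songs),
        )

    def forward(self, x):                                       # x: (batch, 1, 129, 250)
        return self.classifier(self.features(x))

# One forward pass on a random batch of four song images.
model = SongCNN()
logits = model(torch.randn(4, 1, 129, 250))
print(logits.shape)                                             # torch.Size([4, 12])
```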
Result on time domain data
Frequency domain conversion. The song image size is independent of the sampling frequency. Using the spectopo function in EEGLAB (MATLAB), we converted the time-series data to the frequency domain; the maximum frequency component follows the Nyquist criterion.
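The conversion itself was done with spectopo in MATLAB; the sketch below is a rough Python equivalent using Welch's method, on stand-in data, showing how a one-second song image maps to per-channel spectra up to the Nyquist frequency.

```python
import numpy as np
from scipy.signal import welch

fs = 250
song_image = np.random.default_rng(0).standard_normal((129, fs))  # stand-in 129 x 250 image

freqs, psd = welch(song_image, fs=fs, nperseg=fs, axis=-1)
print(freqs[-1])                       # 125.0 Hz -> the Nyquist limit for fs = 250 Hz
print(psd.shape)                       # (129, 126): one spectrum per electrode
spectral_image = 10 * np.log10(psd)    # dB scale; size no longer tied to segment length
```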
Song image in the frequency domain. Song images for the 26th second for participant ID 1902 in the frequency domain: (a) Song ID 6, (b) Song ID 7.
Result on frequency domain data
Effect of Train-test split ratio (a) Change in the test accuracy for different train-test split values (b) Training and validation curve
Extension to Movie Classification Task
The 9 Emotional States in Indian Rasa Theory
Study Design. 20 participants viewed selected Bollywood film clips representing 9 different emotional states. EEG was recorded with a 128-channel high-density Geodesic EEG System at the Indian Institute of Technology Gandhinagar.
Output of CNN Layer: (a) Participant ID 1, Movie ID 9; (b) Participant ID 5, Movie ID 9.
Indistinguishable Pairs
Distinguishable Pairs
Higher number of cross connections in delta and theta; stronger connections at the parietal and occipital sites in beta; intra-hemispheric connectivity in gamma.
Brain connectivity based classification of Meditation expertise. Pankaj Pandey¹, Pragati Gupta² and Krishna Prasad Miyapuram¹. ¹Indian Institute of Technology Gandhinagar, India; ²National Forensic Sciences University, Gandhinagar, India.
Various Types of Meditation Traditions. Focused Attention: maintaining sustained selective attention on a chosen concept or object, such as breathing, a physical sensation, or a visual image (includes Himalayan Yoga, Mantra, Metta meditation, etc.). Open Monitoring: acceptance of internal and external cues with the goal of non-judgmental awareness (includes Vipassana, Shoonya Yoga, Isha Yoga, Zen, etc.). Loving-Kindness Meditation: developing love and compassion for oneself and toward all other beings. Transcendental Meditation: using a sound or mantra to be aware of the present without an object of thought. Source: (Lee et al., 2018)
Previous Studies
Meditation Type | Brain Waves Elicited | Methods | Brain Regions | Reference
Himalayan Yoga Tradition | Theta (4-7 Hz) and Alpha (9-11 Hz) | EEG time-frequency analysis | Fronto-midline region (theta) and somatosensory cortex (alpha rhythm) | (Brandmeyer & Delorme, 2016)
Himalayan Yoga, Isha Shoonya, Vipassana | Higher gamma amplitude (60-110 Hz) for all three meditator groups; higher 7-11 Hz alpha activity in Vipassana; lower 10-11 Hz alpha in the Himalayan group | Spectral analysis | Parieto-occipital regions (gamma) | (Braboszcz et al., 2017)
Focused Attention, Loving-Kindness, Open Monitoring | Distributed delta networks, left-hemispheric theta networks, and right-hemispheric alpha networks for all three meditation types | Connectivity analysis (imaginary coherence), integrated connectivity, hemispheric asymmetry | Various intra- and inter-hemispheric regions (posterior/anterior) depending on the frequency band elicited | (Yordanova et al., 2020)
Operational Definition. Himalayan Yoga Tradition (HYT): 12 expert HYT practitioners. All practitioners began with an initial body scan as they relaxed into their seated posture and then started to mentally recite their mantra. When deeper levels of meditation or stillness are attained, mantra repetitions gradually cease. Source: (Brandmeyer & Delorme, 2016)
Experimental Design Timeline. Source: (Brandmeyer & Delorme, 2016). Experience-sampling probes were presented at random intervals ranging from 30 to 90 s throughout the experiment. Q1: "Please rate the depth of your meditation," from 0 (not meditating at all) to 3 (deep meditative state). Q2: "Please rate the depth of your mind wandering" followed automatically, from 0 (not mind-wandering at all) to 1 (immersed in their thoughts). Q3: "Please rate how tired you are," from 0 (not drowsy at all) to 3 (very drowsy). 12 expert meditators: 14.8 mean hours of practice weekly. 12 non-expert meditators: 3.2 mean hours weekly. *Our work used twenty-second epochs (trials) spanning the 20 seconds prior to the onset of question Q1.
Analysis pipeline: data acquisition; extracting the optimum number of trials; discriminatory frequency bands; significance of regions; best-performing classifier.
Rationale for using PLV. PLV is an effective technique to estimate the instantaneous phase relationship between two neural signals and is robust to amplitude variations. Illustration of the five primary scalp regions based on electrode placement, with a table defining four of their combinations.
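A minimal sketch of the PLV computation, assuming band-limited input signals (e.g. already filtered to the alpha band); the synthetic 10 Hz test signals and variable names are illustrative only.

```python
import numpy as np
from scipy.signal import hilbert

def plv(x, y):
    """Phase-locking value between two band-limited signals; 1 = perfectly phase-locked."""
    phase_x = np.angle(hilbert(x))
    phase_y = np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

# Stand-in alpha-band channels: a common 10 Hz rhythm plus independent noise.
fs = 250
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(0)
common = np.sin(2 * np.pi * 10 * t)
ch1 = common + 0.5 * rng.standard_normal(t.size)
ch2 = common + 0.5 * rng.standard_normal(t.size)
print(f"PLV = {plv(ch1, ch2):.2f}")     # close to 1 for strongly phase-coupled channels
```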
Fig: [Left] Maximum classification accuracy obtained for each set of epochs, including all bands, regions, correlation thresholds, and classifiers. [Right] Performance of each frequency band during QDA classification, including all regions, two epochs, and correlation threshold of 80
Results and Discussion. Slow deep breathing had lower high-beta and low-beta spectral power than rapid deep breathing; beta power was lower in the fronto-parietal and central regions of the cortex. The fronto-central region was a significant discriminator, with accuracy greater than 80% across the three validation techniques. The region 'ffc_pcp_p' showed accuracy greater than 90% in the 10-fold and leave-one-out sessions. Quadratic Discriminant Analysis and Gaussian Process classifiers outperformed the other classifiers, with the highest accuracy in the 10-fold and leave-one-out sessions.
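A minimal sketch of such a classifier comparison with scikit-learn, assuming PLV-based connectivity features and binary expert/non-expert labels; the stand-in data, feature count, and 10-fold setting are illustrative, not the study's exact protocol.

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((48, 10))            # trials x connectivity features (stand-in)
y = np.repeat([0, 1], 24)                    # 0 = non-expert, 1 = expert

classifiers = {
    "QDA": QuadraticDiscriminantAnalysis(),
    "Gaussian Process": GaussianProcessClassifier(),
    "SVM": SVC(),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=10)   # 10-fold cross-validation
    print(f"{name}: {scores.mean():.2f} +/- {scores.std():.2f}")
```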
Future Implications. Source: (Brandmeyer & Delorme, 2020). Rationale: digital distraction, information overload, less functional inhibition, increased mind wandering, rumination, decreased attention. Real-time neurofeedback of meditation and mind wandering based on expert meditators' neurophysiological data. Individuals can explore various meditative traditions, with cognitive benefits from each type. Using state-of-the-art deep learning and machine learning models for real-time classification.
Thank you for Listening, Watching, and Meditating. BRain And INformatics Lab. [email protected]