Title: Neuroengineering Specialist & Data Scientist
Summary:
With over 7 years of experience in EEG signal processing, speech analysis, and machine learning, I specialize in developing real-time brain-computer interface (BCI) systems for cognitive health diagnostics. My expertise spans neuromorphic computing, speech signal processing, and advanced EEG feature extraction, enabling real-time attention monitoring and insight into mental health and neurological disorders. I have led cross-disciplinary research in computational psychiatry and space medicine, with applications designed for healthcare and extreme environments.
Neuroengineering Initiatives:
Real-Time EEG-Based Attention Scoring System on Intel Loihi Neuromorphic Chip:
Designed and implemented a comprehensive EEG-based attention scoring pipeline using advanced feature extraction (Power Spectral Density, coherence, entropy).
Developed Spiking Neural Networks (SNNs) on Intel Loihi neuromorphic hardware for real-time inference, incorporating Spike-Timing Dependent Plasticity (STDP) for adaptive learning.
Achieved scalable, energy-efficient operation and integrated the system with a real-time visualization device, making it suitable for low-latency neuroengineering applications.
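As an illustration of the adaptive-learning component, a pair-based STDP weight update can be sketched in plain Python. This is a minimal reference sketch only: the deployed system runs as spiking networks on Loihi, and the constants a_plus, a_minus, tau_plus, and tau_minus below are illustrative defaults, not the deployed values.

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0, w_min=0.0, w_max=1.0):
    """Pair-based STDP rule: potentiate when the presynaptic spike
    precedes the postsynaptic spike, depress otherwise (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:
        # pre fires before post -> long-term potentiation
        dw = a_plus * np.exp(-dt / tau_plus)
    else:
        # post fires before pre -> long-term depression
        dw = -a_minus * np.exp(dt / tau_minus)
    # keep the synaptic weight inside its allowed range
    return float(np.clip(w + dw, w_min, w_max))
```

Causal spike pairs strengthen the synapse and anti-causal pairs weaken it, which is what lets the on-chip network adapt its attention decoding online.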
EEG-Based Real-Time Attention Score Model:
Developed an EEG-based attention scoring model using band-power features (Theta, Alpha, Beta ratios) for real-time BCI applications.
Created a Unity-based attention UI for displaying predictive scores, optimized for wearable and embedded systems.
Focused on early detection of cognitive decline, mental health conditions, and neurological disorders through advanced signal processing techniques.
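The band-ratio scoring idea can be sketched as follows, using the classic Beta / (Theta + Alpha) engagement index as one illustrative choice of ratio. Function names, band edges, and the sampling rate here are assumptions for the sketch, not the deployed model.

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, lo, hi):
    """Approximate signal power in [lo, hi) Hz from a Welch PSD."""
    f, pxx = welch(x, fs=fs, nperseg=min(len(x), 2 * fs))
    mask = (f >= lo) & (f < hi)
    return pxx[mask].sum() * (f[1] - f[0])

def attention_score(x, fs=256):
    """Engagement index Beta / (Theta + Alpha); higher = more attentive."""
    theta = band_power(x, fs, 4, 8)
    alpha = band_power(x, fs, 8, 13)
    beta = band_power(x, fs, 13, 30)
    return beta / (theta + alpha)
```

A score like this can be streamed per epoch to a UI (e.g. the Unity front end above) with minimal compute, which is what makes it practical for wearable and embedded targets.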
Speech Signal Processing for Mental Health and Neurological Disorders:
Developed speech signal processing pipelines for extracting prosodic, spectral, and entropy-based features to assess cognitive, emotional, and neurological states.
Applied machine learning models for classifying speech patterns associated with mental health disorders (e.g., depression, anxiety) and neurological conditions (e.g., Parkinson's, Alzheimer's).
Integrated EEG and speech signals to improve multimodal diagnostics for both mental health and neurological disorders.
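One prosodic feature from such a pipeline, fundamental frequency (F0), can be estimated for a voiced frame with a simple autocorrelation method. This is a minimal sketch under assumed defaults (75-400 Hz search range), not the production extractor, which would also handle voicing decisions and octave errors.

```python
import numpy as np

def estimate_f0(frame, fs, fmin=75.0, fmax=400.0):
    """Crude autocorrelation pitch estimate for a voiced speech frame."""
    frame = np.asarray(frame, dtype=float)
    frame = frame - frame.mean()
    # autocorrelation at non-negative lags
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    # restrict the lag search to the plausible pitch range
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + int(np.argmax(ac[lo:hi + 1]))
    return fs / lag
```

Frame-by-frame F0 tracks like this feed the jitter and prosody statistics that the downstream classifiers consume.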
Enhanced Feature Extraction for EEG Analysis:
Implemented a comprehensive feature extraction process for EEG data, including Power Spectral Density (PSD), coherence, time-domain statistics, non-linear features (e.g., sample entropy, fractal dimensions), and wavelet transforms.
Developed a Python-based pipeline that uses MNE-Python, scipy, and pywt libraries for robust EEG signal processing.
Features included measures of Theta/Alpha ratios, Beta/Alpha ratios, peak detection, entropy, fractal dimension, and wavelet coefficients to capture complex neural dynamics and cognitive states.
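As an example of the non-linear features, sample entropy can be computed in plain NumPy. This is a straightforward reference implementation (the parameters m = 2 and r = 0.2 x std follow common defaults) rather than the optimized pipeline code.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """SampEn: -log of the conditional probability that subsequences
    matching for m points (within tolerance r) also match for m + 1."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()
    n = len(x)

    def count_matches(length):
        # all overlapping templates of the given length
        templates = np.array([x[i:i + length] for i in range(n - length)])
        count = 0
        for i in range(len(templates)):
            # Chebyshev distance to every later template
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(d <= r))
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf
```

Regular signals (e.g. a clean oscillation) score near zero while irregular ones score high, which is why SampEn complements spectral features when characterizing cognitive states.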
Optimizing TRCA + CNN Hybrid Model on FPGA and Edge Devices:
Optimized a hybrid TRCA and CNN model for SSVEP signal decoding on FPGA and edge devices, achieving improvements in speed, accuracy, and energy efficiency for real-time BCI applications.
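The TRCA stage can be sketched as a generalized eigenvalue problem that finds a spatial filter maximizing inter-trial reproducibility of the SSVEP response. This NumPy/SciPy version is illustrative only and omits the fixed-point and pipelining work needed on FPGA and edge targets.

```python
import numpy as np
from scipy.linalg import eigh

def trca_filter(trials):
    """trials: (n_trials, n_channels, n_samples) for one stimulus class.
    Returns the spatial filter (largest generalized eigenvector) that
    maximizes covariance between trials relative to total covariance."""
    n_trials, n_ch, n_samp = trials.shape
    trials = trials - trials.mean(axis=2, keepdims=True)
    # S = sum over trial pairs i != j of X_i X_j^T, via the sum trick
    x_sum = trials.sum(axis=0)
    S = x_sum @ x_sum.T - sum(t @ t.T for t in trials)
    # Q = covariance of all trials concatenated in time
    concat = trials.transpose(1, 0, 2).reshape(n_ch, -1)
    Q = concat @ concat.T
    # generalized eigenproblem S w = lambda Q w (Q positive definite)
    vals, vecs = eigh(S, Q)
    return vecs[:, -1]  # eigenvector with the largest eigenvalue
```

In the hybrid model, features projected through filters like this are handed to the CNN for the final SSVEP class decision.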
Prodromal Risk Modeling Using Speech Features:
Developed a prodromal risk prediction model using speech-based features such as MFCCs, jitter, and temporal features (e.g., confidence scores, trends).
Implemented XGBoost regression to predict the likelihood of prodromal states in patients at risk of mental health or neurological conditions.
Achieved low error rates and high predictive accuracy by integrating speech and temporal features, contributing to early diagnosis systems for cognitive decline.
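The shape of the modeling step can be sketched as below, with two deliberate substitutions: synthetic arrays stand in for the clinical speech features (MFCC means, jitter, confidence trend), and scikit-learn's GradientBoostingRegressor stands in for XGBoost so the sketch has no extra dependency. All data, feature counts, and hyperparameters here are illustrative.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Toy stand-in: 13 MFCC means + jitter + confidence trend per subject.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 15))
# Synthetic risk target driven by two of the features plus noise.
y = 0.6 * X[:, 0] - 0.3 * X[:, 13] + 0.1 * rng.normal(size=400)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(n_estimators=200, max_depth=3,
                                  learning_rate=0.05, random_state=0)
model.fit(X_tr, y_tr)
mae = mean_absolute_error(y_te, model.predict(X_te))
print(f"held-out MAE: {mae:.3f}")
```

Swapping in xgboost.XGBRegressor is a one-line change with the same fit/predict interface; the gradient-boosted-trees structure is what matters for handling heterogeneous speech and temporal features.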