eeg-imagined-speech-recognition

Imagined speech recognition using EEG signals. KaraOne database, FEIS database.

https://github.com/ashrithsagar/eeg-imagined-speech-recognition

Science Score: 44.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (14.2%) to scientific vocabulary

Keywords

covert-speech eeg eeg-classification eeg-signals imagined-speech karaone
Last synced: 6 months ago

Repository

Imagined speech recognition using EEG signals. KaraOne database, FEIS database.

Basic Info
Statistics
  • Stars: 18
  • Watchers: 2
  • Forks: 1
  • Open Issues: 0
  • Releases: 0
Topics
covert-speech eeg eeg-classification eeg-signals imagined-speech karaone
Created about 2 years ago · Last pushed 8 months ago
Metadata Files
Readme License Citation

README.md

EEG Imagined Speech Recognition


Imagined speech recognition through EEG signals

Installation

Clone the repository:

```shell
git clone https://github.com/AshrithSagar/EEG-Imagined-speech-recognition.git
cd EEG-Imagined-speech-recognition
```

Install [`uv`](https://docs.astral.sh/uv/), if not already installed. Check [here](https://docs.astral.sh/uv/getting-started/installation/) for installation instructions. Using `uv` is recommended, as it automatically installs the dependencies in a virtual environment. If you don't want to use `uv`, skip to the next step. TL;DR, just run:

```shell
curl -LsSf https://astral.sh/uv/install.sh | sh
```

The dependencies are listed in the pyproject.toml file.

Install the package in editable mode (recommended):

```shell
# Using uv
uv pip install -e .

# Or with pip
pip install -e .
```

Additional packages

For Ubuntu: `sudo apt-get install graphviz`

For macOS (with Homebrew): `brew install graphviz`

For Windows: download and install Graphviz from the Graphviz website.

Usage

Configuration file config.yaml

The configuration file `config.yaml` contains the paths to the data files and the parameters for the different workflows. Create it and populate it with the appropriate values. Refer to `config-template.yaml`.

```yaml
select:
  classifier: (str) One of { RegularClassifier, ClassifierGridSearch, EvaluateClassifier }
  dataset: (str) One of { KaraOne, FEIS }
classifier:
  features_select_k_best:
    k: (int | list[int])
    score_func: (str) Name of the score function used to rank the features before selection. One of { pearsonr, f_classif }
  model_base_dir: (path) Preferably use files/Models/
  models: (list[str]) List of directory names containing a model.py within them, e.g. [ model-1, model-2, ... ]
  n_splits: (int) Number of splits in cross-validation.
  random_state: (int) Seed value.
  test_size: (float) Size of the test split.
  trial_size: (float | null) For testing purposes. Use null to use the entire dataset; otherwise this is the fraction of the dataset that will be used.
feis:
  epoch_type: (str) One of { thinking, speaking, stimuli }
  features_dir: (path) Preferably use files/Features/FEIS/features-1/
  raw_data_dir: (path) Preferably use files/Data/FEIS/data_eeg/
  subjects: (all | list[int] | list[str]) Specify the subjects to be used. Use 'all' to use all subjects.
  tasks: (list[int]) Available tasks: [0]; refer to utils/feis.py:FEISDataLoader.get_task()
karaone:
  epoch_type: (str) One of { thinking, speaking, stimuli, clearing }
  features_dir: (path) Preferably use files/Features/KaraOne/features-1/
  filtered_data_dir: (path) Preferably use files/Data/KaraOne/EEG_data-1/
  length_factor: (float) Determines the window length.
  overlap: (float) Determines the overlap between consecutive windows.
  raw_data_dir: (path) Preferably use files/Data/KaraOne/EEG_raw/
  subjects: (all | list[int] | list[str]) Specify the subjects to be used. Use 'all' to use all subjects.
  tasks: (list[int]) Available tasks: [0, 1, 2, 3, 4]; refer to utils/karaone.py:KaraOneDataLoader.get_task()
  tfr_dataset_dir: (path) Preferably use files/TFR/KaraOne/tfr_ds-1/
utils:
  path: (path) Absolute path to the utils folder in the project directory
```
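For concreteness, a minimal `config.yaml` for a KaraOne run might look like the following. This is an illustrative sketch: the key names follow the template above (as reconstructed, with underscores), and every path and value here is a placeholder to adapt, not a required setting.

```yaml
select:
  classifier: RegularClassifier
  dataset: KaraOne

classifier:
  features_select_k_best:
    k: 100
    score_func: f_classif
  model_base_dir: files/Models/
  models: [model-1]
  n_splits: 5
  random_state: 42
  test_size: 0.2
  trial_size: null

karaone:
  epoch_type: thinking
  features_dir: files/Features/KaraOne/features-1/
  filtered_data_dir: files/Data/KaraOne/EEG_data-1/
  length_factor: 0.1
  overlap: 0.5
  raw_data_dir: files/Data/KaraOne/EEG_raw/
  subjects: all
  tasks: [0]
  tfr_dataset_dir: files/TFR/KaraOne/tfr_ds-1/

utils:
  path: /absolute/path/to/EEG-Imagined-speech-recognition/utils
```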

Classifier model.py

In `{classifier.model_base_dir}`, create a `model.py` following this template.

```python
def model():
    # Model definition here
    # Takes certain parameters like random_state from config.yaml
    return ...


def param_grid():
    # Optional. Only useful in ClassifierGridSearch, ignored otherwise.
    return ...


def resample():
    # Optional. Remove/comment this entire function to disable the sampler.
    # Takes certain parameters like random_state from config.yaml
    return ...


def cross_validation():
    # Optional. Remove/comment this entire function to use the default CV
    # of 5 splits from StratifiedKFold.
    # Takes certain parameters like random_state, n_splits from config.yaml
    return ...


def pipeline():
    # Optional. Remove/comment this entire function to disable any pipeline
    # functions to be run.
    ...
```
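As an illustration, a hypothetical `model.py` filling in this template with a scikit-learn random forest might look like the following. The estimator, hyperparameter values, and defaults here are example choices, not the project's prescribed ones.

```python
# Example model.py (illustrative): a random-forest classifier with an
# optional grid for ClassifierGridSearch and a custom CV splitter.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold


def model(random_state=42):
    # Classifier instance used by the workflow scripts
    return RandomForestClassifier(n_estimators=100, random_state=random_state)


def param_grid():
    # Searched only by ClassifierGridSearch; ignored otherwise
    return {"n_estimators": [50, 100, 200], "max_depth": [None, 10, 20]}


def cross_validation(random_state=42, n_splits=5):
    # Override the default 5-split StratifiedKFold
    return StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=random_state)
```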

Workflows

Run the different workflows using `python3 workflows/<script>.py` from the project directory.

  1. `download-karaone.py`: Download the dataset into the `{raw_data_dir}` folder.

  2. `features-karaone.py`, `features-feis.py`: Preprocess the EEG data to extract relevant features. Run for different `epoch_type`s: { thinking, acoustic, ... }. Also saves the processed data as a `.fif` file to `{filtered_data_dir}`.

  3. `ifs-classifier.py`: Train a machine learning classifier using the preprocessed EEG data. Uses information set theory to extract effective information from the feature matrix, which is then used as features.

  4. `flatten-classifier.py`: Flattens the feature matrix to a vector, to be used as features. Specify the number of features to select in `features_select_k_best.k`.

  5. `flatten-classifier-KBest.py`: Runs over multiple values of `k` from `features_select_k_best.k`.
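To illustrate the idea behind the flatten-and-select step, here is a self-contained sketch (toy data, not the project's actual code): each epoch's 2-D feature matrix is flattened into a vector, and scikit-learn's `SelectKBest` keeps the `k` highest-scoring features under `f_classif`, mirroring the `features_select_k_best` options above.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

# Toy data: 20 epochs, each with a 4x5 feature matrix, and two classes
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 4, 5))
y = np.array([0, 1] * 10)

# Flatten each epoch's feature matrix into a single feature vector
X_flat = X.reshape(len(X), -1)  # shape (20, 20)

# Keep the k best features, ranked by the ANOVA F-score (f_classif)
selector = SelectKBest(score_func=f_classif, k=8)
X_selected = selector.fit_transform(X_flat, y)
print(X_selected.shape)  # (20, 8)
```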

References

Citation

If you use this project in your research, please cite it using the following BibTeX entry.

```bibtex
@software{Yedlapalli_EEG-Imagined-Speech-recognition,
  author  = {Yedlapalli, Ashrith Sagar},
  license = {MIT},
  title   = {{EEG-Imagined-Speech-recognition}},
  url     = {https://github.com/AshrithSagar/EEG-Imagined-speech-recognition}
}
```

License

This project falls under the MIT License.

Owner

  • Name: Ashrith Sagar
  • Login: AshrithSagar
  • Kind: user


Citation (CITATION.cff)

# This CITATION.cff file was generated with cffinit.
# Visit https://bit.ly/cffinit to generate yours today!

cff-version: 1.2.0
title: EEG-Imagined-Speech-recognition
message: "If you use this software, please cite it as follows"
type: software
authors:
  - given-names: Ashrith Sagar
    family-names: Yedlapalli
    email:
      - ashrith9sagar@gmail.com
      - ashrith.yedlapalli@gmail.com
      - ashrith.yedlapalli@learner.manipal.edu
    affiliation: "Manipal Institute of Technology, Manipal"
identifiers:
  - type: url
    value: >-
      https://github.com/AshrithSagar/EEG-Imagined-speech-recognition
    description: GitHub Repo
repository-code: >-
  https://github.com/AshrithSagar/EEG-Imagined-speech-recognition
url: >-
  https://github.com/AshrithSagar/EEG-Imagined-speech-recognition
abstract: Imagined speech recognition through EEG signals
keywords:
  - Electroencephalography
  - EEG
  - Imagined speech
  - Covert speech
  - Silent speech
  - Classification
  - Machine learning
  - Information set theory
  - Hanman classifier
  - KaraOne
license: MIT

GitHub Events

Total
  • Watch event: 9
  • Push event: 6
Last Year
  • Watch event: 9
  • Push event: 6

Dependencies

requirements.txt pypi
  • mne *
  • numpy *
  • pandas *
  • rich *
  • scikit-learn *
  • scipy *
  • tensorflow *
  • tftb *