Brainda is now a part of the MetaBCI project and this repo is no longer in active development. Please move to the MetaBCI repo for more information.
A Library of Datasets and Algorithms for Brain-Computer Interface
Explore the docs »
·
Report Bug
·
Request Feature
When I was a noob in Brain-Computer Interfaces, there were three things that annoyed me most:
- injecting the conductive jelly
- preprocessing EEG data from different formats
- copying and pasting algorithm code in MATLAB over and over again
For the first problem, I feel hopeless (maybe there is a chance to replace the stupid injection in 10 years?). For the other two, I found answers in the Python community. When I started learning Python and MNE, I began building my own framework to simplify EEG data acquisition and preprocessing. Then I found MOABB, which is obviously much more advanced than my simple framework, so I started using MOABB to get EEG data. I also found that scikit-learn provides an elegant 'fit and transform' abstraction for implementing machine learning algorithms, which lets me reuse existing code instead of copying and pasting.
Brainda combines the advantages of MOABB and other excellent packages. I created this package to collect EEG datasets and implement BCI algorithms for my research.
- Improvements to MOABB APIs
- add hook functions to control the preprocessing flow more easily
- use joblib to accelerate the data loading
- add proxy options for network connection issues
- add more information in the meta of data
- other small changes
- Implemented BCI algorithms in Python
- Decomposition Methods
- SPoC, CSP, MultiCSP and FBCSP
- CCA, itCCA, MsCCA, ExtendCCA, ttCCA, MsetCCA, MsetCCA-R, TRCA, TRCA-R, SSCOR and TDCA
- DSP
- Manifold Learning
- Basic Riemannian Geometry operations
- Alignment methods
- Riemannian Procrustes Analysis
- Deep Learning
- ShallowConvNet
- EEGNet
- ConvCA
- GuneyNet
- Transfer Learning
- MEKT
- LST
- Clone the repo
git clone https://github.com/Mrswolf/brainda.git
- Change to the project directory
cd brainda
- Install all requirements
pip install -r requirements.txt
- Install the brainda package in editable mode
pip install -e .
In the basic case, we can load data with the options recommended by the dataset maker.
from brainda.datasets import AlexMI
from brainda.paradigms import MotorImagery
dataset = AlexMI() # declare the dataset
paradigm = MotorImagery(
channels=None,
events=None,
intervals=None,
srate=None
) # declare the paradigm, use recommended options
print(dataset) # see basic dataset information
# X,y are numpy array and meta is pandas dataFrame
X, y, meta = paradigm.get_data(
dataset,
subjects=dataset.subjects,
return_concat=True,
n_jobs=None,
verbose=False)
print(X.shape)
print(meta)
If you don't have the dataset yet, the program will automatically download a local copy, generally into your ~/mne_data folder. However, you can always download the dataset in advance and store it in a specific folder.
dataset.download_all(
path='/your/datastore/folder', # save folder
force_update=False, # re-download even if the data exist
proxies=None, # add a proxy if you need one, same format as the Requests package
verbose=None
)
# If you encounter network connection issues, try this
# dataset.download_all(
# path='/your/datastore/folder', # save folder
# force_update=False, # re-download even if the data exist
# proxies={
# 'http': 'socks5://user:pass@host:port',
# 'https': 'socks5://user:pass@host:port'
# },
# verbose=None
# )
You can also choose channels, events, intervals, srate, and subjects yourself.
paradigm = MotorImagery(
channels=['C3', 'CZ', 'C4'],
events=['right_hand', 'feet'],
intervals=[(0, 2)], # 2 seconds
srate=128
)
X, y, meta = paradigm.get_data(
dataset,
subjects=[2, 4],
return_concat=True,
n_jobs=None,
verbose=False)
print(X.shape)
print(meta)
Or use different intervals for different events. In this case, X, y, and meta are returned as dicts keyed by event name.
dataset = AlexMI()
paradigm = MotorImagery(
channels=['C3', 'CZ', 'C4'],
events=['right_hand', 'feet'],
intervals=[(0, 2), (0, 1)], # 2s for right_hand, 1s for feet
srate=128
)
X, y, meta = paradigm.get_data(
dataset,
subjects=[2, 4],
return_concat=False,
n_jobs=None,
verbose=False)
print(X['right_hand'].shape, X['feet'].shape)
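Since each event now has its own entry, downstream code can process events separately. Here is a minimal illustration with stand-in arrays (the shapes and the dict layout of y are assumptions for illustration, not taken from brainda's actual output):

```python
import numpy as np

# Stand-in dicts mimicking the return_concat=False layout:
# one array per event; shapes here are made up for illustration.
X = {'right_hand': np.zeros((20, 3, 256)), 'feet': np.zeros((20, 3, 128))}
y = {'right_hand': np.zeros(20, dtype=int), 'feet': np.ones(20, dtype=int)}

# each event can use its own interval length, so shapes may differ
for event in X:
    print(event, X[event].shape, y[event].shape)
```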
Here is the flow of the paradigm.get_data function:
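The original flow diagram is not reproduced here; as a rough sketch of the stages (a hypothetical helper, not brainda's real internals), the data pass through each registered hook in order:

```python
# Conceptual sketch of the paradigm.get_data flow, NOT brainda's actual
# implementation: raw loading -> raw_hook -> epoching -> epochs_hook ->
# array conversion -> data_hook -> return X, y, meta.
def get_data_sketch(raw, raw_hook=None, epochs_hook=None, data_hook=None):
    caches = {}                       # shared dict handed from hook to hook
    if raw_hook is not None:
        raw, caches = raw_hook(raw, caches)
    epochs = [raw]                    # stand-in for MNE epoching
    if epochs_hook is not None:
        epochs, caches = epochs_hook(epochs, caches)
    X, y, meta = epochs, [0], {}      # stand-in for array/meta conversion
    if data_hook is not None:
        X, y, meta, caches = data_hook(X, y, meta, caches)
    return X, y, meta

# Example: a raw hook that records it ran, mirroring the hooks below.
def my_raw_hook(raw, caches):
    caches['raw_stage'] = caches.get('raw_stage', -1) + 1
    return raw, caches

X, y, meta = get_data_sketch("raw-recording", raw_hook=my_raw_hook)
```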
brainda provides 3 hooks that let you control the preprocessing flow in paradigm.get_data. With these hooks, you can operate on the data just like in a typical MNE workflow:
dataset = AlexMI()
paradigm = MotorImagery()
# add 6-30Hz bandpass filter in raw hook
def raw_hook(raw, caches):
# do something with raw object
raw.filter(6, 30,
l_trans_bandwidth=2,
h_trans_bandwidth=5,
phase='zero-double')
caches['raw_stage'] = caches.get('raw_stage', -1) + 1
return raw, caches
def epochs_hook(epochs, caches):
# do something with epochs object
print(epochs.event_id)
caches['epoch_stage'] = caches.get('epoch_stage', -1) + 1
return epochs, caches
def data_hook(X, y, meta, caches):
# retrieve caches from the last stage
print("Raw stage:{},Epochs stage:{}".format(caches['raw_stage'], caches['epoch_stage']))
# do something with X, y, and meta
caches['data_stage'] = caches.get('data_stage', -1) + 1
return X, y, meta, caches
paradigm.register_raw_hook(raw_hook)
paradigm.register_epochs_hook(epochs_hook)
paradigm.register_data_hook(data_hook)
X, y, meta = paradigm.get_data(
dataset,
subjects=[1],
return_concat=True,
n_jobs=None,
verbose=False)
If the dataset maker provides these hooks with the dataset, brainda calls them implicitly. But you can always replace them with your own, as above.
Now it's time to run a real BCI algorithm. Here is a demo of CSP for 2-class MI:
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from brainda.datasets import AlexMI
from brainda.paradigms import MotorImagery
from brainda.algorithms.utils.model_selection import (
set_random_seeds,
generate_kfold_indices, match_kfold_indices)
from brainda.algorithms.decomposition import CSP
dataset = AlexMI()
paradigm = MotorImagery(events=['right_hand', 'feet'])
# add 6-30Hz bandpass filter in raw hook
def raw_hook(raw, caches):
# do something with raw object
raw.filter(6, 30, l_trans_bandwidth=2, h_trans_bandwidth=5, phase='zero-double', verbose=False)
return raw, caches
paradigm.register_raw_hook(raw_hook)
X, y, meta = paradigm.get_data(
dataset,
subjects=[3],
return_concat=True,
n_jobs=None,
verbose=False)
# 5-fold cross validation
set_random_seeds(38)
kfold = 5
indices = generate_kfold_indices(meta, kfold=kfold)
# CSP with SVC classifier
estimator = make_pipeline(*[
CSP(n_components=4),
SVC()
])
accs = []
for k in range(kfold):
train_ind, validate_ind, test_ind = match_kfold_indices(k, meta, indices)
# merge train and validate set
train_ind = np.concatenate((train_ind, validate_ind))
p_labels = estimator.fit(X[train_ind], y[train_ind]).predict(X[test_ind])
accs.append(np.mean(p_labels==y[test_ind]))
print(np.mean(accs))
If everything is OK, you should get an accuracy of about 0.75.
- add demos
- add documents
- more datasets for P300
- more BCI algorithms
See the open issues for a list of proposed features (and known issues).
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated. Submissions of BCI algorithms are especially welcome.
- Fork the Project
- Create your Feature Branch (git checkout -b feature/AmazingFeature)
- Commit your Changes (git commit -m 'Add some AmazingFeature')
- Push to the Branch (git push origin feature/AmazingFeature)
- Open a Pull Request
Distributed under the MIT License. See LICENSE for more information.
My Email: [email protected]
Project Link: https://github.com/Mrswolf/brainda