etiennecmb / visbrain


A multi-purpose GPU-accelerated open-source suite for brain data visualization

Home Page: http://visbrain.org

License: Other

Python 99.89% Makefile 0.11%
vispy gpu neuroscience visualization gui connectivity opengl mni brain sleep

visbrain's Introduction

Visbrain


Visbrain is an open-source Python 3 package dedicated to brain signal visualization. It is built on top of VisPy and PyQt5 and is distributed under the 3-Clause BSD license. Online documentation, examples, and datasets are provided, and the package can also be downloaded from PyPI.

Installation

Dependencies

Visbrain requires:

  • NumPy >= 1.13
  • SciPy
  • VisPy >= 0.5.3
  • Matplotlib >= 1.5.5
  • PyQt5
  • Pillow
  • PyOpenGL

User installation

Install Visbrain with pip:

pip install -U visbrain
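To quickly check that the installation works, a minimal example (assuming a working display and OpenGL driver; this snippet is not part of the official docs quoted here) could be:

# Minimal sanity check: open the Brain GUI with the default template.
from visbrain.gui import Brain

vb = Brain()
vb.show()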

visbrain's People

Contributors

annapasca, christian-oreilly, danieltomasz, dmalt, elijahc, etiennecmb, iraquitan, kdeleeuw11, munsudc, paulbrodersen, pausz, raphaelvallat, skjerns, tombugnon


visbrain's Issues

Topo Error

Hello developers, I am new to Python.

I am trying to run this code:

[screenshot of the code]

but the error that I get is:

[screenshot of the error]

After reading through the comments, I tried to install visbrain in developer mode and to run pip install matplotlib==2.1.0,
but it raises this error:
[screenshot of the error]

I tried to update setuptools using pip install --upgrade setuptools,
but that did not solve my problem.

Is there any suggestion to solve this error?

Thank you.

[Sleep] Channel amplitude max limited

Currently, the maximum channel amplitude is set to the maximum of the signal. However, in some cases (such as large monitors with clipped signals) it can be preferable to set the maximum channel amplitude (i.e. the scaling) to a higher value (that is, decrease the displayed channel amplitudes).

Suggestion: Remove the upper limit of the maximum amplitude scaling.

I tried to find in the source where I can change this (i.e. _PanAllAmpMax); however, I could not find where the maximum value restriction is set (there is no call to _PanAllAmpMax.setMaximum anywhere).
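If the cap turns out to come from the Qt widget itself rather than from visbrain, note that a QDoubleSpinBox defaults to a maximum of 99.99, so a hypothetical fix would be an explicit setMaximum call on _PanAllAmpMax. A standalone illustration of the Qt default (assumption: _PanAllAmpMax is a QDoubleSpinBox):

from PyQt5.QtWidgets import QApplication, QDoubleSpinBox

app = QApplication([])
box = QDoubleSpinBox()
print(box.maximum())   # 99.99 -- the default upper bound of a QDoubleSpinBox
box.setMaximum(1e6)    # hypothetical fix: raise the bound explicitly
print(box.maximum())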

Brain import error

Code:

from visbrain import Brain

Result:

---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
<ipython-input-7-05c534653bd0> in <module>()
----> 1 from visbrain import Brain

~/python_venv/venv-3.4/lib/python3.5/site-packages/visbrain/__init__.py in <module>()
     21 
     22 # Import modules :
---> 23 from .brain import Brain
     24 from .colorbar import Colorbar
     25 from .figure import Figure

~/python_venv/venv-3.4/lib/python3.5/site-packages/visbrain/brain/__init__.py in <module>()
      1 """From the brain file, import the brain module."""
----> 2 from .brain import Brain

~/python_venv/venv-3.4/lib/python3.5/site-packages/visbrain/brain/brain.py in <module>()
      9 import logging
     10 
---> 11 import vispy.scene.cameras as viscam
     12 
     13 from .interface import UiInit, UiElements, BrainShortcuts

~/python_venv/venv-3.4/lib/python3.5/site-packages/vispy/scene/__init__.py in <module>()
     31 """
     32 
---> 33 from .visuals import *  # noqa
     34 from .cameras import *  # noqa
     35 from ..visuals.transforms import *  # noqa

~/python_venv/venv-3.4/lib/python3.5/site-packages/vispy/scene/visuals.py in <module>()
     16 import weakref
     17 
---> 18 from .. import visuals
     19 from .node import Node
     20 from ..visuals.filters import ColorFilter, PickingFilter

~/python_venv/venv-3.4/lib/python3.5/site-packages/vispy/visuals/__init__.py in <module>()
     21 from .histogram import HistogramVisual  # noqa
     22 from .infinite_line import InfiniteLineVisual  # noqa
---> 23 from .isocurve import IsocurveVisual  # noqa
     24 from .isoline import IsolineVisual  # noqa
     25 from .isosurface import IsosurfaceVisual  # noqa

~/python_venv/venv-3.4/lib/python3.5/site-packages/vispy/visuals/isocurve.py in <module>()
     16 _HAS_MPL = has_matplotlib()
     17 if _HAS_MPL:
---> 18     from matplotlib import _cntr as cntr
     19 
     20 

ImportError: cannot import name '_cntr'

Virtual environment:

!pip3 freeze

alabaster==0.7.10
apptools==4.4.0
astroid==1.6.2
Babel==2.5.3
beautifulsoup4==4.6.0
bibtexparser==0.6.2
biopython==1.70
bleach==2.1.3
bluepy==0.11.11
bluepy-configfile==0.1.4
-e git+ssh://[email protected]/project/proj55/thalamus_pipeline@038eae051f9d5c96313f34a3df0df53cec430626#egg=BlueThalamus&subdirectory=BlueThalamus
-e git+ssh://[email protected]/nse/brainbuilder@de6587f4230eed2829dbbc0f90998a2cd8019561#egg=brainbuilder
certifi==2017.11.5
chardet==3.0.4
click==6.7
cloudpickle==0.5.2
colorlover==0.2.1
configobj==5.0.6
cssselect==1.0.3
cycler==0.10.0
Cython==0.27.3
decorator==4.2.1
docutils==0.14
entrypoints==0.2.3
enum34==1.1.6
envisage==4.6.0
eutils==0.3.2
feedparser==5.2.1
future==0.16.0
gitdb2==2.0.3
GitPython==2.1.8
h5py==2.7.1
html5lib==1.0.1
idna==2.6
imagesize==1.0.0
ipykernel==4.8.2
ipython==6.2.1
ipython-genutils==0.2.0
ipywidgets==7.1.2
isort==4.3.4
jedi==0.11.1
Jinja2==2.10
jsonschema==2.6.0
jupyter==1.0.0
jupyter-client==5.2.2
jupyter-console==5.2.0
jupyter-core==4.4.0
kiwisolver==1.0.1
lazy==1.3
lazy-object-proxy==1.3.1
lxml==4.1.1
MarkupSafe==1.0
matplotlib==2.2.0
mayavi==4.5.0
mccabe==0.6.1
metapub==0.4.3.6
mistune==0.8.3
mne==0.16.dev0
-e git+https://github.com/BlueBrain/nat.git@63fa7e2a0828852e0c41423ea0d18c643a712295#egg=nat
nbconvert==5.3.1
nbformat==4.4.0
-e git+https://github.com/BlueBrain/NeuroM.git@10b10751e288328495e14c6185dd08d04224f1d3#egg=neurom
nibabel==2.2.1
nine==1.0.0
notebook==5.4.0
numexpr==2.6.4
numpy==1.14.2
numpy-stl==2.3.2
numpydoc==0.7.0
packaging==17.1
pandas==0.22.0
pandocfilters==1.4.2
parse==1.8.2
parso==0.1.1
pathlib==1.0.1
patsy==0.5.0
pexpect==4.4.0
pickleshare==0.7.4
Pillow==5.0.0
pkg-resources==0.0.0
plotly==2.4.1
pluggy==0.6.0
progressbar2==3.34.3
prompt-toolkit==1.0.15
psutil==5.4.3
ptyprocess==0.5.2
py==1.5.2
pycodestyle==2.3.1
pyface==5.1.0
pyflakes==1.6.0
Pygments==2.2.0
pylint==1.8.3
pylru==1.0.9
pynrrd==0.2.4
PyOpenGL==3.1.0
pyparsing==2.2.0
PyQt5==5.9.2
pysurfer==0.8.0
python-dateutil==2.7.0
python-ternary==1.0.3
python-utils==2.2.0
pytz==2018.3
PyYAML==3.12
pyzmq==17.0.0
Pyzotero==1.3.0
QtAwesome==0.4.4
qtconsole==4.3.1
QtPy==1.4.0
quantities==0.12.1
requests==2.18.4
rope==0.10.7
scikit-learn==0.19.1
scikits.bootstrap==0.3.3
scipy==1.0.0
seaborn==0.8.1
Send2Trash==1.5.0
Shapely==1.3.2
simplegeneric==0.8.1
sip==4.19.8
six==1.11.0
sklearn==0.0
smmap2==2.0.3
snowballstemmer==1.2.1
Sphinx==1.7.1
sphinxcontrib-websupport==1.0.1
spyder==3.2.8
spyndle==0.4.0
SQLAlchemy==1.2.1
statsmodels==0.8.0
tables==3.4.2
tabulate==0.8.2
terminado==0.8.1
testpath==0.3.1
tornado==5.0
tox==2.9.1
tqdm==4.19.6
traitlets==4.3.2
traits==4.6.0
traitsui==5.1.0
Unidecode==1.0.22
urllib3==1.22
virtualenv==15.1.0
visbrain==0.3.9
vispy==0.5.2
voxcell==2.3.4.dev1
voxcellview==2.1.1
vtk==8.1.0
Wand==0.4.4
wcwidth==0.1.7
webencodings==0.5.1
widgetsnbextension==3.1.4
wonambi==3.1
wrapt==1.10.11

Black holes in exported images

I tried visualizing my own ROI obj:

  1. holes are observed after exporting
  2. no holes in scene.preview()

My ROI image is a classification result in MNI space, so a voxel value indicates a class label, i.e. an integer from 1 to N. ROIs are selected via:
roi_custom.select_roi(list(range(1, N)), unique_color=True, smooth=3)

The same problem is observed using the screenshot demo code:
[exported screenshot showing the holes]

Brain GUI screenshot issue

Hi visbrain,

I'm preparing figures for publication by saving screenshots from the Brain GUI (I tried both the GUI and the API). However, I found some shadows on the cortical surfaces in the saved figures. Note that I used a FreeSurfer pial surface generated from an individual's T1 MPRAGE MR scan, and the visbrain version I'm using is 0.4.5.

An example figure is attached below:

[example screenshot showing the shadows on the cortical surface]

Cheers,
Miao

unable to preview the output of code

I am currently running this example code on macOS 10.14.3 with Anaconda and Python 3.6.

The code runs without any errors; however, no output is displayed at the end.
Are there some settings that I am missing?

Code used:
"""
Image, time-frequency map and spectrogram objects

Use and control image, time-frequency maps and spectrogram.
* Display and configure an image (color, interpolation)
* Compute and display time-frequency properties of a signal (spectrogram,
  wavelet based time-frequency maps or multi-taper)
"""
import numpy as np
from visbrain.objects import (ImageObj, TimeFrequencyObj, ColorbarObj,
                              SceneObj)

###############################################################################
# Scene creation
###############################################################################
# First, we define the scene and a few colorbar properties (like font size,
# colorbar width...)

CBAR_STATE = dict(cbtxtsz=12, txtsz=10., width=.1, rect=(-0.2, -2., 1., 4.),
                  cbtxtsh=4.)
sc = SceneObj(size=(1000, 600))

###############################################################################
# Create sample data
###############################################################################
# Then we create some data for 1) images (a basic diagonal image) and 2) a
# sine signal with a main frequency at 25 Hz

# Define a (10, 10) image
n = 10
image = np.r_[np.arange(n - 1), np.arange(n)[::-1]]
image = image.reshape(-1, 1) + image.reshape(1, -1)
image[np.diag_indices_from(image)] = 30.

# Define a 25 Hz sine
n, sf = 512, 256
time = np.arange(n) / sf  # time vector
data = np.sin(2 * np.pi * 25. * time) + np.random.rand(n)

###############################################################################
# Plot an image
###############################################################################
# Most basic plot of the image without further customization

im_basic = ImageObj('ex1', image)
sc.add_to_subplot(im_basic, row=0, col=0, title='Basic image', zoom=.9)

###############################################################################
# Interpolated image
###############################################################################
# The image can also be interpolated. Check out the complete list on the
# VisPy website (vispy.visuals.ImageVisual)

im_interp = ImageObj('ex2', image, interpolation='bicubic')
sc.add_to_subplot(im_interp, row=0, col=1, title='Interpolated image', zoom=.9)

###############################################################################
# Color properties
###############################################################################
# The ImageObj allows several custom color properties (such as color
# thresholding, colormap control...)

# Create the image object
im_color = ImageObj('ex3', image, interpolation='bilinear', cmap='Spectral_r',
                    vmin=5., vmax=20., under='gray', over='darkred')
sc.add_to_subplot(im_color, row=0, col=2, title='Custom colors', zoom=.9)
# Get the colorbar of the image
cb_im_color = ColorbarObj(im_color, cblabel='Image data', **CBAR_STATE)
sc.add_to_subplot(cb_im_color, row=0, col=3, width_max=150, zoom=.9)

###############################################################################
# Spectrogram
###############################################################################
# Extract time-frequency properties using the Fourier transform

spec = TimeFrequencyObj('spec', data, sf, method='fourier', cmap='RdBu_r')
sc.add_to_subplot(spec, row=1, col=0, title='Spectrogram', zoom=.9)

###############################################################################
# Time-frequency map
###############################################################################
# Extract time-frequency properties using the wavelet convolution

tf = TimeFrequencyObj('tf', data, sf, method='wavelet')
sc.add_to_subplot(tf, row=1, col=1, title='Time-frequency map', zoom=.9)

###############################################################################
# Multi-taper
###############################################################################
# Extract time-frequency properties using multi-taper (needs the lspopt
# package to be installed)

tf_mt = TimeFrequencyObj('mt', data, sf, method='multitaper', overlap=.7,
                         interpolation='bicubic', cmap='Spectral_r')
sc.add_to_subplot(tf_mt, row=1, col=2, title='Multi-taper', zoom=.9)
cb_tf_win = ColorbarObj(tf_mt, cblabel='Power', **CBAR_STATE)
sc.add_to_subplot(cb_tf_win, row=1, col=3, width_max=150, zoom=.9)

# Display the scene
sc.preview()

Output of the script:

Creation of a scene
ImageObj(name='ex1') created
ImageObj(name='ex1') added to the scene
ImageObj(name='ex2') created
ImageObj(name='ex2') added to the scene
ImageObj(name='ex3') created
ImageObj(name='ex3') added to the scene
Get colorbar properties from ImageObj(name='ex3') object
ColorbarObj(name='ex3Cbar') created
ColorbarObj(name='ex3Cbar') added to the scene
TimeFrequencyObj(name='spec') created
Compute time-frequency decomposition using the fourier method
TimeFrequencyObj(name='spec') added to the scene
TimeFrequencyObj(name='tf') created
Compute time-frequency decomposition using the wavelet method
Compute the time-frequency map (normalization=None)
TimeFrequencyObj(name='tf') added to the scene
TimeFrequencyObj(name='mt') created
Compute time-frequency decomposition using the multitaper method
TimeFrequencyObj(name='mt') added to the scene
Get colorbar properties from TimeFrequencyObj(name='mt') object
ColorbarObj(name='mtCbar') created
ColorbarObj(name='mtCbar') added to the scene

kcdetect fails with direct use

We are trying to detect K-Complexes using the function kcdetect, without using the GUI.
We encounter errors using raw signals from the SHHS dataset.
The signals are 125 Hz single-channel EEGs with shape (1, 15000) (2-minute epochs).
This is how we are trying to run kcdetect:

kcs = kcdetect(data, sf, proba_thr=0.6, amp_thr=1., hypno=None, nrem_only=False, tmin=80, tmax=240, kc_min_amp=.1, kc_max_amp=.7, fmin=.5, fmax=4., delta_thr=.75, smoothing_s=20, spindles_thresh=2., range_spin_sec=20, min_distance_ms=500.)

We get the following error:
[screenshot of the error message]

Does this code support running the functions directly on raw signals? If not, what are the preprocessing steps required?
If this is not the intended use, please point us to our mistakes.
Your help would be much appreciated.
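For what it's worth, a minimal preprocessing sketch before the call above, with data and sf as in that call, assuming the detector expects a one-dimensional signal in microvolts (an assumption on our side, mirroring the amplitude heuristic in io/read_sleep.py quoted in another issue below):

import numpy as np

sf = 125.                         # SHHS sampling frequency
data = np.asarray(data).ravel()   # (1, 15000) -> (15000,)
# Hypothetical unit check: convert volts to microvolts if amplitudes look tiny.
if np.ptp(data) < 0.1:
    data = data * 1e6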

topo code issue

For the add_topoplot function:

According to the example, is it possible to run this function by just passing every argument positionally, like this?
t.add_topoplot(self, name, data, none, none, none, none, 'inferno', 3, 'cartesian', 'degree', None, 'black', 5, 'black', 2, 2., (0., 0., 0.), 'white', 'disc', 'black', 'white', True, None, 4., None, 'white', 'viridis', None, None, 'gray', None, 'red', 0, 0, 1, 1, .05):
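For comparison, the online example calls the function with only the required arguments plus a few keywords rather than the full positional list; a sketch (parameter names follow the documented example and may differ slightly between versions):

from visbrain import Topo

t = Topo()
channels = ['C3', 'C4', 'Cz', 'Fz', 'Pz']
data = [10., 20., 30., 10., 10.]
# Keyword arguments are easier to read (and to get right) than positional ones.
t.add_topoplot('Topo_1', data, channels=channels, title='Basic topoplot',
               cblabel='Colorbar label', cmap='inferno')
t.show()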

vispy version 0.5?

I've noticed when installing this that you can't automatically get vispy 0.5. You have to install it via git. Also, after doing so, some things don't load (ColorBar from vispy.scene.visuals, for example). Is there a stable version of vispy that I can download to use this?

PSD of scored sleep data in the Sleep module

Hi!
I have been using the Sleep module of visbrain for quite a while and use it to score my PSG data. It's really nice that it can visualize the power spectrum on the topoplot for different frequency bands. However, I would like to know how I can calculate the PSD for each sleep stage across different frequency bands and export the results in .csv format for further statistical analyses.
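A rough sketch of how this could be done outside the GUI, assuming data is an array of shape (n_channels, n_times) in µV, hypno is a per-sample hypnogram (stage codes assumed here to be 0=Wake, 1=N1, 2=N2, 3=N3, 4=REM) and sf is the sampling frequency; the band limits and the simple concatenation of non-contiguous samples are illustrative simplifications:

import numpy as np
import pandas as pd
from scipy.signal import welch

bands = {'delta': (0.5, 4.), 'theta': (4., 8.), 'alpha': (8., 12.), 'beta': (12., 30.)}
stages = {0: 'Wake', 1: 'N1', 2: 'N2', 3: 'N3', 4: 'REM'}

rows = []
for code, stage in stages.items():
    mask = hypno == code                      # samples scored as this stage
    if not mask.any():
        continue
    f, pxx = welch(data[:, mask], fs=sf, nperseg=int(4 * sf))
    for band, (fmin, fmax) in bands.items():
        sel = (f >= fmin) & (f < fmax)
        rows.append({'stage': stage, 'band': band, 'mean_power': pxx[:, sel].mean()})

pd.DataFrame(rows).to_csv('psd_per_stage.csv', index=False)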

Your help is very much appreciated!

Regards,

Mo

[Sleep] Facilitate scoring of long recordings

Hi @raphaelvallat @EtienneCmb ,

When scoring long recordings (24h in this example), the "current time" indicator is extremely thin

[screenshot: the current-time indicator is extremely thin]

and sometimes even disappears
[screenshot: the current-time indicator is no longer visible]

It is now very impractical and error-prone to use Sleep with long recordings, because it's hard to navigate to specific parts of the data or to see which periods have already been scored (all the more since the default hypnogram value for unscored periods is "wake" rather than "artifact" or left unspecified).

In the Sleepscore software, there's a way to display the hypnogram for only, say, 1 h segments of data at a time.

What would you think about updating Sleep with the following:
1- Set a minimum size for the "current time" marker so that it doesn't become too small for long recordings.
2- Add another kind of "Zoom" mode that would let the user display in the hypnogram and spectrogram only the 1 h period surrounding the current time.
3- Format the displayed cursor time (and possibly the Go-To field?) in hh:mm:ss format rather than seconds (a small sketch follows this list).
4- (Possibly in a separate update) Set the default hypnogram value to 'artifact' rather than 'wake', to avoid including unscored periods in further analysis. The user can easily set it back to 'wake' manually in the Scoring panel.
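Regarding point 3, a tiny sketch of the hh:mm:ss formatting (times of 24 h or more would need extra handling, since timedelta renders them with a day prefix):

import datetime

def fmt_hms(seconds):
    """Format a cursor time in seconds as hh:mm:ss."""
    return str(datetime.timedelta(seconds=int(seconds)))

print(fmt_hms(4523))   # '1:15:23'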

Cheers, Tom

installing visbrain

Hi,

I'm trying to install visbrain but I get this error:

Could not find a version that satisfies the requirement pyqt5 (from visbrain) (from versions: )
No matching distribution found for pyqt5 (from visbrain)

I am using Anaconda with Python 2. I couldn't install PyQt5 through pip, but I installed it using conda.

thanks.

Encountering an error when running a demo code (sometimes)

I'm trying to run the demo code for cross-section control: http://visbrain.org/auto_examples/gui_brain/01_cross_sections_and_volume.html. My code works at times and sometimes it produces the following output.

File already dowloaded (/home/deeplearning/visbrain_data/example_data/GG-853-WM-0.7mm.nii.gz).
CrossSecObj(name='/home/deeplearning/visbrain_data/example_data/GG-853-WM-0.7mm.nii.gz') created
    GG-853-WM-0.7mm volume loaded
VolumeObj(name='/home/deeplearning/visbrain_data/example_data/GG-853-WM-0.7mm.nii.gz') created
    GG-853-WM-0.7mm volume loaded
GG-853-WM-0.7mm is now a default ROI object. Use `r_obj = RoiObj('GG-853-WM-0.7mm')` to call it.
RoiObj(name='brodmann') created
    brodmann ROI loaded.
BrainObj(name='B1') created
Traceback (most recent call last):
  File "/home/deeplearning/Documents/LiClipse Workspace/fMRI_iML/src/deepbrain/main.py", line 115, in <module>
    vis.showCrossSections()
  File "/home/deeplearning/Documents/LiClipse Workspace/fMRI_iML/src/utils/Visualization.py", line 346, in showCrossSections
    vb = Brain(cross_sec_obj=cs_obj, vol_obj=v_obj)
  File "/usr/local/lib/python3.5/dist-packages/visbrain/gui/brain/brain.py", line 126, in __init__
    BrainCbar.__init__(self, camera)
  File "/usr/local/lib/python3.5/dist-packages/visbrain/gui/brain/cbar.py", line 65, in __init__
    self.cbqt._fcn_change_object()
  File "/usr/local/lib/python3.5/dist-packages/visbrain/visuals/cbar/CbarQt.py", line 306, in _fcn_change_object
    self._initialize()
  File "/usr/local/lib/python3.5/dist-packages/visbrain/visuals/cbar/CbarQt.py", line 258, in _initialize
    self._gui_to_visual()
  File "/usr/local/lib/python3.5/dist-packages/visbrain/visuals/cbar/CbarQt.py", line 273, in _gui_to_visual
    self._fcn_cmap_changed()
  File "/usr/local/lib/python3.5/dist-packages/visbrain/visuals/cbar/CbarQt.py", line 15, in wrapper
    fn(self)
  File "/usr/local/lib/python3.5/dist-packages/visbrain/visuals/cbar/CbarQt.py", line 360, in _fcn_cmap_changed
    self.cbobjs.update()
  File "/usr/local/lib/python3.5/dist-packages/visbrain/visuals/cbar/CbarObjects.py", line 135, in update
    self._objs[self._selected]._fcn()
  File "/usr/local/lib/python3.5/dist-packages/visbrain/gui/brain/cbar.py", line 103, in _fcn_link_roi
    self.roi._update_cbar()
  File "/usr/local/lib/python3.5/dist-packages/visbrain/objects/roi_obj.py", line 660, in _update_cbar
    self.mesh.update_colormap(**self.to_kwargs())
AttributeError: 'RoiObj' object has no attribute 'mesh'

Please help me.

using visbrain with x11 forwarding and xming

I get the following error when trying to import the visbrain modules. This is a RHEL7 workstation which I ssh into using PuTTY, forwarding X11 to Xming. Is visbrain not compatible with Xming?

#-> python3
Python 3.6.8 (default, Apr 25 2019, 21:02:35)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-36)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from visbrain.gui import Brain
qt.qpa.xcb: could not connect to display
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.

Available platform plugins are: eglfs, linuxfb, minimal, minimalegl, offscreen, vnc, wayland-egl, wayland, wayland-xcomposite-egl, wayland-xcomposite-glx, webgl, xcb.

Aborted

#-> python3 -m pip list | egrep -i 'numpy|matplot|vispy|pyqt5|pyopengl|pillow|visbrain'
matplotlib 2.1.0
numpy 1.16.4
Pillow 6.0.0
PyOpenGL 3.1.0
PyQt5 5.12.2
PyQt5-sip 4.19.17
visbrain 0.4.4
vispy 0.5.3

Thanks,
Keith

Import error: cannot import _cntr

Getting this error on trying to import visbrain:

>>> import visbrain
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/roger/Documents/jacob/pieces/eoiayidbyII/eoiaw/lib/python3.6/site-packages/visbrain/__init__.py", line 23, in <module>
    from .brain import Brain
  File "/home/roger/Documents/jacob/pieces/eoiayidbyII/eoiaw/lib/python3.6/site-packages/visbrain/brain/__init__.py", line 2, in <module>
    from .brain import Brain
  File "/home/roger/Documents/jacob/pieces/eoiayidbyII/eoiaw/lib/python3.6/site-packages/visbrain/brain/brain.py", line 11, in <module>
    import vispy.scene.cameras as viscam
  File "/home/roger/Documents/jacob/pieces/eoiayidbyII/eoiaw/lib/python3.6/site-packages/vispy/scene/__init__.py", line 33, in <module>
    from .visuals import *  # noqa
  File "/home/roger/Documents/jacob/pieces/eoiayidbyII/eoiaw/lib/python3.6/site-packages/vispy/scene/visuals.py", line 18, in <module>
    from .. import visuals
  File "/home/roger/Documents/jacob/pieces/eoiayidbyII/eoiaw/lib/python3.6/site-packages/vispy/visuals/__init__.py", line 23, in <module>
    from .isocurve import IsocurveVisual  # noqa
  File "/home/roger/Documents/jacob/pieces/eoiayidbyII/eoiaw/lib/python3.6/site-packages/vispy/visuals/isocurve.py", line 18, in <module>
    from matplotlib import _cntr as cntr
ImportError: cannot import name '_cntr'

I'm on Python 3.6.5rc1, all packages installed today from pip. In my poking around I found this: https://stackoverflow.com/questions/49160142/how-should-cntr-should-be-imported-on-matplotlib-lts-2-2-0 Must one use an old version of matplotlib?
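If the cause is indeed the private matplotlib._cntr module (removed in matplotlib 2.2 but still imported by the vispy 0.5.x isocurve visual shown in the traceback above), one workaround is to pin matplotlib below 2.2 until a VisPy release that no longer imports it:

pip install "matplotlib<2.2"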

Automatic scaling while loading single channel data

I see that there is automatic scaling applied via io.read_sleep.py

        if np.abs(np.ptp(data, 0).mean()) < 0.1:
            warn("Wrong data amplitude for Sleep software.")
            data *= 1e6

In this way, we look at the peak-to-peak values across channels.
This has two problems:

  1. If there is only one channel, this method will rescale regardless of the actual units.
  2. If the channels have different units (one EOG in mV and one EEG in uV), I assume that this function will also not rescale, even if it would be necessary (I didn't test this).

Question:
Would it not make more sense to look at the ptp across one channel, or do you expect drift?
Or find a different way of detecting wrong scaling (just a simple min/max should suffice)?

I can implement a check for single-channel data if you want.
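A sketch of the per-channel variant suggested above (data of shape (n_channels, n_times); the 0.1 threshold is simply the one already used in read_sleep.py):

import numpy as np

def rescale_channels(data, threshold=0.1):
    """Rescale channels whose peak-to-peak amplitude looks like volts, not microvolts."""
    data = np.atleast_2d(data)
    ptp = np.ptp(data, axis=1)          # one peak-to-peak value per channel
    data[ptp < threshold] *= 1e6        # rescale only the suspicious channels
    return data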

Add .rec loading capability

As described in title.

A lot of sleep studies are collected in the .rec file format - it would be excellent to have the opportunity to open these in the Sleep tool!

Please Help!!

Every time I try to install visbrain I get this error.

Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-install-2c_5bqh_/visbrain/

What can I do??

I'm no expert in Python, but I'm very interested in using the sleep software for my PhD thesis.

[Sleep] Generalize `Sleep` module to facilitate sleep scoring of non-human EEG/LFP data

First of all: fantastic project. It installs fine (although I would change the installation instructions a bit), mostly everything just works, the docs are great, and the code is very readable (at least in the parts that I have looked at).

Secondly, I am raising this issue because I hope this will ultimately end in a PR (from me) but I wanted to get the conversation started before I start coding, so that you can stop me dead in the track before I veer off into areas that you (or any of the other maintainers) don't want to go.

Goal

Generalize the Sleep submodule to allow / facilitate sleep scoring of non-human EEG/EMG data, specifically from mouse.

Motivation

I have implemented an automated sleep scoring algorithm that achieves a very high accuracy / concordance with experienced experimenters (98.5% on a held out test set). Nevertheless, most experimenters would still like to be able to amend the annotation, e.g. to annotate artifact states.
However, the sleep scoring software (sleepsign) that people in my lab and other labs in the institution use is a) proprietary (including the file format) and b) doesn't allow the import of a hypnogram from a different source (it has a lot of other issues, too, but those are the most pressing for me at the moment). Trying to integrate my pipeline with their existing sleep scoring pipeline is hence a bit of a non-starter. The integration with visbrain, in contrast, seems pretty straightforward (I love the fact that you provide templates to replace existing functionality).

Proposed changes

This list will undoubtedly grow as I and others in my lab actively use visbrain / Sleep.
So far I have got:

  1. Allow definition of arbitrary vigilance states, e.g. in a config file, instead of hard coding N1, N2, N3, REM, wake, and artifact. "Mouse people" don't distinguish between different non-REM states; however, they do denote an additional "sleep movement" state, which occurs when the animal moves (and is potentially briefly awake) during sleep phases.

  2. Associated with arbitrary vigilance states, allow arbitrary(-ish) shortcut keys, which could again be implemented by separating out some keybindings to a config file. My intended target audience has scored dozens and dozens of complete days of data at 4 second intervals, and I don't want to mess with their muscle memory at this stage.

I would love to hear your thoughts and feedback on this proposal.

I would completely understand if you would rather keep everything human-centric, as the proposed generalisation will increase the complexity and scope of the project. That being said, the code base seems pretty modular so I don't think this would entail massive changes all in all (but you would have a better idea of this). Also, such a move could drastically increase your user base (which would be much deserved!), and bring in some fresh ideas.

Looking forward to your reply,
Paul

How to open .csv or .txt in Visbrain Sleep

I can get visbrain.gui.Sleep to open. However, it says there is a way to load basic file formats like .hyp, .csv and even .txt. There are a read_sleep.py and a write_sleep.py (and others) in a directory, but I am not sure how to use them. I don't know what I am missing.
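For reference, the GUI can be launched from a script with the data and hypnogram paths passed directly (this mirrors a call shown in another issue below; the file names here are placeholders, and the reader appears to be chosen from the file extension):

from visbrain.gui import Sleep

# Placeholder paths; Sleep picks the loader based on the extensions.
Sleep(data='recording.edf', hypno='hypnogram.csv').show()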

Visbrain ImportError

Dear all
I want to run "from visbrain import Sleep", but I have a problem. This is my script and the resulting error:

import visbrain
from visbrain.gui import Sleep

/anaconda3/lib/python3.7/site-packages/vispy/visuals/isocurve.py:22: UserWarning: VisPy is not yet compatible with matplotlib 2.2+
  warnings.warn("VisPy is not yet compatible with matplotlib 2.2+")
Traceback (most recent call last):

  File "", line 1, in <module>
    from visbrain.gui import Sleep

  File "/anaconda3/lib/python3.7/site-packages/visbrain/gui/__init__.py", line 1, in <module>
    from .brain import Brain  # noqa

  File "/anaconda3/lib/python3.7/site-packages/visbrain/gui/brain/__init__.py", line 2, in <module>
    from .brain import Brain

  File "/anaconda3/lib/python3.7/site-packages/visbrain/gui/brain/brain.py", line 13, in <module>
    from .interface import UiInit, UiElements, BrainShortcuts

  File "/anaconda3/lib/python3.7/site-packages/visbrain/gui/brain/interface/__init__.py", line 2, in <module>
    from .ui_init import UiInit, BrainShortcuts

  File "/anaconda3/lib/python3.7/site-packages/visbrain/gui/brain/interface/ui_init.py", line 9, in <module>
    from PyQt5 import QtWidgets

ImportError: dlopen(/anaconda3/lib/python3.7/site-packages/PyQt5/QtWidgets.so, 2): Symbol not found: _os_log_default
  Referenced from: /anaconda3/lib/python3.7/site-packages/PyQt5/Qt/lib/QtCore.framework/Versions/5/QtCore (which was built for Mac OS X 10.11)
  Expected in: /usr/lib/libSystem.B.dylib
  in /anaconda3/lib/python3.7/site-packages/PyQt5/Qt/lib/QtCore.framework/Versions/5/QtCore

system information:
OS X Yosemite: 10.10.5

Thank you in advance...

add_topoplot function error in PYTHONNET

I am trying to implement visbrain in a C# program, using Visual Studio 2017 Community.

  • Using Anaconda3 5.2.0.
  • Windows 7, 64-bit.
  • Python version:
(py36) C:\Users\JamesTan\Desktop>python --version
Python 3.6.6 :: Anaconda, Inc.

I am embedding Python in C# using pythonnet.

Below is my pip list:

(py36) C:\Users\JamesTan\Desktop>pip list
Package         Version
--------------- ----------
certifi         2018.8.13
click           6.7
cycler          0.10.0
kiwisolver      1.0.1
matplotlib      2.2.3
mkl-fft         1.0.4
mkl-random      1.0.1
numpy           1.15.0
pandas          0.23.4
Pillow          5.2.0
pip             18.0
PyOpenGL        3.1.0
pyparsing       2.2.0
PyQt5           5.11.2
PyQt5-sip       4.19.12
python-dateutil 2.7.3
pythonnet       2.4.0.dev0
pytz            2018.5
scipy           1.1.0
setuptools      40.0.0
six             1.11.0
visbrain        0.4.2
vispy           0.5.3
wheel           0.31.1
wincertstore    0.2

Following is the code:

using (Py.GIL())
{
                // import visbrain
                dynamic myVisbrain = Py.Import("visbrain");
                Console.WriteLine("hi visbrain");

                // import topo from Visbrain
                dynamic myTopo = myVisbrain.Topo();
                Console.WriteLine("hi topo");

                // Create a list of channels, data, title and colorbar label :
                dynamic myName = "Test";
                dynamic myTitle = "Basic topoplot illustration";
                dynamic myCblabel = "Colorbar label";
                dynamic myChannels = new List<String> { "C3", "C4", "Cz", "Fz", "Pz" };
                dynamic myData = new List<Double> { 10, 20, 30, 10, 10 };

                 // ERROR OCCURS HERE
                // Add a central topoplot :
                myTopo.add_topoplot(myName, myChannels, Py.kw("channels", myData), Py.kw("title", myTitle), Py.kw("cblabel", myCblabel));

                // show
               // myTopo.show();
}

Error message:

Python.Runtime.PythonException: 'UnboundLocalError : local variable 'keeponly' referenced before assignment'

Stack trace:

Python.Runtime.PythonException
  HResult=0x80131500
  Message=UnboundLocalError : local variable 'keeponly' referenced before assignment
  Source=Python.Runtime
  StackTrace:
['  File "d:\\Anaconda3\\envs\\py36\\lib\\site-packages\\visbrain\\topo\\topo.py", line 153, in add_topoplot\n    margin)\n', '  File "d:\\Anaconda3\\envs\\py36\\lib\\site-packages\\visbrain\\visuals\\TopoVisual.py", line 197, in __init__\n    auto = self._get_channel_coordinates(xyz, channels, system, unit)\n', '  File "d:\\Anaconda3\\envs\\py36\\lib\\site-packages\\visbrain\\visuals\\TopoVisual.py", line 388, in _get_channel_coordinates\n    if any(keeponly):\n']

Any idea on this issue? Thank you.

Spectrogram fails for epochs of 0

I realized that if there is an epoch of all zeros, the 20 * np.log10(mesh) will create a -inf, which in turn prevents the correct scaling of the spectrogram, making it more or less invisible.

I propose we replace all NaN and -inf values with the lowest finite value:

visuals.py

            mesh = 20 * np.log10(mesh)
            idx_notfinite = np.isfinite(mesh)==False
            mesh[idx_notfinite] = np.min(mesh[idx_notfinite==False])

What do you think?
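For illustration, a standalone version of the same idea (toy values; the masking is equivalent to the snippet above):

import numpy as np

# Example power values containing a zero epoch, as described above.
mesh = np.array([[1e-3, 0.0], [2e-3, 1e-4]])
mesh = 20 * np.log10(mesh)            # the zero becomes -inf
finite = np.isfinite(mesh)
mesh[~finite] = mesh[finite].min()    # clamp -inf / NaN to the smallest finite value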

Change font by vispy

In VisPy, the Text object is declared like below:

    def __init__(self, text=None, color='black', bold=False,
                 italic=False, face='OpenSans', font_size=12, pos=[0, 0, 0],
                 rotation=0., anchor_x='center', anchor_y='center',
                 method='cpu', font_manager=None):

I can see bold, italic, and font_size exposed in visbrain, but is there any connection to 'face' that might allow changing the font in the canvas using a system font?
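For reference, the underlying VisPy visual does accept face directly, so exposing it would mainly mean passing the value through. A standalone VisPy sketch (the font name is illustrative and must exist on the system):

from vispy import scene

canvas = scene.SceneCanvas(size=(400, 200), show=True)
# Create a text visual with an explicit system font face.
scene.visuals.Text('Visbrain', face='Arial', font_size=24, color='black',
                   pos=(200, 100), parent=canvas.scene)
canvas.app.run()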

Visbrain in Pythonnet

Is it possible to run visbrain with pythonnet, which supports up to a Python 3.5 environment?

Not starting with newest Anaconda

I just tried spinning up the Sleep GUI via
from visbrain.gui import Sleep, but besides a warning, nothing happened
warnings.warn("VisPy is not yet compatible with matplotlib 2.2+")

Therefore, I installed an older version of matplotlib (2.1) and Python 3.5.
With this installed, not even an error message occurred.

Am I missing something?

Running Windows 10 64-bit using the newest Anaconda, with Python 3.7 + matplotlib 3.0 or 2.2, or Python 3.5 with matplotlib 2.1.

Installed packages (for Python 3.5):

Package Version


alabaster 0.7.10
anaconda-client 1.6.14
anaconda-project 0.8.2
asn1crypto 0.24.0
astroid 1.6.3
astropy 3.0.2
attrs 18.1.0
Babel 2.5.3
backcall 0.1.0
backports.shutil-get-terminal-size 1.0.0
beautifulsoup4 4.6.0
bitarray 0.8.1
bkcharts 0.2
blaze 0.11.3
bleach 2.1.3
bokeh 0.12.16
boto 2.48.0
Bottleneck 1.2.1
certifi 2018.4.16
cffi 1.11.5
chardet 3.0.4
click 6.7
cloudpickle 0.5.3
clyent 1.2.2
colorama 0.3.9
comtypes 1.1.4
contextlib2 0.5.5
cryptography 2.2.2
cycler 0.10.0
Cython 0.28.2
cytoolz 0.9.0.1
dask 0.17.5
datashape 0.5.4
decorator 4.3.0
distributed 1.21.8
docutils 0.14
entrypoints 0.2.3
et-xmlfile 1.0.1
fastcache 1.0.2
filelock 3.0.4
Flask 1.0.2
Flask-Cors 3.0.4
gevent 1.3.0
glob2 0.6
greenlet 0.4.13
h5py 2.7.1
heapdict 1.0.0
html5lib 1.0.1
idna 2.6
imageio 2.3.0
imagesize 1.0.0
ipykernel 4.8.2
ipython 6.4.0
ipython-genutils 0.2.0
ipywidgets 7.2.1
isort 4.3.4
itsdangerous 0.24
jdcal 1.4
jedi 0.12.0
Jinja2 2.10
jsonschema 2.6.0
jupyter 1.0.0
jupyter-client 5.2.3
jupyter-console 5.2.0
jupyter-core 4.4.0
jupyterlab 0.32.1
jupyterlab-launcher 0.10.5
kiwisolver 1.0.1
lazy-object-proxy 1.3.1
llvmlite 0.23.1
locket 0.2.0
lxml 4.2.1
MarkupSafe 1.0
matplotlib 2.1.0
mccabe 0.6.1
menuinst 1.4.14
mistune 0.8.3
mkl-fft 1.0.0
mkl-random 1.0.1
more-itertools 4.1.0
mpmath 1.0.0
msgpack 0.5.6
msgpack-python 0.5.6
multipledispatch 0.5.0
nbconvert 5.3.1
nbformat 4.4.0
networkx 2.1
nltk 3.3
nose 1.3.7
notebook 5.5.0
numba 0.38.0
numexpr 2.6.5
numpy 1.14.3
numpydoc 0.8.0
odo 0.5.1
olefile 0.45.1
openpyxl 2.5.3
packaging 17.1
pandas 0.23.0
pandocfilters 1.4.2
parso 0.2.0
partd 0.3.8
path.py 11.0.1
pathlib2 2.3.2
patsy 0.5.0
pep8 1.7.1
pickleshare 0.7.4
Pillow 5.1.0
pip 18.1
pkginfo 1.4.2
pluggy 0.6.0
ply 3.11
prompt-toolkit 1.0.15
psutil 5.4.5
py 1.5.3
pycodestyle 2.4.0
pycosat 0.6.3
pycparser 2.18
pycrypto 2.6.1
pycurl 7.43.0.1
pyflakes 1.6.0
Pygments 2.2.0
pylint 1.8.4
pyodbc 4.0.23
PyOpenGL 3.1.0
pyOpenSSL 18.0.0
pyparsing 2.2.0
PyQt5 5.11.3
PyQt5-sip 4.19.13
PySocks 1.6.8
pytest 3.5.1
pytest-arraydiff 0.2
pytest-astropy 0.3.0
pytest-doctestplus 0.1.3
pytest-openfiles 0.3.0
pytest-remotedata 0.2.1
python-dateutil 2.7.3
pytz 2018.4
PyWavelets 0.5.2
pywin32 223
pywinpty 0.5.1
PyYAML 3.12
pyzmq 17.0.0
QtAwesome 0.4.4
qtconsole 4.3.1
QtPy 1.4.1
requests 2.18.4
rope 0.10.7
ruamel-yaml 0.15.35
scikit-image 0.13.1
scikit-learn 0.19.1
scipy 1.1.0
seaborn 0.8.1
Send2Trash 1.5.0
setuptools 39.1.0
simplegeneric 0.8.1
singledispatch 3.4.0.3
six 1.11.0
snowballstemmer 1.2.1
sortedcollections 0.6.1
sortedcontainers 1.5.10
Sphinx 1.7.4
sphinxcontrib-websupport 1.0.1
spyder 3.2.8
SQLAlchemy 1.2.7
statsmodels 0.9.0
sympy 1.1.1
tables 3.4.3
tblib 1.3.2
terminado 0.8.1
testpath 0.3.1
toolz 0.9.0
tornado 5.0.2
traitlets 4.3.2
typing 3.6.4
unicodecsv 0.14.1
urllib3 1.22
visbrain 0.4.3
vispy 0.5.3
wcwidth 0.1.7
webencodings 0.5.1
Werkzeug 0.14.1
wheel 0.31.1
widgetsnbextension 3.2.1
win-inet-pton 1.0.1
win-unicode-console 0.5
wincertstore 0.2
wrapt 1.10.11
xlrd 1.1.0
XlsxWriter 1.0.4
xlwings 0.11.8
xlwt 1.3.0
zict 0.1.3

Loading vertex directly as ROI

Is it possible to load vertices (i.e. a pre-generated .vtk file) directly as an ROI?

I saw the line below in roi_obj.py and believe it's the starting point.
Can I add a brief definition or class to do so?
The vertices and faces are already loaded, so just the integration is required.

vert, faces, data = np.array([]), np.array([]), np.array([])

I'm trying to run the `Connect deep sources` example and I get an import error

I get the following error:
in is_pandas_installed: raise IOError("pandas not installed. See https://pandas.pydata.org/#...")
OSError: pandas not installed. See https://pandas.pydata.org/#best-way-to-install for installation instructions.
This gets resolved by installing pandas; however, since I am in a virtualenv, it would be nice to have it listed in the requirements, since I don't know which version of pandas this library was tested with.

This is what I get by doing pip install -U visbrain:
freetype-py==2.1.0.post1
kiwisolver==1.1.0
matplotlib==3.1.1
numpy==1.17.4
Pillow==6.2.1
PyOpenGL==3.1.0
pyparsing==2.4.5
PyQt5==5.13.2
PyQt5-sip==12.7.0
python-dateutil==2.8.1
scipy==1.3.2
six==1.13.0
visbrain==0.4.5
vispy==0.6.2

After installing pandas I have this:
freetype-py==2.1.0.post1
kiwisolver==1.1.0
matplotlib==3.1.1
numpy==1.17.4
pandas==0.25.3
Pillow==6.2.1
PyOpenGL==3.1.0
pyparsing==2.4.5
PyQt5==5.13.2
PyQt5-sip==12.7.0
python-dateutil==2.8.1
pytz==2019.3
scipy==1.3.2
six==1.13.0
visbrain==0.4.5
vispy==0.6.2

error: downloading example_data

Downloading /Users/visbrain_data/example_data/xyz_sample.npz

TimeoutError Traceback (most recent call last)
/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/urllib/request.py in do_open(self, http_class, req, **http_conn_args)
1318 try:
-> 1319 h.request(req.get_method(), req.selector, req.data, headers,

It is always like this. How can I solve this issue? Thanks.

Spectrogram updates

I would suggest the possibility of doing spectrograms for the segment length that is chosen - if the user is viewing a 30 s epoch, then the corresponding spectrogram should also be made available.

Furthermore, there is an issue with the spectrogram changing when the view changes to the next epoch. The spectrogram cannot be reapplied without changing, e.g., the Method:
[two screenshots illustrating the spectrogram issue]

MEG inverse solution seems to be using the smoothing matrix badly

I'm talking about this example. The solution looked too "grainy" to me, so I increased the number of smoothing steps to 15 and the activation started to look completely messed up:

[screenshot of the distorted activation]

I've looked into the source code of the BrainObj.add_activation function, and it seems to me this
line in brain_obj.py is to blame:

sm_data = data[sm_mat.col]

You should multiply the data by the smoothing matrix instead of indexing it.
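A toy illustration of the difference, assuming sm_mat is a scipy.sparse COO matrix of shape (n_vertices, n_sources) and data holds one value per source:

import numpy as np
from scipy import sparse

sm_mat = sparse.random(4, 3, density=0.5, format='coo')  # toy smoothing matrix
data = np.array([1., 2., 3.])

indexed = data[sm_mat.col]      # current behaviour: just picks raw source values
smoothed = sm_mat.dot(data)     # proposed: weights and sums them per vertex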
I've managed to make a dirty fix for it that works on my laptop. The solution now looks like this with the same 15 smoothing steps:
[screenshot of the corrected activation]

I can create a pull request if you want me to.

Issues importing data_url.json with pyenv virtual environments

I use pyenv to manage my python virtual environments.

I'm having trouble executing your example for loading an edf file

Here's the stacktrace I get:

Traceback (most recent call last):
  File "load_edf.py", line 28, in <module>
    Sleep(data=dfile, hypno=hfile, config_file=cfile).show()
  File "/Users/elijahc/.pyenv/versions/aether/lib/python3.5/site-packages/visbrain/gui/sleep/sleep.py", line 157, in __init__
    Visuals.__init__(self)
  File "/Users/elijahc/.pyenv/versions/aether/lib/python3.5/site-packages/visbrain/gui/sleep/visuals/visuals.py", line 1103, in __init__
    parent=self._topoCanvas.wc.scene)
  File "/Users/elijahc/.pyenv/versions/aether/lib/python3.5/site-packages/visbrain/gui/sleep/visuals/visuals.py", line 752, in __init__
    TopoMesh.__init__(self, **kwargs)
  File "/Users/elijahc/.pyenv/versions/aether/lib/python3.5/site-packages/visbrain/visuals/topo_visual.py", line 196, in __init__
    auto = self._get_channel_coordinates(xyz, channels, system, unit)
  File "/Users/elijahc/.pyenv/versions/aether/lib/python3.5/site-packages/visbrain/visuals/topo_visual.py", line 382, in _get_channel_coordinates
    xyz, keeponly = self._get_coordinates_from_name(channels)
  File "/Users/elijahc/.pyenv/versions/aether/lib/python3.5/site-packages/visbrain/visuals/topo_visual.py", line 437, in _get_coordinates_from_name
    path = download_file('eegref.npz', astype='topo')
  File "/Users/elijahc/.pyenv/versions/aether/lib/python3.5/site-packages/visbrain/io/download.py", line 99, in download_file
    filename, url = name, get_data_url(name, astype)
  File "/Users/elijahc/.pyenv/versions/aether/lib/python3.5/site-packages/visbrain/io/download.py", line 35, in get_data_url
    urls = load_config_json(get_data_url_path())[astype]
  File "/Users/elijahc/.pyenv/versions/aether/lib/python3.5/site-packages/visbrain/io/rw_config.py", line 47, in load_config_json
    with open(filename) as f:
FileNotFoundError: [Errno 2] No such file or directory: '/Users/elijahc/.pyenv/vers/data_url.json'

I think this is due to the way that you find where the visbrain module has been installed in get_data_url_path()

Currently you're doing this by walking back the path from where visbrain.io.path is installed, but I think using inspect might get you the information more directly and also work better with other Python virtual environment managers.
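A sketch of that more direct approach, using the imported module object itself (the exact location of data_url.json inside the package is assumed here):

import os
import visbrain

# Directory where the installed visbrain package actually lives,
# regardless of the virtual environment layout.
pkg_dir = os.path.dirname(os.path.abspath(visbrain.__file__))
data_url_path = os.path.join(pkg_dir, 'io', 'data_url.json')   # assumed location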

Thoughts?

I can submit a PR for this if that helps

Load Annotation with custom brain

Loading an annotation with parcellize shows a strange result. I guess it is somehow related to the face count just changed above? When the annotation file is loaded for a single hemisphere, it shows just a yellow brain, or a yellow brain with one region in purple.

b_obj.parcellize(lh_annot, hemisphere='left')
b_obj.parcellize(rh_annot, hemisphere='right')

Tested both

b_obj.parcellize(lh_annot)
b_obj.parcellize(rh_annot)

and

b_obj.parcellize(lh_annot, hemisphere='left')
b_obj.parcellize(rh_annot, hemisphere='right')

but the only difference was the warning.

WARNING | left hemisphere(s) inferred from filename
WARNING | right hemisphere(s) inferred from filename

It seems the code loads the annotation properly, but cannot map it properly.
If I re-call the parcellize code as below, both hemispheres show the properly segmented regions, but with changed colors.

b_obj.parcellize(lh_annot, hemisphere='left')
b_obj.parcellize(rh_annot, hemisphere='right')
b_obj.parcellize(rh_annot, hemisphere='right')

[screenshot of the parcellation result]

Issues with setup.py for pip >=10

Hello,

A quick note. I had to change the following two import statements:

  1. from pip.req import parse_requirements -> from pip._internal.req import parse_requirements
  2. from pip.download import PipSession # pylint:disable=E0611 -> from pip._internal.download import PipSession # pylint:disable=E0611

in setup.py to make it work with pip v10.0.1

Thanks.

Visbrain in IronPython

I am using Windows 7, 64-bit.

I am trying to implement visbrain using IronPython in Visual Studio 2017, .NET Framework 4.0.

Below is my IronPython environment:
IronPython v2.7.8
IronPython.StdLib v2.7.8.1

This is the code that I am working on:

// create python engine
var myPythonEngine = Python.CreateEngine();

// create search path
var searchPaths = myPythonEngine.GetSearchPaths();

searchPaths.Add(@"D:\Python35\");
searchPaths.Add(@"D:\Python35\Scripts\");
searchPaths.Add(@"D:\Python35\Lib"); // python standard library
         
// set search path
myPythonEngine.SetSearchPaths(searchPaths);

// create scope to execute python code and get backs results from python code
var myScope = myPythonEngine.CreateScope();

// create the source
var mySource = myPythonEngine.CreateScriptSourceFromFile(@"testtopo.py");

// execute the scope
mySource.Execute(myScope);

testtopo.py only contains two lines of code:

from visbrain import Topo

# Create a topoplot instance :
t = Topo()

But it pops up an error saying:
IronPython.Runtime.Exceptions.ImportException: 'No module named visbrain'

I would like to know what I have missed, so that I can solve this issue. Thank you.

OpenGL unrecognized on Linux

Description:
after installation and running

from visbrain import Brain
vb = Brain(a_template='B3')

I get the following error:

libGL error: unable to load driver: i965_dri.so
libGL error: driver pointer missing
libGL error: failed to load driver: i965
libGL error: unable to load driver: i965_dri.so
libGL error: driver pointer missing
libGL error: failed to load driver: i965
libGL error: unable to load driver: swrast_dri.so
libGL error: failed to load driver: swrast
Unrecognized OpenGL version
Unrecognized OpenGL version

Apparently some libraries are missing, but a quick forum search didn't get me anywhere.
And I did install PyOpenGL.

Did you have an issue like that?

Distinguishing N3 from N2 by computing length of slow waves with a line

Hey everybody, first of all congrats on such a nice toolbox! My team is considering using your toolbox as the new default program for sleep scoring. However, for distinguishing between N2 and N3, we need some kind of ruler functionality with which we can measure the length of slow waves (20% of slow waves in an epoch --> N3). The amplitude is easy to determine with the grid. But how do you determine whether 20% of the epoch contains slow waves? Our old program had a ruler or line (in green) that we could draw from positive peak to positive peak of a slow wave, which then added up the time in seconds of counted slow waves (top right corner). When 6 s were reached, we staged the epoch as N3. Can such a feature be implemented? Thanks a lot!

[example screenshot of the ruler in the old program] (Note that negative is up in this recording.)

Accept different decimal separator

In some countries (e.g. France and Germany), the comma is the decimal separator.
However, in most countries, a dot indicates the decimal.

Right now, my dots are just ignored and I create a filter of 8 Hz instead of '0.8 Hz'.

What would be the best way to implement such an option system-wide? If you point me towards a good entry point, I can see if I can implement it.
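One possible entry point would be a tiny parsing helper used wherever user-typed numbers are read from the GUI, accepting either separator (a sketch; where exactly to hook it into the interface is left open):

def parse_float(text):
    """Parse a user-typed number, accepting both '.' and ',' as decimal separator."""
    return float(str(text).strip().replace(',', '.'))

assert parse_float('0,8') == parse_float('0.8') == 0.8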

cannot import name 'imread' [deprecated in scipy 1.2]

from scipy.misc import imread
ImportError: cannot import name 'imread'

I installed the GitHub version of visbrain, which solved the imresize problem.
imread has been deprecated in SciPy since 1.2; maybe use matplotlib.pyplot.imread instead.
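A drop-in replacement along those lines (the path is a placeholder; matplotlib's imread returns a NumPy array much like the removed SciPy function did):

from matplotlib.pyplot import imread

img = imread('image.png')   # ndarray of shape (height, width, channels)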

weird overlap in bilateral '.gii' import.

[screenshot of the broken 3D shape]

For the same data, the code below, which directly loads the FreeSurfer *.pial files, works fine.

lh_pial = 'lh.pial'
rh_pial = 'rh.pial'
files = [lh_pial, rh_pial]
b1 = BrainObj(files)

However, when I try to load the .gii files as vertices and faces and then merge them as below,

lh_surf_mesh = 'test_L.gii'
rh_surf_mesh = 'test_R.gii'
(vert_l, faces_l) = read_gii(lh_surf_mesh)
(vert_r, faces_r) = read_gii(rh_surf_mesh)

vert = np.vstack([vert_l, vert_r])
faces = np.vstack([faces_l, faces_r])
b2 = BrainObj('Custom', vertices=vert, faces=faces)

the result is the broken 3D shape shown above.
Is there a specific step required before making the BrainObj?
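One likely missing step, assuming the meshes are stacked as above, is to offset the right-hemisphere face indices by the number of left-hemisphere vertices so that they still point at the correct rows of the stacked vertex array; a sketch continuing the snippet above (not verified against read_gii output):

vert = np.vstack([vert_l, vert_r])
# Shift right-hemisphere faces past the left-hemisphere vertices.
faces = np.vstack([faces_l, faces_r + vert_l.shape[0]])
b2 = BrainObj('Custom', vertices=vert, faces=faces)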
