
navis-org / navis


Python library for analysis of neuroanatomical data.

Home Page: https://navis.readthedocs.io

License: GNU General Public License v3.0

Shell 0.03% Python 99.85% Java 0.13%
neurons swc anatomy visualization neuroscience python 3d neuprint skeletons segmentation

navis's Introduction


NAVis is a Python 3 library for Neuron Analysis and Visualization.

Documentation

NAVis is on ReadTheDocs.

Features

  • polyglot: work and convert between neuron skeletons, meshes, dotprops and images
  • visualize: 2D (matplotlib) and 3D (vispy, plotly or k3d)
  • process: skeletonization, meshing, smoothing, repair, downsampling, etc.
  • morphometrics: Strahler analysis, cable length, volume, tortuosity and more
  • similarity: compare & cluster by morphology (e.g. NBLAST, persistence or form factor) or connectivity metrics
  • transform: move data between template brains (built-in support for HDF5, CMTK, Elastix and landmark-based transforms)
  • interface: load neurons directly from neuPrint, neuromorpho.org and other data sources
  • model neurons and networks using the NEURON simulator
  • render: use Blender 3D for high quality visualizations
  • R neuron libraries: interfaces with nat, rcatmaid, elmr and more
  • import-export: read/write SWCs, neuroglancer's "precomputed" format, NMX/NML, NRRD, mesh-files and more
  • scalable: out-of-the-box support for multiprocessing
  • extensible: build your own package on top of navis - see pymaid for example

Getting started

See the documentation for detailed installation instructions, tutorials and examples. For the impatient:

pip3 install 'navis[all]'

which includes all optional extras providing features and/or performance improvements. Currently, this is igraph, pathos, shapely, kdtree, hash, flybrains, cloudvolume, meshes, and vispy-default.

3D plotting from a python REPL is provided by vispy, which has a choice of backends. Different backends work best on different combinations of hardware, OS, python distribution, and REPL, so there may be some trial and error involved. vispy's backends are listed here, and each can be installed as a navis extra, e.g. pip3 install 'navis[vispy-pyqt6]'.


Questions?

Questions on how to use navis are best placed in discussions. Same goes for cool projects or analyses you made using navis - we'd love to hear from you!

Changelog

A summary of changes can be found here.

NAVis & friends

NAVis comes with batteries included but is also highly extensible. Some libraries built on top of NAVis:

  • flybrains provides templates and transforms for Drosophila brains to use with navis
  • pymaid pulls and pushes data from/to CATMAID servers
  • fafbseg contains tools to work with auto-segmented data for the FAFB EM dataset including FlyWire

Who uses NAVis?

NAVis has been used in a range of neurobiological publications. See here for a list.

We have implemented various published algorithms and methods:

  1. NBLAST: Comparison of neurons based on morphology (Costa et al., 2016)
  2. Vertex Similarity: Comparison of neurons based on connectivity (Jarrell et al., 2012)
  3. Comparison of neurons based on synapse distribution (Schlegel et al., 2016)
  4. Synapse flow centrality for axon-dendrite splits (Schneider-Mizell et al., 2016)

Working on your own cool new method? Consider adding it to NAVis!

Citing NAVis

We'd love to know if you found NAVis useful for your research! You can help us spread the word by citing the DOI provided by Zenodo.

License

This code is under GNU GPL V3.

Acknowledgments

NAVis is inspired by and inherits much of its design from the excellent natverse R packages by Greg Jefferis, Alex Bates, James Manton and others.

Contributing

Want to contribute? Great, here is how!

Report bugs or request features

Open an issue. For bug reports please make sure to include some code/data with a minimum example for us to reproduce the bug.

Contribute code

We're always happy for people to contribute code - be it a small bug fix, a new feature or improved documentation.

Here's how you'd do it in a nutshell:

  1. Fork this repository
  2. git clone it to your local machine
  3. Install the full development dependencies with pip install -r requirements.txt
  4. Install the package in editable mode with pip install -e ".[all]"
  5. Create, git add, git commit, git push, and pull request your changes.

Run the tests locally with pytest -v.

Docstrings should use the numpydoc format, and make sure you include any relevant links and citations. Unit tests should be doctests and/or use pytest in the ./tests directory.

Doctests have access to the tmp_dir: pathlib.Path variable, which should be used if any files need to be written.

Feel free to get in touch either through an issue or discussion if you need pointers or input on how to implement an idea.

navis's People

Contributors

antortjim, aschampion, azatian, bdpedigo, clbarnes, dokato, floesche, pnewstein, robbie1977, schlegelp, sridharjagannathan, stuarteberg


navis's Issues

use hash values to check temporary attributes

Could use this to either invalidate or - in a first step - just warn if the neuron's node table has changed and temporary attributes (graph, node types, etc) are outdated:

import hashlib
import navis

n = navis.example_neurons(1)
# note: DataFrame.to_msgpack() was removed in pandas 1.0; to_json() works for hashing
hashlib.md5(n.nodes[['node_id', 'parent_id']].to_json().encode()).hexdigest()

This only adds a few milliseconds of overhead, but we need to make some changes to properties.
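A self-contained sketch of the same idea using pd.util.hash_pandas_object (the node table below is a hypothetical stand-in, not real neuron data):

```python
import hashlib

import pandas as pd

# Hypothetical stand-in for a neuron's node table (in navis this would
# be n.nodes from e.g. navis.example_neurons(1)).
nodes = pd.DataFrame({"node_id": [1, 2, 3], "parent_id": [-1, 1, 2]})

def node_table_hash(nodes):
    # hash_pandas_object gives a stable uint64 hash per row; hashing the
    # concatenated bytes detects any change to node_id/parent_id.
    row_hashes = pd.util.hash_pandas_object(
        nodes[["node_id", "parent_id"]], index=False)
    return hashlib.md5(row_hashes.values.tobytes()).hexdigest()

before = node_table_hash(nodes)
nodes.loc[2, "parent_id"] = 1        # rewire a node...
after = node_table_hash(nodes)
print(before != after)               # True -> cached attributes are stale
```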

Explore options to speed/scale up NBLAST

Just to keep track of some ideas:

  1. When just trying to find matches, it might be worth not computing a precise/final similarity score for pairs of neurons that clearly aren't the same thing. Minimally, this could mean setting a max distance for the KDtree query and not bothering to compute the dot product of the vectors if there is too little overlap.
  2. Add option to limit the amount of memory collectively used by the NBLAST processes (basically chunking the queries).
  3. Run a pre-NBLAST using a (random) sample of points to determine if something "might" be a good match.
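Ideas 1 and 3 could be prototyped along these lines (a rough sketch with scipy; max_dist and min_overlap are illustrative thresholds, not navis defaults):

```python
import numpy as np
from scipy.spatial import cKDTree

def might_match(query_pts, target_pts, max_dist=5.0, min_overlap=0.25):
    """Cheap pre-NBLAST check: fraction of query points with a target
    neighbour within max_dist. Pairs below min_overlap could be skipped
    before computing the full similarity score."""
    # distance_upper_bound makes the KDtree query return inf for misses
    dists, _ = cKDTree(target_pts).query(query_pts,
                                         distance_upper_bound=max_dist)
    return np.isfinite(dists).mean() >= min_overlap

rng = np.random.default_rng(0)
pts = rng.random((200, 3)) * 10
print(might_match(pts, pts + 0.1))    # near-identical clouds -> True
print(might_match(pts, pts + 1000))   # far apart -> False
```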

Neuromorpho neuron fetch

When using NAVis integrated in Blender, I tried to fetch a neuron from the neuromorpho repository in the following way:

import navis
import navis.interfaces.blender as b3d
from navis.interfaces import neuromorpho as nmc

nl = nmc.get_neuron('Fig2a_cell1_1112071rm')
Traceback (most recent call last):
  File "<blender_console>", line 1, in <module>
  File "C:\Program Files\Blender Foundation\Blender 2.93\2.93\python\lib\site-packages\navis\interfaces\neuromorpho.py", line 187, in get_neuron
    n = read_swc(url, **kwargs)
  File "C:\Program Files\Blender Foundation\Blender 2.93\2.93\python\lib\site-packages\navis\io\swc_io.py", line 322, in read_swc
    return reader.read_any(f, include_subdirs, parallel)
  File "C:\Program Files\Blender Foundation\Blender 2.93\2.93\python\lib\site-packages\navis\io\base.py", line 646, in read_any
    return self.read_any_single(obj, attrs)
  File "C:\Program Files\Blender Foundation\Blender 2.93\2.93\python\lib\site-packages\navis\io\base.py", line 544, in read_any_single
    return self.read_url(obj, attrs)
  File "C:\Program Files\Blender Foundation\Blender 2.93\2.93\python\lib\site-packages\navis\io\base.py", line 445, in read_url
    return self.read_buffer(
  File "C:\Program Files\Blender Foundation\Blender 2.93\2.93\python\lib\site-packages\navis\io\swc_io.py", line 101, in read_buffer
    header_rows = read_header_rows(f)
  File "C:\Program Files\Blender Foundation\Blender 2.93\2.93\python\lib\site-packages\navis\io\swc_io.py", line 222, in read_header_rows
    f.seek(0)
io.UnsupportedOperation: underlying stream is not seekable

Fetching neuron info via navis.interfaces.neuromorpho.get_neuron_info(x) works fine.
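A possible workaround until this is fixed upstream: download the file into an in-memory, seekable buffer first and hand that to navis.read_swc (a sketch; fetch_seekable is a hypothetical helper, not part of navis):

```python
import io
import urllib.request

def fetch_seekable(url):
    """Download into an in-memory, seekable buffer; unlike the raw HTTP
    response stream, a BytesIO supports the seek(0) the SWC reader does."""
    with urllib.request.urlopen(url) as resp:
        return io.BytesIO(resp.read())

# Offline stand-in to show the property the reader needs:
buf = io.BytesIO(b"# SWC header\n1 1 0 0 0 1 -1\n")
print(buf.seekable())  # True
```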

soma size

I am trying to show somata at a larger size, but I see in the code that "soma" is a boolean variable.
In your opinion, how should I change the soma size?
Thanks.

use of dict mapping on navis.plot3d

I am trying to plot a 3D skeleton of a neuron of interest (nl) where only the connectors to/from another neuron of choice (APL) are visualized. I have tried the following code, but I still get all connectors in the standard red/blue colors for output/input synapses. What am I doing wrong?

ca = neu.fetch_roi('CA(R)')
nl = neu.fetch_skeletons(5813069089)
navis.plot3d([ca, nl], connectors=True, cn_colors={'APL': (0, 0, 0)})


In the long term: make interfaces stricter

The goal is to make navis' behaviour more strict for use in a server context while staying flexible in user-facing functions.

@clbarnes do you have an example for a function that could do with a stricter interface?

meshio for mesh I/O

Spotted some commits implementing more mesh formats - would it be worth just (optionally?) depending on meshio and calling it a day? Improvements and additional formats could be contributed back upstream. It has neuroglancer's mesh format from a while back - reducing the number of implementations out in the wild is probably for the best.

New feature: function to visually compare neurons

Would work with distdots and flag e.g. branches that exist in one but not the other neuron. The main use case I have in mind is to facilitate visual inspection of matches generated by NBLAST. This would hence need to be tightly intertwined with the plotting methods; we might even write this primarily as a plotting function.

This could be implemented as a type of NBLAST that instead of a single combined score, returns a score for every node.

size of points in 3d plots are not scaled properly

To reproduce:

import numpy as np
import navis

pts = np.array([[18475.98789451, 28713.40566046, 31338.67255543],
                [17709.24328012, 29031.00978332, 27768.92545872]])
navis.plot3d(pts, scatter_kws={"size": 5})

The size of the plotted points is not scaled; the issue is here:
https://github.com/schlegelp/navis/blob/b655256d7bac8e09bb6d1040306e0f1ce590b0f8/navis/plotting/ddd.py#L293

this line should be changed to:

trace_data += scatter2plotly(points, **scatter_kws)

transforms as standalone library

I was experimenting with some image slicing stuff and didn't find any nice, generic standalone library providing an interface for general-purpose coordinate transforms. This kind of thing is obviously useful outside of neuron analysis, and I like the utilities you've got in navis.transforms. What would you think about splitting them out as a standalone library which navis depends on? The functions in navis could then be thin wrappers (if any wrapping was needed at all).

The hope would be that new coordinate transform libraries could use the base class provided by that library and be instantly compatible with downstream libraries, or thin wrappers could be added to the upstream library (like imageio/meshio). Downstream libraries would just have to be generic over the base class.

I hacked a bit on a simplified prototype with some API changes, some of which are much of a muchness:

minor

  • use __call__() instead of xform(), so that they could be used interchangeably with regular free functions
  • __add__() instead of .append() for producing a sequence (it would be quite nice to overload an operator for this, although I'm not sure __add__ is the right one)

could be added to navis

  • add a class which simply wraps another callable

possibly more contentious

  • every instance can hold a reference to which spaces it goes to and from - any hashable can be used as a space reference (probably strings in most cases). This allows you to build transform sequences and a graph/registry from an unordered bag of transforms, as well as do some sanity checks.
  • take coordinates in the form supported by scipy.ndimage.map_coordinates so that entire volumes can be transformed at once. In the 2D case, this is closer to what affine transforms use. It's also slightly closer to the output of numpy.meshgrid. It doesn't really matter in the 2D case as it's just a .T away, but having an arbitrary shape after the first dimension is handy for the image/volume case.
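The minor points above could be sketched roughly like this (all class names assumed; this is not the actual navis.transforms API):

```python
from typing import Hashable

import numpy as np

class Transform:
    """Base class sketch: callable, with source/target space references."""
    def __init__(self, source: Hashable, target: Hashable):
        self.source, self.target = source, target

    def __call__(self, coords: np.ndarray) -> np.ndarray:
        raise NotImplementedError

    def __add__(self, other: "Transform") -> "TransformSequence":
        return TransformSequence(self, other)

class TransformSequence(Transform):
    def __init__(self, *transforms: Transform):
        # sanity check enabled by the space references
        for a, b in zip(transforms, transforms[1:]):
            if a.target != b.source:
                raise ValueError(f"spaces do not chain: {a.target!r} != {b.source!r}")
        super().__init__(transforms[0].source, transforms[-1].target)
        self.transforms = transforms

    def __call__(self, coords):
        for tf in self.transforms:
            coords = tf(coords)
        return coords

class Scale(Transform):
    """Toy concrete transform for demonstration."""
    def __init__(self, factor, source, target):
        super().__init__(source, target)
        self.factor = factor

    def __call__(self, coords):
        return np.asarray(coords, dtype=float) * self.factor

nm_to_um = Scale(1e-3, "nm", "um")
um_to_mm = Scale(1e-3, "um", "mm")
seq = nm_to_um + um_to_mm            # __add__ builds a checked sequence
print(seq([[1e6, 2e6, 3e6]]))        # [[1. 2. 3.]]
```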

add clusters parameter for plotting

This would make it easier to color neurons (and perhaps also add them to the same legend group) by some sort of grouping (e.g. from NBLAST via scipy's fcluster):

plot3d(nl, groups=[0,0,0,1,2,1,1,2,2], palette='muted')

pykdtree requires OMP_NUM_THREADS flag on Linux

Just to leave a paper trail for this observation:

I was setting up a large NBLAST on a Linux (Ubuntu 16.04) server and noticed that it was running suspiciously slow. Some digging flagged pykdtree as the culprit: it was much slower than scipy's KDTree solution (order 5-10 fold). This is weird because on my OSX laptop it's the other way around.

A quick glance at the pykdtree Github brought up the OMP_NUM_THREADS environment variable (which I never bothered with on OSX). Setting OMP_NUM_THREADS=4 indeed brings up pykdtree to the expected speed.
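For the paper trail, the workaround in Python form (it must run before pykdtree is first imported; the value 4 matches the observation above):

```python
import os

# Must be set before pykdtree (and its OpenMP runtime) is imported;
# alternatively, export OMP_NUM_THREADS=4 in the shell.
os.environ.setdefault("OMP_NUM_THREADS", "4")
# ...only then: import pykdtree.kdtree
print(os.environ["OMP_NUM_THREADS"])
```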

FYI: @clbarnes @sdorkenw

Converting Navis to Pymaid neuron objects.

Trying either:

navis_n = navis.example_neurons(n=1, kind='skeleton')
pymaid_n = pymaid.CatmaidNeuron(navis_n.nodes)

or

navis_n = navis.example_neurons(n=1, kind='skeleton')
df = navis_n.nodes.copy()
pymaid_n = pymaid.CatmaidNeuron(df)

or

navis_n = navis.example_neurons(n=1, kind='skeleton')
pymaid_n = pymaid.CatmaidNeuron(navis_n)

Gives these errors:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-8-5ead88bfea2d> in <module>
----> 1 pymaid_n = pymaid.CatmaidNeuron(navis_n)

~/.local/lib/python3.8/site-packages/pymaid/core.py in __init__(self, x, remote_instance, meta_data)
    227 
    228         if not isinstance(x, (str, int, np.int64, pd.Series, CatmaidNeuron)):
--> 229             raise TypeError('Unable to construct CatmaidNeuron from data '
    230                             'type %s' % str(type(x)))
    231 

TypeError: Unable to construct CatmaidNeuron from data type <class 'navis.core.neurons.TreeNeuron'>

and

---------------------------------------------------------------------------
Exception                                 Traceback (most recent call last)
<ipython-input-9-10ed5fb3567d> in <module>
      3 navis_n = navis.example_neurons(n=1, kind='skeleton')
      4 df = navis_n.nodes.copy()
----> 5 pymaid_n = pymaid.CatmaidNeuron(df)

~/.local/lib/python3.8/site-packages/pymaid/core.py in __init__(self, x, remote_instance, meta_data)
    218         if isinstance(x, (pd.DataFrame, CatmaidNeuronList)):
    219             if x.shape[0] != 1:
--> 220                 raise Exception('Unable to construct CatmaidNeuron from data '
    221                                 'containing multiple neurons. Try '
    222                                 'CatmaidNeuronList instead.')

Exception: Unable to construct CatmaidNeuron from data containing multiple neurons. Try CatmaidNeuronList instead.

Given your recent updates, this should be possible from the node table, but the if statement in the second error prevents it.

blender integration macosX

I installed a fresh anaconda environment (python==3.9.2) on macOS 11.6 "Big Sur" and then pip-installed navis into that environment. I then symlinked the conda environment into the location Blender 2.93.5 uses for its Python path (moving the old one to a backup location).

After starting Blender I was able to import pandas, numpy and other packages I had installed in the conda environment, but when trying to import navis I got the following error.

Builtin Modules:       bpy, bpy.data, bpy.ops, bpy.props, bpy.types, bpy.context, bpy.utils, bgl, blf, mathutils
Convenience Imports:   from mathutils import *; from math import *
Convenience Variables: C = bpy.context, D = bpy.data

>>> import navis
Traceback (most recent call last):
  File "<blender_console>", line 1, in <module>
  File "/Applications/Blender.app/Contents/Resources/2.93/python/lib/python3.9/site-packages/navis/__init__.py", line 26, in <module>
    from .plotting import *
  File "/Applications/Blender.app/Contents/Resources/2.93/python/lib/python3.9/site-packages/navis/plotting/__init__.py", line 15, in <module>
    from .dd import *
  File "/Applications/Blender.app/Contents/Resources/2.93/python/lib/python3.9/site-packages/navis/plotting/dd.py", line 38, in <module>
    from .plot_utils import segments_to_coords, tn_pairs_to_coords
  File "/Applications/Blender.app/Contents/Resources/2.93/python/lib/python3.9/site-packages/navis/plotting/plot_utils.py", line 32, in <module>
    from vispy.util.transforms import rotate
  File "/Applications/Blender.app/Contents/Resources/2.93/python/lib/python3.9/site-packages/vispy/__init__.py", line 27, in <module>
    from .util import config, set_log_level, keys, sys_info  # noqa
  File "/Applications/Blender.app/Contents/Resources/2.93/python/lib/python3.9/site-packages/vispy/util/__init__.py", line 14, in <module>
    from . import fonts       # noqa
  File "/Applications/Blender.app/Contents/Resources/2.93/python/lib/python3.9/site-packages/vispy/util/fonts/__init__.py", line 13, in <module>
    from ._triage import _load_glyph, list_fonts  # noqa, analysis:ignore
  File "/Applications/Blender.app/Contents/Resources/2.93/python/lib/python3.9/site-packages/vispy/util/fonts/_triage.py", line 14, in <module>
    from ._quartz import _load_glyph, _list_fonts
  File "/Applications/Blender.app/Contents/Resources/2.93/python/lib/python3.9/site-packages/vispy/util/fonts/_quartz.py", line 12, in <module>
    from ...ext.cocoapy import cf, ct, quartz, CFRange, CFSTR, CGGlyph, UniChar, \
  File "/Applications/Blender.app/Contents/Resources/2.93/python/lib/python3.9/site-packages/vispy/ext/cocoapy.py", line 1288, in <module>
    quartz = cdll.LoadLibrary(util.find_library('quartz'))
  File "/Applications/Blender.app/Contents/Resources/2.93/python/lib/python3.9/ctypes/__init__.py", line 460, in LoadLibrary
    return self._dlltype(name)
  File "/Applications/Blender.app/Contents/Resources/2.93/python/lib/python3.9/ctypes/__init__.py", line 382, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: dlopen(Quartz.framework/Quartz, 6): no suitable image found.  Did find:
    file system relative paths not allowed in hardened programs

I'm going to try doing it the way you suggested in the documentation, but not being able to use anaconda to install packages makes many things quite painful, and it would be great to have a fully functional conda environment that also works with Blender.

Break navis into submodules

This is just to get a discussion starting (@clbarnes):

Currently most functions in navis are automatically imported at top-level.

Pros:

  • convenient because most functions are easy to call via navis.{function}

Cons:

  • importing navis is slow
  • for many analyses one doesn't even need all functions
  • crowded namespace
  • any missing dependency breaks the whole package

My preferred way of improving the situation would be to break navis up by functionality and import only the core functions at top level. It might be worth doing some profiling to see where the actual bottlenecks are.

What I would consider core functionality:

  1. All neuron data models (i.e. navis.core)
  2. The above requires navis.graph
  3. navis.converters to move between neuron representations

In practice there aren't all that many modules that could be easily decoupled without re-organizing:

  1. navis.nbl (NBLAST) is independent
  2. navis.morpho and navis.plotting could be decoupled using lazy imports in the core modules
  3. navis.transforms and navis.data are also independent but probably won't save much time on import
  4. navis.intersection could be decoupled

Other alternatives:

  • make imports conditional on some environment flag. For example, backend users could set NAVIS_CORE=TRUE and navis would only import core functions at top level, while for all other users the experience doesn't change
  • make expensive imports (like Matplotlib, VisPy, etc) lazy
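The lazy-import alternative could lean on the LazyLoader recipe from the importlib docs, e.g.:

```python
import importlib.util
import sys

def lazy_import(name):
    """Defer a module's import until first attribute access
    (the importlib.util.LazyLoader recipe from the Python docs)."""
    spec = importlib.util.find_spec(name)
    loader = importlib.util.LazyLoader(spec.loader)
    spec.loader = loader
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module
    loader.exec_module(module)
    return module

# json stands in for an expensive import like matplotlib or vispy
json = lazy_import("json")      # cheap -- nothing executed yet
print(json.dumps({"a": 1}))     # the real import happens here
```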

I had a quick crack at the imports using python -X importtime and pasting the results of import navis into this website for visualisation:


The top 100 worst imports (if I understand the output correctly) are these:

 time[s]                              name
    6.02                             navis
    5.46                navis.connectivity
    5.45        navis.connectivity.predict
    4.42                        navis.core
    3.73                navis.core.volumes
    2.91                       navis.utils
    1.90             navis.utils.iterables
    1.08                      navis.config
    0.98                  navis.utils.misc
    0.87                            pandas
    0.86                  navis.transforms
    0.83        navis.transforms.templates
    0.82                              pint
    0.81                           trimesh
    0.80                      pint.context
    0.80                  pint.definitions
    0.79                   pint.converters
    0.78                       pint.compat
    0.77                            xarray
    0.66                     tqdm.notebook
    0.65                        ipywidgets
    0.65                     pkg_resources
    0.58                      trimesh.base
    0.55                  navis.clustering
    0.55          navis.clustering.cluster
    0.55                    navis.plotting
    0.50                           IPython
    0.43                 matplotlib.pyplot
    0.42            IPython.terminal.embed
    0.36                     trimesh.poses
    0.36                          networkx
    0.34                   navis.core.base
    0.32                 navis.plotting.dd
    0.32         navis.plotting.plot_utils
    0.32             vispy.util.transforms
    0.32                        vispy.util
    0.32                             vispy
    0.31                        vispy.util
    0.30                  vispy.util.fonts
    0.30          vispy.util.fonts._triage
    0.30          vispy.util.fonts._quartz
    0.30                 vispy.ext.cocoapy
    0.30                             numpy
    0.30                   pandas.core.api
    0.29                           seaborn
    0.28                         OpenGL.GL
    0.25 IPython.terminal.interactiveshell
    0.25               networkx.generators
    0.24                     seaborn.rcmod
    0.24                  seaborn.palettes
    0.23                     seaborn.utils
    0.23                       scipy.stats
    0.22               navis.core.skeleton
    0.22                navis.plotting.ddd
    0.22                 scipy.stats.stats
    0.21  networkx.generators.intersection
    0.20               networkx.algorithms
    0.20             trimesh.exchange.load
    0.20         scipy.stats.distributions
    0.18       navis.plotting.vispy.viewer
    0.18              navis.plotting.vispy
    0.18                     pandas.compat
    0.17               matplotlib.colorbar
    0.16                       trimesh.ray
    0.16                     scipy.spatial
    0.16          trimesh.ray.ray_triangle
    0.16              trimesh.ray.ray_util
    0.16                ipywidgets.widgets
    0.16                        numpy.core
    0.15                    trimesh.bounds
    0.15               pandas.core.groupby
    0.15       pandas.core.groupby.generic
    0.15    scipy.stats._continuous_distns
    0.13                pandas.core.arrays
    0.13       navis.plotting.vispy.viewer
    0.13                   trimesh.nsphere
    0.13                    scipy.optimize
    0.13                       vispy.scene
    0.12             navis.core.core_utils
    0.12                   navis.core.mesh
    0.12                          requests
    0.12                matplotlib.contour
    0.12            pathos.multiprocessing
    0.12                            pathos
    0.11                 pandas.core.frame
    0.11          navis.transforms.factory
    0.11         IPython.terminal.debugger
    0.11             trimesh.exchange.misc
    0.11                  navis.conversion
    0.11         ipywidgets.widgets.widget
    0.11                            meshio
    0.11               pandas.compat.numpy
    0.11            IPython.core.completer
    0.11                    ipykernel.comm
    0.11               pandas.util.version
    0.11          navis.conversion.meshing
    0.11               vispy.scene.visuals
    0.11                      pathos.pools
    0.11              prompt_toolkit.enums
    0.11                    prompt_toolkit

navis doesn't directly import most of these, but pint, for example, is unfortunately very expensive.

Plotting errors in NaVis

I have noticed two plotting errors in Navis:
The full code to reproduce the problem is here: https://github.com/flyconnectome/2020hemibrain_ALLN/blob/fba561dbb0a0ebe4b63c885c831dc65f9d902b0a/Python/HemibrainPN_kde.ipynb

  1. When connectors are plotted in a 3D plot, like fig = navis.plot3d([pn_skeletons[0], al], connectors=True) (refer to cell #48).

  2. When connectors are plotted for only a particular neuron, they end up in the wrong orientation (location); refer to cell #26. I cross-verified by plotting only the connectors (synapses), where they are in the correct location. However, when using navis.plot2d([to_plot_pr, al], connectors=True, connectors_only=True, ax=ax, color=[sns.color_palette('muted', 4)[0], (.8, .8, .8, .2)], method='3d', lw=.8) they are plotted incorrectly.

Issue plotting ROI with navis.plot3d()

Following the neuprint tutorial:

import navis
# Make a 3D plot
fig = navis.plot3d([mbon_skeletons[0], mb])

I get this error :

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-9-7726b8aa70a7> in <module>
      2 
      3 # Make a 3D plot
----> 4 fig = navis.plot3d([mbon_skeletons[0], mb])

/opt/anaconda3/lib/python3.7/site-packages/navis/plotting/ddd.py in plot3d(x, **kwargs)
    219         return plot3d_vispy(x, **kwargs)
    220     else:
--> 221         return plot3d_plotly(x, **kwargs)
    222 
    223 

/opt/anaconda3/lib/python3.7/site-packages/navis/plotting/ddd.py in plot3d_plotly(x, **kwargs)
    311         data += neuron2plotly(neurons, **kwargs)
    312     if volumes:
--> 313         data += volume2plotly(volumes, **kwargs)
    314     if points:
    315         scatter_kws = kwargs.pop('scatter_kws', {})

/opt/anaconda3/lib/python3.7/site-packages/navis/plotting/plotly/graph_objs.py in volume2plotly(x, **kwargs)
    379                                       volumes=x,
    380                                       alpha=kwargs.get('alpha', None),
--> 381                                       color_range=255)
    382 
    383     trace_data = []

ValueError: not enough values to unpack (expected 3, got 2)

The issue may be with the mb volume ROI as the following works well:

fig = navis.plot3d([mbon_skeletons[0]])

Citing navis

Hi @schlegelp!

I'd love to be able to cite this awesome package for use in a publication.

Is the best way to do so currently just using the Zenodo DOI? Let me know if something else would be better!

Import errors

To reproduce this issue, install the latest navis from github and then do

from navis.interfaces.r import neuron2r, neuron2py

This results in

---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
<ipython-input-1-9b74208d3d05> in <module>
----> 1 from navis.interfaces.r import neuron2r, neuron2py

~/Documents/Python/navis/navis/interfaces/r.py in <module>
     43     from rpy2.robjects.conversion import localconverter
     44 
---> 45 from .. import cluster as pyclust
     46 from .. import core, plotting, config, utils
     47 

ImportError: cannot import name 'cluster' from 'navis' (/Users/sri/Documents/Python/navis/navis/__init__.py)

Unable to load swc file from simple neurite tracer

Hi @schlegelp ,
I recently traced a GAL4 line with the Simple Neurite Tracer in Fiji and tried to load the trace (.swc) into navis; however, I'm running into the error below. I was able to load the same .swc file in the natverse. (I have also attached the file here: VFB_00023647_snt_skel-000.swc.zip)

swc_files = '/Users/sri/Documents/Python/gal4_flywire/VFB_00023647_snt_skel-000.swc'
vfb_neuron = navis.read_swc(swc_files)

resulting in..

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-7-d0edd8d77f78> in <module>
----> 1 vfb_neuron = navis.read_swc(swc_files)

~/opt/anaconda3/lib/python3.8/site-packages/navis/io/swc_io.py in read_swc(f, connector_labels, soma_label, include_subdirs, delimiter, parallel, precision, **kwargs)
    192                                 skiprows=len(header),
    193                                 header=None)
--> 194             nodes.columns = ['node_id', 'label', 'x', 'y', 'z',
    195                              'radius', 'parent_id']
    196         except BaseException:

~/opt/anaconda3/lib/python3.8/site-packages/pandas/core/generic.py in __setattr__(self, name, value)
   5150         try:
   5151             object.__getattribute__(self, name)
-> 5152             return object.__setattr__(self, name, value)
   5153         except AttributeError:
   5154             pass

pandas/_libs/properties.pyx in pandas._libs.properties.AxisProperty.__set__()

~/opt/anaconda3/lib/python3.8/site-packages/pandas/core/generic.py in _set_axis(self, axis, labels)
    562     def _set_axis(self, axis: int, labels: Index) -> None:
    563         labels = ensure_index(labels)
--> 564         self._mgr.set_axis(axis, labels)
    565         self._clear_item_cache()
    566 

~/opt/anaconda3/lib/python3.8/site-packages/pandas/core/internals/managers.py in set_axis(self, axis, new_labels)
    224 
    225         if new_len != old_len:
--> 226             raise ValueError(
    227                 f"Length mismatch: Expected axis has {old_len} elements, new "
    228                 f"values have {new_len} elements"

ValueError: Length mismatch: Expected axis has 4 elements, new values have 7 elements
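A possible workaround while this is being looked at: parse the file by hand with flexible whitespace handling (a sketch; the SWC content below is a made-up stand-in) and build the neuron from the resulting 7-column table:

```python
import io

import pandas as pd

# Stand-in for the problematic file; real SNT exports may mix tabs
# and spaces, which a regex separator handles.
swc = io.StringIO("""# generated by Simple Neurite Tracer
1 1 0.0 0.0 0.0 1.0 -1
2 1 1.0 0.0 0.0 1.0 1
""")
nodes = pd.read_csv(swc, comment="#", sep=r"\s+", header=None,
                    names=["node_id", "label", "x", "y", "z",
                           "radius", "parent_id"])
print(nodes.shape)  # (2, 7)
```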

Prune by volume produces - AttributeError - Can't pickle local object

This is also a new error in functionality that was working earlier:

# al: any mesh, e.g. an antennal lobe mesh
# intraglom_skel: any skeleton
intraglom_skel.prune_by_volume(al, mode='IN', prevent_fragments=True)

This results in

 Reason: 'AttributeError("Can't pickle local object 'subgraph_view.<locals>.reverse_edge'")'
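A minimal reproduction of the underlying problem, independent of navis: multiprocessing relies on pickle, and locally defined functions (like the subgraph_view closure in the traceback) cannot be pickled:

```python
import pickle

def subgraph_view():
    # local function, analogous to subgraph_view.<locals>.reverse_edge
    def reverse_edge(u, v):
        return (v, u)
    return reverse_edge

try:
    pickle.dumps(subgraph_view())
except AttributeError as err:
    print(err)  # Can't pickle local object 'subgraph_view.<locals>.reverse_edge'
```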

Dotprops plotting: improve scale vector

Currently the plotting defaults really only work if data is in microns. If data is in nanometers the Dotprops are often not visible. A few ways to improve:

  1. Allow setting dps_scale_vect as Dotprops property
  2. Calculate dps_scale_vect based on dotprops units
  3. Calculate dps_scale_vect based on min distance between points (sampling resolution)

Could feasibly do all of the above in order.
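Option 3 could be sketched as follows; min_point_spacing is a hypothetical helper (not navis API) that estimates the sampling resolution as the smallest pairwise distance. In practice one would use a KDTree (and possibly a subsample) instead of the quadratic loop:

```python
import math

def min_point_spacing(points):
    """Estimate sampling resolution as the smallest pairwise distance."""
    best = math.inf
    for i, p in enumerate(points):
        for q in points[i + 1:]:
            best = min(best, math.dist(p, q))
    return best
```

The resulting spacing could then be multiplied by a constant factor to give a scale vector that adapts to nanometer- vs micron-sized data.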

TreeNeurons: How to deal with missing radii

There is a question of how to best deal with missing radii. A couple different options:

  1. Set missing radii to None
    Advantages: very obvious that they are missing; no problem with scaling
    Disadvantages: does not work with SWC IO (although that would be easy to catch); need to make sure navis does not expect integers/floats

  2. Set missing radii to -1
    This is how CATMAID (and hence pymaid+CatmaidNeurons) tracks missing radii.
    Advantages: easy to store (although Python doesn't really care if data is float anyway); works with SWC format
    Disadvantages: certain operations don't know if negative radii should be ignored or not (e.g. multiplication/division or resampling)

  3. Set missing radii to 0
    Advantages: probably easiest to work with when it comes to scaling, resampling etc; works with SWC format
    Disadvantages: not obvious what's missing and what has been set to 0 on purpose

As things stand, navis does not enforce any of the above, but it would certainly be easiest to settle on one of the options. Currently, I'm leaning towards option 2, hard-coding that negative radii are effectively ignored. But we should also make sure that navis works well with None for radii.
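For illustration, under option 2 any scaling operation would need to special-case negative radii. A minimal sketch with a hypothetical helper:

```python
def scale_radii(radii, factor):
    """Scale radii, leaving values encoded as missing (-1) untouched."""
    return [r * factor if r >= 0 else r for r in radii]
```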

navis doesn't work with rpy2

navis (and, I think, pymaid as well) doesn't work with the latest rpy2 release (3.3.2).
Steps are as follows:

  1. Install rpy2: pip install rpy2
  2. Perform the setup:
import navis
navis.set_pbars(hide=True, jupyter=False)
from navis.interfaces import r
import matplotlib.pyplot as plt
from rpy2.robjects.packages import importr
import rpy2.robjects as robjects

  3. Import nat and try to convert an R object to navis' neuron format, like below:
nat = importr('nat')
kcs20 = robjects.r('kcs20')
kcs20_py = r.dotprops2py(kcs20)

This gives following error:

AttributeError                            Traceback (most recent call last)
<ipython-input-9-338476923f8b> in <module>
----> 1 kcs20_py = r.dotprops2py(kcs20)

~/anaconda3/lib/python3.7/site-packages/navis/interfaces/r.py in dotprops2py(dp, subset)
    453     # This only works if inputs is a neuronlist of dotprops
    454     if 'neuronlist' in cl(dp):
--> 455         df = data2py(dp.slots['df'])
    456         df.reset_index(inplace=True, drop=True)
    457     else:

~/anaconda3/lib/python3.7/site-packages/navis/interfaces/r.py in data2py(data, **kwargs)
    225             return {n: float(data[i]) for i, n in enumerate(names(data))}
    226     elif cl(data)[0] == 'data.frame':
--> 227         return pandas2ri.ri2py(data)
    228     elif cl(data)[0] == 'matrix':
    229         mat = np.array(data)

AttributeError: module 'rpy2.robjects.pandas2ri' has no attribute 'ri2py'

Big Sur, problem importing navis

Since upgrading to Big Sur, import navis fails because vispy can't be imported (see: vispy/vispy#1885). This hasn't been fixed yet, even with Python 3.9.1. I'm mostly using notebooks and therefore don't require vispy. Is there any workaround to use navis without vispy until this is fixed? Thanks!

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-5-78772b4632e1> in <module>
----> 1 import navis
      2 import numpy as np
      3 import pandas as pd
      4 import networkx as nx
      5 import matplotlib.pyplot as plt

/opt/anaconda3/envs/navis/lib/python3.9/site-packages/navis/__init__.py in <module>
     16 from .core import *
     17 from .data import *
---> 18 from .clustering import *
     19 from .connectivity import *
     20 from .graph import *

/opt/anaconda3/envs/navis/lib/python3.9/site-packages/navis/clustering/__init__.py in <module>
     12 #    GNU General Public License for more details.
     13 
---> 14 from .cluster import (cluster_by_connectivity, cluster_by_synapse_placement,
     15                       cluster_xyz, ClustResults)
     16 

/opt/anaconda3/envs/navis/lib/python3.9/site-packages/navis/clustering/cluster.py in <module>
     28 from ..core.neurons import TreeNeuron
     29 from ..core.neuronlist import NeuronList
---> 30 from .. import plotting, utils, config
     31 
     32 # Set up logging

/opt/anaconda3/envs/navis/lib/python3.9/site-packages/navis/plotting/__init__.py in <module>
     13 
     14 from .d import *
---> 15 from .dd import *
     16 from .ddd import *
     17 from .vispy import *

/opt/anaconda3/envs/navis/lib/python3.9/site-packages/navis/plotting/dd.py in <module>
     35 from .. import utils, config, core
     36 from .colors import prepare_colormap, vertex_colors
---> 37 from .plot_utils import segments_to_coords, tn_pairs_to_coords
     38 
     39 __all__ = ['plot2d']

/opt/anaconda3/envs/navis/lib/python3.9/site-packages/navis/plotting/plot_utils.py in <module>
     30 with warnings.catch_warnings():
     31     warnings.simplefilter("ignore")
---> 32     from vispy.util.transforms import rotate
     33 
     34 __all__ = ['tn_pairs_to_coords', 'segments_to_coords', 'fibonacci_sphere', 'make_tube']

/opt/anaconda3/envs/navis/lib/python3.9/site-packages/vispy/__init__.py in <module>
     28     pass
     29 
---> 30 from .util import config, set_log_level, keys, sys_info  # noqa
     31 from .util.wrappers import use, test  # noqa
     32 # load the two functions that IPython uses to instantiate an extension

/opt/anaconda3/envs/navis/lib/python3.9/site-packages/vispy/util/__init__.py in <module>
     12 from .fetching import load_data_file  # noqa
     13 from .frozen import Frozen  # noqa
---> 14 from . import fonts       # noqa
     15 from . import transforms  # noqa
     16 from .wrappers import use, run_subprocess  # noqa

/opt/anaconda3/envs/navis/lib/python3.9/site-packages/vispy/util/fonts/__init__.py in <module>
     11 __all__ = ['list_fonts']
     12 
---> 13 from ._triage import _load_glyph, list_fonts  # noqa, analysis:ignore
     14 from ._vispy_fonts import _vispy_fonts  # noqa, analysis:ignore

/opt/anaconda3/envs/navis/lib/python3.9/site-packages/vispy/util/fonts/_triage.py in <module>
     12     from ...ext.fontconfig import _list_fonts
     13 elif sys.platform == 'darwin':
---> 14     from ._quartz import _load_glyph, _list_fonts
     15 elif sys.platform.startswith('win'):
     16     from ._freetype import _load_glyph  # noqa, analysis:ignore

/opt/anaconda3/envs/navis/lib/python3.9/site-packages/vispy/util/fonts/_quartz.py in <module>
     10 from ctypes import byref, c_int32, c_byte
     11 
---> 12 from ...ext.cocoapy import cf, ct, quartz, CFRange, CFSTR, CGGlyph, UniChar, \
     13     kCTFontFamilyNameAttribute, kCTFontBoldTrait, kCTFontItalicTrait, \
     14     kCTFontSymbolicTrait, kCTFontTraitsAttribute, kCTFontAttributeName, \

/opt/anaconda3/envs/navis/lib/python3.9/site-packages/vispy/ext/cocoapy.py in <module>
   1124 
   1125 NSDefaultRunLoopMode = c_void_p.in_dll(appkit, 'NSDefaultRunLoopMode')
-> 1126 NSEventTrackingRunLoopMode = c_void_p.in_dll(
   1127     appkit, 'NSEventTrackingRunLoopMode')
   1128 NSApplicationDidHideNotification = c_void_p.in_dll(

ValueError: dlsym(RTLD_DEFAULT, NSEventTrackingRunLoopMode): symbol not found

Implement Hdf5 storage

Save to/load from Hdf5 files. Schema following @ceesem's:

Meshwork H5 files:
Meshwork files combine the morphology, tree-like topology, and annotations into
the same file. 
Meshes, which describe the morphology at high resolution, are represented in a
standard triangle mesh format.
Skeletons, which describe the topology, are described in terms of their nodes
and edges.
Synapses are described as dataframes, with points in space that can be mapped to
locations on the mesh. Synapse data makes use of pandas, which is available at
https://pandas.pydata.org/.
--------------
Metadata
	voxel_resolution : voxel resolution in nanometers per voxel for annotation
	points
	seg_id : uint64 Segment ID, specifying the neuron.
	version : Meshparty version used to create the meshwork file.
--------------
Mesh Data
	mesh/vertices : Nx3 float array of vertex locations in nanometers.
	mesh/faces : Mx3 integer array of triangular mesh faces
	mesh/link_edges : Lx2 integer array of extra edges to add to the mesh to span
	gaps
	mesh/node_mask : N_base (where N_base>=N) boolean with N True values. This
	allows one to map these mesh indices to the meshwork mesh acquired directly
	from the segmentation volume.
	mesh/voxel_scaling : 3-element float array giving a multiplicative scaling for
	each dimension to better calibrate distances. Optional.
	mesh/mesh_mask :  N_base (where N_base>=N) boolean. Used to specify an
	additional mask used to 
---------------
Skeleton Data
	skeleton/vertices : Ns x 3 float array of vertex locations in nanometers.
	skeleton/edges : Ms x 2 integer array of skeleton edges.
	skeleton/root : int, The index of the root node of the skeleton.
	skeleton/mesh_to_skel_map: N_base-length integer array. Values indicate which
	skeleton index each mesh vertex is closest to.
    skeleton/radius : Ms-length float array. Estimate of the caliber of the
    neuron at each skeleton node.
    skeleton/mesh_index : Ns-length int array. Values indicate which mesh index
    the skeleton node maps to exactly.
    skeleton/voxel_scaling : 3-element float array giving a multiplicative
    scaling for each dimension to better calibrate distances. Optional.

*gzip compression

move generation of visuals to the objects

This would make it easy(er) for data objects from external libraries to work with navis' plotting functions.

Minimally, we could implement three new methods like _visvispy(self, **kwargs), _vismpl() and _visplotly(), which return a list of visuals/artists/graph objects that can be passed directly to the canvas/figure/ax, respectively.
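A minimal sketch of how that protocol might look (method names as proposed above; everything else is illustrative, not navis code):

```python
class PlottableMixin:
    """Objects implement per-backend hooks returning plottable primitives."""

    def _vismpl(self, **kwargs):
        """Return a list of matplotlib artists for this object."""
        raise NotImplementedError

    def _visplotly(self, **kwargs):
        """Return a list of plotly graph objects."""
        raise NotImplementedError

    def _visvispy(self, **kwargs):
        """Return a list of vispy visuals."""
        raise NotImplementedError

def plot_any(objects, backend='mpl', **kwargs):
    """A plotting function only needs the hook, not the object's type."""
    hook = {'mpl': '_vismpl', 'plotly': '_visplotly', 'vispy': '_visvispy'}[backend]
    artists = []
    for obj in objects:
        artists.extend(getattr(obj, hook)(**kwargs))
    return artists
```

External libraries could then make their data objects work with navis' plotting functions simply by implementing the relevant hook.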

upload_neuron sends wrong argument to navis.write_swc()

I get an error here when trying to upload a neuron https://github.com/schlegelp/pymaid/blob/9bdd2cc5a3312e4971fff12e772b90129b298c75/pymaid/upload.py#L512

TypeError: write_swc() got an unexpected keyword argument 'filename'

navis.write_swc() appears to expect 'filepath' instead of 'filename'.
https://github.com/schlegelp/navis/blob/fb706e2ba7f7776aaf204a67131396d3730027a2/navis/io/swc_io.py#L887

Changing 'filename' to 'filepath' in L512 of upload.py in pymaid seemed to fix the problem.

NBLAST: look into setting an upper distance for nearest-neighbour search

With the default scoring matrix, distances greater than 40um are essentially treated the same, with only minor differences based on vectors. We could leverage this to cut down the time spent in the nearest-neighbour search for clear non-matches by setting an upper distance limit.

A quick test:

>>> import navis
>>> import numpy as np
>>> import pykdtree
>>> n = navis.example_neurons(1)
>>> # Generate the tree
>>> tree = pykdtree.kdtree.KDTree(n.nodes[['x', 'y', 'z']].values)  
>>> # A set of 10,000 test points
>>> q = np.random.randint(0, 40000, (10000, 3)).astype(np.float32)
>>> # Test without upper limit
>>> %timeit _ = tree.query(q)                                                                                            
24.9 ms ± 5.48 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
>>> # Test with upper limit
>>> %timeit _ = tree.query(q, distance_upper_bound=40000 / 8)
3.19 ms ± 856 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

This test case is likely to work out less favourably for actual neurons but still looks promising.

@clbarnes @sdorkenw @dokato @jefferis FYI

Problem importing navis

When using
pip install git+git://github.com/schlegelp/navis@master
followed by
import navis
I get the following error:
AttributeError: type object 'pyoctree.pyoctree.pyoctree.PyOctree' has no attribute '__reduce_cython__'

The following seems to work:
pip install navis==0.5.1

This seems to be a new problem, it was still working properly yesterday.
Thanks.

TreeNeuron.prune_at_depth

Remove all nodes more than a given distance away from the root, keeping the proximal part. It looks like I could use geodesic_matrix to find all nodes further than distance, and remove_nodes to take them all out - but I wonder if it might be better to traverse the tree from the root, getting a list of the first such node encountered on each branch, and cutting those. That way we avoid chopping up or rewiring bits of neuron we're going to throw away anyway.
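A rough sketch of the traversal idea on a toy parent/child representation (not navis API). The walk stops at the first node past the cutoff on each branch, so distal subtrees are never visited at all:

```python
from collections import defaultdict, deque

def prune_at_depth(edges, root, max_dist):
    """Keep only nodes within max_dist (geodesic) of the root.

    edges maps (parent, child) -> edge length; a toy stand-in for a
    TreeNeuron's node table.
    """
    children = defaultdict(list)
    for (parent, child), length in edges.items():
        children[parent].append((child, length))

    keep = set()
    queue = deque([(root, 0.0)])
    while queue:
        node, dist = queue.popleft()
        if dist > max_dist:
            # First node past the cutoff on this branch: cut here and
            # never descend into its subtree.
            continue
        keep.add(node)
        for child, length in children[node]:
            queue.append((child, dist + length))
    return keep
```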

Transformation graphs

This might be a use case already covered by something else, or might be a stupid idea, but I can envision a situation where you have different transformations for different types of task; e.g. between brains, between sides, and between segments. In this case, rather than generating a Transform for every permutation, it might be handy to create a TransformSequence on the fly which e.g. transforms a neuron from its current segment into a better-reconstructed segment, then flips it to the other side, then registers it in a different brain.

This could be done with a TransformGraph class, which would wrap a networkx.OrderedDiGraph. You populate the graph with arbitrary nodes, each representing a space, and then add edges, each carrying a Transform object, wherever you know how to transform between two spaces. Reverse edges could be populated with inverse transforms if they are not given explicitly. Then, when you need to transform some points from space A to space B, the graph is traversed and a TransformSequence is created based on the path.

You could easily add weights to this graph if there were some transformations you had more faith in than others.

You could make it more robust by looking for multiple paths and finding a consensus deformation by weighting the results of each path by the length of the path, but of course this would be slower.
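A toy sketch of that idea; TransformGraph here is hypothetical and uses plain callables and a dict-based graph in place of Transform objects and networkx (weights and the consensus idea are omitted):

```python
from collections import deque

class TransformGraph:
    """Spaces are nodes; each directed edge carries a point transform."""

    def __init__(self):
        self.edges = {}  # (source, target) -> callable on points

    def add_transform(self, source, target, func, inverse=None):
        self.edges[(source, target)] = func
        if inverse is not None:
            # Populate the reverse edge with the inverse transform
            self.edges[(target, source)] = inverse

    def _path(self, source, target):
        # Breadth-first search for the path with the fewest hops
        adj = {}
        for s, t in self.edges:
            adj.setdefault(s, []).append(t)
        queue, seen = deque([[source]]), {source}
        while queue:
            path = queue.popleft()
            if path[-1] == target:
                return path
            for nxt in adj.get(path[-1], []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(path + [nxt])
        raise ValueError(f'No path from {source} to {target}')

    def xform(self, points, source, target):
        # Apply the transform sequence along the found path
        path = self._path(source, target)
        for s, t in zip(path, path[1:]):
            points = self.edges[(s, t)](points)
        return points
```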

Parallel processing

Question for @clbarnes:

I recently added a @map_neuronlist decorator that simply applies the decorated function over all neurons in a neuron list. This saves a lot of boilerplate.

My plan had been to extend this decorator to allow parallel processing. A mock example:

@map_neuronlist(desc='Pruning', allow_parallel=True)
def prune_neuron(x):
   # do stuff 
   return x

I tried that now by modifying the decorator to look something like this (again mock example):

def map_neuronlist(desc=None, allow_parallel=False):
    def decorator(function):
        @functools.wraps(function)
        def wrapper(nl, *args, **kwargs):
            if kwargs.pop('parallel', False) and allow_parallel:
                with multiprocessing.Pool() as pool:
                    # Map function over all neurons
                    res = list(pool.imap(function, nl))
            else:
                res = [function(n) for n in nl]
            return NeuronList(res)
        return wrapper
    return decorator

Unfortunately, it does not work like that because pickling the decorated function fails with something along the lines of:

PicklingError: Can't pickle <function prune_by_strahler at 0x131b30710>: it's not the same object as navis.morpho.manipulation.prune_by_strahler

I found two ways to work around this:

  1. Don't use the @decorator syntax and keep the undecorated function around. For the first example this would look something like this:
def _prune_neuron(x):
   # do stuff 
   return x

prune_neuron = map_neuronlist(desc='Pruning', allow_parallel=True)(_prune_neuron)

Not exactly pretty but it works.

  2. Use pathos instead of the built-in multiprocessing. pathos.multiprocessing.ProcessingPool uses dill instead of pickle and seems able to deal with this issue. It also seems much faster at spawning processes, but I haven't tested that properly. This would introduce a new (soft?) dependency.

Thoughts/suggestions?

Question about geodesic distance

Hi, I'm trying to measure the geodesic distance between nodes and I noticed that some distance values were inf (I used the geodesic_matrix function). And when I run dist_between on two specific nodes, say with node IDs X and Y, I get the error message Node Y not reachable from X. Could you please elaborate on what this means and why some nodes are not reachable from others? Thank you very much.

intersection_matrix returns error when running a dict with key as glom, value as navis.Volume

To reproduce this issue, run:

navis.intersection_matrix(allln_skel, meshes, attr='cable_length')

Here, allln_skel comes from some AL local neurons fetched via neu.fetch_skeletons, and meshes is a dict with glomerulus names (or similar) as keys and navis.Volume objects as values.

The problem seems to be in this line

https://github.com/schlegelp/navis/blob/cf49958669c594e686ca4c98d53e0950a1131eb5/navis/intersection/intersect.py#L249

When enumerating over a dict, the values yielded are index and key, not key and value as assumed here.
I solved it like this:

volume = {dictkey: utils.make_volume(volume[dictkey]) for idx, dictkey in enumerate(volume)}
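The pitfall is easy to demonstrate with toy data:

```python
volumes = {'glomA': 'volA', 'glomB': 'volB'}

# Enumerating a dict pairs an index with each *key*; values never appear.
pairs = list(enumerate(volumes))   # [(0, 'glomA'), (1, 'glomB')]

# To get (key, value) pairs, iterate over .items() instead.
items = list(volumes.items())      # [('glomA', 'volA'), ('glomB', 'volB')]
```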

Meaning of presynaptic and postsynaptic connectors

Hi,

I am a little confused about the meaning of presynaptic and postsynaptic connectors. Am I correct in thinking that, for a CatmaidNeuron, its presynaptic connectors are points where the neuron sends an output (i.e. the neuron possesses a presynaptic site that signals to another neuron by connecting to a postsynaptic site), and its postsynaptic connectors are points where the neuron receives input from another neuron?

Multiple values for argument when plotting

import navis
import navis.interfaces.neuprint as neu
import matplotlib.pyplot as plt

client = neu.Client('https://neuprint.janelia.org/', 'hemibrain:v1.1')

al = neu.fetch_roi('AL(R)')
fig = plt.figure(figsize=(5, 5))
ax = fig.add_subplot(1, 1, 1, projection='3d')
navis.plot2d(al, ax=ax, lw=.8, method='3d')

Results in error:

----> 3 navis.plot2d(al, ax = ax, lw=.8,method='3d')

~/anaconda3/lib/python3.7/site-packages/navis/plotting/dd.py in plot2d(x, method, **kwargs)
    326                              method,
    327                              ax,
--> 328                              **kwargs)
    329 
    330     # Create lines from segments

TypeError: _plot_volume() got multiple values for argument 'ax'

It seems the recent changes to the plotting functions have caused this multiple-arguments problem. It is not only present in _plot_volume but in other functions as well, and it occurs not only for ax but also for colors etc.

Consider adding support for direct transformation of navis.TreeNeuron objects

For example, when I want to transform an object from, say, FANC (female nerve cord) space to MANC (male nerve cord) space, I currently do the following:

  1. Build a thin-plate spline registration:
    fanc2manctr = navis.transforms.thinplate.TPStransform(fancreg_pts, mancreg_pts)
  2. Transform a neuron:
    fanc2mancskeleton = fancskeleton.copy()
    fanc2mancskeleton.nodes[['x', 'y', 'z']] = fanc2manctr.xform(fancskeleton.nodes[['x', 'y', 'z']])

Here you can see that I am able to transform the neuron fancskeleton into fanc2mancskeleton using the registration. But it would be easier for users to transform directly, say:
fanc2mancskeleton = fanc2manctr.xform(fancskeleton)
Thus, it would be useful to have an implementation of the transform class that also works on neuron and neuronlist objects (this is the way it is currently implemented in nat).
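A toy sketch of what such duck-typed dispatch could look like (ToyNeuron and PointTransform are illustrative stand-ins, not navis classes):

```python
import copy

class ToyNeuron:
    """Stand-in for a TreeNeuron: just a list of node dicts."""
    def __init__(self, nodes):
        self.nodes = nodes

class PointTransform:
    """A transform whose xform accepts raw points or neuron-like objects."""
    def __init__(self, func):
        self.func = func  # maps a list of (x, y, z) tuples to new tuples

    def xform(self, x):
        if hasattr(x, 'nodes'):
            # Neuron-like: transform node coordinates on a copy
            neuron = copy.deepcopy(x)
            pts = [(n['x'], n['y'], n['z']) for n in neuron.nodes]
            for node, (nx, ny, nz) in zip(neuron.nodes, self.func(pts)):
                node.update(x=nx, y=ny, z=nz)
            return neuron
        return self.func(x)
```

With this, transform.xform(neuron) and transform.xform(points) both work, much like nat's behaviour described above.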

Feature: improve Dotprop transforms

Currently, only the points are transformed and the vector component of a Dotprop is then recalculated using k. That's probably fairly robust and might actually help reduce noise introduced by the transform. However, it means that we need to know k. In cases where we don't have k (or don't want the vectors to be recalculated), we should bite the bullet and transform a second set of points that describe the vectors.
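The second-set-of-points approach could look roughly like this (xform_dotprops is a hypothetical helper; transform is any callable mapping a list of points to new points):

```python
import math

def xform_dotprops(points, vects, transform, eps=1.0):
    """Transform dotprop points AND tangent vectors without knowing k.

    For each point, a second point offset along its vector is transformed
    too; the re-normalized difference yields the new vector.
    """
    heads = [tuple(p + eps * v for p, v in zip(pt, vc))
             for pt, vc in zip(points, vects)]
    new_pts = transform(points)
    new_heads = transform(heads)
    new_vects = []
    for p, h in zip(new_pts, new_heads):
        d = [hi - pi for hi, pi in zip(h, p)]
        norm = math.sqrt(sum(c * c for c in d)) or 1.0
        new_vects.append(tuple(c / norm for c in d))
    return new_pts, new_vects
```

Note that eps should be small relative to the scale of the deformation so the offset points sample the local behaviour of the transform.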

split_axon_dendrite not generating errors and just quitting

Hi Philipp,

For some reason, I am having a problem with navis.split_axon_dendrite(): it does not print any error message, but it also does not do anything, and the rest of my code after the function call stops running. Below is my code:

import navis
import navis.interfaces.neuprint as neu

#connect with neuprint server
client = neu.Client('https://neuprint.janelia.org',dataset='hemibrain:v1.2.1',token=my_api_token)

#get skeletons
alskels = neu.fetch_skeletons(neu.SegmentCriteria(instance='APL_R'))
on = alskels[0]
on = navis.heal_skeleton(on, method='ALL', drop_disc=True).convert_units('um')
print(f'{on.name}')
print(on)
print(f'cablelength (um) = {on.cable_length}')
print(f'N_pre = {on.n_presynapses}')
print(f'N_pos = {on.n_postsynapses}')
print(navis.split_axon_dendrite(on))
split_sections = navis.split_axon_dendrite(on, metric='flow_centrality')

print(split_sections)
axon = split_sections[split_sections.compartment == 'axon'].convert_units('um')
dend = split_sections[split_sections.compartment == 'dendrite'].convert_units('um')
print(axon)
print(dend)
print(axon.cable_length, axon.n_presynapses, axon.n_postsynapses)
print(dend.cable_length, dend.n_presynapses, dend.n_postsynapses)

The output is:

APL_R
type            navis.TreeNeuron
name                       APL_R
id                     425790257
n_nodes                   182514
n_connectors              143341
n_branches                 20298
n_leafs                    20835
cable_length        80151.210938
soma                       17368
units             1.0 micrometer
dtype: object
cablelength (um) = 80151.2109375
N_pre = 16190
N_pos = 127151

For some reason, it doesn't print any error messages, but it also doesn't print what it should after print(f'N_pos = {on.n_postsynapses}'). I have absolutely no idea why this is happening... Could you please help me solve this problem?

navis.nblast fails in notebook

navis.nblast fails in the notebook but works fine from the console. I applied all the settings as advised in #17 (comment).

Following the steps from the tutorial:

import navis
nl = navis.example_neurons()
nl_um = nl / (1000 / 8)
dps = navis.make_dotprops(nl_um, k=5, resample=False)
nbl = navis.nblast(dps[:3], dps[3:])

I get:

BrokenProcessPool                         Traceback (most recent call last)
<ipython-input-5-0557c052a238> in <module>
----> 1 nbl = navis.nblast(dps[:3], dps[3:])

~/anaconda3/lib/python3.8/site-packages/navis-0.5.2-py3.8.egg/navis/nbl/nblast_funcs.py in nblast(query, target, scores, normalized, use_alpha, n_cores, progress, k, resample)
    687                                scores=scores) for this in nblasters]
    688 
--> 689         results = [f.result() for f in futures]
    690 
    691     scores = pd.DataFrame(np.zeros((len(query_dps), len(target_dps))),

~/anaconda3/lib/python3.8/site-packages/navis-0.5.2-py3.8.egg/navis/nbl/nblast_funcs.py in <listcomp>(.0)
    687                                scores=scores) for this in nblasters]
    688 
--> 689         results = [f.result() for f in futures]
    690 
    691     scores = pd.DataFrame(np.zeros((len(query_dps), len(target_dps))),

~/anaconda3/lib/python3.8/concurrent/futures/_base.py in result(self, timeout)
    437                 raise CancelledError()
    438             elif self._state == FINISHED:
--> 439                 return self.__get_result()
    440             else:
    441                 raise TimeoutError()

~/anaconda3/lib/python3.8/concurrent/futures/_base.py in __get_result(self)
    386     def __get_result(self):
    387         if self._exception:
--> 388             raise self._exception
    389         else:
    390             return self._result

BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.

For debugging:

I use Mac, BigSur 11.2.3, M1, Python 3.8 from Anaconda.

 navis.__version__
Out[7]: '0.5.2'

BrokenPipeError when calling xform_brain

Greetings, can you please help with this issue?

navis version: 0.5.2

Repro:

import flybrains
import fafbseg
import navis

flybrains.download_jefferislab_transforms()
flybrains.download_saalfeldlab_transforms()
flybrains.register_transforms()

navis.transforms.registry.summary()

sk = navis.read_swc("new_dp.swc")
tsk = navis.xform_brain(x=sk, source="JFRC2", target="FAFB14")

[new_dp.swc.zip](https://github.com/schlegelp/navis/files/6321936/new_dp.swc.zip)

Error:

 File "/root/miniconda3/envs/flywire-transforms/lib/python3.8/site-packages/navis/transforms/templates.py", line 770, in xform_brain
    return xform(x, transform=trans_seq, caching=caching,
  File "/root/miniconda3/envs/flywire-transforms/lib/python3.8/site-packages/navis/transforms/xfm_funcs.py", line 135, in xform
    xyz_xf = xform(xyz,
  File "/root/miniconda3/envs/flywire-transforms/lib/python3.8/site-packages/navis/transforms/xfm_funcs.py", line 196, in xform
    return transform.xform(x, affine_fallback=affine_fallback)
  File "/root/miniconda3/envs/flywire-transforms/lib/python3.8/site-packages/navis/transforms/base.py", line 176, in xform
    xf[~is_nan] = tr.xform(xf[~is_nan],
  File "/root/miniconda3/envs/flywire-transforms/lib/python3.8/site-packages/navis/transforms/cmtk.py", line 412, in xform
    proc.stdin.write(points_str.encode())
BrokenPipeError: [Errno 32] Broken pipe

cut neuron issue

Hi,

I have encountered a problem with the navis.cut_neuron function. I know there was recently an update to this function but, even so, installing navis directly from this repository (as of 07.10.2020) and using networkx==2.4 gives me the following error:

The code (simply from tutorial):

import navis as na
n = na.example_neurons(1).simple
bp = n.nodes[n.nodes.type=='branch'].node_id.values
n_cut = na.cut_neuron(n, bp[0])

And the Error:

AttributeError                            Traceback (most recent call last)
<ipython-input-7-37dcecfafc6f> in <module>
      2 n = na.example_neurons(1).simple
      3 bp = n.nodes[n.nodes.type=='branch'].node_id.values
----> 4 n_cut = na.cut_neuron(n, bp[0])

~/anaconda3/envs/py3/lib/python3.7/site-packages/navis/graph/graph_utils.py in cut_neuron(x, cut_node, ret)
   1325             cut = _cut_igraph(to_cut, cn, ret)
   1326         else:
-> 1327             cut = _cut_networkx(to_cut, cn, ret)
   1328 
   1329         # If ret != 'both', we will get only a single neuron - therefore

~/anaconda3/envs/py3/lib/python3.7/site-packages/navis/graph/graph_utils.py in _cut_networkx(x, cut_node, ret)
   1426 
   1427         # Reassign graphs
-> 1428         dist.graph = dist_graph
   1429 
   1430         # Clear other temporary attributes

AttributeError: can't set attribute

Best,
Alex
