
neuronal-analysis's Introduction

Join the chat at https://gitter.im/openworm/neuronal-analysis

neuronal-analysis

Tools to produce, analyze, and compare both simulated and recorded neuronal datasets

This repository is intended to capture the results of analyzing the dynamics of recorded C. elegans neurons, analyzing the results of simulations of C. elegans neurons, and making the two comparable to each other. While our current simulations of C. elegans neurons still leave out many important details and tuning, there is much to be learned from even coarse-grained comparisons between real dynamics and the dynamics that imprecise models can produce.

Analysis

In a separate repo, @lukeczapla and @theideasmith reported progress in reproducing analyses, via Python code, of existing datasets recorded from real C. elegans neurons:

Sample fluorescence heatmap for 107 neurons (image: zimmer-scaled).

PCA on derivatives (image: derivative-manifolds).
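The PCA step behind a figure like this can be sketched as follows. This is a minimal illustration on synthetic data; the neuron count and time base are placeholders, not the Kato recordings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for derivative traces: (timepoints x neurons).
T, N = 3000, 107
deriv = np.gradient(rng.standard_normal((T, N)).cumsum(axis=0), axis=0)

# PCA via SVD on the mean-centered matrix.
X = deriv - deriv.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)

# Project onto the first three components to get the low-dimensional
# trajectory ("manifold") that Kato et al. visualize.
trajectory = X @ Vt[:3].T          # shape (T, 3)
explained = (S**2) / (S**2).sum()  # variance explained per component
```

On the real data, `trajectory` would be plotted in 3-D and colored by behavioral state.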

They wrote:

We also have some code to deal with the Kato data, which you can access at the link provided, in the folder wbdata. In that directory, if you import transform.py as tf, you can access a dictionary containing all of Kato's data with tf.wormData. The dictionary contains calcium imaging data for five worms and is keyed by the original MATLAB file names as they were sent by Kato himself :). Each worm's data contains a time vector ('tv'), neural activity traces both uncorrected ('deltaFOverF') and corrected for bleaching ('deltaFOverF_bc'), as well as derivatives ('deltaFOverF_deriv'). The identities of the neurons, in the same order, are in the cell array 'NeuronIds'.
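The access pattern described in that quote can be illustrated with a mock dictionary that mirrors the key names; the shapes and the file name below are placeholders, and the real data lives under wbdata:

```python
import numpy as np

# Mock of tf.wormData for one worm; keys follow the description above.
# The real dictionary is keyed by the original MATLAB file names.
T, N = 100, 107
worm = {
    "tv": np.linspace(0.0, 300.0, T),        # time vector
    "deltaFOverF": np.zeros((T, N)),         # uncorrected traces
    "deltaFOverF_bc": np.zeros((T, N)),      # bleach-corrected traces
    "deltaFOverF_deriv": np.zeros((T, N)),   # de-noised derivatives
    "NeuronIds": ["neuron_%d" % i for i in range(N)],  # same ordering
}
wormData = {"example_worm1.mat": worm}  # placeholder file name

# Pull the bleach-corrected trace for one (placeholder) neuron name.
idx = worm["NeuronIds"].index("neuron_5")
trace = worm["deltaFOverF_bc"][:, idx]
```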

Simulation

The same approach taken for recordings of real neurons can be applied to simulations. The real neurons report a fluorescence change over time while the simulations output membrane potential (voltage), but because those two variables are known to be related, they are comparable at the level of gross dynamics.
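One simple way to put the two observables on a common footing, sketched below with made-up traces, is to z-score each signal; this is an illustrative choice, not a prescribed pipeline step:

```python
import numpy as np

def zscore(x, axis=0):
    """Standardize traces to zero mean and unit variance along time."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean(axis=axis, keepdims=True)) / x.std(axis=axis, keepdims=True)

rng = np.random.default_rng(1)
fluorescence = 1.0 + 0.1 * rng.standard_normal((500, 10))  # dF/F, arbitrary units
voltage_mV = -60.0 + 5.0 * rng.standard_normal((500, 10))  # membrane potential

f_norm, v_norm = zscore(fluorescence), zscore(voltage_mV)
# Correlations, PCA, etc. can now be applied to both on equal footing.
```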

Bottom-up

The OpenWorm project has been following a bottom-up approach to modeling the nervous system for some time. The c302 project has been a rallying point for this.

c302 structure

Using the libNeuroML library, combined with information about the names and connectivity of C. elegans neurons, we have built up a data structure that represents the nervous system of C. elegans, with details such as the kinds of synapses, neurotransmitters, and gap junction relationships each neuron may have. Specifically, you can see how this is built in this script and this script.
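As a rough analogy (plain Python dicts standing in for the actual libNeuroML objects, with illustrative entries rather than real connectome data), the kind of structure being assembled looks like this:

```python
# Toy stand-in for the NeuroML network c302 builds; the connections
# listed here are illustrative, not taken from the real connectome.
network = {
    "cells": ["AVAL", "AVAR", "PLML"],
    "chemical_synapses": [
        # (pre, post, neurotransmitter, weight) -- invented entries
        ("PLML", "AVAL", "glutamate", 1.0),
    ],
    "gap_junctions": [
        ("AVAL", "AVAR"),  # electrical coupling
    ],
}

# The kind of query the real data structure supports in richer form.
inputs_to_AVAL = [s for s in network["chemical_synapses"] if s[1] == "AVAL"]
```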

Then we can run the network as a simulation and see how the membrane potential might change over time.

Simulation in jNeuroML

The project of identifying and modeling the dynamics of all the ion channels has been the work of the ChannelWorm project. From there, the muscle model has been our rallying point for a highly detailed single-cell model that would be the first to use our detailed ion channel models. Beyond this, we have begun work to improve the motor neurons and get the dynamics of the synapse between motor neurons and muscles correct. At each level, we have endeavored to constrain the model with data: ion channel models are constrained by and tuned with I/V curve information mined from the literature, and the same can begin to be done with the muscle model. While this is important infrastructure, many gaps remain.
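As an illustration of constraining a channel model with I/V data, the sketch below fits a Boltzmann activation curve times a linear driving force to synthetic digitized points. The model form, parameter values, and function names are assumptions for illustration, not the ChannelWorm pipeline:

```python
import numpy as np
from scipy.optimize import curve_fit

def iv_model(v, gmax, vhalf, k, erev):
    """Steady-state current: Boltzmann activation times linear driving force."""
    return gmax / (1.0 + np.exp(-(v - vhalf) / k)) * (v - erev)

# Synthetic stand-in for I/V points digitized from a paper.
v = np.linspace(-80.0, 40.0, 25)  # mV
true = dict(gmax=2.0, vhalf=-20.0, k=8.0, erev=-70.0)
i_obs = iv_model(v, **true)

# Fit the model parameters to the "digitized" points.
popt, _ = curve_fit(iv_model, v, i_obs, p0=(1.0, -10.0, 5.0, -60.0))
gmax, vhalf, k, erev = popt
```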

Top-down

With the advent of new experimental techniques for imaging hundreds of neurons at the same time, we now have another important kind of data with which to constrain the nervous system model. We can begin to explore the minimum dynamical conditions necessary to produce, in broad strokes, the same kinds of high-level dynamics. Even simplistic models of neurons wired together with the C. elegans connectome will likely enable interesting investigations.
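A minimal sketch of that idea, with a random sparse matrix standing in for the real connectome and a generic firing-rate model in place of detailed biophysics:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in connectome: sparse signed weights between N neurons.
N = 50
W = rng.standard_normal((N, N)) * (rng.random((N, N)) < 0.1)

# Simple rate model: tau dx/dt = -x + W tanh(x), Euler-integrated.
tau, dt, steps = 0.5, 0.01, 2000
x = 0.1 * rng.standard_normal(N)
traj = np.empty((steps, N))
for t in range(steps):
    x = x + (dt / tau) * (-x + W @ np.tanh(x))
    traj[t] = x
# traj holds the simulated activity time series that could then be
# compared, coarsely, against recorded dynamics.
```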

Merging the Bottom-up with the Top-down

It is critical that the work of improving the nervous system model use all approaches and result in a single unified model rather than a disconnected collection of models. While it is acceptable to have different versions of the models temporarily, these must eventually converge on a single picture of the nervous system. To facilitate this, we have begun with modeling and computational infrastructure capable of representing both high-level and low-level aspects of the biophysics of neurons.

The future

Next steps for this exciting endeavor will be described in the issues on this repo. As we collect scripts and code that are relevant to putting these pieces together, we will incorporate them here and make sure the community can execute them.

neuronal-analysis's People

Contributors

benjijack, gitter-badger, slarson, theideasmith


neuronal-analysis's Issues

Trouble Recreating Kato-like visualizations from upstream data

Originally asked by @BenjiJack over here:

As a step toward that goal, building on @theideasmith's work importing Kato's data, I tried to recreate Kato's visualizations myself (see iPython notebook below), but from one step upstream: I tried to use the fluorescence bleach cancelled timeseries to naively compute the derivative myself, rather than use the fluorescence derivative timeseries directly. After all, if we are to take a timeseries output from c302 and compare it to Kato's results, we will have to reproduce the entire pipeline of analysis that Kato used. One could debate whether this is worth the effort, but it seems that Kato is (at least for the moment) our richest resource for neuron-by-neuron behavior in the real world, and worth comparing against.

I had trouble reproducing Kato's results from the upstream data. Although my approach was naive, it's not immediately clear to me how to fix it or how he cleaned the data so thoroughly. He mentions in the supplemental discussion on p. 9 that "total-variation regularization (Chartrand, 2011) was used to compute de-noised time derivatives...". Maybe someone else can shed more light on this or I can go back and read further on this technique.

Here's my attempt (image: output_3_1).

IPython notebook with code and images:
kato_visualization.pdf
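A minimal illustration of why naive differentiation struggles here: on a noisy trace, a raw finite difference amplifies noise, while even crude pre-smoothing (a simplistic stand-in for the total-variation regularized differentiation of Chartrand, 2011) yields a far cleaner derivative. The signal below is synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)

# Noisy step-like trace, standing in for a bleach-corrected signal.
t = np.linspace(0.0, 10.0, 1000)
clean = np.tanh(2.0 * (t - 5.0))
noisy = clean + 0.05 * rng.standard_normal(t.size)

# Naive derivative: noise dominates.
naive = np.gradient(noisy, t)

# Crude moving-average smoothing before differentiating; TV-regularized
# differentiation is the principled version of this idea.
w = 25
smoothed = np.convolve(noisy, np.ones(w) / w, mode="same")
smooth_deriv = np.gradient(smoothed, t)
```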

Copyright issues on publicizing Kato data?

@lukeczapla and I obtained the data from Manuel Zimmer. Going forward, I'm wondering if we need to check with them if publicizing the data is OK. We didn't get any sharing guidelines in the email from them, but I just want to be safe here.

Thoughts anyone?

Could we train c302 using Kato's data?

Originally asked by @BenjiJack over here:

Another idea I was thinking about is to use Kato's data to train c302. Could we train the neural network using the output of real-world neurons? @theideasmith you seemed to be alluding to this with the paper you referenced previously. I am sure something like this has been debated among the group before and I would be interested to hear your comments on whether this is feasible and how we might do it.
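As a toy version of "training" against recorded activity (a hedged sketch on synthetic traces, not the c302 pipeline), one can fit a linear dynamics matrix A such that x[t+1] ≈ A x[t] by least squares:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic "recorded" activity from a hidden stable linear system
# driven by small process noise.
N, T = 20, 1000
G = rng.standard_normal((N, N))
A_true = 0.95 * G / np.abs(np.linalg.eigvals(G)).max()
x = np.zeros((T, N))
for t in range(T - 1):
    x[t + 1] = A_true @ x[t] + 0.01 * rng.standard_normal(N)

# "Train" by least squares: find A minimizing ||x[t+1] - A x[t]||.
X0, X1 = x[:-1], x[1:]
A_fit = np.linalg.lstsq(X0, X1, rcond=None)[0].T
```

Fitting the parameters of a biophysical network against data is far harder than this linear case, but the same fit-to-recordings framing applies.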

What techniques will we use to analyze and compare activity series data for relevant insight?

One of the goals of neuronal-analysis is to be the project-wide platform for analyzing both in-vivo and in-silico neuronal activity series data. The remarkable part of starting this repo now is the availability of both extensive time series data (from Kato) and a full, correctly mapped connectome (which is fairly recent). This opens up the possibility for groundbreaking work in (1) parameterizing comprehensive models of C. elegans with realistic global and local dynamics, and (2) gaining new insight into the function of neural circuitry in C. elegans. I hope this repo is where such work can begin.

Because the space of what can be done with time series data is tremendous, I'm wondering if we can start a discussion of what people think would be useful analysis functions to implement here.

The original Kato paper used PCA on its datasets to correlate global neuronal dynamics with movement behavior. I'm positive there is a plethora of other techniques that can be employed to analyze individual datasets, to compare datasets, and to relate dynamics to connectivity. We can look at the data statistically, with dynamical systems theory, etc.

What are your thoughts?
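As one concrete candidate for comparing datasets (an illustrative choice only): measure how well the dominant PCA subspaces of two recordings align, using the singular values of the product of their component matrices, which are the cosines of the principal angles between the subspaces:

```python
import numpy as np

def pca_components(X, k):
    """Top-k right singular vectors of the centered data matrix."""
    Xc = X - X.mean(axis=0)
    return np.linalg.svd(Xc, full_matrices=False)[2][:k]

def subspace_overlap(X, Y, k=3):
    """Mean squared cosine of principal angles between top-k PC subspaces."""
    Vx, Vy = pca_components(X, k), pca_components(Y, k)
    cosines = np.linalg.svd(Vx @ Vy.T, compute_uv=False)
    return float(np.mean(cosines**2))  # 1.0 means identical subspaces

rng = np.random.default_rng(5)
A = rng.standard_normal((400, 30))          # stand-in dataset 1
B = rng.standard_normal((400, 30))          # stand-in dataset 2
overlap_same = subspace_overlap(A, A)       # identical data: ~1.0
overlap_diff = subspace_overlap(A, B)       # unrelated data: much lower
```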

Unable to load data

@theideasmith I am trying to run some of your IPython notebooks but am having some trouble loading the data.

>>> import data_config as dc
***/neuronal-analysis/data/wbdata/
>>> wormData = dc.kato.data()
>>> len(wormData)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: object of type 'NoneType' has no len()

While trying to figure out why no data was being loaded, I noticed in the source code that the data() function requires the data to be cached, so instead I ran:

>>> dc.kato.retrieve()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "biodatamanager.py", line 88, in retrieve
    self.__cache = self.__reader(self.filepath)
  File "***/anaconda/envs/py27/lib/python2.7/site-packages/pandas/io/pickle.py", line 60, in read_pickle
    return try_read(path)
  File "***/anaconda/envs/py27/lib/python2.7/site-packages/pandas/io/pickle.py", line 57, in try_read
    return pc.load(fh, encoding=encoding, compat=True)
  File "***/anaconda/envs/py27/lib/python2.7/site-packages/pandas/compat/pickle_compat.py", line 118, in load
    return up.load()
  File "***/anaconda/envs/py27/lib/python2.7/pickle.py", line 864, in load
    dispatch[key](self)
KeyError: 'v'

Not sure if I'm going about this the wrong way, if there is a dependency problem, or something else. How should I be loading the data?
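For what it's worth, the traceback ends inside pickle's opcode dispatch table, which typically indicates the cached pickle was written with a newer protocol than the running interpreter can read (Python 2.7 cannot read protocol 3+). One workaround is to bypass the cache and load the original MATLAB files with scipy.io.loadmat; the file and key names below are placeholders, demonstrated via a round trip:

```python
import os
import tempfile

import numpy as np
from scipy.io import loadmat, savemat

# Round-trip demo with a stand-in .mat file; in practice you would point
# loadmat at the original Kato recording files instead of the cache.
path = os.path.join(tempfile.gettempdir(), "example_worm.mat")
savemat(path, {"tv": np.arange(5.0)})
tv = loadmat(path)["tv"].ravel()
```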
