siplab-gt / cleo

Closed-Loop, Electrophysiology, and Optophysiology experiment simulation testbed for mesoscale neuroscience

Home Page: https://cleosim.readthedocs.io

License: MIT License

Python 100.00%
computational-neuroscience optogenetics electrophysiology closed-loop-control simulation

cleo's Introduction

Cleo: the Closed-Loop, Electrophysiology, and Optophysiology experiment simulation testbed

Hello there! Cleo has the goal of bridging theory and experiment for mesoscale neuroscience, facilitating electrode recording, optogenetic stimulation, and closed-loop experiments (e.g., real-time input and output processing) with the Brian 2 spiking neural network simulator. We hope users will find these components useful for prototyping experiments, innovating methods, and testing hypotheses in silico, incorporating laboratory techniques ranging from passive observation to complex model-based feedback control into spiking neural network models. Cleo also serves as an extensible, modular base for developing additional recording and stimulation modules for Brian simulations.

This package was developed by Kyle Johnsen and Nathan Cruzado under the direction of Chris Rozell at Georgia Institute of Technology. See the preprint here.

Overview table of Cleo's closed-loop control, ephys, and ophys features

๐Ÿ–ฅ๏ธ Closed Loop processing

Cleo allows for flexible I/O processing in real time, enabling the simulation of closed-loop experiments such as event-triggered or feedback control. The user can also add latency to the stimulation to study the effects of computation delays.
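As a rough illustration, here is a minimal sketch of a delayed controller, assuming the LatencyIOProcessor interface shown in the Cleo tutorials (the class name, constructor argument, and set_io_processor call are recalled from the docs and may differ; check the documentation):

from cleo.ioproc import LatencyIOProcessor

class ConstantDelayStim(LatencyIOProcessor):
    def process(self, state_dict, sample_time_ms):
        # compute an output from the sampled state (details depend on the injected devices)
        out = {"fiber": 1.0}  # e.g., a light intensity for a stimulator named "fiber"
        return out, sample_time_ms + 3  # +3 ms models computation/communication delay

sim.set_io_processor(ConstantDelayStim(sample_period_ms=1))  # sim: an existing CLSimulator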

🔌 Electrode recording

Cleo provides functions for configuring electrode arrays and placing them in arbitrary locations in the simulation. The user can then specify parameters for probabilistic spike detection or a spike-based LFP approximation developed by Teleńczuk et al., 2020.
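A hedged sketch of what configuration could look like (class, helper, and parameter names are recalled from the cleo.ephys docs and may not match exactly):

from brian2 import mm, um
import cleo.ephys as ephys

coords = ephys.linear_shank_coords(array_length=1 * mm, channel_count=32)
mua = ephys.MultiUnitSpiking(
    r_perfect_detection=50 * um,  # all spikes detected within this radius (parameter name assumed)
    r_half_detection=100 * um,    # 50% detection probability at this radius (parameter name assumed)
)
probe = ephys.Probe(coords, signals=[mua, ephys.TKLFPSignal()])
sim.inject(probe, exc_group, tklfp_type="exc")  # sim and exc_group come from an existing model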

⚡ 1P/2P optogenetic stimulation

By modeling light propagation and opsins, Cleo enables users to flexibly add photostimulation to their model. Both a four-state Markov model of opsin kinetics and a minimal proportional-current option (for compatibility with simple neuron models) are available. Cleo also accounts for opsin action spectra to model the effects of multi-light/wavelength/opsin crosstalk and heterogeneous expression. Parameters are included for multiple opsins, as well as for blue optic fiber (1P) and infrared spot (2P) illumination.
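A hedged sketch of what injection could look like (the constructor names chr2_4s, fiber473nm, and Light, plus the Iopto_var_name keyword, are recalled from the docs and may differ):

from brian2 import mm
from cleo.opto import chr2_4s
from cleo.light import Light, fiber473nm

opsin = chr2_4s()  # four-state Markov kinetics; a proportional-current option also exists
fiber = Light(light_model=fiber473nm(), coords=(0, 0, 0.3) * mm, name="fiber")
sim.inject(opsin, exc_group, Iopto_var_name="Iopto")  # current variable in the neuron model
sim.inject(fiber, exc_group)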

🔬 2P imaging

Users can also inject a microscope into their model, selecting neurons on the specified plane of imaging or elsewhere, with signal and noise strength determined by indicator expression levels and position with respect to the focal plane. The calcium indicator model of Song et al., 2021 is implemented, with parameters included for GCaMP6 variants.
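A hedged sketch of microscope injection (Scope, gcamp6f, and the keyword names are recalled from the docs and may differ):

from brian2 import um
from cleo.imaging import Scope, gcamp6f

scope = Scope(sensor=gcamp6f(), img_width=500 * um, focus_depth=100 * um)
sim.inject(scope, exc_group)  # signal/noise then depend on expression level and distance from the focal plane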

🚀 Getting started

Just use pip to install; the name on PyPI is cleosim:

pip install cleosim

Then head to the overview section of the documentation for a more detailed discussion of motivation, structure, and basic usage.

📚 Related resources

Those using Cleo to simulate closed-loop control experiments may be interested in software developed for the execution of real-time, in vivo experiments. Developed by members of Chris Rozell's and Garrett Stanley's labs at Georgia Tech, the CLOCTools repository can serve these users in two ways:

  1. By providing utilities and interfaces with experimental platforms for moving from simulation to reality.
  2. By providing performant control and estimation algorithms for feedback control. Although Cleo enables closed-loop manipulation of network simulations, it does not include any advanced control algorithms itself. The ldsCtrlEst library implements adaptive linear dynamical system-based control while the hmm library can generate and decode systems with discrete latent states and observations.

CLOCTools and Cleo

📃 Publications

Cleo: A testbed for bridging model and experiment by simulating closed-loop stimulation, electrode recording, and optophysiology
K.A. Johnsen, N.A. Cruzado, Z.C. Menard, A.A. Willats, A.S. Charles, and C.J. Rozell. bioRxiv, 2023.

CLOC Tools: A Library of Tools for Closed-Loop Neuroscience
A.A. Willats, M.F. Bolus, K.A. Johnsen, G.B. Stanley, and C.J. Rozell. In prep, 2023.

State-Aware Control of Switching Neural Dynamics
A.A. Willats, M.F. Bolus, C.J. Whitmire, G.B. Stanley, and C.J. Rozell. In prep, 2023.

Closed-Loop Identifiability in Neural Circuits
A. Willats, M. O'Shaughnessy, and C. Rozell. In prep, 2023.

State-space optimal feedback control of optogenetically driven neural activity
M.F. Bolus, A.A. Willats, C.J. Rozell and G.B. Stanley. Journal of Neural Engineering, 18(3), pp. 036006, March 2021.

Design strategies for dynamic closed-loop optogenetic neurocontrol in vivo
M.F. Bolus, A.A. Willats, C.J. Whitmire, C.J. Rozell and G.B. Stanley. Journal of Neural Engineering, 15(2), pp. 026011, January 2018.

cleo's People

Contributors

kjohnsen, nac0005

cleo's Issues

Future improvements for processing delays

At present, "serial" processing only operates on the entire processor, leading to the unrealistic scenario of the processor being unable to sample until the output is finished. In reality, we expect that I/O could take up much of the round-trip time without causing samples to back up, even if the core computations must be executed serially.

A better approach would be for serial sampling to separate the computational part of the processing, which can't overlap from one sample to the next, from the I/O, which can.

But what about when some processing blocks can be executed in parallel while others can't? In that case, the most correct, general capability would be to specify the processing as a directed acyclic graph (DAG) of components, where the dependencies would automatically determine what could be parallelized and the resulting latency would reflect the longest path through that graph.
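For concreteness, an illustrative sketch (not part of Cleo) of how total latency would fall out of such a dependency graph, with made-up block names and delays:

import networkx as nx

g = nx.DiGraph()
g.add_weighted_edges_from([
    ("sample", "filter", 0.5),          # edge weight = latency (ms) contributed by the downstream block
    ("filter", "decode", 2.0),
    ("filter", "artifact_check", 0.3),  # independent of "decode", so it could run in parallel
    ("decode", "stim_out", 0.5),
    ("artifact_check", "stim_out", 0.5),
])
total_latency_ms = nx.dag_longest_path_length(g, weight="weight")  # 3.0 ms, via the decode branch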

That said, on 1/26/22 Chris, Adam, Nathan, and I decided that this complexity is unneeded for the vast majority of use cases, and we'll simply make fixed sampling with parallel processing the default. The other two combinations (fixed + serial, when-idle + serial) would probably only be useful after implementing some of these proposed changes, if we ever decide it's worth the effort.

Unit mismatch in MultiUnitSpiking when running LFP tutorial

Hi!

First, thanks a lot for developing this package, it's excellent and really well documented!

I've been toying around with the Advanced LFP tutorial and I believe there is an issue with the units of the MultiUnitSpiking signal:

  File "simulation.py", line 365, in init
    simulation.inject(
  File "/python3/lib/python3.10/site-packages/cleo/base.py", line 383, in inject
    device.connect_to_neuron_group(ng, **kwparams)
  File "/python3/lib/python3.10/site-packages/cleo/ephys/probes.py", line 158, in connect_to_neuron_group
    signal.connect_to_neuron_group(neuron_group, **kwparams)
  File "/python3/lib/python3.10/site-packages/cleo/ephys/spiking.py", line 209, in connect_to_neuron_group
    neuron_channel_dtct_probs = super(
  File "/python3/lib/python3.10/site-packages/cleo/ephys/spiking.py", line 100, in connect_to_neuron_group
    probs = self._detection_prob_for_distance(distances)
  File "/python3/lib/python3.10/site-packages/cleo/ephys/spiking.py", line 149, in _detection_prob_for_distance
    decaying_p = h / (r - c)
  File "/python3/lib/python3.10/site-packages/brian2/units/fundamentalunits.py", line 1193, in __array_ufunc__
    fail_for_dimension_mismatch(
  File "/python3/lib/python3.10/site-packages/brian2/units/fundamentalunits.py", line 266, in fail_for_dimension_mismatch
    raise DimensionMismatchError(error_message, dim1, dim2)
brian2.units.fundamentalunits.DimensionMismatchError: Cannot calculate [[107.43123428  44.86179886 ... 193.94388688 214.15676509]
 [149.46418226  84.59855499 ... 188.94152985 195.68866257]
 ...
 [226.23937524 204.19338619 ...  36.16518571  97.96112797]
 [254.65660985 220.12370993 ...  32.13300059  49.14026405]] mm^2 subtract 0. m, the units do not match (units are m^2 and m).

Let me know if you cannot reproduce this; then I can try to come up with a more minimal reproducer.

Thanks!

Support heterogeneous sample times

i.e., a microscope could be imaging at 30 Hz while the ephys is at 1 kHz. Control could also be on a separate schedule and simply use the latest sample if it's faster than one of the recorders.

2p imaging

  • limit image to 2D plane (I think. verify)
  • potential cross-talk via low-level background light level at specified wavelength (more realistic would be tiny blips of higher intensity light corresponding to scanning, but we don't want to simulate at extreme temporal resolution)
  • separate out the indicator model to be injected separately. Hopefully straightforward kinetics models exist for voltage and calcium indicators. Or, in the case of Ca++, maybe don't simulate ODEs but instead compute closed-form convolutions of spikes, plus noise (see the sketch after this list).
  • output ΔF/F traces for each ROI
  • (maybe) generate images from output data (show ROIs from traces, add noise to background)
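A closed-form version could look roughly like this sketch (illustrative only; kernel shape, time constants, and noise model are placeholders):

import numpy as np

def dff_from_spikes(spike_times_ms, t_ms, tau_rise=10.0, tau_decay=400.0, amp=0.1, noise_sd=0.01):
    # ΔF/F as spikes convolved with a causal double-exponential calcium kernel, plus Gaussian noise
    dff = np.zeros_like(t_ms, dtype=float)
    for t_spk in spike_times_ms:
        dt = np.clip(t_ms - t_spk, 0.0, None)  # kernel is zero before the spike
        dff += amp * (np.exp(-dt / tau_decay) - np.exp(-dt / tau_rise))
    return dff + np.random.normal(0.0, noise_sd, size=t_ms.shape)

t = np.arange(0.0, 2000.0, 1.0)  # ms
trace = dff_from_spikes(np.array([200.0, 210.0, 900.0]), t)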

holo stim

  • Gaussian ellipsoid light model (separate axial vs. lateral std dev); see the sketch after this list
  • limit targets to 2D plane
  • add option to specify intensity by mW instead of mW/mm2
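An illustrative sketch of such a profile (all numbers are placeholders, not Cleo parameters):

import numpy as np

def ellipsoid_irradiance(x_um, y_um, z_um, I0_mW_per_mm2=10.0, sigma_lat_um=5.0, sigma_ax_um=15.0):
    # Gaussian ellipsoid centered on the target, with separate lateral and axial spread
    r2_lateral = x_um**2 + y_um**2
    return I0_mW_per_mm2 * np.exp(-r2_lateral / (2 * sigma_lat_um**2) - z_um**2 / (2 * sigma_ax_um**2))

print(ellipsoid_irradiance(0.0, 0.0, 10.0))  # irradiance 10 um along the optical axis from the target center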

Questions for experimentalists:

  • What's the temporal resolution limit for updating values?
  • Do I understand correctly that light is scanned across the imaging plane, so that two neurons won't actually be stimulated at the same time? Do I need to simulate short bursts of strong light, or can I just approximate that with lower-intensity light over the whole sampling period?
    • Probably don't need to worry: Packer et al., 2015 shows how an SLM can do it

normalize excitation/action spectra

Since constant power seems more common, probably make that the native representation. Create a helper function spectrum_for_constant_photons() for when spectra are given otherwise. epsilon would report the value for photons, while a new sensitivity function would report the value relative to power.
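One way the conversion could look (illustrative only; the helper name comes from this issue, and the direction and normalization are assumptions):

import numpy as np

def spectrum_for_constant_photons(wavelengths_nm, epsilon_constant_power):
    # At equal power, photon flux scales with wavelength (E_photon = hc/lambda), so a
    # per-unit-power response differs from a per-photon response by a factor of lambda.
    lam = np.asarray(wavelengths_nm, dtype=float)
    eps_power = np.asarray(epsilon_constant_power, dtype=float)
    eps_photons = eps_power / lam           # response per photon ~ (response per unit power) / lambda
    return eps_photons / eps_photons.max()  # renormalize so the peak is 1

print(spectrum_for_constant_photons([450, 470, 490, 510], [0.8, 1.0, 0.9, 0.6]))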

Many-to-many light-opsin support

Currently, only one OptogeneticIntervention can be used on a neuron group at a time. This is because only one Iopto current gets added. We could support more by making Iopto a sum of currents identified by name, e.g., Iopto_name.
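An illustrative Brian sketch of the idea (not current Cleo behavior): each opsin writes to its own named variable, and the neuron model sums them into a single Iopto term.

from brian2 import NeuronGroup

ng = NeuronGroup(
    5,
    """
    dv/dt = (-v + Rm * Iopto) / tau_m : volt
    Iopto = Iopto_chr2 + Iopto_vfchrimson : amp  # one named term per opsin
    Iopto_chr2 : amp                             # each written by its own opsin model, e.g. via (summed) synapse variables
    Iopto_vfchrimson : amp
    Rm : ohm
    tau_m : second
    """,
    method="euler",
)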

revise docs

  • opsin changes
  • neo export
  • make overview executable test like tutorials

make simple indicator not light dependent

Make LightDependentProtein a subclass of Protein. Need a function for the synapse source: the neuron group (ng) in Protein and the light-aggregator ng in LightDependentProtein.

Make the constructor functions include an option for simple or light-dependent, e.g.:

def jgcamp7f(light_dependent=False):
    # params: shared jGCaMP7f indicator parameters (placeholder in this sketch)
    if light_dependent:
        return LDGECI(**params)
    else:
        return SimpleGECI(**params)

gcamp = jgcamp7f(light_dependent=True)  # or light_dependent=False

Neo export

  • Try to make Neo an optional dependency.
  • Add Neo export to end of CLOC and all-optical tutorials
  • would need a Neo export function defined for each device
    • tricky for tetrodes. maybe ignore

Bug when using stochastic variable xi

CLEOSim's coordinate assignment appears to run into problems trying to re-create the xi variable (since I just copied and pasted code from Brian to create the x, y, and z variables).

Nathan's experience:

I caught another cleosim/Brian2 interaction bug while working on getting the Wilmes and Clopath simulation going. Brian2 uses the name "xi" as a special variable for white noise (https://brian2.readthedocs.io/en/stable/user/equations.html). For some reason, the cleosim utilities redefine xi again, the same way Brian2 does. This creates the error: KeyError: "The name 'xi' is already present in the variables dictionary.". Adding a suffix to the xi name (as Brian2 allows) still gives the error. (For instance, I renamed xi to xi_model and got KeyError: "The name 'xi_model' is already present in the variables dictionary.")
...
This bug was not showing up when I was just putting the brian2 network into a cleosim sim, but is showing up now that I am assigning coordinates using assign_coords_rand_rect_prism.

Look at the # Stochastic variables section in utilities.py.
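For reference, a minimal example of Brian's reserved xi variable (illustrative only, not a reproduction of the cleosim bug):

from brian2 import NeuronGroup, ms

tau = 10 * ms
sigma = 0.1
# xi is Brian's built-in white-noise term, so 'xi' is already in the group's variables
# dictionary; coordinate-assignment code copied from Brian's source must not re-create it.
ng = NeuronGroup(1, "dv/dt = -v / tau + sigma * xi * tau**-0.5 : 1", method="euler")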

Stuff for cleaning/documenting

  • move plot function to utils, make available at top level, shorten name
  • change ProcLoop's name? Not always a loop
  • Black pre-commit

Latest Matplotlib is not compatible

Installing matplotlib-3.9.0 breaks cleosim visualization functions

File "/home/brosnan/.conda/envs/cleosim/lib/python3.10/site-packages/cleo/light/light.py", line 437, in add_self_to_plot
handles = ax.get_legend().legendHandles
AttributeError: 'Legend' object has no attribute 'legendHandles'

However, matplotlib-3.8.2 works.
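A possible fix (untested sketch), assuming newer Matplotlib renamed the attribute to legend_handles and removed the old name in 3.9:

def get_legend_handles(ax):
    # version-tolerant access to legend handles
    legend = ax.get_legend()
    try:
        return legend.legend_handles  # newer Matplotlib
    except AttributeError:
        return legend.legendHandles   # older Matplotlib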

Remove implied units and just use units everywhere

I had all the code surrounding the IOProcessor not use units, the reasoning being that we could plug in a non-Python controller that wouldn't have the same Brian unit system. But this is silly: the Python IOProcessor would need to wrap such a foreign controller anyway, and it would be trivial to wrap/unwrap with units.
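For example, a wrapper could strip and reattach units only at the boundary (illustrative only; the controller and its input/output conventions are made up):

from brian2 import ms, mV

def wrapped_controller(v_measured, t_now, external_step):
    y = float(v_measured / mV)  # strip units at the boundary (plain float, in mV)
    t = float(t_now / ms)       # same for time, in ms
    u = external_step(y, t)     # the foreign controller works on raw numbers
    return u * mV               # reattach units on the way back (output convention assumed)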

WIP on #43

Remove bug workaround in tutorials

A bug between matplotlib 3.5.2 and Brian breaks some plotting. I used a workaround (converting VariableView to an array via mon.i[:]), but this should be removed when the bug is fixed, to teach users the simplest syntax possible.

This is in the tutorials: twice in on_off_ctrl, twice in PI_ctrl, and once in optogenetics.
