
Tools for analyzing spiking data.

Home Page: https://spiketools.github.io

License: Apache License 2.0

neuroscience spikes single-unit spike-analysis python data-analysis

spiketools


spiketools is a collection of tools and utilities for analyzing spiking data.

Overview

spiketools is an open-source module for processing and analyzing single-unit activity from neuro-electrophysiological recordings.

Available sub-modules in spiketools include:

  • measures : measures and conversions that can be applied to spiking data
  • objects : objects that can be used to manage spiking data
  • spatial : space related functionality and measures
  • stats : statistical measures for analyzing spiking data
  • sim : simulations of spiking activity and related functionality
  • plts : plotting functions for visualizing spike data and related measures
  • utils : additional utilities for working with spiking data

Scope

spiketools is currently organized around analyses of single cell activity.

The current scope does not include population measures, though this may be extended in the future.

Note that spiketools does not cover spike sorting. Check out spikeinterface for spike sorting.

Documentation

Documentation for spiketools is available at https://spiketools.github.io/.

The documentation includes:

  • Tutorials: which describe and provide examples for each sub-module
  • API List: which lists and describes everything available in the module
  • Glossary: which defines key terms used in the module

If you have a question about using spiketools that doesn't seem to be covered by the documentation, feel free to open an issue and ask!

Dependencies

spiketools is written in Python, and requires Python >= 3.6 to run.

It has the following required dependencies:

There are also optional dependencies that offer extra functionality:

  • statsmodels is needed for some statistical measures, for example ANOVAs
  • pytest is needed to run the test suite locally

We recommend using the Anaconda distribution to manage these requirements.

Installation

The current release version of spiketools is the 0.X.X series.

See the changelog for notes on major version releases.

Stable Release Version

To install the latest stable release, you can use pip:

$ pip install spiketools

Optionally, to include dependencies required for the stats module:

$ pip install spiketools[stats]

Development Version

To get the current development version, first clone this repository:

$ git clone https://github.com/spiketools/spiketools

To install this cloned copy, move into the directory you just cloned, and run:

$ pip install .

Editable Version

To install an editable version, download the development version as above, and run:

$ pip install -e .

Reference

If you use this code in your project, please cite:

Donoghue, T., Maesta-Pereira, S., Han, C. Z., Qasim, S. E., & Jacobs, J. (2023).
spiketools: A Python package for analyzing single-unit neural activity.
Journal of Open Source Software, 8(91), 5268. DOI: 10.21105/joss.05268

Direct Link: https://doi.org/10.21105/joss.05268

For citation information, see also the citation file.

Contribute

This project welcomes and encourages contributions from the community!

To file bug reports and/or ask questions about this project, please use the Github issue tracker.

To see and get involved in discussions about the module, check out:

  • the issues board for topics relating to code updates, bugs, and fixes
  • the development page for discussion of potential major updates to the module

When interacting with this project, please use the contribution guidelines and follow the code of conduct.

spiketools's People

Contributors

claire98han, maestapereira, neuromusic, tomdonoghue


spiketools's Issues

0.1 code checks / fixes

Some things to check and potentially update in the code:

  • For functions that use / depend on check_orientation to infer array orientation, there can be an issue with one-element arrays, whose orientation can't be inferred properly (see #162)
  • There are some issues with the shuffle_poisson function, including:
    • It is not guaranteed to return the same number of spikes...
    • Due to its use of permute_vector, subsequent shuffles aren't independent of previous ones, but rather are a "step" over (see the stats tutorial, and also #163)
  • For occupancy related functions, the time threshold may not be clear and may require additional documentation (see #179)
  • For functions that use time_range, there is a warning that can be overly verbose if spike times go beyond the given time range. We might want to make it easier to suppress this warning.
    • This relates to functions including detect_empty_ranges and compute_presence_ratio

Plot tweaks / updates

Plot helpers / utilities:

  • add a helper function for initializing subplot axes (done in #87)
  • add a basic plot_line(s) plot to plts/data (done in #86 )
  • add annotate funcs, including _add_vline and _add_shade (done in #89)
  • add a basic plot_dots plot to plts/data

Plot edits:

  • plot_rasters: update to take a shade region (done in #89)
  • plot_positions: update to allow for trial-by-trial input (done in #100)
  • plot_positions: update to visualize extra landmarks, etc

Naming consistency:

  • 'shade' is currently used (across plots) to refer to both error shading and axvlines. Maybe make this more consistent?
  • 'times' is currently used inconsistently between plot_waveform and plot_waveform3d

[DOC] - Make tutorials

We should make tutorial pages for each of our core modules.

Tutorials to create:

  • measures (Claire) added in #81
  • spatial (Sandra) added in #74
  • stats (Sandra) added in #77
  • simulations (Claire) added in #88

Development suggestions:

  • Initial files, with something of a layout, are already made in the tutorials folder
  • When writing tutorials, I usually work in a Jupyter notebook, testing things
    • You can then copy elements into the tutorial file
  • In general, we will use simulated data to show each thing (some basic simulation code has been added)
  • Make a separate PR for each topic / tutorial

Note: tutorial pages are something we will workshop and edit a bunch. Start with drafts, hitting any of the main points you think we should hit. It's fine to leave template sections (like: "do X here", or "describe Y here"). Based on the early drafts, we can organize what to look further into, and how to edit these pages.

Examples

Examples from NeuroDSP:

Documentation Approach

Links to documentation tools we are using:

[MNT] - Add unit tests

Functions that need unit-tests adding:

  • spiketools/spatial/occupancy - compute_bin_time (Sandra) - #44
  • spiketools/spatial/occupancy - compute_occupancy (Sandra) - #45
  • spiketools/spatial/information - compute_spatial_information_2d (Sandra) #47
  • spiketools/spatial/information - compute_spatial_information_1d (Sandra) #48
  • spiketools/spatial/information - compute_spatial_information (Sandra) #53
  • spiketools/sims/prob - sim_spiketrain_prob (Sandra) #72
  • spiketools/sims/dist - sim_spiketrain_binom (Sandra) #73
  • spiketools/sims/dist - sim_spiketrain_poisson (Sandra) #62
  • spiketools/sims/utils - refractory (Claire) #68

[ENH] - Add randomness to `shuffle_poisson`

Opening one full issue for item 7 of #159

Context: The shuffle_poisson function is just permutations of one spike train.
In the source code of shuffle_poisson, we have isis = permute_vector(compute_isis(poisson_spikes), n_permutations=n_shuffles) and permute_vector says that there is no randomness - the permutation is just moving an ISI and a spike from the end to the beginning.

Issue: If your num_shuffles is small, then your random permutations will be very similar to each other while being quite different from the original data. Therefore, this would not serve as a good baseline distribution for stats.

Recommendation from @rly to address this issue by doing one of the following:

  1. Add a warning about this behavior in the shuffle_poisson documentation
  2. Introduce randomness such that poisson_generator is called once for each shuffle instead of just once
  3. Introduce randomness in permute_vector
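To make the contrast concrete, here is a minimal sketch (function names are illustrative, not the spiketools API) of deterministic circular shifts versus independent random permutations of an ISI vector:

```python
import numpy as np

def circular_shift_isis(isis, n_shuffles):
    """Each 'shuffle' is a one-step circular shift of the previous one."""
    return np.array([np.roll(isis, shift) for shift in range(1, n_shuffles + 1)])

def random_permute_isis(isis, n_shuffles, seed=0):
    """Each shuffle is an independent random permutation of the ISIs."""
    rng = np.random.default_rng(seed)
    return np.array([rng.permutation(isis) for _ in range(n_shuffles)])

isis = np.array([1., 2., 3., 4., 5.])
shifted = circular_shift_isis(isis, 3)
permuted = random_permute_isis(isis, 3)

# Circular shifts preserve the exact cyclic order of the ISIs, so
# successive 'shuffles' are highly similar to each other:
assert np.array_equal(shifted[0], np.array([5., 1., 2., 3., 4.]))
```

Under the circular-shift scheme, every surrogate has the same cyclic ISI order, which is why a small number of shuffles makes a poor baseline distribution.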

Aligning Bins in Spatial Information Funcs

Currently, the spatial information functions take spike X & Y data and pre-computed occupancy, and then re-bin the spike positions, given a bin definition. If passing a number of bins, this does not ensure that the binning is the same as was used to compute the occupancy, in which case the results are invalid. To fix.

Orientation of position data

If we want to mimic the organizational structure of NWB files, note that we currently expect position data as row data, whereas NWB expects it to be column data. This means that we currently transpose position data when we pull it from NWB files.

  • Possible ToDo: switch to using column-wise position data (?)
  • Note: regardless of choice, we should more clearly document the expected orientation / shape of position data.
  • However, note that if we do change orientation, this would complicate using things like the extract functions on the position data, as these functions assume row-wise data. There would have to be some strategy to address where these functions are currently used on position data.

Tutorials & Documentation Updates

This issue is to keep track of updates to the documentation / tutorials:

Code documentation:

  • check through the documentation for time units, and add notes on using seconds (#11)

Tutorials:

  • check through the documentation for time units, and add notes on using seconds (#11)
  • check through the tutorials for spatial analysis, and update for code changes (@maestapereira)

[DSG] - Spike Train Sampling rates

Spike trains have a sampling rate, but right now when we use them, this is left implicit (it defaults to 1 kHz, but this isn't documented, and there are no options for changing it). This should be documented, and made more generalizable.

Functions to check for this:

  • convert_train_to_times
  • create_spike_train
  • maybe others (to check)

Also: computing spike trains depends on the unit of the data (ms vs. seconds), and the start time of the task. For example, it blows up if times are encoded in UNIX time (as it tries to create a spike train from time 0).

Note: relates to #5

[DOC] - Notes on occupancy related documentation

  • compute_bin_counts_assgn & compute_bin_counts_pos should have the same top-level description, and have a clearer output description (the output can be counts, if not normalized, or rates, if normalized)

Current Quirks with 0.1

A running list of quirks with the current version that we might fix:

  • 1) The name spatial/utils/compute_bin_time is a bit confusing, as 'bin' does not refer to spatial bins (as is generally standard across the rest of spatial), but instead refers to something like "computing time bins of sampling"

    • this should be renamed
    • it seems this is perhaps in the wrong place, and could be moved
  • 2) The shuffling functions shuffle_bins and shuffle_circular throw errors when there is more than one spike within a single millisecond.

    • Error is thrown when there is a size mismatch caused by conversions between spike times->train->times
    • Potentially catch and throw error in convert_times_to_train asking for an increase in sampling rate
    • Add refractory period default to sim_spiketimes
  • 3) The naming of parameters that refer to time indices, timepoints, and timestamps is not consistent.

    • see utils/epoch/epoch_data_by_time,
    • see the following in utils/extract.py: get_value_range, get_ind_by_time, get_inds_by_times, get_value_by_time, get_values_by_times, get_values_by_time_range, threshold_spikes_by_times, threshold_spikes_by_values.
  • 4) Since the plts.annotate functions can be, and are, imported for use by users, they shouldn't have a leading underscore. Also, some related fixes:

    • Check the docs in plts.annotate, fixing _add_side_text

[BUG] - Double check occupancy & time thresholds

There may be a bug / issue with using the time_threshold in occupancy computations, which presumably relates back to create_position_df (where the time_threshold is applied).

Notes:

  • need to add explicit tests for time thresholding
  • double check how this gets enacted when doing trial level (compute_trial_occupancy)

Potential updates for position related plots

Updates:

  • need a new plot for one dimensional position data
    • note: this is basically a raster plot (can use something similar to plot_rasters)
    • maybe key difference: support multiple types of plot elements ("ticks"), with customizable height
  • plot_position_by_time should have an option to indicate events (such as responses)
    • note: this would also allow for indicating landmarks / object positions per trial, etc.

[DEV] - Refactors

There are some noted potential refactors for the repo, that need to be checked & updated.

This includes:

  • in occupancy (mostly around dropping pandas -> numpy).

[BUG] - Issue with `shuffle_circular`

There is an issue with the shuffle_circular function, in that it sometimes fails with a shape error (from line 216).

It's unclear why this is happening.

ToDo:

  • stress test this function, figure out what causes the error
  • add a fix to the function
  • add a new test to address this case

[MNT] - Test Checks & Orgs

Things to do:

  • Do a sweep of the tests, checking for completion, code style, etc.
  • Update to use any sensible test fixture objects
  • Check coverage, and update any missing lines / cases

suggestion: give users more guidance on installing optional dependencies

It's great that the Install page notes that statsmodels is an optional dependency; however, you can make this even easier for users with one (or both) of the following:

  1. Under the "installation" section, add a second "optional" instruction to use pip install statsmodels
  2. Alternatively, add statsmodels as an extra dependency for setuptools so that pip install spiketools[all] or the like will install all dependencies

(related to openjournals/joss-reviews#5268)

[BUG] - Issue with `compute_spatial_bin_assignment`

There is an issue with the compute_spatial_bin_assignment function, in that it misplaces positions that were supposed to be in the last bin. Instead, they are placed in the second-to-last bin.

Here is an example:
position = np.array([[1.5, 2.5, 3.5, 4.5], [6.5, 7.5, 8.5, 9.5]])
x_edges = np.array([1, 2, 3, 4, 5])
y_edges = np.array([6, 7, 8, 9, 10])
compute_spatial_bin_assignment(position, x_edges, y_edges)

Expected output:
(array([1, 2, 3, 4], dtype=int64), array([1, 2, 3, 4], dtype=int64))
Actual output:
(array([1, 2, 3, 3], dtype=int64), array([1, 2, 3, 3], dtype=int64))

ToDo:

  • Fix index error
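One way to sketch the intended edge-inclusive behavior, using np.digitize (assign_bins is an illustrative name, not the spiketools implementation):

```python
import numpy as np

def assign_bins(values, edges):
    """Assign values to bins, keeping values on the final edge in the last bin."""
    inds = np.digitize(values, edges)
    # np.digitize maps a value equal to the final edge past the last bin;
    # clipping keeps such values inside the final bin instead.
    return np.clip(inds, 1, len(edges) - 1)

x_edges = np.array([1, 2, 3, 4, 5])
xs = np.array([1.5, 2.5, 3.5, 4.5])

# Matches the expected output from the example above:
assert np.array_equal(assign_bins(xs, x_edges), np.array([1, 2, 3, 4]))
```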

[DEV] - Plans for 0.1.0

Sometime soonish, we can make an initial 0.1.0 release, with an initial alpha version of the module. This issue collects things to do for this initial release.

Code updates (functionality):

  • general plot updates (done in #79)
  • add the start of the sims sub-module (done in #21)
  • figure out a plan for the objects (in progress in #19)

Module updates:

  • finish an initial test suite (done in #34)
  • add doctests to (some) functions (done in #30)
  • do a sweep for organization & name consistency
    • for example, having spiketools/spatial/ and spiketools/plts/space is inconsistent
    • we mix between the terms "firing rate" and "spike rate"
  • fix up some module conventions (overview in #13)
    • standardize units & approaches (in progress in #11 & #12)

Documentation updates:

  • fix up the glossary page, including a sweep for consistent terminology (#119)
  • fix up README, including defining scope, etc
  • draft some initial tutorials (#32)

Plot quirks

Plot quirks to check:

  • plot_task_structure: check inputs for event lines (seems to require lists, but inputs of arrays are common)
  • plot_position_1d: passing in a custom alpha value fails

[BUG] - Issue with `create_spike_train`

There is an issue with the create_spike_train function, in that it sometimes ignores the last spike time.
This affects shuffle_circular and shuffle_bins functions. Might be one cause of #5

Example:

spikes = np.array([ 5, 91, 186, 206, 236, 325, 378, 677, 720, 763])
spike_train = create_spike_train(spikes)
# check that the last spike is missing by the following line:
np.where(spike_train)

This seems to be happening because the pre-allocated space for the output (spike_train) in create_spike_train is too small by one. Additionally, when setting 1 at the correct spike_train positions, the last one is missed.

ToDo:

  • create PR to fix it (Sandra) #76
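The off-by-one can be demonstrated directly (make_train is an illustrative helper, not the spiketools function): allocating max(spikes) samples drops the last spike, while max(spikes) + 1 keeps it.

```python
import numpy as np

def make_train(spike_inds, n_samples):
    """Build a binary spike train of a given length (illustrative helper)."""
    train = np.zeros(n_samples, dtype=int)
    # Only indices that fit in the allocated array can be set:
    train[spike_inds[spike_inds < n_samples]] = 1
    return train

spikes = np.array([5, 91, 186, 206, 236, 325, 378, 677, 720, 763])

too_small = make_train(spikes, np.max(spikes))       # last spike dropped
big_enough = make_train(spikes, np.max(spikes) + 1)  # all spikes kept

assert too_small.sum() == len(spikes) - 1
assert np.array_equal(np.where(big_enough)[0], spikes)
```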

Label of plots in legend

I was trying to mix matplotlib plot functions with spiketools plot functions and manually add a legend, and noticed the spiketools plot function's label did not work properly.

Here are the scenarios:

  1. Weird legend:
ax = get_grid_subplot(grid, 1, slice(0, 3))
a1 = ax.plot(trace_times, trace_values, label='trace')
a2 = plot_scatter(spike_times, np.zeros(len(spike_times)), c='r', ax=ax, label='spikes')
ax.legend([a1, a2])

This shows a legend, but the spiketools plot has no label and the plt plot has a weird label (not "trace").

  2. No legend entirely:
ax = get_grid_subplot(grid, 1, slice(0, 3))
a1 = ax.plot(trace_times, trace_values)
a2 = plot_scatter(spike_times, np.zeros(len(spike_times)), c='r', ax=ax)
ax.legend([a1, a2], ['trace', 'spikes'])

This doesn't show a legend (just a tiny square where the legend is supposed to be). Adding handles= and labels= to the above ax.legend call also shows no legend.
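One likely contributor here, sketched with plain matplotlib (a hypothesis, not a confirmed diagnosis of the spiketools behavior): ax.plot returns a list of Line2D artists, so passing the list itself as a legend handle gives a malformed entry, and plot wrappers that return None cannot supply a handle at all. Unpacking the artist, or labelling the artists and calling ax.legend() with no arguments, works:

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend for this sketch
import matplotlib.pyplot as plt
import numpy as np

fig, ax = plt.subplots()
xs = np.arange(10)

# ax.plot returns a LIST of Line2D artists - unpack to get a usable handle:
trace_line, = ax.plot(xs, np.sin(xs), label='trace')

# A plot wrapper that returns None cannot supply a handle, but labelling
# the artist and calling ax.legend() with no arguments still works:
ax.scatter(xs, np.zeros(len(xs)), c='r', label='spikes')

legend = ax.legend()
labels = [text.get_text() for text in legend.get_texts()]
assert labels == ['trace', 'spikes']
```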

Alignment between spatial representations

We are currently not guaranteed to create and plot different spatial representations in the same orientation - meaning, for example, that computing and plotting a spatially binned firing rate map could end up in a flipped orientation as compared to plotting the raw position data.

ToDo:

  • step through the procedures here, to see where there are possible issues
  • make updates, or at least add options and flags, to make this consistent

Targets:

  • key steps to check here are compute_spatial_bin_assignment and plot_heatmap
    • note that plot_heatmap (using imshow) plots the data in a different orientation than standard plots (this could be it)

Update approach / API for spatial information

I think there's a bit of an issue with how the spatial information analyses are implemented: basically, the public-facing functions take in spike positions and occupancy, and then recompute the binning and later normalize by occupancy, before computing the actual spatial information.

There are a couple quirks with this approach:

  • typically, we have already (or will later again) computed binned firing (for plotting, for example), and so often end up recomputing the same thing inside and outside of the function
  • because each set of positions is re-binned, it's currently not necessarily enforced that the same bin definitions are used (since if the observed spike positions cover a different range, the binning could end up being different)
  • note (edit): the function can be passed pre-computed bin edges, to use those, but without doing this, it would be pretty easy to end up computing different things

It seems it might make more sense to have a general compute_spatial_info function that takes in a 1d or 2d array of binned firing rates (normalized as desired), and then simply computes the spatial information.

@maestapereira & @claire98han - I'm tagging you in since you've both worked on this code and explored some place cell analyses - so let me know if you have any thoughts here!
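A minimal sketch of what that general function could look like (a Skaggs-style spatial information computation on a pre-binned rate map; illustrative, not the current spiketools implementation):

```python
import numpy as np

def spatial_information(rates, occ):
    """Spatial information (bits/spike) from a pre-binned firing rate map.

    Takes a 1d or 2d array of already-binned firing rates plus occupancy,
    and computes spatial information directly, with no re-binning inside.
    """
    rates = np.asarray(rates, dtype=float).ravel()
    occ = np.asarray(occ, dtype=float).ravel()

    probs = occ / occ.sum()                 # occupancy probability per bin
    mean_rate = np.sum(probs * rates)       # occupancy-weighted mean rate

    nz = rates > 0                          # log2 is undefined at rate 0
    ratio = rates[nz] / mean_rate
    return np.sum(probs[nz] * ratio * np.log2(ratio))

# A uniform rate map carries zero spatial information:
assert np.isclose(spatial_information([2., 2., 2., 2.], [1., 1., 1., 1.]), 0.0)
```

Because the caller supplies the binned map, the same bin definitions are used everywhere by construction, which sidesteps the re-binning mismatch described above.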

[BUG] - Computing & plotting continuous firing rates

Currently when we compute binned firing rates, we don't explicitly compute the time values they relate to, meaning that it defaults to plotting the value at the beginning of each bin. For visualization purposes, it should really plot the data values at the center of the bins.

This relates to:

  • compute_trial_frs: computes the binned firing rates (does not compute time values)
  • plot_rate_by_time: plots the binned firing rates (does not offset the time values)

Note: this could be fixed in one or the other, maybe requiring updates in both.
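For reference, offsetting to bin centers is a one-liner on the bin edges (a sketch; the bin settings are made up for illustration):

```python
import numpy as np

# Time values at bin centers, so binned firing rates plot at the middle
# of each bin instead of its left edge:
t_start, t_stop, bin_width = 0.0, 2.0, 0.5

edges = np.arange(t_start, t_stop + bin_width, bin_width)
centers = edges[:-1] + bin_width / 2

assert np.allclose(centers, [0.25, 0.75, 1.25, 1.75])
```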

[MNT] - Add doctest examples

Goal - add doctest examples across the module

Files to add doctests to (check off when PR's are merged):

  • spiketools/measures/measures (Claire) - #40
  • spiketools/measures/conversions (Claire) - #70
  • spiketools/utils/spikes (Claire) - #69
  • spiketools/spatial/information (Sandra) - #42
  • spiketools/spatial/occupancy (Sandra) - #26, #27, #35, #36
  • spiketools/spatial/position (Sandra) - #41
  • spiketools/sims/prob (Sandra) - #55
  • spiketools/sims/dist (Claire) - #71

Doctest Examples

For examples of doctests, check the NeuroDSP module (https://github.com/neurodsp-tools/neurodsp). Note that one of the goals of doctests is to write an Examples section that gets rendered on the documentation website.


Doctest Goals

The goal of doctests is to provide a minimal example of running the function.

In terms of scope, typically we want:

  • a very simplified version of running the function
  • first, write a plain text sentence explaining the example
  • if there are multiple "ways" to use the function, then there can be multiple different examples
  • keep each example specific to this function (generally, not adding subsequent processing / plotting, etc)

Doctest HowTos

Technical notes on doctests:

  • any code after >>> will get run by doctests
  • anything written on the line right after a code line is treated as the expected output
  • if you want to set a doctest line to not run, add # doctest:+SKIP at the end of the line
  • the tests that run when you open a PR will now check the doctests
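A minimal example of what such a docstring can look like, with a check of how its doctests get run (compute_isis here is a simplified stand-in, not the actual spiketools function):

```python
import doctest
import numpy as np

def compute_isis(spikes):
    """Compute inter-spike intervals (simplified stand-in function).

    Examples
    --------
    Compute the intervals between consecutive spike times:

    >>> import numpy as np
    >>> compute_isis(np.array([1, 3, 6, 10]))
    array([2, 3, 4])
    """
    return np.diff(spikes)

# Run the docstring examples, roughly as pytest --doctest-modules would:
runner = doctest.DocTestRunner()
for test in doctest.DocTestFinder().find(compute_isis):
    runner.run(test)
assert runner.failures == 0
```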

Running Doctests

Once you have pytest installed, you can check doctests locally by doing:

# Move into the 'spiketools' folder ('~/spiketools/spiketools'), e.g. 'cd spiketools'
# Run doctests, ignoring the tests folder
pytest --doctest-modules --ignore=tests

OR

# Run this command from the base repository folder ('~/spiketools/')
pytest --doctest-modules --ignore=spiketools/tests spiketools

[BUG] - Array orientation for 2d position with a single datapoint

If both following conditions are satisfied:

  • there is only one 2d position data point in row orientation (position.shape == (2, 1))
  • the orientation is not specified as an input to spiketools.spatial.utils.get_position_xy

There is an error when spiketools.spatial.utils.get_position_xy tries to infer the orientation.
This happens because check_array_orientation(arr) returns 'column' when it should return 'row'.
This is due to the condition position.shape[-2] > position.shape[-1] in check_array_orientation(arr), which yields 'column' as an output. This output is usually correct, but for the case of position.shape == (2, 1), it is not.

I assume this happens with all other functions that infer the orientation if the position input has position.shape == (2, 1), but I have not checked.
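The ambiguity can be reproduced with a minimal stand-in for the heuristic (guess_orientation is an illustrative name, not the spiketools function):

```python
import numpy as np

def guess_orientation(arr):
    """Shape-based orientation inference (stand-in for check_array_orientation)."""
    # The heuristic from the report: more rows than columns implies 'column'.
    return 'column' if arr.shape[-2] > arr.shape[-1] else 'row'

# The heuristic works for unambiguous shapes:
assert guess_orientation(np.zeros((2, 10))) == 'row'
assert guess_orientation(np.zeros((10, 2))) == 'column'

# But a single 2d position point in ROW orientation has shape (2, 1),
# which the heuristic misclassifies as 'column':
single_point = np.array([[1.5], [6.5]])   # one (x, y) sample, row-oriented
assert guess_orientation(single_point) == 'column'
```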

Extract & Epoch functions

Example use case:
Have continuously sampled player position (with times), and an object position of interest. Goal: get the time point at which the player position is closest to the object. How: get the position index closest to the object, then index into the timestamps.
Notes: this can be done with get_ind_by_time; however, we are actually selecting an index by value (so the name & description are not right / obvious).

Potential ToDo: add some kind of updated description or new function to explicitly support this kind of selection by values.
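A sketch of the selection being described (get_ind_by_value is a hypothetical name for the possible new function):

```python
import numpy as np

def get_ind_by_value(values, target):
    """Return the index of the sample closest to a target VALUE (not time)."""
    return int(np.argmin(np.abs(np.asarray(values) - target)))

positions = np.array([0.0, 1.2, 2.9, 3.1, 5.0])
times = np.array([10., 11., 12., 13., 14.])

# Find the time point at which the position is closest to 3.0:
ind = get_ind_by_value(positions, 3.0)
closest_time = times[ind]
```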

idea: add Binder support to help users kick the tires

Nice job using sphinx-gallery to give users downloadable, executable examples!

If you want to go a step farther, I'd recommend exploring configuring sphinx gallery to generate Binder links as well. This will let prospective users run your tutorials with one click (rather than needing to configure a local environment and download the examples).

There's more info on how to do this in the Sphinx-Gallery docs here: https://sphinx-gallery.github.io/stable/configuration.html#generate-binder-links-for-gallery-notebooks-experimental

(related to openjournals/joss-reviews#5268)

[ENH] - 1d and 2d function compatibility

Make functions allow for 1d and 2d positions, bins, and edges.
The goal is to reduce unnecessary overlapping code.
PR #102

  1. spatial/occupancy.py
  • compute_spatial_bin_edges
  • compute_spatial_bin_assignment
  • compute_occupancy
  2. spatial/utils.py
  • get_pos_ranges (also added example/docstring test for this one)

[DEV] - Add spike simulations

ToDo: Add spike simulations.

Notes / to finish:

  • Add sim functions to API list
  • When a random seed approach is added, also use this across shuffle
  • Clean up a consistent naming structure (across files & functions)

Suggested improvements / typo fixes in tutorials

Hi, I am reviewing spiketools as part of openjournals/joss-reviews#5268. Great documentation and tutorials. I appreciate the effort put into both. In going through the tutorials, I found a few points of potential improvement and typos to address.

https://spiketools.github.io/spiketools/auto_tutorials/plot_measures.html#sphx-glr-auto-tutorials-plot-measures-py:

  1. The text "Again, we can see the converted data has the same raster plot as the original data." is not exactly accurate -- the converted data has the same raster plot as the original data, minus an offset.

https://spiketools.github.io/spiketools/auto_tutorials/plot_spatial.html#sphx-glr-auto-tutorials-plot-spatial-py:

  2. In the second cell,
bin_widths = np.array([1, 1, 1, 2, 1.5, 1, 1, 1, 2, 1, 1, 1, 1, 1, 0.5])

is defined but not used, and is therefore confusing. I suggest removing it.

  3. The bins variable defined in that cell is also not used until considerably later, and it might be confusing to define it this early. I suggest moving the definition to right before it is used.

  4. This code:

# Compute speed at each point
bin_widths = np.array([1, 1, 1, 2, 1.5, 1, 1, 1, 2, 1, 1, 1, 1, 1, 0.5])

it is unclear where this comes from. I suggest replacing it with np.diff(timestamps), which is equivalent and clearer.

  5. The comment # Compute the durations of the timestamps does not make sense to me. "The durations of the samples" might be better.

  6. I think in the comment # Define speed threshold, used to position values for speed less than the threshold, "position" should be changed to "remove" or "ignore" or "remove position".

https://spiketools.github.io/spiketools/auto_tutorials/plot_stats.html#sphx-glr-auto-tutorials-plot-stats-py

  7. The shuffle poisson raster looks like one spike train permuted. Indeed, the source code says isis = permute_vector(compute_isis(poisson_spikes), n_permutations=n_shuffles) and permute_vector says that there is no randomness - the permutation is just moving an ISI and a spike from the end to the beginning. If your num shuffles is small, then your random permutations will be very similar to each other while being quite different from the original data. Therefore, this would not serve as a good baseline distribution for stats. I recommend one of the following: add a warning about this behavior in the function documentation, introduce randomness such that poisson_generator is called once for each shuffle instead of just once, or introduce randomness in permute_vector.

  8. In the code shuffled_spikes = shuffle_spikes(spikes, 'CIRCULAR', shuffle_min=200, n_shuffles=10), I imagine that the 'CIRCULAR' arg is not case sensitive, but having it in all caps perhaps suggests that it might be? Not important, but it just looks odd.

  9. In the cell with df_pre_post = create_dataframe(data), I think it would be useful to print df_pre_post in the same cell to give readers a better understanding of the table.

  10. Now that we have our data organized into a dataframe, we can run an ANOVA using. - remove "using" at the end.

  11. Next line analyze whether there is an of the event on firing rates - "an of" -> "an effect of"

  12. This will give us a the surrogate distribution - "a the" -> "a"

  13. To do so, we will simulating spike trains across trials (8 seconds) across spatial bins. - "simulating" -> "simulate"

  14. Like in point 9 above, in the cell with df = create_dataframe_bins(bin_firing_all, dropna=True), I think it would be useful to print df in the same cell to give readers a better understanding of the table.

Future developments / refactors

This issue is to keep track of some ideas that are not a priority for a 0.1 release, but could go into a 0.2 update.

Potential Refactors:

  • the space related functions that use histogram & check for dimensionality, could potentially use np.histogramdd
    • this supports multi-dimensionality, potentially allowing for consolidating code by passing through 1d or 2d arrays
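For reference, a quick sketch of the consolidation np.histogramdd would allow (the sample data here is made up):

```python
import numpy as np

# np.histogramdd handles 1d and 2d (and higher) binning with one code path.
rng = np.random.default_rng(0)
pos_2d = rng.uniform(0, 10, size=(100, 2))   # (n_samples, n_dims)

# 2d case:
hist_2d, edges_2d = np.histogramdd(pos_2d, bins=(5, 4))
assert hist_2d.shape == (5, 4)
assert hist_2d.sum() == 100

# The same call covers the 1d case, given a (n_samples, 1) input:
hist_1d, edges_1d = np.histogramdd(pos_2d[:, [0]], bins=(5,))
assert hist_1d.shape == (5,)
```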

Potential Updates:

  • Would it be useful for functions like get_value_by_time to (optionally?) return the difference between the requested timepoint and the identified closest time sample? This would allow for keeping track of proximity. Similar to threshold exclusion, but allows for more detail.
  • plot_rasters: add option to colour raster dots by spike amplitude
  • plot_positions: add option to color trajectory by elapsed time

Potential Additions:

  • Add a new simulation function that simulates event-related data

[BUG] - Issue with `sim_spiketrain_poisson` in `sim/dist.py`

Hey @TomDonoghue, there is an issue with the sim_spiketrain_poisson function. In the documentation, n_samples is optional, but it raises a TypeError when not included.

Example:
rate = 0.4
sim_spiketrain_poisson(rate, 1000, bias=0)

Error:

TypeError: sim_spiketrain_poisson() missing 1 required positional argument: 'fs'

ToDo:

  • fix the function
  • update the documentation

[ENH] - Plot Updates

General ToDos:

  • Work through and check / edit function names

Plot tweaks:

  • Add support for colouring sub-groups in plot_trial_rasters

Plots to add:

  • A firing rate over time plot, including support for multiple conditions (colours)

[DSG] - Figure out and describe units better (seconds vs. ms)

Main issue: spike times in seconds vs. milliseconds

For example, shuffle_spikes assumes ms, can fail weirdly with seconds (#5).

ToDo:

  • Choose a convention
  • Document that convention, in general, and within relevant docstrings (for example, any spike_times inputs)
