
pycbsdk

Pure Python package for communicating with Blackrock Cerebus devices

Quick Start

From a shell...

pip install pycbsdk

Then, in Python:

from pycbsdk import cbsdk


params_obj = cbsdk.create_params()
nsp_obj = cbsdk.get_device(params_obj)  # NSPDevice instance. This will be the first argument to most API calls. 
runlevel = cbsdk.connect(nsp_obj)  # Bind sockets, change device run state, and get device config.
config = cbsdk.get_config(nsp_obj)
print(config)

You may also try the provided test script with python -m pycbsdk.examples.print_rates or via the shortcut: pycbsdk_print_rates.

Introduction

pycbsdk is a pure Python package for communicating with a Blackrock Neurotech Cerebus device. It is loosely based on Blackrock's cbsdk, but it shares no code with cbsdk, nor is pycbsdk supported by Blackrock.

pycbsdk's API design is intended to mimic that of a C-library. Indeed, a primary goal of this library is to help prototype libraries in other languages. After all, Python is a poor choice to handle high throughput data without some compiled language underneath doing all the heavy lifting.

However, it's pretty useful as-is! So far it has been good enough for some quick test scripts, and it even drops fewer packets than CereLink. So please use it, and contribute! We are more than happy to see the API expand to support more features, or even to have an additional "pythonic" API.

Design

Upon initialization, the NSPDevice instance configures its sockets (but does not yet connect), allocates memory for its mirror of the device state, and registers callbacks to monitor config state.

When the connection to the device is established, two threads are created and started:

  • CerebusDatagramThread
    • Makes heavy use of asyncio
      • A Receiver Coroutine retrieves datagrams, slices them into generic packets, and enqueues those in the receiver queue.
      • A Sender Coroutine monitors a sender queue and immediately sends any packets it finds.
  • PacketHandlerThread
    • Monitors the receiver queue.
    • Updates device state (e.g., mirrors device time)
    • Materializes the generic packets into specific packets.
    • Calls registered callbacks depending on the packet type.

connect() has startup_sequence=True by default. This causes the SDK to attempt to put the device into a running state; otherwise, the device stays in its original run state.
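For example, to attach to a device without changing its run state, here is a minimal sketch using only the calls from the Quick Start plus the startup_sequence flag described above:

from pycbsdk import cbsdk


params_obj = cbsdk.create_params()
nsp_obj = cbsdk.get_device(params_obj)
runlevel = cbsdk.connect(nsp_obj, startup_sequence=False)  # leave the device in its current run state
print(f"Connected; run level = {runlevel}")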

After the connection is established, the client can use API functions to:

  • Get / Set config
    • set_config and set_channel_config do not do anything yet
    • set_channel_spk_config and set_channel_config_by_packet are implemented and are blocking.
    • get_config is non-blocking by default and will simply read the local mirror of the config. However, if force_refresh=True is passed as a kwarg, then this function will block and wait for a reply from the device. Use this sparingly.
  • Register a callback to receive data as soon as it appears on the handler thread (see the sketch below).
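A minimal sketch combining these calls. The callback signature (a single packet argument) and the register_group_callback signature are assumptions; register_group_callback itself is referenced later in this document.

from pycbsdk import cbsdk


params_obj = cbsdk.create_params()
nsp_obj = cbsdk.get_device(params_obj)
cbsdk.connect(nsp_obj)

config = cbsdk.get_config(nsp_obj)  # non-blocking: reads the local mirror
config = cbsdk.get_config(nsp_obj, force_refresh=True)  # blocks until the device replies; use sparingly


def print_packet(pkt):
    # Runs on the PacketHandlerThread, so keep it fast (see Limitations below).
    print(type(pkt).__name__)


cbsdk.register_group_callback(nsp_obj, 5, print_packet)  # sample group 5 chosen arbitrarily for illustration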

This and more should appear in the documentation at some point in the future...

Limitations

  • This library takes exclusive control over the UDP socket on port 51002 and thus cannot be used with Central, nor any other instance of pycbsdk. You only get one instance of pycbsdk or Central per machine.
    • CereLink's cerebus.cbpy uses shared memory and therefore can work in parallel to Central or other cbpy instances.
  • The API is still very sparse and limited in functionality.
  • For now, Python still has the GIL. Despite the use of threading, slow callback functions can hold up the PacketHandlerThread, which in turn can stall datagram retrieval and ultimately cause packets to be dropped. Possible mitigations:
    • Callbacks may simply enqueue the data for a longer-running multiprocessing worker to handle (see the sketch after this list).
    • Switch to no-GIL (free-threaded) Python as soon as it is available.
    • Use pycbsdk to prototype an application in a language that uses real parallelism.
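A minimal sketch of the first mitigation, in which the pycbsdk callback only copies and enqueues the packet and a separate process does the slow work. The callback signature (a single packet argument) and the ability to copy a packet with bytes() are assumptions about pycbsdk's packet objects; the queue size is arbitrary.

import multiprocessing as mp
import queue


def worker(q):
    # Long-running consumer: pull copied packets off the queue and do the slow processing here.
    while True:
        item = q.get()
        if item is None:  # sentinel to shut down
            break


def make_enqueue_callback(q):
    def on_packet(pkt):
        # Runs on PacketHandlerThread: do as little as possible.
        try:
            q.put_nowait(bytes(pkt))  # assumes the packet can be copied to bytes (e.g., a ctypes structure)
        except queue.Full:
            pass  # drop locally rather than stall the handler thread and risk dropping datagrams
    return on_packet


if __name__ == "__main__":
    pkt_queue = mp.Queue(maxsize=10_000)
    mp.Process(target=worker, args=(pkt_queue,), daemon=True).start()
    on_packet = make_enqueue_callback(pkt_queue)  # register this with pycbsdk's register_* API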


pycbsdk's Issues

`set_channel_config_by_packet` should allow scoped CHANSET packet types.

From @dkluger here:

better default argument definition for cbsdk.configure_channel_by_packet and cbhw.device.nsp.set_channel_config_by_packet

Right now, cbhw.device.nsp.set_channel_config_by_packet mandates that packet.header.type be a 'chan_info' packet, but the header can be a different packet type per class CBPacketType, which tracks with cbproto. I've made changes to cbsdk.py to allow optional declaration of the packet type, but default to CBPacketType['CHANSET'], and then modified cbhw\device\nsp.py similarly.

I override the packet header type because I want to ensure that the response from the device will reach the _handle_chaninfo callback, which is needed to call self._config_events["chaninfo"].set() and let control return to the client. However, setting packet.header.type to CBPacketType.CHANSET means that every field is set on the device, and this only works if every field is correct. That is fine as long as the client creates their chan config packet from a copy of the known config.

Contrast this with Central where the config packet is initialized with random data and only a subset of fields are set to their correct values, but the pkt.header.type is set to a scoped type so the device only looks at the subset of fields.

If we wanted to enable clients to initialize CHANSETXXX config packets with incomplete data and only have to set the fields for the scoped packet type, then we would have to make the following changes:

  1. Register more callbacks for currently-unregistered CHANREP*** packet header types (see list below).
  2. Check that the argument packet's header.type is a chan config type ((x & cbPKTTYPE_CHANSET) != 0) and, if not, set it to CHANSET (see the sketch at the end of this issue).
  3. Assert that the argument packet header.type is not in the set of known unhandled CHANSET packets.

Step 1 is worthwhile anyway because we want to handle packets that are responses to other clients modifying the config.

Here is the list of CHANREP packet.header.types that do not currently have a registered callback:

  • CHANREPNTRODEGROUP
  • CHANREPSPKTHR
  • CHANREPDISP
  • CHANREPLABEL
  • CHANREPUNITOVERRIDES
  • CHANREPSPKHPS
  • CHANREPDINP
  • CHANREPDOUT
  • CHANREPAOUT
  • CHANREPSCALE
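A minimal sketch of the step-2 check, with a placeholder constant standing in for cbPKTTYPE_CHANSET / CBPacketType.CHANSET (the real values live in pycbsdk's packet definitions and are not quoted in this issue):

CHANSET_MASK = 0xC0  # hypothetical placeholder for cbPKTTYPE_CHANSET


def coerce_to_chan_config_type(pkt_type: int) -> int:
    # Return pkt_type unchanged if it is already a CHANSET-scoped type, otherwise fall back to the generic CHANSET.
    return pkt_type if (pkt_type & CHANSET_MASK) != 0 else CHANSET_MASK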

PacketHandlerThread should have an option to disable `warn_unhandled`

In situations where your device is sending data in multiple modalities (e.g., to Central for recording) but your pycbsdk client is only interested in some of that data, we currently get lots of warnings about unhandled packets.

A current solution is to register a do-nothing callback for that packet type.
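For example, a hedged sketch of that workaround (register_group_callback is the only registration helper named in this document and its (device, group, callback) signature is assumed; other packet types would use the analogous register_* helpers):

from pycbsdk import cbsdk


def ignore_packet(pkt):
    pass  # swallow the packet so PacketHandlerThread has nothing to warn about


params_obj = cbsdk.create_params()
nsp_obj = cbsdk.get_device(params_obj)
cbsdk.connect(nsp_obj)
cbsdk.register_group_callback(nsp_obj, 6, ignore_packet)  # group number chosen arbitrarily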

However, it would be nice if the PacketHandlerThread simply had an option to not warn about unhandled packets.

e.g., in the following:

elif b_debug_unknown:

We could simply make that

elif b_debug_unknown and self._b_warn_unhandled:

where _b_warn_unhandled is set during thread initialization (i.e., on device connection) and cannot be modified while running.

`_handle_procmon` should warn if dropped packets

The device maintains a count of pkts_received. I think the procmon packets also have a count of how many packets were sent since the last procmon.

I suspect this approach won't work with multi-threaded systems because procmon might come out of order w.r.t. sent packets.
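A hedged sketch of what such a check might look like. The field name pkt_sent and the receive counter are hypothetical, since the exact procmon fields are not confirmed above:

import logging

logger = logging.getLogger(__name__)


def check_for_drops(procmon_pkt, n_received_since_last: int) -> None:
    # pkt_sent is a hypothetical field: packets the device reports sending since the previous procmon.
    n_sent = getattr(procmon_pkt, "pkt_sent", None)
    if n_sent is not None and n_received_since_last < n_sent:
        logger.warning(
            "Possible packet loss: received %d of %d packets since last procmon.",
            n_received_since_last,
            n_sent,
        )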

`pycbsdk.cbsdk` module does not expose all methods in `__all__`

For people using the cbsdk module like a proto-C-API, it's helpful to import cbsdk as a module / namespace and access its methods as follows.

from pycbsdk import cbsdk

...

cbsdk.register_group_callback(...)
...
cbsdk.unregister_group_callback(...)

However, this requires that all such API methods be listed in the module's __all__, which sadly hasn't been kept up to date.
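An illustrative fix in pycbsdk/cbsdk.py; the names below are only those mentioned in this document, and the real module likely exposes more:

__all__ = [
    "create_params",
    "get_device",
    "connect",
    "get_config",
    "set_config",
    "set_channel_config",
    "set_channel_spk_config",
    "set_channel_config_by_packet",
    "configure_channel_by_packet",
    "register_group_callback",
    "unregister_group_callback",
]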

Parse CCF and send config

It would be great if this library could parse CCF files and send the enclosed config. That would make it possible to skip having Central send the config, which would greatly simplify the order of operations when starting an experiment.

pycbsdk.examples.print_rates with --skip_startup reports Firing rate = 0 with specific dataset

I'm running the nPlay server and the pycbsdk print_rates example on the same machine, i.e.:

Terminal 1

$ /opt/CBNSP/bin/nPlayServer --network inst=127.0.0.1:51001 --autostart sampleData.nev 

Terminal 2

$ python -m pycbsdk.examples.print_rates --inst_addr 127.0.0.1 --inst_port 51001 -v   --protocol=4.1
Found 128 channels with spiking enabled and 0 with spiking disabled.
0 of the spike-enabled channels are using auto-thresholding.
Firing rate:    0.00 Hz +/- 0.00 (0.00 - 0.00)
0.39 +/- 747.22,        15.83 +/- 756.45,       15.10 +/- 739.81,       -0.17 +/- 0.37
12.97 +/- 747.33,       18.86 +/- 754.88,       11.35 +/- 740.68,       -0.17 +/- 0.37
[....]

and everything works as expected.

If I skip the startup sequence on the pycbsdk side (i.e., only listen for incoming packets, with autostart on the nPlay side), a firing rate is reported for the standard sampleData.nev file (test_data.zip), although no per-channel avg/stddev lines are printed between the firing-rate lines.

$ python -m pycbsdk.examples.print_rates --inst_addr 127.0.0.1 --inst_port 51001 -v --skip_startup  --protocol=4.1
Found 128 channels with spiking enabled and 0 with spiking disabled.
0 of the spike-enabled channels are using auto-thresholding.
Firing rate:    0.00 Hz +/- 0.00 (0.00 - 0.00)
Firing rate:    25.33 Hz +/- 43.87 (0.00 - 102.00)

If I instead play the input file Cyril001.nev (also attached) from nPlay, then I get a firing rate of 0, even though it works without --skip_startup.

Terminal 1

$ /opt/CBNSP/bin/nPlayServer --network inst=127.0.0.1:51001 --autostart ~/Documents/demo_pycbsdk/Cyril001.nev 

Terminal 2

$ python -m pycbsdk.examples.print_rates --inst_addr 127.0.0.1 --inst_port 51001 -v --skip_startup  --protocol=4.1
Found 128 channels with spiking enabled and 0 with spiking disabled.
0 of the spike-enabled channels are using auto-thresholding.
Firing rate:    0.00 Hz +/- 0.00 (0.00 - 0.00)
Firing rate:    0.00 Hz +/- 0.00 (0.00 - 0.00)
Firing rate:    0.00 Hz +/- 0.00 (0.00 - 0.00)

NSPDevice should raise error on failed connection and cbsdk should return runlevel

The cbhw device layer should throw an error when the true instrument IP is not as configured. For example, on my 257-512ch hub, the instrument IP is hard-coded to 192.168.137.201. I then tell Python to use the instrument address 192.168.137.200 in the hub object, call cbsdk.connect with the hub object, and get no connection error. I confirmed that an error should have been raised, because run_level = 0 after cbsdk.connect() ran with no errors.
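Until that happens, a workaround sketch that treats run level 0 as "not connected" (assuming connect() returns the run level, as shown in the Quick Start):

runlevel = cbsdk.connect(nsp_obj)
if not runlevel:
    raise ConnectionError(
        "cbsdk.connect() reported run level 0; check that the configured instrument IP matches the device."
    )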
