This repository contains the source code for the spiking neural model of the Wisconsin Card Sorting Test (WCST) implemented in Nengo, published in Kajić, I. and Stewart, T.C. (2021): Biologically Constrained Large-Scale Model of the Wisconsin Card Sorting Test (to appear in the Proceedings of the 43rd Annual Meeting of the Cognitive Science Society).
This code has been tested with Python 3.8.5.
Start by cloning this repository and installing the Python packages listed in `requirements.txt`:

```shell
pip install -r requirements.txt
```
From the cloned repository, import the model and run it:

```python
from model import WCSTModel
import pandas as pd

result = WCSTModel().run(T=5, d=64, x_seq_correct=3, x_deck_size=16)
```
This will create a model with 64-dimensional semantic pointers and run it for 5 seconds. It also automatically creates an experimental module under the hood that controls the experiment logistics: `x_seq_correct` specifies that the rule changes after a sequence of 3 consecutive correct responses, and `x_deck_size` determines how many cards are in the deck. This module can be thought of as the experiment administrator, recording the responses and providing feedback.
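To make the administrator's role concrete, here is a minimal, self-contained sketch of that bookkeeping logic (the `ExperimentAdmin` class and its method names are hypothetical illustrations, not the actual module in this repository): the rule advances after `seq_correct` consecutive correct responses, and the run ends once the deck is exhausted.

```python
# Hypothetical sketch of the experiment-administrator logic; rule labels
# C/S/N (colour/shape/number) follow the values shown in the results table.
class ExperimentAdmin:
    def __init__(self, rules=("C", "S", "N"), seq_correct=3, deck_size=16):
        self.rules = rules
        self.seq_correct = seq_correct
        self.deck_size = deck_size
        self.rule_idx = 0      # index of the currently active sorting rule
        self.streak = 0        # consecutive correct responses so far
        self.cards_dealt = 0

    @property
    def rule(self):
        return self.rules[self.rule_idx]

    def feedback(self, response_correct):
        """Record one response; return True while cards remain in the deck."""
        self.cards_dealt += 1
        if response_correct:
            self.streak += 1
            if self.streak == self.seq_correct:
                # Rule changes after a streak of seq_correct correct responses
                self.rule_idx = (self.rule_idx + 1) % len(self.rules)
                self.streak = 0
        else:
            self.streak = 0
        return self.cards_dealt < self.deck_size


admin = ExperimentAdmin()
for _ in range(6):        # six correct responses -> two rule changes
    admin.feedback(True)
print(admin.rule)         # prints N
```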
After at most 10 minutes of building and running the model, we get the results and pretty-print them:

```python
pd.DataFrame(result)
```
| | r_tstart | r_tend | trial | match | stimulus | target | similarity | choice | rule | rule_seq_id | correct | n_categories | error | p_error | p_response | fail_shift |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 0.20 | 0.50 | 1 | SN | Y-SQ-ONE | 1 | 1.0 | 1 | N | 12 | 1 | 0 | 0 | 0 | 0 | 0 |
| 1 | 0.80 | 1.10 | 2 | SN | B-SQ-ONE | 1 | 1.0 | 1 | N | 12 | 2 | 0 | 0 | 0 | 0 | 0 |
| 2 | 1.40 | 1.70 | 3 | SN | R-CR-TWO | 2 | 1.0 | 2 | N | 12 | 3 | 0 | 0 | 0 | 0 | 0 |
| 3 | 2.00 | 2.30 | 4 | S | R-CR-THREE | 2 | 1.0 | 2 | S | 12 | 1 | 1 | 0 | 0 | 0 | 0 |
| 4 | 2.60 | 2.90 | 5 | S | B-ST-ONE | 4 | 1.0 | 4 | S | 12 | 2 | 1 | 0 | 0 | 0 | 0 |
| 5 | 3.21 | 3.50 | 6 | CS | Y-ST-ONE | 4 | 1.0 | 4 | S | 12 | 3 | 1 | 0 | 0 | 0 | 0 |
| 6 | 3.81 | 4.11 | 7 | CS | B-CR-ONE | 2 | 1.0 | 2 | C | 12 | 1 | 2 | 0 | 0 | 0 | 0 |
| 7 | 4.41 | 4.71 | 8 | SN | R-TR-THREE | 1 | 1.0 | 3 | C | 12 | X | 2 | 1 | 1 | 1 | 0 |
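With the results in a `DataFrame`, standard pandas operations can summarise a run. The following is a small sketch using the column names shown above; the values below are made up for illustration and are not model output (whether `error` is cumulative is an assumption based on the table above).

```python
import pandas as pd

# Illustrative data only, mimicking a few columns of the results table
df = pd.DataFrame({
    "trial":   [1, 2, 3, 4, 5, 6, 7, 8],
    "error":   [0, 0, 0, 1, 1, 1, 2, 2],   # assumed cumulative error count
    "p_error": [0, 0, 0, 0, 0, 0, 0, 1],   # 1 on perseverative-error trials
    "rule":    ["N", "N", "N", "S", "S", "S", "C", "C"],
})

summary = {
    "trials": len(df),
    "total_errors": int(df["error"].iloc[-1]),
    "perseverative_errors": int(df["p_error"].sum()),
    "rules_visited": df["rule"].nunique(),
}
print(summary)
# {'trials': 8, 'total_errors': 2, 'perseverative_errors': 1, 'rules_visited': 3}
```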
Ideally, the model should be run with the `nengo_ocl` backend to take advantage of GPU optimizations that speed up simulation run times; the results presented in the paper were generated with simulations run that way. The script `run-all.py` shows an example configuration for running large-scale simulations. The nengo_ocl repository has installation instructions for those wishing to experiment with this setup. In addition, to run the `run-all.py` script one needs to manually check out the `ocl-use-context` branch, which allows `nengo_ocl` to be provided as the backend.
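The setup above might look roughly as follows; this is a sketch only, the repository URL is an assumption, and the nengo_ocl installation instructions remain the authoritative reference:

```shell
# Assumed repository location; see the nengo_ocl README for the canonical URL
git clone https://github.com/nengo-labs/nengo-ocl.git
cd nengo-ocl
git checkout ocl-use-context   # branch required by run-all.py (per the note above)
pip install -e .
```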