
psychoinformatics-de / studyforrest-data-eyemovementlabels

studyforrest.org: Eye movement events for the Forrest Gump movie stimulus [BIDS]

Home Page: http://studyforrest.org

License: Other

Languages: Shell 100.00%
Topics: eye-tracking, datalad, studyforrest, natural-viewing

studyforrest-data-eyemovementlabels's Introduction

A studyforrest.org dataset extension


Eye movement events for the Forrest Gump movie

Two groups of participants (n=15 each) watched this movie: one group in a lab setup, the other in an MRI scanner. The original data are described in Hanke et al. (2016, http://www.nature.com/articles/sdata201692). This dataset contains the detected eye movement events: fixations, saccades, post-saccadic oscillations, and pursuit events. Details of the detection procedure are available in:

Asim H. Dar, Adina S. Wagner & Michael Hanke (2019). REMoDNaV: Robust Eye Movement Detection for Natural Viewing

For more information about the project visit: http://studyforrest.org

Dataset content

For each participant and recording run in the original dataset, two files are provided in this dataset:

  • sub-??_task-movie_run-?_events.tsv
  • sub-??_task-movie_run-?_events.png

The TSV files are BIDS-compliant event (text) files that contain one detected eye movement event per line. For each event the following properties are given (in columns):

  • onset: start time of an event, relative to the start of the recording (in seconds)
  • duration: duration of an event (in seconds)
  • label: event type label, known labels are:
    • FIXA: fixation
    • PURS: pursuit
    • SACC/ISAC: saccade
    • LPSO/ILPS: low-velocity post-saccadic oscillation
    • HPSO/IHPS: high-velocity post-saccadic oscillation
  • start_x, start_y: the gaze coordinate at the start of an event (in pixels)
  • end_x, end_y: the gaze coordinate at the end of an event (in pixels)
  • amp: movement amplitude of an event (in degrees)
  • peak_vel: peak velocity of an event (in degrees/second)
  • med_vel: median velocity of an event (in degrees/second)
  • avg_vel: mean velocity of an event (in degrees/second)

The PNG files contain a visualization of the detected events together with the gaze coordinate time series, for visual quality control. The algorithm parameters are also rendered into the picture.
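
The TSV files can be loaded with any standard table tool. A minimal sketch in Python using pandas (the file name below is just one hypothetical instance of the pattern above):

    import pandas as pd

    # hypothetical instance of the file name pattern; adjust subject and run
    events = pd.read_csv('sub-01_task-movie_run-1_events.tsv', sep='\t')

    # duration statistics per event type (FIXA, PURS, SACC, ...)
    print(events.groupby('label')['duration'].describe())

    # mean amplitude (deg) and peak velocity (deg/s) across saccade events
    sacc = events[events['label'].isin(['SACC', 'ISAC'])]
    print(sacc[['amp', 'peak_vel']].mean())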

How to obtain the dataset

This repository is a DataLad dataset. It provides fine-grained data access down to the level of individual files, and allows for tracking future updates. In order to use this repository for data retrieval, DataLad is required. It is a free and open source command line tool, available for all major operating systems, that builds on Git and git-annex to allow sharing, synchronizing, and version controlling collections of large files. You can find information on how to install DataLad at handbook.datalad.org/en/latest/intro/installation.html.

Get the dataset

A DataLad dataset can be cloned by running

datalad clone <url>

Once a dataset is cloned, it is a light-weight directory on your local machine. At this point, it contains only small metadata and information on the identity of the files in the dataset, but not the actual content of the (sometimes large) data files.
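
Besides the command line, DataLad also offers a Python API. A minimal sketch of cloning this dataset via its GitHub URL (one of several possible sources):

    import datalad.api as dl

    # clone into a local directory; only metadata and file identity
    # information are obtained here, not the (large) file contents
    ds = dl.clone(
        source='https://github.com/psychoinformatics-de/studyforrest-data-eyemovementlabels.git',
        path='studyforrest-data-eyemovementlabels')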

Retrieve dataset content

After cloning a dataset, you can retrieve file contents by running

datalad get <path/to/directory/or/file>

This command will trigger a download of the files, directories, or subdatasets you have specified.
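
The same is available from the Python API; the file path below is an assumption based on the naming scheme described above:

    import datalad.api as dl

    # retrieve the content of a single event file; the exact path is an
    # assumption based on the naming scheme above
    dl.get('studyforrest-data-eyemovementlabels/'
           'sub-01/sub-01_task-movie_run-1_events.tsv')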

DataLad datasets can contain other datasets, so-called subdatasets. If you clone the top-level dataset, subdatasets do not yet contain metadata and information on the identity of files, but appear to be empty directories. In order to retrieve file availability metadata in subdatasets, run

datalad get -n <path/to/subdataset>

Afterwards, you can browse the retrieved metadata to find out about subdataset contents, and retrieve individual files with datalad get. If you use datalad get <path/to/subdataset>, all contents of the subdataset will be downloaded at once.
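
A sketch of the equivalent call in the Python API (the placeholder path mirrors the command above):

    import datalad.api as dl

    # equivalent of `datalad get -n`: obtain file availability metadata of
    # a subdataset without downloading file content
    dl.get('path/to/subdataset', get_data=False)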

Stay up-to-date

DataLad datasets can be updated. The command datalad update will fetch updates and store them on a different branch (by default remotes/origin/master). Running

datalad update --merge

will pull available updates and integrate them in one go.
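
In the Python API, a sketch of the same operation (assuming the merge keyword mirrors the --merge flag):

    import datalad.api as dl

    # equivalent of `datalad update --merge` for a dataset at a given path
    dl.update(merge=True, dataset='studyforrest-data-eyemovementlabels')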

Find out what has been done

DataLad datasets contain their history in the git log. By running git log (or a tool that displays Git history) in the dataset or on specific files, you can find out what has been done to the dataset or to individual files by whom, and when.

More information

More information on DataLad and how to use it can be found in the DataLad Handbook at handbook.datalad.org. The chapter "DataLad datasets" can help you to familiarize yourself with the concept of a dataset.

studyforrest-data-eyemovementlabels's People

Contributors

adswa, aqw, asimhdar, christian-monch, loj, mih


Forkers

loj

studyforrest-data-eyemovementlabels's Issues

Ideas on paper content

  • start with problem: low-quality (but precious) data from natural viewing (movies)

  • outline our algorithm idea

  • compare to ultimate comparison (see #16)

  • show performance on MRI eyegaze data

  • compare to HQ lab eyegaze data

  • link code repo with DOI from zenodo

  • link data repo

  • put in brainhack collection

Missing eyemovement class

In this article, we define the two groups of glissades as mutually exclusive; that is, low-velocity glissades are not a subset of high-velocity glissades.

Again, this is a statement taken directly from the paper; the distinction is not made in the current implementation.

Saccade end detection is wrong

End detection needs to start from the last sample exceeding the peak velocity threshold; at the moment it is done by searching from the first sample onwards.
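
A minimal numpy sketch of the intended behavior (all names and thresholds are hypothetical, not the code's actual ones):

    import numpy as np

    def find_saccade_end(vel, onset, limit, peak_thresh, offset_thresh):
        # locate the LAST sample above the peak velocity threshold within
        # the candidate window [onset, limit) ...
        above = np.flatnonzero(vel[onset:limit] > peak_thresh)
        if above.size == 0:
            return None
        last_peak = onset + above[-1]
        # ... then search forward from there for the first sample that
        # drops below the offset threshold
        below = np.flatnonzero(vel[last_peak:limit] < offset_thresh)
        return last_peak + below[0] if below.size else None

    vel = np.array([10., 80., 300., 250., 320., 90., 30., 12.])
    print(find_saccade_end(vel, 0, len(vel), 200., 40.))  # -> 6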

Missing algorithm step

Velocity and acceleration data were appropriately adjusted to compensate for the time shift introduced by the filters.

This is stated in Nyström et al. (2010). I cannot see this being done in the code at all.
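
For illustration, a sketch of what such a compensation could look like; the moving-average filter and all parameters here are assumptions, not the code's actual filter:

    import numpy as np
    from scipy.signal import lfilter, savgol_filter

    sr = 1000.0                           # assumed sampling rate (Hz)
    x = np.cumsum(np.random.randn(500))   # toy gaze coordinate trace

    # a causal FIR filter of length N delays its output by (N - 1) / 2
    # samples; compensating means shifting the result back by that amount
    N = 11
    smoothed = lfilter(np.ones(N) / N, 1.0, x)
    compensated = np.roll(smoothed, -(N - 1) // 2)  # wrapped edge samples should be trimmed

    # a symmetric Savitzky-Golay filter (as used by Nyström & Holmqvist,
    # 2010) is zero-phase and introduces no shift to begin with
    vel = savgol_filter(x, window_length=15, polyorder=2, deriv=1, delta=1 / sr)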

Output compared to Nyström's Matlab algorithm --- needs a dataset with 'x' and 'y' coordinates to run

For some reason, all the sources of the input data from the author seem to be corrupt. The input is basically two columns with the x and y coordinates, and you can feed in study-specific parameters (screen size, viewing distance, etc.). I tried it with a sample from one of our subjects by removing everything other than the x and y coordinates, and the algorithm worked fine. (Note: this was just a test, so the screen size etc. do not match the ones actually used in our study.)

I've attached the outputs (and the input: randomsample.csv). I am currently trying to find out what the event labels stand for (they are labelled 0-4). @AdinaWagner Any ideas?

DetectionResults.zip

Glissade end detection is implemented the wrong way

is defined when Vi - Vi+1 <= 0 after the last velocity peak sample in the glissade. Glissades with an amplitude larger than their preceding saccades were omitted.

The latter part is not done at all. The former is limited to a 40 ms window.
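
A minimal numpy sketch of the rule as quoted (names hypothetical, without the 40 ms cap):

    import numpy as np

    def find_glissade_end(vel, last_peak):
        # the glissade ends at the first sample k after the last velocity
        # peak where vel[k] - vel[k + 1] <= 0, i.e. where the velocity
        # stops decreasing
        diffs = vel[last_peak:-1] - vel[last_peak + 1:]
        stop = np.flatnonzero(diffs <= 0)
        return last_peak + stop[0] if stop.size else len(vel) - 1

    vel = np.array([100., 60., 40., 20., 25., 30.])
    print(find_glissade_end(vel, 0))  # -> 3, the local velocity minimum

The amplitude criterion from the quote would additionally discard any glissade whose amplitude exceeds that of its preceding saccade.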

Investigate warning reported in #8

code/tests/test_nystrom.py::test_real_data
/home/adina/Documents/MastersThesis/Asim/studyforrest-data-eyemovementlabels/code/detect_events.py:361: RuntimeWarning: Mean of empty slice.
sacc_start, sacc_end, peakvels.mean())
/usr/lib/python3/dist-packages/numpy/core/_methods.py:80: RuntimeWarning: invalid value encountered in double_scalars
ret = ret.dtype.type(ret / rcount)
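
The warning comes from calling .mean() on an empty array; a sketch of a possible guard, with the variable name following the traceback:

    import numpy as np

    peakvels = np.array([])  # empty when no peak velocities were recorded

    # .mean() on an empty array emits the RuntimeWarning and yields nan;
    # an explicit fallback makes the intent clear and silences the warning
    mean_pv = peakvels.mean() if peakvels.size else np.nan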

Smooth pursuit detection

I set out to look for ideas on how to detect smooth pursuit. I found a paper by Larsson that describes a Matlab-based algorithm (co-authored with Nyström): https://ac.els-cdn.com/S1746809414002031/1-s2.0-S1746809414002031-main.pdf?_tid=2f59fb52-3fa6-4579-ac3b-c590f3f10abe&acdnat=1535098835_0894ae76c22c47c2e79d3b67844b8f8f

I also found a paper by Agtzidis et al. (http://delivery.acm.org/10.1145/2860000/2857521/p303-agtzidis.pdf?ip=141.44.98.70&id=2857521&acc=ACTIVE%20SERVICE&key=2BA2C432AB83DA15%2E88D216EC9FFA262E%2E4D4702B0C3E38B35%2E4D4702B0C3E38B35&__acm__=1535102822_e5cd2e4bf831a77c5d3f6d152378af8a). Their algorithm uses data from multiple subjects watching dynamic stimuli to detect similar gaze patterns that are neither saccades nor fixations. If several people show movements that are neither saccades nor fixations, those movements are likely pursuit. Their implementation is publicly available in Python here: michaeldorr.de/smoothpursuit/sp_tool.zip
(I'm not sure whether their approach of combining data from several subjects after saccade and fixation detection could easily be integrated into the way the algorithm currently works.)

Larsson developed an algorithm to classify fixations and smooth pursuit in eye tracking data when dynamic stimuli are used.
A reimplementation of this algorithm in Matlab has been made publicly available by Agtzidis & Startsev here: michaeldorr.de/smoothpursuit/larsson_reimplementation.zip

The Matlab code has the following steps:

  1. Preprocessing: the algorithm removes all samples at the beginning or end of intersaccadic intervals that exceed a velocity of 100°/s. This is based on Meyer et al. (1985), a paper on the upper limit of human smooth pursuit velocity (https://ac.els-cdn.com/0042698985901609/1-s2.0-0042698985901609-main.pdf?_tid=59400c5b-9841-404f-b51e-84b1b5d27150&acdnat=1535099337_81f8b59d931722c16c906e78b319bb0e), which states 100°/s as that limit.
  2. Preliminary segmentation: intersaccadic intervals are divided into overlapping windows. For all pairs of x-y coordinates in a window, the angle between two consecutive coordinate pairs and the x-axis is computed as the sample-to-sample direction alpha. All directions within a window are tested with a Rayleigh test (available in Python: http://docs.astropy.org/en/stable/api/astropy.stats.rayleightest.html#astropy.stats.rayleightest) against the null hypothesis that the samples are distributed uniformly around the unit circle. The p-value of the test of each window is used to calculate the mean p-value of all windows j to which a sample k belongs. Consecutive samples in the interval that share similar directionality properties are grouped into preliminary segments (see the sketch after this list).
  3. Evaluation of spatial features in the position signal: for each preliminary segment, four parameters "that are typical for a smooth pursuit movement" are calculated:
  • dispersion Pd: PCA; the first (pc1) and second (pc2) components are used. The lengths of the two component vectors are divided to obtain Pd = pc2/pc1 (this measures "if a preliminary segment is more dispersed in one direction than in the other, i.e., a value of pD close to one means that the segment is equally spread in both directions.")
  • consistency in the direction Pcd: the Euclidean distance between the start and end of the interval (dED) is compared to pc1: Pcd = dED/pc1 ("a value of pCD close to one corresponds to that the data in the preliminary segment are starting and ending in the largest direction of the data.")
  • position displacement Ppd: the relationship between dED and the trajectory length of the segment dTL: Ppd = dED/dTL
  • range Pr: the absolute spatial range of the segment: Pr = sqrt((max(x)-min(x))^2 + (max(y)-min(y))^2)
  4. The four parameters are compared to "individual thresholds", resulting in one criterion per parameter (I haven't yet understood where these thresholds derive from; in the implementation, default values are given). If none of the criteria are satisfied, the segment is classified as a fixation. If 1-3 criteria are satisfied, the segment is labeled as "uncertain". If all four criteria are satisfied, the segment is classified as smooth pursuit.
  5. All segments in the "uncertain" category are evaluated again on criterion 3 (relating to positional displacement, "the most typical feature of a smooth pursuit movement compared to a fixation"). If it is satisfied, the spatial range is recalculated by adding the spatial ranges of other smooth pursuit segments in the intersaccadic interval that are comparable in direction to the uncertain segment (based on a threshold phi). If the range is larger than a threshold (a default is given in the reimplementation), the segment is classified as smooth pursuit, otherwise as a fixation. If criterion 3 is not satisfied, criterion 4 is evaluated: if criterion 4 is not satisfied either, the segment is classified as a fixation.
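
To make steps 2 and 3 concrete, here is a minimal Python sketch of the per-segment quantities, using numpy and the astropy Rayleigh test linked in step 2. The thresholding of steps 4-5 is omitted and all names are assumptions, not the reimplementation's actual code:

    import numpy as np
    from astropy.stats import rayleightest

    def segment_features(x, y):
        pts = np.column_stack([x, y])
        step = np.diff(pts, axis=0)
        # step 2: sample-to-sample direction alpha, tested against the
        # null hypothesis of uniformly distributed directions
        alpha = np.arctan2(step[:, 1], step[:, 0])
        p_uniform = rayleightest(alpha)

        # step 3: PCA via the eigendecomposition of the coordinate
        # covariance; pc1/pc2 are the lengths of the two component vectors
        evals = np.sort(np.linalg.eigvalsh(np.cov(pts.T)))[::-1]
        pc1, pc2 = np.sqrt(evals)
        d_ed = np.linalg.norm(pts[-1] - pts[0])     # start-to-end distance
        d_tl = np.linalg.norm(step, axis=1).sum()   # trajectory length

        return {
            'p_uniform': p_uniform,
            'Pd': pc2 / pc1,                       # dispersion
            'Pcd': d_ed / pc1,                     # direction consistency
            'Ppd': d_ed / d_tl,                    # position displacement
            'Pr': np.hypot(np.ptp(x), np.ptp(y)),  # spatial range
        }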
