
nsviewer's Introduction

Neurosynth Viewer

NSViewer is a CoffeeScript/JS library for visualization of functional MRI data.

Installation

To install, just drop the viewer.js file into your project and link to it. You'll also need to make sure the following dependencies are available:

  • jQuery and jQueryUI
  • Sylvester
  • RainbowVis-JS
  • xtk (only necessary if you want to load Nifti volumes directly)
  • Bootstrap (not strictly necessary, but strongly recommended for glyphs)

Probably the easiest way to make sure you have everything you need is to add all of the files in the example/js/ folder to your project.
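
For reference, here's roughly what the include tags might look like. The exact file names below are assumptions (they'll depend on which copies of the libraries you use), so adjust the paths to match your project:

<link rel="stylesheet" href="css/bootstrap.min.css">  <!-- optional, but recommended for glyphs -->
<script src="js/jquery.min.js"></script>
<script src="js/jquery-ui.min.js"></script>
<script src="js/sylvester.js"></script>
<script src="js/rainbowvis.js"></script>
<script src="js/xtk.js"></script>  <!-- only needed if you load Nifti volumes directly -->
<script src="js/viewer.js"></script>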

Usage

The source code for the following example can be found in example/js/app.js. You can also play with a live demo of the example here. This quickstart just walks through the contents of the app.js file.

First, we initialize a new Viewer:

viewer = new Viewer('#layer_list', '.layer_settings')

The arguments passed here identify the HTML containers we want to use to display the list of image layers and the active layer's settings, respectively.

Next, we create three views, for axial, coronal, and sagittal slices. The first argument indicates the HTML container to use; the second specifies the axis to slice along (axial, coronal, or sagittal).

viewer.addView('#view_axial', Viewer.AXIAL);
viewer.addView('#view_coronal', Viewer.CORONAL);
viewer.addView('#view_sagittal', Viewer.SAGITTAL);

Now we add sliders for manipulating the active layer's opacity and its positive and negative thresholds. These calls involve more arguments, most of which are passed to jQuery UI to set up the sliders (e.g., arguments 4 through 7 in each of the calls below set the minimum value, maximum value, initial value, and step size, respectively). See the API documentation for details.

viewer.addSlider('opacity', '.slider#opacity', 'horizontal', 0, 1, 1, 0.05);
viewer.addSlider('pos-threshold', '.slider#pos-threshold', 'horizontal', 0, 1, 0, 0.01);
viewer.addSlider('neg-threshold', '.slider#neg-threshold', 'horizontal', 0, 1, 0, 0.01);
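
As a rough illustration of what those arguments amount to, the numeric values correspond to standard jQuery UI slider options. This is just a sketch of the idea, not NSViewer's actual internals:

// Illustrative only: roughly how the opacity slider's arguments
// correspond to jQuery UI slider options.
$('.slider#opacity').slider({
    orientation: 'horizontal',  // argument 3
    min: 0,                     // argument 4: minimum value
    max: 1,                     // argument 5: maximum value
    value: 1,                   // argument 6: initial value
    step: 0.05                  // argument 7: incremental step
});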

We can also add navigation sliders that let us scroll through the slices along a single axis by dragging the slider:

viewer.addSlider("nav-xaxis", ".slider#nav-xaxis", "horizontal", 0, 1, 0.5, 0.01, Viewer.XAXIS);
viewer.addSlider("nav-yaxis", ".slider#nav-yaxis", "vertical", 0, 1, 0.5, 0.01, Viewer.YAXIS);
viewer.addSlider("nav-zaxis", ".slider#nav-zaxis", "vertical", 0, 1, 0.5, 0.01, Viewer.ZAXIS);

These calls are similar to the ones above, except that each of these sliders controls navigation along a single axis (X, Y, and Z, respectively), so we also need to tell the viewer which axis to manipulate in the last argument.

So much for sliders. Now, let's think about other elements of the UI. Color palette selection would be nice, so let's add a drop-down select element for that:

viewer.addColorSelect('#color_palette');

We'd also like to manipulate whether the user sees all values in an image, or only positive or negative values (e.g., only activations or only deactivations in a functional overlay):

viewer.addSignSelect('#select_sign');

Oh, we probably also want to see some information about the current voxel as we navigate around the brain, so let's add a couple of text fields to show the current coordinates and current voxel value:

viewer.addDataField('voxelValue', '#data_current_value')
viewer.addDataField('currentCoords', '#data_current_coords')

Now that the viewer itself is all set up, let's make sure everything is nicely painted to the canvas:

viewer.clear()

This step is completely optional, but it ensures that the user sees something sensible (i.e., empty black canvases) until we have something more meaningful to show.

Okay, at this point we're ready to finally load some images into the viewer. We can specify our images as an array of JSON objects, where each object contains the parameters for a single image to load. In this example, we load four different images. Here's the full specification:

images = [
	{
		'url': 'data/MNI152.json',
		'name': 'MNI152 2mm',
		'colorPalette': 'grayscale',
		'cache': true
	},
	{
		'url': 'data/language_meta.json',
		'name': 'language meta-analysis',
		'colorPalette': 'blue'			
	},
	{
		'url': 'data/emotion_meta.nii.gz',
		'name': 'emotion meta-analysis',
		'colorPalette': 'green'
	},
	{	
		'name': 'spherical ROI',
		'colorPalette': 'yellow',
		'data': {
			'dims': [91, 109, 91],
			'peaks':
				{ 'peak1':
					{'x': -48, 'y': 20, 'z': 20, 'r': 6, 'value': 1 }
				}
		}
	}
]

Notice that these images are specified in different ways. In the first two cases, we're loading the images from JSON files (the viewer infers this from the .json extension). In the third case, we load the image directly from a Nifti file (.nii.gz). And in the last case, we dynamically generate the image by telling the viewer to create a blank volume and then draw a sphere in left DLPFC.

Once we're done defining our images, we tell the viewer to load them:

viewer.loadImages(images)

And that's it, we're all done!

Developing

You'll need a JavaScript runtime and CoffeeScript. Install Node.js, and then the CoffeeScript compiler ("npm install -g coffee-script"). To compile all of src/*.coffee into lib/coffee.js, run:

cake build

Once you've built coffee.js, you'll probably want to bundle Node dependencies like ndarray for the browser. Install Browserify (e.g., "npm install -g browserify"), and then run the following from the root NSViewer directory:

browserify lib/coffee.js -o lib/viewer.js

To view the examples, the simplest approach is to go into the examples directory, run:

python -m SimpleHTTPServer 8888

and point your browser to http://localhost:8888.
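
If you only have Python 3 available, the equivalent command is:

python3 -m http.server 8888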

Note that you'll need to manually copy lib/viewer.js to examples/js/viewer.js every time you update the source files. Alternatively (and recommended!), to automate building, browserifying, and copying the updated viewer.js to the examples directory, install the guard and guard-shell Ruby gems ("gem install guard guard-shell") and run:

guard

Once guard is running, any changes to any of the source CoffeeScript, JS, HTML, or CSS files under examples/ should be automatically detected and viewer.js should be immediately updated.

nsviewer's People

Contributors

machow, mwaskom, njvack, tyarkoni


nsviewer's Issues

Add colorbrewer palettes

There are several versions of these palettes floating around github, and they'd be a nice addition to the current set of palettes.

Also some way of explicitly banning the "jet" colormap from ever being used would be good too...

Update image/scene parameters from object

Key image/layer parameters (e.g., color palettes, initial crosshair position, etc.) should be encodable in an object so settings can be easily updated (e.g., when loading a viewer with parameters encoded in the URL).
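
Something along these lines, for example (all of the key names and the updateSettings() method below are hypothetical, just to illustrate the idea):

// Hypothetical scene-settings object; none of these keys or methods exist yet.
var sceneSettings = {
    coords: [-48, 20, 20],   // initial crosshair position
    layers: [
        { name: 'language meta-analysis', colorPalette: 'blue', opacity: 0.8, visible: true }
    ]
};
// viewer.updateSettings(sceneSettings);   // hypothetical method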

Nifti size in nsviewer

Does nsviewer only support niftis of size 91 × 109 × 91? If so, is there a chance that different image sizes will be supported in the future?

Current version is broken: example does not work

Uncaught TypeError: Cannot use 'in' operator to search for 'xyz' in undefined viewer.js:40
Viewer viewer.js:40
(anonymous function) app.js:3
c jquery.min.js:3
p.fireWith jquery.min.js:3
b.extend.ready jquery.min.js:3
H

Link statistical images to 4D data and plot across time at selected voxel

There's no pressing need for this feature, but I think it would be really nice to have at some point.

The basic idea is that almost all blob images collapse (possibly with complicated statistics) across some fourth dimension, whether you're plotting a subject's activations (so, time), a group map (subjects), or a meta-analysis (studies). There's a lot of information that gets lost in the blob maps that might be interesting, and there's no good way to represent it within the brain image paradigm.

But! Once you have a nice dynamic voxel picker, it's straightforward to link your statistical image to some 4D dataset that you consider "underlying" and plot the values across the last dimension at the selected voxel. In other words:

[image: 4d info]

(imagine that the IPython plot is part of the viewer app and dynamically shows you the values across time/subjects/studies in the voxel that the crosshairs are centered on).

This seems like something that wouldn't be very hard to do with the existing tools + d3, so I thought I might put it on the agenda while I'm spamming the github issues.
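
For what it's worth, the core of this is simple once the 4D data is in memory. A minimal sketch, assuming the volume is stored as a flat typed array in x-fastest order with dims = [nx, ny, nz, nt] (none of this is existing NSViewer API):

// Sketch only: extract the series along the 4th dimension at voxel (i, j, k).
function seriesAtVoxel(data, dims, i, j, k) {
    var nx = dims[0], ny = dims[1], nz = dims[2], nt = dims[3];
    var series = new Float32Array(nt);
    for (var t = 0; t < nt; t++) {
        series[t] = data[i + nx * (j + ny * (k + nz * t))];
    }
    return series;  // hand this to d3 (or any plotting code) to draw the time course
}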

Possibly switch to custom nifti reader

Hi there,

So, for a variety of reasons, I fooled around with creating a custom nifti1 reader in javascript, and made something that actually works, and a toy viewer to go along with it. Here's a demo:

http://njvack.github.io/jsnifti

... and the source:

https://github.com/njvack/jsnifti

It's small (minified and gzipped, the reader is about 19k) and fast -- parsing the structural image from the NIH test dataset takes about 20ms on Chrome/my 5-year-old Air if the file's endianness matches, and 500ms if it doesn't.

All the header fields are included (yay jBinary), and getting slices is easy (#ras_slice). Slicing and rendering in the (broken, unoptimized!) toy viewer is insanely fast (about 3ms/frame).

It's still missing some features (most notably, converting ijk <-> ras coordinates) but that shouldn't be too hard to add.

Anyhow! We may wind up using this internally, but it also might be useful for you.

Package dependencies properly

The changes introduced in #23 rely on npm-sourced libraries called via require(). This is a good opportunity to bundle all the other dependencies properly with npm as well.

Images not loading

Hello there!

Looks like there hasn't been much activity here in a while, but I was looking to use this viewer as part of an HTML report generator I'm working on. Everything seems to work fine, except loading the images. When I cloned the repo and opened index.html in the example, the images wouldn't load. The spherical ROI does load, so it seems like app.js isn't finding the images. I tried using absolute paths to no avail.

Let me know if anyone has any ideas or clarifying questions, etc.

Table of clusters

Currently there's no way to view clusters, which makes it hard to determine what's in an image. It would be nice to add an SPM-like table that displays the peaks and cluster sizes for all clusters of contiguous voxels in an image, and navigates to the peak voxel when a row is clicked. This should ideally be done dynamically (i.e., the table is updated whenever a threshold is changed).
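
For reference, the core of such a feature would be a connected-components pass over the thresholded volume. A rough sketch, assuming a flat data array in x-fastest order and 6-connectivity (this is not existing NSViewer code):

// Sketch only: label 6-connected clusters of supra-threshold voxels via flood fill.
// Returns an array of {size, peakValue, peakCoords} objects, one per cluster.
function findClusters(data, dims, threshold) {
    var nx = dims[0], ny = dims[1], nz = dims[2];
    var idx = function(x, y, z) { return x + nx * (y + ny * z); };
    var visited = new Uint8Array(nx * ny * nz);
    var offsets = [[1,0,0],[-1,0,0],[0,1,0],[0,-1,0],[0,0,1],[0,0,-1]];
    var clusters = [];
    for (var z = 0; z < nz; z++) {
        for (var y = 0; y < ny; y++) {
            for (var x = 0; x < nx; x++) {
                var start = idx(x, y, z);
                if (visited[start] || data[start] <= threshold) continue;
                // Flood-fill one cluster, tracking its size and peak voxel.
                var stack = [[x, y, z]];
                visited[start] = 1;
                var cluster = { size: 0, peakValue: -Infinity, peakCoords: null };
                while (stack.length) {
                    var v = stack.pop();
                    var i = idx(v[0], v[1], v[2]);
                    cluster.size++;
                    if (data[i] > cluster.peakValue) {
                        cluster.peakValue = data[i];
                        cluster.peakCoords = v;
                    }
                    for (var n = 0; n < offsets.length; n++) {
                        var xn = v[0] + offsets[n][0],
                            yn = v[1] + offsets[n][1],
                            zn = v[2] + offsets[n][2];
                        if (xn < 0 || yn < 0 || zn < 0 || xn >= nx || yn >= ny || zn >= nz) continue;
                        var ni = idx(xn, yn, zn);
                        if (!visited[ni] && data[ni] > threshold) {
                            visited[ni] = 1;
                            stack.push([xn, yn, zn]);
                        }
                    }
                }
                clusters.push(cluster);
            }
        }
    }
    return clusters;
}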

Singleton pattern?

Heya,

I'm wondering about your reasoning behind making Viewer a singleton class -- I can see having two synchronized views for comparison purposes.

If I made it a non-singleton, would you be sad?

De-fuzzing the fuzz

Hey Tal,

I've been looking into how to get rid of the overlap artifacts that happen with zooming and transparent layers, and I think I have a pretty good handle on it.

My plan would be to have each view have its own array of layers and its own canvas, draw the pixels directly to it (with color mapping done at this step), and then use drawImage() to blit the layer to the View's canvas.

If I were to work up a PR along these lines, would you be receptive?

Pre-compute final color image and render only once

At the moment, all visible layers are independently painted to the canvas, which is hugely inefficient. Instead, we should loop once over all voxels, compute the final color image in memory, then blit to the canvas just once. This should take us from linear time in the number of layers to close to constant time, since profiling suggests that nearly all of the time is currently taken up by the painting operation.
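
A rough sketch of the idea, using a single ImageData buffer and back-to-front alpha blending. getLayerRGBA() is a hypothetical helper, layers are assumed to be ordered bottom-to-top, and none of this is the current implementation:

// Sketch only: composite all visible layers into one RGBA buffer, then paint once.
// getLayerRGBA(layer, i) is a hypothetical helper returning [r, g, b, alpha] for pixel i.
function renderSlice(context, layers, width, height) {
    var image = context.createImageData(width, height);
    var px = image.data;
    for (var i = 0; i < width * height; i++) {
        var r = 0, g = 0, b = 0;
        for (var l = 0; l < layers.length; l++) {   // back-to-front over a black background
            var c = getLayerRGBA(layers[l], i);
            var a = c[3];
            r = c[0] * a + r * (1 - a);
            g = c[1] * a + g * (1 - a);
            b = c[2] * a + b * (1 - a);
        }
        px[4 * i] = r;
        px[4 * i + 1] = g;
        px[4 * i + 2] = b;
        px[4 * i + 3] = 255;
    }
    context.putImageData(image, 0, 0);   // single paint per view per redraw
}

The point is that putImageData() gets called once per view per redraw, regardless of how many layers are visible.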

Loading progress indicator needed

It would be good to add a progress bar when loading images; at the moment, large images can take a while to load (particularly when read from a binary file), and the user may wonder whether anything's happening.

Layers in different colors

Might be a good idea to have the viewer load different functional layers in different colors by default. I'd fix it if I knew CoffeeScript!
