
a WebGL2 based medical image viewer. Supports over 30 formats of volumes and meshes.

Home Page: https://niivue.github.io/niivue/

License: BSD 2-Clause "Simplified" License



NiiVue

NiiVue is a web-based visualization tool for neuroimaging that can run on any operating system and any web device (phone, tablet, computer). This repository contains only the core NiiVue package, which can be embedded into other projects. We have additional repositories that wrap NiiVue for use in Jupyter notebooks, VSCode, and Electron applications.

Click here to see NiiVue live demos

What makes NiiVue unique is its ability to simultaneously display all the data types popular in neuroimaging: voxels, meshes, tractography streamlines, statistical maps, and connectomes. Alternative voxel-based web tools include ami, BioImage Suite Web, BrainBrowser, nifti-drop, OHIF DICOM Viewer, Papaya, VTK.js, and slicedrop.

Local Development

To run a hot-reloading development server that updates whenever you save changes to any source file, run:

git clone git@github.com:niivue/niivue.git
cd niivue
npm install
npm run dev

The command npm run demo will minify the project and locally host all of the live demos. The DEVELOP.md file provides more details for developers.

Developer Documentation

Click here for the docs web page

Projects and People using NiiVue

Funding

Supported Formats

NiiVue supports many popular brain imaging formats.


niivue's Issues

if cal_min and cal_max set, then don't waste time running calMinMax

In order to speed up loading, if cal_min and cal_max are set, trust that the user (and the image creator) want to use those values. If cal_min/cal_max are not set (zero), run calMinMax as normal.

@neurolabusc, what do you think of this strategy? We'd need to work out what to do with global_min and global_max, but perhaps they could be set to cal_min and cal_max when cal_min/cal_max > 0?
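A minimal sketch of the proposed fast path. The header field names follow the NIfTI-1 convention; `resolveDisplayRange` and its `calMinMax` callback are hypothetical stand-ins for the real NiiVue routines, and the exact "unset" rule is still open:

```javascript
// Fast path sketch: trust the header's display range when it is set,
// otherwise fall back to the full per-voxel scan. The issue suggests
// treating zero as "unset"; the exact rule (e.g. cal_min/max > 0) is
// still to be decided, so this condition is an assumption.
function resolveDisplayRange(hdr, img, calMinMax) {
  if (hdr.cal_min !== 0 || hdr.cal_max !== 0) {
    return { min: hdr.cal_min, max: hdr.cal_max }; // skip the scan
  }
  return calMinMax(img); // full per-voxel scan, as today
}
```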

implement 2D slice panning feature

If a user is zoomed in on an image, it would be nice to pan around the 2D slice.

Since left and right mouse click events are already used I propose the following user interaction:

Panning happens when:

  • zoomScale is > 0
  • "shift" held down
  • left click and drag

mobile gesture: two finger touch and drag
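The gating conditions above could be expressed as a pure predicate that a mouse-move listener consults. The `event` fields mirror the DOM MouseEvent API; `zoomScale` and the `pan2D` helper in the wiring comment are assumptions, not the actual NiiVue API:

```javascript
// Pure predicate deciding whether a mouse-move should pan the slice:
// zoomed in, shift held, and the left button dragging.
function shouldPan(zoomScale, event) {
  const leftDrag = (event.buttons & 1) === 1; // left button held during the move
  return zoomScale > 0 && event.shiftKey && leftDrag;
}

// Browser wiring sketch (hypothetical `nv.pan2D` helper):
// canvas.addEventListener('mousemove', (e) => {
//   if (shouldPan(nv.zoomScale, e)) nv.pan2D(e.movementX, e.movementY);
// });
```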

implement unit tests for core niivue

Use mocha to implement simple unit tests for niivue. We may be able to use a headless browser to run the WebGL tests, but a real, local browser may be OK as a first pass.

create 4D data volume scrolling canvas UI

I imagine the canvas UI could look something like this...


It would be nice to enable picking in the timeline section of the canvas so that a mouse click and drag in that region scrolls through the volumes and re-renders the selected volume in the timeline.

The line plot shows the time series of the voxel under the crosshair.

The y axis min and max could come from cal_min/max (or global min/max)
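Extracting that time series is straightforward if the 4D data lives in one flat array. This sketch assumes x-fastest voxel ordering with whole volumes laid out consecutively, which is an assumption for illustration, not necessarily NiiVue's actual memory layout:

```javascript
// Pull the time series for one voxel out of a flat 4D array.
// dims4 = [nx, ny, nz, nt]; vox = [x, y, z] of the crosshair voxel.
function voxelTimeSeries(data, dims4, vox) {
  const [nx, ny, nz, nt] = dims4;
  const [x, y, z] = vox;
  const volStride = nx * ny * nz;       // voxels per 3D volume
  const offset = x + nx * (y + ny * z); // position within one volume
  const series = new Float32Array(nt);
  for (let t = 0; t < nt; t++) series[t] = data[offset + t * volStride];
  return series;
}
```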

selection box intensity calculation should accept a volIdx parameter

When using the selection box, it would be nice to pass in a parameter (via some widget) so that the user can choose which overlay the selection box samples from. Alternatively, maybe implement an "active layer" property on the niivue instance so that it can be set to a volume index via a user interface.

@cdrake, what do you think of this for the future?

Remove haze function

NiiVue should include operations to remove haze from a volume. This can help the appearance of the volume rendering, and ensure good behavior for depth picking. I would suggest copying the method from MRIcroGL. As an aside, I wonder if there are any JavaScript tools that do something like FSL's BET.


Make clip plane draggable along each axis

Add handler to allow clip plane to be dragged along each axis. This will update the relative position of the visible plane as well as change the distance from the origin.

create NVImage class

The NVImage class will become the standardised image container with documented properties. This is cleaner than our current approach, where we attach properties to plain image objects from within various functions.
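A hypothetical sketch of the shape such a container could take; every field beyond the class name itself is illustrative, not the final API:

```javascript
// Sketch of an NVImage container: one place to declare and document
// the properties that are currently attached ad hoc across functions.
class NVImage {
  constructor(name, hdr, img) {
    this.name = name;       // file name or user-supplied label
    this.hdr = hdr;         // parsed NIfTI header object
    this.img = img;         // typed array of voxel data
    this.cal_min = hdr ? hdr.cal_min : 0; // display range minimum
    this.cal_max = hdr ? hdr.cal_max : 0; // display range maximum
    this.opacity = 1.0;     // blending opacity, 0..1
    this.colormap = 'gray'; // lookup table name
  }
}
```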

add niivue.syncWith so that two or more niivue instances can stay in sync

adding a feature like niivue.syncWith(otherNiivueInstance) could allow one niivue instance to drive another. Useful for viewing images that are coregistered but viewed in two different panels. Similar to the MRIcroGL yoke window functionality.

This could allow the crosshair locations to be updated together in real time.
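One possible shape for the yoking, sketched with hypothetical method names (`setCrosshair`, `onCrosshairMoved`); only `syncWith` itself comes from this issue:

```javascript
// Sketch of one-way yoking: the driving instance broadcasts its
// crosshair position (in mm) to every peer registered via syncWith.
class SyncHub {
  constructor() {
    this.peers = []; // yoked niivue-like instances
  }
  syncWith(other) {
    this.peers.push(other);
  }
  // call this from the driving instance's crosshair-change handler
  onCrosshairMoved(mm) {
    for (const peer of this.peers) peer.setCrosshair(mm); // hypothetical setter
  }
}
```

Two-way sync would register each instance with the other, with a guard to avoid update loops.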

WebGPU

NiiVue uses WebGL2, which supports features unavailable in WebGL1 (3D textures and the ability to write to the depth buffer). These features allow lean code that is easy to maintain and leverages modern graphics cards well. The timing is nice, as Safari has just started to support WebGL2.

However, since @cdrake is working on making our GL code modular, it is worth keeping the WebGL code separate from the other logic, in case at some future date we decide to target other graphics APIs.

WebGPU looks like the next-generation web graphics API. Will Usher (who wrote some of the code we already leverage) just released a minimal WebGPU volume renderer. Few browsers support it yet.

Reading the code reveals some interesting limitations of the current WebGPU implementation

I suspect this is years away from being prime time, and that WebGL 2 will be supported for many years. However, it is worth keeping an eye on emerging technologies.

update readme

The readme needs to be updated to reflect the latest information.

to add:

  • contributing
  • goals/mission statement
  • testing info
  • automated checks
  • protected branches
  • funding (P50 DC014664/DC/NIDCD NIH HHS/United States) 2021-2022
  • usage examples
  • import .md docs from hanayik/niivue

implement 2D slice zooming feature

This feature would enable users to zoom in on 2D slices.

We already devote mouse wheel events to slice scrolling in 2D view modes. So, in order to add slice zooming as well, I propose that we implement a "z + scroll" event handler.

We would update the zoom level under the following conditions:

  • "z" is pressed down
  • mouse wheel event
  • sliceType != sliceTypeRender
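
The conditions above could drive a small zoom-update helper. The clamp range, the 1.1 step, the `sliceTypeRender` constant value, and the function name are all placeholder assumptions for illustration:

```javascript
const sliceTypeRender = 4; // placeholder constant for the 3D render view

// Compute the next zoom level for a wheel event during a "z + scroll"
// gesture; returns the zoom unchanged when the gesture is inactive.
function nextZoom(zoom, wheelDeltaY, zKeyDown, sliceType) {
  if (!zKeyDown || sliceType === sliceTypeRender) return zoom; // gesture inactive
  const factor = wheelDeltaY < 0 ? 1.1 : 1 / 1.1; // scroll up zooms in
  return Math.min(8, Math.max(1, zoom * factor)); // clamp zoom to [1, 8]
}
```

The `zKeyDown` flag would be tracked by keydown/keyup listeners on the document.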

Use web worker for calMinMax

We can use web worker transferable objects to share data to and from web workers quickly (no need to copy the data, which is slow).

This would let us spawn web workers to run long calculations such as calMinMax, which visits every voxel. Right now, if we want more than one niivue instance on a page, they all share a single thread when running these long tasks (which is slow).

WIP: write all niivue unit tests

How did I get this list?

cat src/niivue.js| grep prototype | tr -d '{' | sed -e 's/^/- [ ] /'

  • Niivue.prototype.attachTo = function (id)
  • Niivue.prototype.arrayEquals = function(a, b)
  • Niivue.prototype.getRelativeMousePosition = function(event, target)
  • Niivue.prototype.getNoPaddingNoBorderCanvasRelativeMousePosition = function(event, target)
  • Niivue.prototype.mouseContextMenuListener = function(e)
  • Niivue.prototype.mouseDownListener = function(e)
  • Niivue.prototype.mouseLeftButtonHandler = function(e, rect)
  • Niivue.prototype.mouseRightButtonHandler = function(e, rect)
  • Niivue.prototype.calculateMinMaxVoxIdx = function(array)
  • Niivue.prototype.calculateNewRange = function()
  • Niivue.prototype.mouseUpListener = function()
  • Niivue.prototype.mouseMoveListener = function(e)
  • Niivue.prototype.resetBriCon = function()
  • Niivue.prototype.wheelListener = function(e)
  • Niivue.prototype.mouseDown = function mouseDown(x, y)
  • Niivue.prototype.mouseMove = function mouseMove(x, y)
  • Niivue.prototype.sph2cartDeg = function sph2cartDeg(azimuth, elevation)
  • Niivue.prototype.clipPlaneUpdate = function (azimuthElevationDepth)
  • Niivue.prototype.setCrosshairColor = function (color)
  • Niivue.prototype.setSelectionBoxColor = function (color)
  • Niivue.prototype.sliceScroll2D = function (posChange, x, y, isDelta=true)
  • Niivue.prototype.setSliceType = function(st)
  • Niivue.prototype.setOpacity = function (volIdx, newOpacity)
  • Niivue.prototype.setScale = function (scale)
  • Niivue.prototype.overlayRGBA = function (volume)
  • Niivue.prototype.vox2mm = function (XYZ, mtx )
  • Niivue.prototype.nii2RAS = function (overlayItem)
  • Niivue.prototype.loadVolumes = function(volumeList)
  • Niivue.prototype.rgbaTex = function(texID, activeID, dims, isInit=false)
  • Niivue.prototype.loadPng = function(pngName)
  • Niivue.prototype.initText = async function ()
  • Niivue.prototype.init = async function ()
  • Niivue.prototype.updateGLVolume = function() //load volume or change contrast
  • Niivue.prototype.calMinMaxCore = function(overlayItem, img, percentileFrac=0.02, ignoreZeroVoxels = false)
  • Niivue.prototype.calMinMax = function(overlayItem, img, percentileFrac=0.02, ignoreZeroVoxels = false)
  • Niivue.prototype.refreshLayers = function(overlayItem, layer, numLayers)
  • Niivue.prototype.colormap = function(lutName = "")
  • Niivue.prototype.refreshColormaps = function()
  • Niivue.prototype.makeLut = function(Rs, Gs, Bs, As, Is)
  • Niivue.prototype.sliceScale = function()
  • Niivue.prototype.mouseClick = function(x, y, posChange=0, isDelta=true)
  • Niivue.prototype.drawSelectionBox = function(leftTopWidthHeight)
  • Niivue.prototype.drawColorbar = function(leftTopWidthHeight)
  • Niivue.prototype.textWidth = function(scale, str)
  • Niivue.prototype.drawChar = function(xy, scale, char) //draw single character, never call directly: ALWAYS call from drawText()
  • Niivue.prototype.drawText = function(xy, str) //to right of x, vertically centered on y
  • Niivue.prototype.drawTextRight = function(xy, str) //to right of x, vertically centered on y
  • Niivue.prototype.drawTextBelow = function(xy, str) //horizontally centered on x, below y
  • Niivue.prototype.draw2D = function(leftTopWidthHeight, axCorSag)
  • Niivue.prototype.draw3D = function()
  • Niivue.prototype.mm2frac = function(mm )
  • Niivue.prototype.vox2frac = function(vox)
  • Niivue.prototype.frac2vox = function(frac)
  • Niivue.prototype.frac2mm = function(frac)
  • Niivue.prototype.canvasPos2frac = function(canvasPos)
  • Niivue.prototype.scaleSlice = function(w, h)
  • Niivue.prototype.drawScene = function()

update docs and live demos

Include @neurolabusc's docs from the old niivue repo, and separate the live niivue demos into individual pages, one per demo. We will have one index.html with links to the other pages that show off features.

update loadVolumes to set canvas text to loading...

At the beginning of loadVolumes we should draw "loading..." on the canvas to give the user a visual indication that images are loading in the background.

This can be set or cleared in progress events from #45

Depth Picking and Crosshairs for Volume Rendering

The feature request is to add optional crosshairs to the volume rendering and allow depth picking, where clicking on the volume rendering adjusts the crosshair position.

I had formerly thought this was tricky, but it turns out it only requires a few lines of code.

This was inspired by this feature request:

You can try this out with the latest MRIcroGL pre-release (v1.2.20210816). When you choose "Multi-Planar A+C+S+R" from the "Display" menu, you will see all four views show a crosshair, and all interactively move to the location of a mouse click:

The simple formula is described here:

Add clip plane

Add a visible clip plane that will clip the volume in the appropriate plane.

Show edges for overlays

A common quality-assurance task in FSL and AFNI is to test the alignment of different images, e.g. whether an fMRI scan is aligned to the individual's T1 scan, or whether one individual's T1 scan is aligned to a group template. Both FSL and AFNI's chauffeur provide methods to show the edges of one image on top of another. AFNI provides references for the method it uses, which seems very similar to a Sobel filter. Here is my implementation of a 3D Sobel. While the wiki describes a separable formula, it seems to require a lot of memory, and the direct 3D computation is very quick.

I think a good first application for NiiVue is to support these FSL and AFNI QA displays on an interactive web page.
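As a simpler cousin of the 3D Sobel, here is a sketch of an edge map built from central-difference gradient magnitudes; the `dims` layout and x-fastest voxel ordering are assumptions for illustration:

```javascript
// 3D edge map via gradient magnitude (central differences). `img` is a
// flat Float32Array in x-fastest order; dims = [nx, ny, nz]. Border
// voxels are left at zero for simplicity.
function edgeMagnitude3D(img, dims) {
  const [nx, ny, nz] = dims;
  const idx = (x, y, z) => x + nx * (y + ny * z);
  const out = new Float32Array(img.length);
  for (let z = 1; z < nz - 1; z++)
    for (let y = 1; y < ny - 1; y++)
      for (let x = 1; x < nx - 1; x++) {
        const gx = img[idx(x + 1, y, z)] - img[idx(x - 1, y, z)];
        const gy = img[idx(x, y + 1, z)] - img[idx(x, y - 1, z)];
        const gz = img[idx(x, y, z + 1)] - img[idx(x, y, z - 1)];
        out[idx(x, y, z)] = Math.sqrt(gx * gx + gy * gy + gz * gz);
      }
  return out;
}
```

A true Sobel additionally smooths perpendicular to each derivative axis, which suppresses noise at the cost of more arithmetic per voxel.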

FSL style:
func2standard

MRIcroGL Sobel:

MRIcroGL_Sobel
