girder / large_image

Home Page: http://girder.github.io/large_image/

License: Apache License 2.0


large_image's Introduction

Large Image


Python modules to work with large, multiresolution images.

Large Image is developed and maintained by the Data & Analytics group at Kitware, Inc. for processing large geospatial and medical images. It provides the backbone for several of our image analysis platforms, including Resonant GeoData, HistomicsUI, and the Digital Slide Archive.

Highlights

  • Tile serving made easy
  • Supports a wide variety of geospatial and medical image formats
  • Convert to tiled Cloud Optimized (Geo)Tiffs (also known as pyramidal tiffs)
  • Python methods for retiling or accessing regions of images efficiently (see the example after this list)
  • Options for restyling tiles, such as dynamically applying color and band transforms
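
For a quick sense of the Python API, here is a minimal sketch; the file path is a placeholder, and exact call signatures can vary between large_image releases:

import large_image
from large_image.constants import TILE_FORMAT_NUMPY

# Open any supported file; large_image picks a suitable tile source.
source = large_image.open('sample.svs')  # placeholder path

# Basic metadata: size in pixels, tile size, number of levels, magnification.
print(source.getMetadata())

# Fetch a thumbnail as PNG bytes at a requested width.
thumb_bytes, thumb_mime = source.getThumbnail(width=256, encoding='PNG')

# Read an arbitrary region as a numpy array; styles (such as color and band
# transforms) can similarly be applied when opening the source.
region, _ = source.getRegion(
    region=dict(left=1000, top=1000, right=2000, bottom=2000),
    format=TILE_FORMAT_NUMPY)
print(region.shape)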

Installation

In addition to installing the base large-image package, you'll need at least one tile source which corresponds to your target file format(s) (a large-image-source-xxx package). You can install everything from the main project with one of these commands:

Pip

Install common tile sources on linux, OSX, or Windows:

pip install large-image[common]

Install all tile sources on linux:

pip install large-image[all] --find-links https://girder.github.io/large_image_wheels

When using large-image with an instance of Girder, install all tile sources and all Girder plugins on linux:

pip install large-image[all] girder-large-image-annotation[tasks] --find-links https://girder.github.io/large_image_wheels

Conda

Conda makes dependency management a bit easier if not on Linux. The base module, converter module, and two of the source modules are available on conda-forge. You can install the following:

conda install -c conda-forge large-image
conda install -c conda-forge large-image-source-gdal
conda install -c conda-forge large-image-source-tiff
conda install -c conda-forge large-image-converter

Docker Image

Included in this repository’s packages is a pre-built Docker image that has all of the dependencies to read any supported image format.

This is particularly useful if you do not want to install some of the heavier dependencies like GDAL on your system or want a dedicated and isolated environment for working with large images.

To use, pull the image and run it by mounting a local volume where the imagery is stored:

docker pull ghcr.io/girder/large_image:latest
docker run -v /path/to/images:/opt/images ghcr.io/girder/large_image:latest

Modules

Large Image consists of several Python modules designed to work together. These include:

  • large-image: The core module.

    You can specify an extras_require with the name of any tile source included in this repository; for instance, you can do pip install large-image[tiff]. There are additional extras_require options:

    • sources: all of the tile sources in the repository (as opposed to a single named source such as tiff)
    • memcached: use memcached for tile caching
    • converter: include the converter module
    • colormaps: use matplotlib for named color palettes used in styles
    • tiledoutput: support for emitting large regions as tiled tiffs
    • performance: include optional modules that can improve performance
    • common: the tile sources and the packages above that will install directly from pypi without other external libraries on linux, OSX, and Windows
    • all: all of the above
  • large-image-converter: A utility for using pyvips and other libraries to convert images into pyramidal tiff files that can be read efficiently by large_image. You can specify an extras_require of:

    • jp2k: modules that allow output with JPEG2000 compression
    • sources: all of the sources
    • stats: modules that allow computing compression noise statistics
    • geospatial: support for converting geospatial sources
    • all: all of the optional extras_require

  • Tile sources:

    • large-image-source-bioformats: A tile source for reading any file handled by the Java Bioformats library.
    • large-image-source-deepzoom: A tile source for reading Deepzoom tiles.
    • large-image-source-dicom: A tile source for reading DICOM Whole Slide Images (WSI).
    • large-image-source-gdal: A tile source for reading geotiff files via GDAL. This handles source data with more complex transforms than the mapnik tile source.
    • large-image-source-mapnik: A tile source for reading geotiff and netcdf files via Mapnik and GDAL. This handles more vector issues than the gdal tile source.
    • large-image-source-multi: A tile source for compositing other tile sources into a single multi-frame source.
    • large-image-source-nd2: A tile source for reading nd2 (NIS Element) images.
    • large-image-source-ometiff: A tile source using the tiff library that can handle most multi-frame OMETiff files that are compliant with the specification.
    • large-image-source-openjpeg: A tile source using the Glymur library to read jp2 (JPEG 2000) files.
    • large-image-source-openslide: A tile source using the OpenSlide library. This works with svs, ndpi, Mirax, tiff, vms, and other file formats.
    • large-image-source-pil: A tile source for small images via the Python Imaging Library (Pillow). By default, the maximum size is 4096, but the maximum size can be configured.
    • large-image-source-tiff: A tile source for reading pyramidal tiff files in common compression formats.
    • large-image-source-tifffile: A tile source using the tifffile library that can handle a wide variety of tiff-like files.
    • large-image-source-vips: A tile source for reading any files handled by libvips. This also can be used for writing tiled images from numpy arrays (up to 4 dimensions).
    • large-image-source-zarr: A tile source using the zarr library that can handle OME-Zarr (OME-NGFF) files as well as some other zarr files. This can also be used for writing N-dimensional tiled images from numpy arrays. Written images can be saved as any supported format (see the sketch after this list).
    • large-image-source-test: A tile source that generates test tiles, including a simple fractal pattern. Useful for testing extreme zoom levels.
    • large-image-source-dummy: A tile source that does nothing. This is an absolutely minimal implementation of a tile source used for testing. If you want to create a custom tile source, start with this implementation.
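
The vips and zarr sources above can also create new images. As referenced in the zarr entry, here is a minimal sketch of writing a tiled image from numpy arrays; it assumes a recent large_image release where large_image.new() returns a writable, zarr-backed image, so names may differ in older versions:

import numpy as np
import large_image

# Create an empty writable image (backed by the zarr source in recent releases).
sink = large_image.new()

# Add tiles (numpy arrays) at pixel offsets; the image grows as tiles are added.
tile = np.random.randint(0, 255, (1024, 1024, 3), dtype=np.uint8)
sink.addTile(tile, x=0, y=0)
sink.addTile(tile, x=1024, y=0)

# Save the result in any supported output format, e.g. a pyramidal tiff.
sink.write('output.tiff')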

As a Girder plugin, large-image adds end points to access all of the image formats it can read both to get metadata and to act as a tile server. In the Girder UI, large-image shows images on item pages, and can show thumbnails in item lists when browsing folders. There is also cache management to balance memory use and speed of response in Girder when large-image is used as a tile server.

Most tile sources can be used with Girder Large Image. You can specify an extras_require of girder to install the following packages:

  • girder-large-image: Large Image as a Girder 3.x plugin. You can install large-image[tasks] to install a Girder Worker task that can convert otherwise unreadable images to pyramidal tiff files.
  • girder-large-image-annotation: Adds models to the Girder database for supporting annotating large images. These annotations can be rendered on images. Annotations can include polygons, points, image overlays, and other types. Each annotation can have a label and metadata.
  • large-image-tasks: A utility for running the converter via Girder Worker. You can specify an extras_require of girder to include modules needed to work with the Girder remote worker or worker to include modules needed on the remote side of the Girder remote worker. If neither is specified, some conversion tasks can be run using Girder local jobs.

large_image's People

Contributors: annehaley, banesullivan, brianhelba, cdeepakroy, dependabot[bot], dgutman, dhandeo, dorukozturk, eagw, gabrielle6249, jbeezley, jeffbaumes, jonasteuwen, law12019, manthey, mcovalt, msmolens, naglepuff, nipeone, psavery, salamb, sgratzl, subinkitware, willdunklin, zachmullen


large_image's Issues

Bad files stop thumbnail creation job

When trying to generate a thumbnail of the SVS file TCGA-A8-A06U-01A-01-TS1.63824040-373f-4c6c-a74e-881c127567a6.svs, an error occurs that stops the thumbnail generation job.

This SVS file has only one level, and, when we attempt to read some parts of the image, the OpenSlide library throws an error.

At the very least, we should make the thumbnail job log and proceed after certain types of errors, skipping the problem files.

memcached appears to be mandatory, install fails

I am trying to install this plugin on CentOS 6. I was able to install memcached and memcached-devel using the standard package manager, but the large_image setup complained about missing memcached headers. I then removed memcached, re-ran the setup script, and I am still getting those missing-header errors. Is it supposed to work without memcached? The instructions say it is optional. Any suggestions on how to resolve the install problem?

Installed /home/af61/girder_env/lib/python2.7/site-packages/large_image-0.2.0-py2.7.egg
Processing dependencies for large-image==0.2.0
Searching for pylibmc>=1.5.1
Reading https://pypi.python.org/simple/pylibmc/
Downloading https://pypi.python.org/packages/23/f4/3904b7171e61a83eafee0ed3b1b8efe4d3c6ddc05f7ebdff1831cf0e15f1/pylibmc-1.5.1.tar.gz#md5=9077704e34afc8b6c7b0b686ae9579de
Best match: pylibmc 1.5.1
Processing pylibmc-1.5.1.tar.gz
Writing /tmp/easy_install-EN12Mo/pylibmc-1.5.1/setup.cfg
Running pylibmc-1.5.1/setup.py -q bdist_egg --dist-dir /tmp/easy_install-EN12Mo/pylibmc-1.5.1/egg-dist-tmp-2MIdfj
warning: no files found matching 'runtests.py'
warning: no files found matching '*.py' under directory 'pylibmc'
In file included from src/_pylibmcmodule.c:34:
src/_pylibmcmodule.h:42:36: error: libmemcached/memcached.h: No such file or directory

Adding Nested Metadata to DSA via Girder

Can someone explain/walk me through the following process...

So for each image we have ingested into GIRDER from the TCGA collection, I want to do the following.

  1. Create a new metadata object called "SLIDE IMAGE PROPERTIES"

  2. This item then is actually a second JSON object of "stuff"... basically whatever I pull out of the SVS file header

  3. There are going to be certain properties I want to be easily visible... like slide magnification and image size, there's also a lot of crap in there I am happy to include but not necessarily make a top level property (i.e. store it, but not necessarily make it have first tier visibility)

So I would basically create an object like

{
    "Slide Image Properties": {
        "Native Resolution": "40X",
        "Orig Width": 25000,
        "Orig Height": 100000,
        "Orig Encoding": "JPEG2000",
        "HeaderData": { BIG LONG JSON OBJECT }
    }
}

I imagine this element should appear "collapsed" but have a plus button, and then when I click on it I can see all those other properties...

  1. This data should be searchable (somehow)

  2. I should also have some sort of backend (ipython/whatever) script that would allow me to generate that data AFTER slide ingestion if it was missing and/or this was done as a separate process, as sketched below
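
A minimal sketch of such a backend script, assuming girder_client and openslide are available; the API URL, credentials, and the choice of header properties are illustrative placeholders:

import girder_client
import openslide

gc = girder_client.GirderClient(apiUrl='https://girder.example.com/api/v1')  # placeholder URL
gc.authenticate(apiKey='PLACEHOLDER')

def add_slide_properties(item_id, svs_path):
    # Read the SVS header via openslide and attach it as nested item metadata.
    props = dict(openslide.OpenSlide(svs_path).properties)
    metadata = {
        'Slide Image Properties': {
            'Native Resolution': props.get('aperio.AppMag'),
            'Orig Width': props.get('openslide.level[0].width'),
            'Orig Height': props.get('openslide.level[0].height'),
            # Everything else: stored, but not given first-tier visibility.
            'HeaderData': props,
        }
    }
    gc.addMetadataToItem(item_id, metadata)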

Add getRegionAtAnotherScale() - converts region from one scale to another

Considering the region spec is the following dictionary:

  • left: the left edge (inclusive) of the region to process.
  • top: the top edge (inclusive) of the region to process.
  • right: the right edge (exclusive) of the region to process.
  • bottom: the bottom edge (exclusive) of the region to process.
  • width: the width of the region to process.
  • height: the height of the region to process.
  • units: either 'base_pixels' (default), 'pixels', 'mm', or 'fraction'.

Add a function getRegionAtAnotherScale(source_region, target_scale, source_scale=None) that takes a region at source_scale and returns the region at target_scale. If the units of source_region are pixels, then source_scale must not be None.
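
A rough sketch of what such a helper could look like (hypothetical; the scale dictionaries are assumed to carry a 'magnification' key, and only 'pixels' units need rescaling):

def getRegionAtAnotherScale(source_region, target_scale, source_scale=None):
    # Regions in 'base_pixels', 'mm', or 'fraction' units are independent of
    # scale, so only 'pixels' regions need to be converted.
    region = dict(source_region)
    if region.get('units', 'base_pixels') != 'pixels':
        return region
    if source_scale is None:
        raise ValueError(
            "source_scale is required when source_region units are 'pixels'")
    factor = float(target_scale['magnification']) / source_scale['magnification']
    for key in ('left', 'top', 'right', 'bottom', 'width', 'height'):
        if key in region:
            region[key] = region[key] * factor
    return region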

Enhance TileSource API to support analysis of all tiles of a whole-slide image

This issue involves enhancing the interface of the TileSource class to support the analysis of all tiles in a whole-slide image with a simplified interface.

To start a discussion, I have summarized below the requirements that are on my mind:

  • an iterator to iterate through all tiles of a whole-slide image at any desired magnification: The iterator should probably return a tuple with (top-left-x, top-left-y, width, height, tile-image). @cooperlab Do we need any other information? This iterator is probably the most important of the requirements listed here (see the sketch after this list).
  • A member function to add/delete layers of the image pyramid
  • We need a member function to grab corresponding tiles at each resolution. Lee, can you please elaborate on the need for this? There is a function called getTile(x, y, z) which probably does this. @manthey Can you please confirm or explain what it does?
  • A member function to get an arbitrary region of the whole-slide image: The function could take (top-left-x, top-left-y, width, height) as arguments. It looks like TileSource.getRegion(width, height, **kwargs) already does this?
  • Good docstrings that clearly explain what all the member functions do.
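
For reference, a sketch of how the current tileIterator exposes this per-tile information; the dictionary keys shown are assumptions based on recent large_image releases and the path is a placeholder:

import large_image
from large_image.constants import TILE_FORMAT_NUMPY

source = large_image.open('slide.svs')  # placeholder path
for tile in source.tileIterator(
        scale={'magnification': 20},
        tile_size=dict(width=512, height=512),
        format=TILE_FORMAT_NUMPY):
    # Each tile reports its position and size along with the pixel data.
    x, y = tile['x'], tile['y']
    width, height = tile['width'], tile['height']
    image = tile['tile']  # numpy array of shape (height, width, bands)
    # ... analyze the tile here ...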

Add python classes for annotation types

Each annotation shape should have an associated python class. Those classes could have a static method for generating the json schema, initialization functions that can construct the shape, a json output function, and any manipulation functions we decide to add (translation, rotation, etc).
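
A hypothetical sketch of what one such class might look like; the class name, method names, and schema fields below are illustrative only, not the plugin's actual API:

class PolylineAnnotation:
    """Illustrative annotation shape class (hypothetical)."""

    def __init__(self, points, closed=False, fillColor=None):
        self.points = [list(p) for p in points]
        self.closed = closed
        self.fillColor = fillColor

    @staticmethod
    def jsonSchema():
        # Static description of the shape's JSON schema.
        return {
            'type': 'object',
            'properties': {
                'type': {'enum': ['polyline']},
                'points': {'type': 'array', 'items': {
                    'type': 'array', 'minItems': 3, 'maxItems': 3,
                    'items': {'type': 'number'}}},
                'closed': {'type': 'boolean'},
                'fillColor': {'type': 'string'},
            },
            'required': ['type', 'points'],
        }

    def toJson(self):
        # JSON output matching the polyline example shown below.
        return {'type': 'polyline', 'points': self.points,
                'closed': self.closed, 'fillColor': self.fillColor}

    def translate(self, dx, dy, dz=0):
        # Example manipulation function.
        self.points = [[x + dx, y + dy, z + dz] for x, y, z in self.points]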

Polyline annotation schema is not correct

The example from the docs fails to validate:

{
    "type": "polyline",
    "points": [
      [5,6,0],
      [-17,6,0],
      [56,-45,6]
    ],
    "closed": true,
    "fillColor": "rgba(0, 255, 0, 1)"
}

Add a tile size parameter for TileSource.tileIterator

This is to provide the ability to use a custom tile size when iterating through a whole slide image at a desired magnification.

Example usage:

ts = large_image.getTileSource(wsi_path)
for tile in ts.tileIterator(scale={'magnification': 20}, 
                            tile_size=(tile_height, tile_width)):
    pass

No module named server

I had to remake my virtual environment after updating python. Now I'm getting an error starting girder:

Running in mode: development
Connected to MongoDB: mongodb://localhost:27017/girder
Using memcached for large_image caching
Traceback (most recent call last):
  File "/usr/local/Cellar/python/2.7.12_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py", line 174, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/local/Cellar/python/2.7.12_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/Users/jbeezley/emory/girder/girder/__main__.py", line 57, in <module>
    main()
  File "/Users/jbeezley/emory/girder/girder/__main__.py", line 50, in main
    server.setup(args.testing)
  File "girder/utility/server.py", line 145, in setup
    root, appconf = configureServer(test, plugins, curConfig)
  File "girder/utility/server.py", line 115, in configureServer
    plugins, curConfig, ignoreMissing=True))
  File "girder/utility/plugin_utilities.py", line 114, in getToposortedPlugins
    allPlugins = findAllPlugins(curConfig)
  File "girder/utility/plugin_utilities.py", line 362, in findAllPlugins
    findEntryPointPlugins(allPlugins)
  File "girder/utility/plugin_utilities.py", line 349, in findEntryPointPlugins
    data = getattr(entry_point.load(), 'config', {})
  File "/Users/jbeezley/emory/env/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2264, in load
    return self.resolve()
  File "/Users/jbeezley/emory/env/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2270, in resolve
    module = __import__(self.module_name, fromlist=['__name__'], level=0)
ImportError: No module named server

This led me to this strangeness:

>>> from large_image import server
Using memcached for large_image caching
>>> server.load
<function load at 0x10ee81de8>
>>> from large_image.server import load
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named server

Anyone seen this before? My sys.path looks like this:

/Users/jbeezley/emory/env/lib/python27.zip
/Users/jbeezley/emory/env/lib/python2.7
/Users/jbeezley/emory/env/lib/python2.7/plat-darwin
/Users/jbeezley/emory/env/lib/python2.7/plat-mac
/Users/jbeezley/emory/env/lib/python2.7/plat-mac/lib-scriptpackages
/Users/jbeezley/emory/env/lib/python2.7/lib-tk
/Users/jbeezley/emory/env/lib/python2.7/lib-old
/Users/jbeezley/emory/env/lib/python2.7/lib-dynload
/usr/local/Cellar/python/2.7.12_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7
/usr/local/Cellar/python/2.7.12_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-darwin
/usr/local/Cellar/python/2.7.12_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk
/usr/local/Cellar/python/2.7.12_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac
/usr/local/Cellar/python/2.7.12_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac/lib-scriptpackages
/Users/jbeezley/emory/env/lib/python2.7/site-packages
/Users/jbeezley/emory/girder
/Users/jbeezley/emory/girder/plugins/large_image
/Users/jbeezley/emory/girder_worker

Report when a conversion job fails

When a job tries to convert a file to a ptif and fails, the job's status is properly marked as ERROR, but this doesn't clear the largeImage data, nor is the failure reported in any clear manner in the UI.

We listen for data.process to see when a new ptif file appears, but we also need to listen on model.job.save for error or cancel messages.

Shortcut large_image creation on a non-filesystem assetstore does not work

If an otherwise-valid pyramidal TIFF in a non-filesystem assetstore is marked as a large_image, a new conversion process is started, instead of failing directly.

The problem is that a failure to read due to non-pyramidal file content and a failure to read due to being in a non-filesystem assetstore both return the same exception type.

Add an image region cutout endpoint

This HTTP endpoint should allow users to fetch a high-quality arbitrary region of a large image, in a variety of formats.

A few requirements:

  • The size and position of the region may be specified by the user. This could be implemented as either parameters for the top-left and bottom-right edges of the image, or as an offset value and size of the image. Either way, the coordinates should be specified in base (highest resolution) layer pixel coordinates, with the origin at the top-left.
  • By default, the output images should be at the original size. If a user attempts to fetch a large portion of a massive image, the endpoint is allowed to fail.
    • Failure should be graceful, with a sane error message that the requested region is too large.
  • Options for JPEG quality, etc. should be exposed.
  • Cookie auth should be permitted.

Plugin fails to initialize when using an older memcached

On travis with Ubuntu 14.04 and libmemcached 0.44, large_image fails to start with the following error:

ERROR: Failed to load plugin "large_image":
Traceback (most recent call last):
  File "girder/utility/plugin_utilities.py", line 87, in loadPlugins
    root, appconf, apiRoot = loadPlugin(plugin, root, appconf, apiRoot)
  File "girder/utility/plugin_utilities.py", line 212, in loadPlugin
    pluginLoadMethod(info)
  File "girder/utility/plugin_utilities.py", line 416, in wrapped
    return func(*arg, **kw)
  File "/home/travis/build/DigitalSlideArchive/digital_slide_archive/build/girder/plugins/large_image/server/base.py", line 198, in load
    from .rest import TilesItemResource, LargeImageResource, AnnotationResource
  File "/home/travis/build/DigitalSlideArchive/digital_slide_archive/build/girder/plugins/large_image/server/rest/__init__.py", line 20, in <module>
    from .tiles import TilesItemResource
  File "/home/travis/build/DigitalSlideArchive/digital_slide_archive/build/girder/plugins/large_image/server/rest/tiles.py", line 29, in <module>
    from ..models import TileGeneralException
  File "/home/travis/build/DigitalSlideArchive/digital_slide_archive/build/girder/plugins/large_image/server/models/__init__.py", line 23, in <module>
    from .image_item import ImageItem
  File "/home/travis/build/DigitalSlideArchive/digital_slide_archive/build/girder/plugins/large_image/server/models/image_item.py", line 33, in <module>
    from ..tilesource import AvailableTileSources, TileSourceException
  File "/home/travis/build/DigitalSlideArchive/digital_slide_archive/build/girder/plugins/large_image/server/tilesource/__init__.py", line 22, in <module>
    from .base import TileSource, getTileSourceFromDict, TileSourceException, \
  File "/home/travis/build/DigitalSlideArchive/digital_slide_archive/build/girder/plugins/large_image/server/tilesource/base.py", line 23, in <module>
    from ..cache_util import tileCache, tileLock, strhash, methodcache
  File "/home/travis/build/DigitalSlideArchive/digital_slide_archive/build/girder/plugins/large_image/server/cache_util/__init__.py", line 20, in <module>
    from .cache import LruCacheMetaclass, tileCache, tileLock, strhash, methodcache
  File "/home/travis/build/DigitalSlideArchive/digital_slide_archive/build/girder/plugins/large_image/server/cache_util/cache.py", line 171, in <module>
    tileCache, tileLock = CacheFactory().getCache()
  File "/home/travis/build/DigitalSlideArchive/digital_slide_archive/build/girder/plugins/large_image/server/cache_util/cachefactory.py", line 111, in getCache
    cache = MemCache(url, memcachedUsername, memcachedPassword)
  File "/home/travis/build/DigitalSlideArchive/digital_slide_archive/build/girder/plugins/large_image/server/cache_util/memcache.py", line 56, in __init__
    behaviors=behaviors)
  File "/home/travis/virtualenv/python2.7.9/lib/python2.7/site-packages/pylibmc/client.py", line 143, in __init__
    behaviors=_behaviors_numeric(behaviors))
  File "/home/travis/virtualenv/python2.7.9/lib/python2.7/site-packages/pylibmc/client.py", line 112, in _behaviors_numeric
    raise ValueError("unknown behavior names: %s" % (names,))
ValueError: unknown behavior names: dead_timeout

Here is the libmemcached package information:

$ apt-cache show libmemcached-dev
Package: libmemcached-dev
Priority: extra
Section: libdevel
Installed-Size: 1028
Maintainer: Ubuntu Developers <[email protected]>
Original-Maintainer: Monty Taylor <[email protected]>
Architecture: amd64
Source: libmemcached
Version: 0.44-1.1build1
Depends: libmemcached6 (= 0.44-1.1build1), libhashkit0 (= 0.44-1.1build1), libmemcachedutil1 (= 0.44-1.1build1), libmemcachedprotocol0 (= 0.44-1.1build1)
Filename: pool/main/libm/libmemcached/libmemcached-dev_0.44-1.1build1_amd64.deb
Size: 313510
MD5sum: 173cec20017e9ee70d1b6b46493da844
SHA1: f9ef8cadc8061725c9cee3ce7dfd164fd3bc2f59
SHA256: 578e778fd93b13245be43cb869b20e2aedfc53acdd8fe231ca38ef4e69b019dd
Description: Development files for libmemcached
Homepage: http://tangent.org/552/libmemcached.html
Description-md5: ee21fa04cba54da4c95743721db9bde5
Bugs: https://bugs.launchpad.net/ubuntu/+filebug
Origin: Ubuntu
Supported: 5y

Test and upgrade to Numpy 1.10.4

Numpy 1.10.4 is now released on Conda. If it works, we should upgrade (requirements.txt and Travis build) to use it.

I have noticed some weird behavior with pylibtiff on a machine where Numpy 1.10.4 was installed, but I haven't been able to definitively blame it on Numpy 1.10.4 yet. If it turns out there is a problem, we need to affirmatively blacklist this version of Numpy and document this issue.

Clean up abandoned jobs

If a job fails to complete (because girder is stopped, for instance), an item is marked as waiting for a large_image. The UI doesn't allow cancelling (the large_image information can be deleted via a REST call, though).

Add an endpoint that would clean up all jobs that are in-progress and over a given age, since they are probably stuck.

When an imported file is no longer there, error messages are misleading

Import a file via the assetstore import function. Rename the file. Try to convert the file to a large_image. The error will claim vips can't convert the file, which, while true, is because the file is unreachable. girder_worker should be smarter than that, and report that the file is unavailable.

Add support for writing a tiled image

This should be able to write an image tile-by-tile from source data. Once all tiles are present, we need a method to generate the other levels (either by going through vips or by generating them ourselves).

Add documentation to install large_image for use as a python toolkit

This issue involves adding a section to README.rst on how to install large_image for use as a python toolkit.

Prior to installing large_image using python setup.py install or pip install -e large_image, openslide and its dependencies need to be installed. To begin with, we can just have documentation for ubuntu 14.04 or ubuntu 16.04, but later we will need to add this for osx and windows.

Support small images

Optionally support small images without having to convert them to a tiled image. There should be an upper size limit beyond which this refuses to work so as not to use too much memory.

The UI should only show these if they have been marked in some way, which could happen automatically (off by default).

Pylibtiff segfaults with Numpy 1.10.4

This can be reproduced on a fresh Vagrant VM, with Numpy 1.10.4 installed from Conda or pip. It does not occur with Numpy 1.10.42.

The segfault occurs whenever Pylibtiff's tif_lzw file is imported. However, it only occurs if import numpy has already been run within the Python environment. Note that import libtiff or from libtiff import tif_lzw will always cause the segfault, as those will cause numpy to be imported first. Running a Python shell from inside the libtiff module and simply running import tif_lzw will not cause the crash.

The full C stack trace is:

#0  0x00007ffff5a40de0 in PyArray_API () from /home/vagrant/env/lib/python2.7/site-packages/numpy/core/multiarray.so
#1  0x00007fffed32c4bc in _import_array () at /home/vagrant/env/lib/python2.7/site-packages/numpy/core/include/numpy/__multiarray_api.h:1673
#2  inittif_lzw () at libtiff/src/tif_lzw.c:1310
#3  0x00007ffff7b07905 in _PyImport_LoadDynamicModule (name=0xa91870 "libtiff.tif_lzw", pathname=0xa92880 "/home/vagrant/pylibtiff/libtiff/tif_lzw.so", fp=<optimized out>)
    at ./Python/importdl.c:53
#4  0x00007ffff7b05f81 in import_submodule (mod=0x7ffff7ea5bb0, subname=0xa91878 "tif_lzw", fullname=0xa91870 "libtiff.tif_lzw") at Python/import.c:2704
#5  0x00007ffff7b061f4 in load_next (mod=0x7ffff7ea5bb0, altmod=0x7ffff7da3cd0 <_Py_NoneStruct>, p_name=<optimized out>, buf=0xa91870 "libtiff.tif_lzw", p_buflen=0x7fffffffc8e0)
    at Python/import.c:2519
#6  0x00007ffff7b06820 in import_module_level (level=<optimized out>, fromlist=0x7ffff7da3cd0 <_Py_NoneStruct>, locals=<optimized out>, globals=<optimized out>, name=0x0)
    at Python/import.c:2228
#7  PyImport_ImportModuleLevel (name=<optimized out>, globals=<optimized out>, locals=<optimized out>, fromlist=0x7ffff7da3cd0 <_Py_NoneStruct>, level=<optimized out>)
    at Python/import.c:2292
#8  0x00007ffff7ae614f in builtin___import__ (self=<optimized out>, args=<optimized out>, kwds=<optimized out>) at Python/bltinmodule.c:49
#9  0x00007ffff7a3cd23 in PyObject_Call (func=0x7ffff7fdbfc8, arg=<optimized out>, kw=<optimized out>) at Objects/abstract.c:2546
#10 0x00007ffff7ae6633 in PyEval_CallObjectWithKeywords (func=0x7ffff7fdbfc8, arg=0x7fffed531788, kw=<optimized out>) at Python/ceval.c:4219
#11 0x00007ffff7aeb29e in PyEval_EvalFrameEx (f=<optimized out>, throwflag=<optimized out>) at Python/ceval.c:2622
#12 0x00007ffff7af0a2e in PyEval_EvalCodeEx (co=0x7fffed7aedb0, globals=<optimized out>, locals=<optimized out>, args=<optimized out>, argcount=0, kws=0x0, kwcount=0, defs=0x0, 
    defcount=0, closure=0x0) at Python/ceval.c:3582
#13 0x00007ffff7af0b42 in PyEval_EvalCode (co=<optimized out>, globals=<optimized out>, locals=<optimized out>) at Python/ceval.c:669
#14 0x00007ffff7b02a82 in PyImport_ExecCodeModuleEx (name=0xa7a0a0 "libtiff.tiff_sample_plane", co=0x7fffed7aedb0, 
    pathname=0xa855e0 "/home/vagrant/pylibtiff/libtiff/tiff_sample_plane.pyc") at Python/import.c:713
#15 0x00007ffff7b051ce in load_source_module (name=0xa7a0a0 "libtiff.tiff_sample_plane", pathname=0xa855e0 "/home/vagrant/pylibtiff/libtiff/tiff_sample_plane.pyc", 
    fp=<optimized out>) at Python/import.c:1103
#16 0x00007ffff7b05f81 in import_submodule (mod=0x7ffff7ea5bb0, subname=0xa7a0a8 "tiff_sample_plane", fullname=0xa7a0a0 "libtiff.tiff_sample_plane") at Python/import.c:2704
#17 0x00007ffff7b061f4 in load_next (mod=0x7ffff7ea5bb0, altmod=0x7ffff7ea5bb0, p_name=<optimized out>, buf=0xa7a0a0 "libtiff.tiff_sample_plane", p_buflen=0x7fffffffce80)
    at Python/import.c:2519
#18 0x00007ffff7b06820 in import_module_level (level=<optimized out>, fromlist=0x7fffee0bd650, locals=<optimized out>, globals=<optimized out>, name=0x0) at Python/import.c:2228
#19 PyImport_ImportModuleLevel (name=<optimized out>, globals=<optimized out>, locals=<optimized out>, fromlist=0x7fffee0bd650, level=<optimized out>) at Python/import.c:2292
#20 0x00007ffff7ae614f in builtin___import__ (self=<optimized out>, args=<optimized out>, kwds=<optimized out>) at Python/bltinmodule.c:49
#21 0x00007ffff7a3cd23 in PyObject_Call (func=0x7ffff7fdbfc8, arg=<optimized out>, kw=<optimized out>) at Objects/abstract.c:2546
#22 0x00007ffff7ae6633 in PyEval_CallObjectWithKeywords (func=0x7ffff7fdbfc8, arg=0x7fffee0b3bf0, kw=<optimized out>) at Python/ceval.c:4219
#23 0x00007ffff7aeb29e in PyEval_EvalFrameEx (f=<optimized out>, throwflag=<optimized out>) at Python/ceval.c:2622
#24 0x00007ffff7af0a2e in PyEval_EvalCodeEx (co=0x7fffee0cf0b0, globals=<optimized out>, locals=<optimized out>, args=<optimized out>, argcount=0, kws=0x0, kwcount=0, defs=0x0, 
    defcount=0, closure=0x0) at Python/ceval.c:3582
#25 0x00007ffff7af0b42 in PyEval_EvalCode (co=<optimized out>, globals=<optimized out>, locals=<optimized out>) at Python/ceval.c:669
#26 0x00007ffff7b02a82 in PyImport_ExecCodeModuleEx (name=0x6da410 "libtiff.tiff_file", co=0x7fffee0cf0b0, pathname=0xa78d60 "/home/vagrant/pylibtiff/libtiff/tiff_file.pyc")
    at Python/import.c:713
#27 0x00007ffff7b051ce in load_source_module (name=0x6da410 "libtiff.tiff_file", pathname=0xa78d60 "/home/vagrant/pylibtiff/libtiff/tiff_file.pyc", fp=<optimized out>)
    at Python/import.c:1103
#28 0x00007ffff7b05f81 in import_submodule (mod=0x7ffff7ea5bb0, subname=0x6da418 "tiff_file", fullname=0x6da410 "libtiff.tiff_file") at Python/import.c:2704
#29 0x00007ffff7b061f4 in load_next (mod=0x7ffff7ea5bb0, altmod=0x7ffff7ea5bb0, p_name=<optimized out>, buf=0x6da410 "libtiff.tiff_file", p_buflen=0x7fffffffd420)
    at Python/import.c:2519
#30 0x00007ffff7b06820 in import_module_level (level=<optimized out>, fromlist=0x7ffff7eaa390, locals=<optimized out>, globals=<optimized out>, name=0x0) at Python/import.c:2228

Note that the transition to Numpy code occurs here: https://github.com/pearu/pylibtiff/blob/master/libtiff/src/tif_lzw.c#L1308

This was likely caused by this change in Numpy: numpy/numpy@adbd6db

Chrome limits number of connections

Chrome limits the number of connections, which makes fetching thumbnails a blocking process. Implement a ring buffer or some other workaround.

Move annotation code to its own plugin

I propose that we eventually move the annotation code into its own plugin, which will then likely depend on large_image. Ideally, I think we'd probably keep both plugins in this repository, so they can share testing, bug tracker, etc. infrastructure, but we'll need to ensure that this works cleanly with the girder installation process, for both development and production.

This has the benefit of allowing users access to the tileserver functionality of large_image, without bringing in the extra endpoints and complexity of the annotation system.

Using as girder plugin causes girder crash

I am trying to use this as a girder plugin (ubuntu 14.04), installing libmemcached-dev and libopenslide-dev to the system and numpy with pip into a virtual env.

When trying to run girder-server, I get the following output:

Running in mode: development
Connected to MongoDB: mongodb://localhost:27017/girder
INFO:girder:Using memcached for large_image caching
Using memcached for large_image caching
Traceback (most recent call last):
  File "/home/vagrant/.virtualenvs/test/bin/girder-server", line 11, in <module>
    load_entry_point('girder', 'console_scripts', 'girder-server')()                                                           
  File "/home/vagrant/girder/girder/__main__.py", line 50, in main                                                             
    server.setup(args.testing)                                                                                                 
  File "/home/vagrant/girder/girder/utility/server.py", line 145, in setup                                                     
    root, appconf = configureServer(test, plugins, curConfig)                                                                  
  File "/home/vagrant/girder/girder/utility/server.py", line 115, in configureServer                                           
    plugins, curConfig, ignoreMissing=True))                                                                                   
  File "/home/vagrant/girder/girder/utility/plugin_utilities.py", line 114, in getToposortedPlugins                            
    allPlugins = findAllPlugins(curConfig)                                                                                     
  File "/home/vagrant/girder/girder/utility/plugin_utilities.py", line 362, in findAllPlugins                                  
    findEntryPointPlugins(allPlugins)                                                                                          
  File "/home/vagrant/girder/girder/utility/plugin_utilities.py", line 349, in findEntryPointPlugins                           
    data = getattr(entry_point.load(), 'config', {})                                                                           
  File "/home/vagrant/.virtualenvs/test/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2258, in load   
    return self.resolve()                                                                                                      
  File "/home/vagrant/.virtualenvs/test/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2268, in resolve
    raise ImportError(str(exc))                                                                                                
ImportError: 'module' object has no attribute 'load'                                                                           

I think this is occurring when trying to start the large image plugin. I'm not sure if this is a memcached issue or if the module isn't revealing the assumed load function. I tried deciphering the setup.py entry point and it looks like server/__init__.py should be exposing the load function defined in server/base.py, unless the error above is in reference to a different load function.

In comparison, a checkout of the hash 60c32b1 works (before a bunch of the caching stuff).

Add an endpoint to get an image thumbnail

This should return the image at its native aspect ratio (without any extra padding).

The endpoint will probably be at /api/v1/item/tiles/thumbnail. It will probably take the following query string parameters:

  • width, taking an integer, which will rescale the image to be the specified width.
  • height, taking an integer, which will rescale the image to be the specified height. Since the native aspect ratio of the image should be preserved, it is an error for both width and height to be specified at the same time.
  • If neither width nor height is specified, a sensible default for one of these parameters should be used, so that all thumbnails are returned in the same predictable size.
  • encoding, changing the image encoding of the output. At a minimum, JPEG and PNG should be supported.

For now, internally, the "top-level" image of the pyramid can be used to generate the output, even if it means upsampling. We will later add a dedicated endpoint for getting high-quality image regions of arbitrary size and location.
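
For illustration only, a request against the proposed route might look like the following; this is a sketch that assumes the route includes the item id and that a Girder token header is used for auth, with the host, item id, and token as placeholders:

import requests

# Proposed endpoint: /api/v1/item/{itemId}/tiles/thumbnail
resp = requests.get(
    'https://girder.example.com/api/v1/item/ITEM_ID/tiles/thumbnail',
    params={'width': 256, 'encoding': 'PNG'},
    headers={'Girder-Token': 'TOKEN'},
)
resp.raise_for_status()
with open('thumbnail.png', 'wb') as f:
    f.write(resp.content)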

Add thumbnails.

Add thumbnails to the item list. Make this an option via the settings (depends on issue #39).

Add client settings

Add plugin settings that allow turning on or off showing the client viewer.

Also allow setting the default viewer.
