
ome / bioformats


Bio-Formats is a Java library for reading and writing data in life sciences image file formats. It is developed by the Open Microscopy Environment. Bio-Formats is released under the GNU General Public License (GPL); commercial licenses are available from Glencoe Software.

Home Page: https://www.openmicroscopy.org/bio-formats

License: GNU General Public License v2.0

Java 62.89% Shell 0.09% HTML 0.18% C++ 0.04% JavaScript 0.14% PostScript 35.62% MATLAB 0.84% Python 0.13% Batchfile 0.07% Dockerfile 0.01%
bio-formats java image life-sciences-image format-reader format-converter metadata whole-slide-imaging wsi lightsheet

bioformats's Introduction

Bio-Formats


Bio-Formats is a standalone Java library for reading and writing life sciences image file formats. It is capable of parsing both pixels and metadata for a large number of formats, as well as writing to several formats.

If you are having an issue with Bio-Formats and need support, please see the support page.

Purpose

Bio-Formats' primary purpose is to convert proprietary microscopy data into an open standard called the OME data model, particularly into the OME-TIFF file format. See About Bio-Formats for further information.

Supported formats

Bio-Formats supports more than a hundred file formats.

For users

Many software packages use Bio-Formats to read and write microscopy formats.

For developers

You can use Bio-Formats to easily support these formats in your software.

More information

For more information, see the Bio-Formats web site.

Pull request testing

We welcome pull requests from anyone, but ask that you please verify the following before submitting a pull request:

  • verify that the branch merges cleanly into develop
  • verify that the branch compiles with the clean jars tools Ant targets
  • verify that the branch compiles using Maven
  • verify that the branch does not use syntax or API specific to Java 1.8+
  • run the unit tests (ant test) and correct any failures
  • test at least one file in each affected format, using the showinf command
  • internal developers only: run the data tests against directories corresponding to the affected format(s)
  • make sure that your commits contain the correct authorship information and, if necessary, a signed-off-by line
  • make sure that the commit messages or pull request comment contains sufficient information for the reviewer(s) to understand what problem was fixed and how to test it

bioformats's People

Contributors

bdezonia, billhill00, carandraug, cgdogan, chris-allan, csachs, ctrueden, dependabot[bot], dgault, emilroz, glehmann, hinerm, imunro, jburel, jmuhlich, joshmoore, manics, melissalinkert, mtbc, paulvanschayck, premyslfiala, qidane, richardmyers, sbesson, shaquillelouisa-lambertinstruments, snoopycrimecop, stelfrich, swg08, tinevez, xlefreaderforbioformats


bioformats's Issues

FEITiffReader crashes with some images

With the following image, the FEITiffReader crashes with the following stack trace:
Tile_001-002-000_0-000.tif.zip

java.lang.NumberFormatException: For input string: "1.943794245326921E"
    at sun.misc.FloatingDecimal.readJavaFormatString(FloatingDecimal.java:2043)
    at sun.misc.FloatingDecimal.parseDouble(FloatingDecimal.java:110)
    at java.lang.Double.parseDouble(Double.java:538)
    at java.lang.Double.<init>(Double.java:608)
    at loci.formats.in.FEITiffReader$FEIHandler.characters(FEITiffReader.java:407)
    at org.apache.xerces.parsers.AbstractSAXParser.characters(Unknown Source)
    at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl.scanContent(Unknown Source)
    at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl$FragmentContentDispatcher.dispatch(Unknown Source)
    at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl.scanDocument(Unknown Source)
    at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
    at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
    at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
    at org.apache.xerces.parsers.AbstractSAXParser.parse(Unknown Source)
    at org.apache.xerces.jaxp.SAXParserImpl$JAXPSAXParser.parse(Unknown Source)
    at org.apache.xerces.jaxp.SAXParserImpl.parse(Unknown Source)
    at javax.xml.parsers.SAXParser.parse(SAXParser.java:195)
    at loci.common.xml.XMLTools.parseXML(XMLTools.java:428)
    at loci.common.xml.XMLTools.parseXML(XMLTools.java:414)
    at loci.common.xml.XMLTools.parseXML(XMLTools.java:393)
    at loci.formats.in.FEITiffReader.initStandardMetadata(FEITiffReader.java:164)
    at loci.formats.in.BaseTiffReader.initMetadata(BaseTiffReader.java:98)
    at loci.formats.in.BaseTiffReader.initFile(BaseTiffReader.java:577)
    at loci.formats.FormatReader.setId(FormatReader.java:1426)

This happens because the XML element value "1.943794245326921E-09" gets split across two characters() calls.
According to http://stackoverflow.com/questions/4567636/java-sax-parser-split-calls-to-characters, the processing should be done in endElement instead.
I can make a PR to fix that if you want.

JpegXR support, OME ticket #8493

Hi!

http://trac.openmicroscopy.org.uk/ome/ticket/8493

I see the ticket mentions implementing this functionality in beta releases -- are these available yet? I cannot find any relevant code updates for this functionality in any of the OME Git repositories.

Furthermore - is there anything I can provide of code or examples that could help you get this along in any way?

Sincerely,
Jonas Øgaard

mrc file with complex numbers is read as floating point

Hi

we have some MRC files with complex numbers, but they are simply treated as real floating point. OMERO loads them as such, and when accessing their PixelType (via OmeroPy), it just lists them as "float".

Looking into the source of MRCReader.java, I see that these are mapped to double or uint32:

  case 3:
    m.pixelType = FormatTools.UINT32;
    break;
  case 4:
    m.pixelType = FormatTools.DOUBLE;
    break;

I can share such a file if you want, no problem. Here's a short Python session showing that the file is indeed an image with complex numbers:

>>> import struct
>>> f = open("./V2-live-cell-19Mar2013_525.otf", "rb")
>>> struct.unpack("4i", f.read(16))
(65, 129, 3, 4)

The fourth number (value of 4) would mean that the data is double complex. I don't know Java, but the Bio-Formats documentation seems to suggest that there is a double-complex pixel type.

Bug in loci.formats.ImageTools.autoscale due to integer division

There is a bug in ImageTools.autoscale method: https://github.com/openmicroscopy/bioformats/blob/develop/components/formats-bsd/src/loci/formats/ImageTools.java#L472

The problem is with lines 485 and 486:
int diff = max - min;
float dist = (s - min) / diff;

You are assigning the result to a float variable, but because s, min, and diff are all ints, the division is an integer division, so the result is always 0 except when s equals max. That is, the autoscale function incorrectly maps all values in the array except max to 0, and max to 255.

The fix is simple: just change the type of the diff variable to float, so that at least one operand of the division is a float:
float diff = max - min;
float dist = (s - min) / diff;

Additionally, there are 2 mistakes at line 488:
s = (int) dist * 256;

  1. the constant should be 255, not 256, because the maximum value of an 8-bit unsigned variable is 255, not 256.
  2. the expression should be parenthesized, because you want to cast the whole expression to int, not only dist (which would almost always be truncated to 0, as above).

So the line should be like this:
s = (int)(dist * 255);

The code is wrong in both "dev_5_0" and "develop" branches.
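Both bugs are easy to demonstrate by transcribing the Java into a few lines of Python (`//` mirrors Java's int/int truncating division):

```python
def autoscale_buggy(s, min_, max_):
    diff = max_ - min_
    dist = (s - min_) // diff        # int / int in Java: truncates to 0 or 1
    return int(dist) * 256           # wrong constant, wrong parenthesization

def autoscale_fixed(s, min_, max_):
    diff = float(max_ - min_)        # promote one operand to float
    dist = (s - min_) / diff
    return int(dist * 255)           # cast the whole product, scale by 255

values = [10, 100, 200]
print([autoscale_buggy(v, 10, 200) for v in values])   # [0, 0, 256]
print([autoscale_fixed(v, 10, 200) for v in values])   # [0, 120, 255]
```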

"Cannot construct image with 7 channels" error on trying to convert some TIFFs to JP2

Hi folks,

I have some questionable tiffs that I've been getting from old imaging equipment, which I've been converting to JP2 with Djatoka until now. I do not have much love for Djatoka, and was very excited when I discovered this project, as it means I can potentially convert a much wider range of file formats to more preservation-friendly standards. However, I'm getting the following Java error when trying to run bfconvert on some of these tiffs:

axfelix@shoebox:/opt/bftools$ bfconvert -merge -expand /opt/image.tif ~/Desktop/test.jp2
/opt/image.tif
TiffDelegateReader initializing /opt/image.tif
Reading IFDs
Populating metadata
Checking comment style
Populating OME metadata
[Tagged Image File Format] -> /home/axfelix/Desktop/test.jp2 [JPEG-2000]
Exception in thread "main" java.lang.IllegalArgumentException: Cannot construct image with 7 channels
at loci.formats.gui.AWTImageTools.constructImage(AWTImageTools.java:659)
at loci.formats.codec.JPEG2000Codec.compress(JPEG2000Codec.java:137)
at loci.formats.out.JPEG2000Writer.compressBuffer(JPEG2000Writer.java:126)
at loci.formats.out.JPEG2000Writer.saveBytes(JPEG2000Writer.java:86)
at loci.formats.FormatWriter.saveBytes(FormatWriter.java:126)
at loci.formats.ImageWriter.saveBytes(ImageWriter.java:201)
at loci.formats.tools.ImageConverter.convertPlane(ImageConverter.java:547)
at loci.formats.tools.ImageConverter.testConvert(ImageConverter.java:483)
at loci.formats.tools.ImageConverter.main(ImageConverter.java:677)

When trying to convert the same image to jpeg rather than jp2, I get: Exception in thread "main" javax.imageio.IIOException: Invalid argument to native writeImage, with the same traceback.

I can get both of these to work by using the -separate flag (thanks for the good documentation that helped me figure that out), but unfortunately, that results in greyscale output.

Any ideas? Thanks!

TJDecompressor.java:305 can cause error in java runtime environment

I'm using Bio-Formats 5.1.2, and on converting the sample file below, I get an error message like this:

#
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0x00007f0292a96fc1, pid=8072, tid=139649635669760
#
# JRE version: Java(TM) SE Runtime Environment (8.0_45-b14) (build 1.8.0_45-b14)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.45-b02 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# C  [libturbojpeg6772939230623658346.so+0x3cfc1]
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /home/kristian/Playground/omezoomify/sandbox/hs_err_pid8072.log
#

Command executed: $ bfconvert -compression JPEG -series 0 6_1_Pancreas_HE.ndpi out.tiff

The error occurs in TJDecompressor.java at line 305, and I'm not able to locate the error any further than this.

Sample files: 6_1_Pancreas_HE.ndpi that causes the error, AH09.ndpi and AH74.ndpi that works just fine. Full error log in hs_err_pid8072 is also available.

Please let me know if there's a more appropriate way to report errors - or if this is the wrong audience :)

deltavision 4D stack opened as series and thumbnails

I have uploaded the problematic image file to your QA system as bug #9514

The attached image is a 4D dataset (256 columns, 256 rows, 20 time-points, 9 Z slices, and 1 wavelength). However, Bio-Formats reads it as 4 separate images with 5 time-points each (as it does when files have multiple sub-resolution images). The file is read correctly in softWoRx (software that is not free or libre).

The following Python session shows that the details in the base header of the dv file (inherited from MRC files) appear correct (the missing number of Z slices is deduced from nZ = nSlices / (nWaves * nTimes), which in this case is 9 = 180 / (1 * 20)). There's quite a large extended header, but I don't have the specs for it so I can't debug it further:

>>> import struct
>>> f = open("61_06_TV.dv", "rb")

## Magic number (it is a DV file)
>>> f.seek(96)
>>> struct.unpack("1i", f.read(4))
(49312,)

## nColumns / nRows / nSlices (Z * T * C)
>>> f.seek(0)
>>> struct.unpack("3i", f.read(12))
(256, 256, 180)

## Number of time-points
>>> f.seek(180)
>>> struct.unpack("1h", f.read(2))
(20,)

## Number of wavelengths
>>> f.seek(196)
>>> struct.unpack("1h", f.read(2))
(1,)

## Data order (0 = ZTW)
>>> f.seek(182)
>>> struct.unpack("1h", f.read(2))
(0,)

## Sampling frequency X, Y, Z (either (1,1,1) or (nColumns, nRows, nZSections))
>>> f.seek(28)
>>> struct.unpack("3i", f.read(12))
(1, 1, 1)

## Size of extended header size
>>> f.seek(92)
>>> struct.unpack("1i", f.read(4))
(31744,)

## Number of sub-resolution data-sets
>>> f.seek(132)
>>> struct.unpack("1h", f.read(2))
(1,)

Unfortunately, I am not allowed to give you many details on the software that created the file (NDA signed), but it was written by GE.

Loading large jpeg files fails

Hello,

(This issue is a copy of a topic opened on OME forum : https://www.openmicroscopy.org/community/viewtopic.php?f=13&t=7957)

I am currently trying to convert a very large JPEG image (65500px x 53090px) with bfconvert.

When I run the executable on this image, I get the following stack trace:

../bftools/bfconvert -bigtiff -series 0 "/.../.../x-3.jpeg" "/.../.../TIFF/%z.%t.tiff" 2>&1
/.../.../x-3.jpeg
JPEGReader initializing /.../.../x-3.jpeg
Populating metadata
Exception in thread "main" java.lang.IllegalArgumentException: width*height > Integer.MAX_VALUE!
   at javax.imageio.ImageReader.getDestination(ImageReader.java:2823)
   at com.sun.imageio.plugins.jpeg.JPEGImageReader.readInternal(JPEGImageReader.java:1046)
   at com.sun.imageio.plugins.jpeg.JPEGImageReader.read(JPEGImageReader.java:1014)
   at javax.imageio.ImageIO.read(ImageIO.java:1422)
   at javax.imageio.ImageIO.read(ImageIO.java:1326)
   at loci.formats.in.ImageIOReader.initImage(ImageIOReader.java:150)
   at loci.formats.in.ImageIOReader.initFile(ImageIOReader.java:122)
   at loci.formats.in.JPEGReader$DefaultJPEGReader.initFile(JPEGReader.java:190)
   at loci.formats.FormatReader.setId(FormatReader.java:1426)
   at loci.formats.DelegateReader.setId(DelegateReader.java:290)
   at loci.formats.in.JPEGReader.setId(JPEGReader.java:89)
   at loci.formats.ImageReader.setId(ImageReader.java:835)
   at loci.formats.tools.ImageConverter.testConvert(ImageConverter.java:367)
   at loci.formats.tools.ImageConverter.main(ImageConverter.java:874)

I am currently using this version of bfconvert:

Version: 5.1.4
VCS revision: 05840624ab3d1d1dca14d1ccfebabcb61c42ec27
Build date: 4 September 2015

Can you advise any other way to convert this image to a TIFF file?
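The failure above is pure arithmetic: a Java BufferedImage backs its pixels with a single array indexed by an int, so javax.imageio refuses any image whose width*height exceeds Integer.MAX_VALUE. For this image:

```python
INT_MAX = 2**31 - 1          # Java's Integer.MAX_VALUE = 2147483647

width, height = 65500, 53090
pixels = width * height
print(pixels, pixels > INT_MAX)   # 3477395000 True
```

So the image has roughly 3.48 billion pixels, about 62% more than an int can index, which is why any Bio-Formats reader delegating to ImageIO hits this wall regardless of available heap.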

Regards,
Félix

NumberFormatException on CZI Read

Hi all,

I'm working with a team who are encountering problems loading Zeiss CZI files with the Bio-Formats toolbox. These import errors occur both in MATLAB and ImageJ. The errors occur at least on v5.0.8 and v5.1.2. The following MATLAB exception is encountered on load:

Error using bfGetReader (line 89)
Java exception occurred:
java.lang.NumberFormatException: For input string: "with length indication"

                at java.lang.NumberFormatException.forInputString(Unknown Source)

                at java.lang.Integer.parseInt(Unknown Source)

                at java.lang.Integer.parseInt(Unknown Source)

                at loci.formats.in.ZeissCZIReader.initFile(ZeissCZIReader.java:515)

                at loci.formats.in.ZeissCZIReader.initFile(ZeissCZIReader.java:478)

                at loci.formats.FormatReader.setId(FormatReader.java:1317)

                at loci.formats.ImageReader.setId(ImageReader.java:753)

                at loci.formats.ReaderWrapper.setId(ReaderWrapper.java:569)

                at loci.formats.ChannelFiller.setId(ChannelFiller.java:259)

                at loci.formats.ReaderWrapper.setId(ReaderWrapper.java:569)

                at loci.formats.ChannelSeparator.setId(ChannelSeparator.java:270)


Error in bfopen (line 114)
r = bfGetReader(id, stitchFiles);

Error in bfOpen3DVolume (line 50)
volume = bfopen(filename);

Error in load3DCiliaVolFile (line 4)
V = bfOpen3DVolume(filepath); 
...

and the ImageJ plugin gives an exception on import:

ImageJ 1.48v; Java 1.6.0_20 [64-bit]; Windows 7 6.1; 20MB of 6075MB (<1%)
java.lang.NumberFormatException: For input string: "with length indication"
                at java.lang.NumberFormatException.forInputString(NumberFormatException.java:48)
                at java.lang.Integer.parseInt(Integer.java:449)
                at java.lang.Integer.parseInt(Integer.java:499)
                at loci.formats.in.ZeissCZIReader.initFile(ZeissCZIReader.java:510)
                at loci.formats.FormatReader.setId(FormatReader.java:1426)
                at loci.plugins.in.ImportProcess.initializeFile(ImportProcess.java:505)
                at loci.plugins.in.ImportProcess.execute(ImportProcess.java:143)
                at loci.plugins.in.Importer.showDialogs(Importer.java:137)
                at loci.plugins.in.Importer.run(Importer.java:75)
                at loci.plugins.LociImporter.run(LociImporter.java:78)
                at ij.IJ.runUserPlugIn(IJ.java:199)
                at ij.IJ.runPlugIn(IJ.java:163)
                at ij.Executer.runCommand(Executer.java:131)
                at ij.Executer.run(Executer.java:64)

Oddly, I can't reproduce this behavior locally with the same file, BF version, and MATLAB version (and subsequent Java VM), but I also can't see what else may be causing this problem. I uploaded the problematic file to https://www.openmicroscopy.org/qa2/qa/upload/ (which I hope went through). Any help is greatly appreciated!

add support for file format from Olympus FVMPE-RS microscope

Not sure if this is the right place to ask, but the native file format for Olympus's FVMPE-RS microscope has a new extension (.oir) that is not currently supported by Bio-Formats. It would be amazingly helpful if we could read in these files. I would be happy to send files or work with anyone who could help us out with this. Thanks!

garret

Error on opening CZI file -- bfGetPlane

I'm attempting to open a 760 MB CZI file acquired by another research group with, I believe, ZEN lite. I have installed the MATLAB toolbox, added it to my path, and set my Java maximum heap memory to 1024 MB. I'm running R2012a (64-bit) on Windows 7.

Executing

 data = bfopen('IP-Stitching-01-Change Scaling-01.czi');

results in

ZeissCZIReader initializing C:\Projects\LMR-Acquisition\IP-Stitching-01-Change Scaling-01.czi

Unknown IlluminationType value 'Fluorescence' will be stored as "Other"

ome.xml.model.Channel@66156d8 reference to null missing from object hierarchy.

ome.xml.model.Channel@28b689e0 reference to null missing from object hierarchy.

ome.xml.model.Channel@7b0064e3 reference to null missing from object hierarchy.

ome.xml.model.Channel@26b72884 reference to null missing from object hierarchy.

ome.xml.model.Channel@232a32bf reference to null missing from object hierarchy.

ome.xml.model.Channel@150abd60 reference to null missing from object hierarchy.

ome.xml.model.Channel@1a15deb6 reference to null missing from object hierarchy.

ome.xml.model.Channel@a756b37 reference to null missing from object hierarchy.

ome.xml.model.Channel@45b6867 reference to null missing from object hierarchy.

ome.xml.model.Channel@75af8109 reference to null missing from object hierarchy.

ome.xml.model.Channel@782a519b reference to null missing from object hierarchy.

ome.xml.model.Channel@76792357 reference to null missing from object hierarchy.

ome.xml.model.Channel@129e49c0 reference to null missing from object hierarchy.

ome.xml.model.Channel@2db45934 reference to null missing from object hierarchy.

ome.xml.model.Channel@3a78cbab reference to null missing from object hierarchy.

ome.xml.model.Channel@f0896b1 reference to null missing from object hierarchy.

ome.xml.model.Channel@1a0d8377 reference to null missing from object hierarchy.

ome.xml.model.Channel@cabe02e reference to null missing from object hierarchy.

ome.xml.model.Channel@a832ce5 reference to null missing from object hierarchy.

ome.xml.model.Channel@627f7051 reference to null missing from object hierarchy.

Reading series #1
    .Error using loci.formats.ChannelSeparator/openBytes
Java exception occurred:
loci.formats.FormatException: Buffer too small (got 18172584, expected 54517752).

    at loci.formats.FormatTools.checkBufferSize(FormatTools.java:976)

    at loci.formats.FormatTools.checkPlaneParameters(FormatTools.java:932)

    at loci.formats.in.ZeissCZIReader.openBytes(ZeissCZIReader.java:292)

    at loci.formats.ImageReader.openBytes(ImageReader.java:453)

    at loci.formats.ChannelFiller.openBytes(ChannelFiller.java:156)

    at loci.formats.ChannelSeparator.openBytes(ChannelSeparator.java:225)

    at loci.formats.ChannelSeparator.openBytes(ChannelSeparator.java:157)


Error in bfGetPlane (line 78)
plane = r.openBytes(...

Error in bfopen (line 148)
        arr = bfGetPlane(r, i, varargin{:});

NIOFileHandle large buffer size detracts from ImageJ importer performance

We noticed that when opening OME-TIFF datasets produced by WiscScan, the initialization step took a long time. Profiling the code, we found that much time was spent in NIOByteBufferProvider in the allocateDirect method, copying data from the FileChannel into the buffers. The default read-only buffer size is set to 1048576 bytes, which gets allocated for every TIFF file in the dataset, causing the channel.read call to take a long time populating each buffer.

The issue can be significantly mitigated by calling NIOFileHandle.setDefaultBufferSize(int) with a smaller value. Here is a quick benchmark of the difference with various buffer sizes, when reading an (fill in size of dataset) OME-TIFF dataset from an SMB networked file system:

Buffer size: 16 Process time: 90566
Buffer size: 1024 Process time: 92513
Buffer size: 8192 Process time: 96178
Buffer size: 65536 Process time: 111689
Buffer size: 1048576 Process time: 313106

Here is the script which generated the benchmark:

var list = newArray(16, 1024, 8192, 65536, 1048576);
for (i = 0; i < list.length; i++) {
    call("loci.common.NIOFileHandle.setDefaultBufferSize",list[i]);
    start = getTime();
    run("Bio-Formats", "open=[/Volumes/data/Jayne/test data sets for Ellen/3ROI 2filters 2 channels 3TP/H150901R1 012016 postW_TL1 2dpfDish1 krtGFP_SHG 20x890nmp50_160_z2_5i2g580f520_445u5ROI3TP6_C0_TP0_SP0_FW0.ome.tiff] autoscale color_mode=Default open_all_series view=Hyperstack stack_order=XYCZT");
    end = getTime();
    print("Buffer size: " + list[i] + " Process time: " + (end - start));
    run("Close All");
}

You can find the data at the above path on LOCI ftp accessible via the usual credentials (just message a LOCI programmer if you need them).

As you can see above, the smaller the buffer size the faster the data is opened within ImageJ. We could certainly hardcode the buffer size within ImageJ - but we are wondering if you have another idea?

Many thanks for your insight.
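The benchmark idea can be reproduced self-containedly: read the same file with each buffer size and time it. The sketch below uses a synthetic 1 MiB temporary file rather than the dataset above; on a local disk the spread is far smaller than over SMB, where the cost of allocating and filling an oversized buffer is paid per TIFF file.

```python
import os
import tempfile
import time

def timed_read(path, buffer_size):
    """Read the whole file using the given buffer size; return elapsed seconds."""
    start = time.perf_counter()
    with open(path, "rb", buffering=buffer_size) as f:
        while f.read(buffer_size):
            pass
    return time.perf_counter() - start

# Synthetic 1 MiB file standing in for one TIFF of the dataset.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(1 << 20))
    path = tmp.name

for size in (16, 1024, 8192, 65536, 1048576):
    print("Buffer size: %7d  Process time: %.4fs" % (size, timed_read(path, size)))

os.remove(path)
```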

SVS to TIFF conversion is failing

Thanks for your awesome tool!

I have been trying to convert numerous SVS files to TIFF with bfconvert.
They all seem corrupted.

Here is the result (opened with GIMP) of the conversion of http://openslide.cs.cmu.edu/download/openslide-testdata/Aperio/JP2K-33003-1.svs:

svs-fail

Here is the command line I have been using:
.../bfconvert -bigtiff -series 0 ".../JP2K-33003-1.svs" ".../%z.%t.%c.tiff"

I am using latest bftools on Mac OS X 10.11 (but same thing is happening on Ubuntu 14.04):

felixveysseyre$ ./bfconvert -version
Version: 5.1.9
VCS revision: c3065feb775a7a8c0cc2cf6e35979331cca2418b
Build date: 15 April 2016

Thanks for your help!

VSI Files open much more slowly than before

Hi,
We have been using the CellSens Reader coupled to the Macro Extensions for Fiji/ImageJ.

This had been working fine until about a month ago; now it takes minutes to open even a very small image. What has changed?
We are using the version from your update site: 14 April 2016.
Best
Oli

TiffParser overruns buffer

Hi all,
I am getting the following stack trace when opening the file, http://www.broadinstitute.org/~leek/20130116-u2os-celldensitytest-72h-96h_m08_s9_w2766e46f1-81be-45a2-88b1-abe8da58c47a.tif

java.lang.IllegalArgumentException: Invalid indices: buf.length=6547, ndx=6546, nBytes=2
at loci.common.DataTools.unpackBytes(DataTools.java:536)
at loci.formats.tiff.TiffCompression.undifference(TiffCompression.java:303)
at loci.formats.tiff.TiffParser.getTile(TiffParser.java:703)
at loci.formats.tiff.TiffParser.getSamples(TiffParser.java:881)
at loci.formats.tiff.TiffParser.getSamples(TiffParser.java:742)
at loci.formats.in.MinimalTiffReader.openBytes(MinimalTiffReader.java:292)
at loci.formats.DelegateReader.openBytes(DelegateReader.java:211)
at loci.formats.FormatReader.openBytes(FormatReader.java:777)
at loci.formats.FormatReader.openBytes(FormatReader.java:749)

using Bio-Formats in CellProfiler. This is due to a buffer overflow in TiffCompression.undifference (pull request to follow). Unfortunately, I can't seem to get a reproducible test case using ImageInfo (I am using the MetamorphReader to read it), and it does seem like the overflow is caused by an odd buffer length, so the true cause might be in the decompression. Nevertheless, the file does load with the patch, and the patch is, at worst, defensive coding.
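For context, undifferencing undoes TIFF's horizontal predictor (Predictor = 2): each sample after the first in a row is stored as a delta from its neighbor. The sketch below shows the technique for 16-bit little-endian samples with the kind of defensive bounds check described, stopping before a partial trailing sample rather than indexing past the buffer; it illustrates the general idea, not the actual patch.

```python
import struct

def undifference16(buf, samples_per_row):
    """Undo per-row horizontal differencing on little-endian uint16 samples."""
    out = bytearray(buf)
    usable = len(buf) - (len(buf) % 2)   # defensive: ignore a trailing odd byte
    n = usable // 2
    for i in range(n):
        if i % samples_per_row == 0:
            continue                      # first sample in each row is absolute
        prev = struct.unpack_from("<H", out, (i - 1) * 2)[0]
        cur = struct.unpack_from("<H", out, i * 2)[0]
        struct.pack_into("<H", out, i * 2, (prev + cur) & 0xFFFF)
    return bytes(out)

# Row of absolute values [100, 105, 103] stored as deltas [100, 5, -2],
# followed by a stray odd byte that the bounds check must not read past.
row = struct.pack("<3h", 100, 5, -2) + b"\x07"
print(struct.unpack("<3H", undifference16(row, 3)[:6]))   # (100, 105, 103)
```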

Thanks,
Lee

DeltavisionReader trying to open a FEI Tiff file.

Hello,

When running ImageInfo on a particular FEI Tiff file, it crashes in the DeltavisionReader class:

Exception in thread "main" java.lang.NegativeArraySizeException
    at loci.formats.in.DeltavisionReader.initPixels(DeltavisionReader.java:387)
    at loci.formats.in.DeltavisionReader.initFile(DeltavisionReader.java:267)
    at loci.formats.FormatReader.setId(FormatReader.java:1395)
    at loci.formats.ImageReader.setId(ImageReader.java:835)
    at loci.formats.ReaderWrapper.setId(ReaderWrapper.java:651)
    at ImageInfo.testRead(ImageInfo.java:989)
    at ImageInfo.main(ImageInfo.java:1071)

It appears that the isThisType(RandomAccessInputStream stream) method returns true because DV_MAGIC_BYTES_1 is stored at offset 96.
Could this method be made more restrictive by checking other bytes as well?
For example, in my case, checking that sizeX, sizeY, and imageCount are greater than 0, or that pixelType is between 0 and 6, would have detected that my file is not a DeltaVision one.
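The stricter check suggested here can be sketched in a few lines: verify the DV magic word at offset 96, then sanity-check sizeX/sizeY/imageCount/pixelType from the header before claiming the file. The offsets and the magic value 49312 follow the struct session earlier on this page; the extra checks are the reporter's suggestion, not the shipped reader logic.

```python
import struct

DV_MAGIC = 49312          # the DV magic word found at offset 96

def looks_like_deltavision(header):
    if len(header) < 100:
        return False
    (magic,) = struct.unpack_from("<i", header, 96)
    if magic != DV_MAGIC:
        return False
    size_x, size_y, image_count = struct.unpack_from("<3i", header, 0)
    (pixel_type,) = struct.unpack_from("<i", header, 12)
    return (size_x > 0 and size_y > 0 and image_count > 0
            and 0 <= pixel_type <= 6)

# A fake header with the right magic but an out-of-range pixel type:
fake = bytearray(100)
struct.pack_into("<i", fake, 96, DV_MAGIC)
struct.pack_into("<3i", fake, 0, 256, 256, 180)
struct.pack_into("<i", fake, 12, 99)
print(looks_like_deltavision(bytes(fake)))   # False
```

A magic-word-only check accepts any file that happens to contain those bytes at offset 96, which is exactly how the FEI TIFF ended up in DeltavisionReader.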

Exporting to ICS can throw ArrayIndexOutOfBoundsException

Macro to reproduce:

run("Organ of Corti (2.8M, 4D stack)");
outDir = "/home/curtis/Desktop";
run("Bio-Formats Exporter", "save=" + outDir + "/organ-of-corti.ics write_each_channel");

Relevant stack trace:

(Fiji Is Just) ImageJ 2.0.0-rc-44/1.50e; Java 1.8.0_66 [64-bit]; Linux 3.19.0-39-generic; 55MB of 1065MB (5%)

java.lang.ArrayIndexOutOfBoundsException: 15
    at loci.formats.out.ICSWriter.saveBytes(ICSWriter.java:130)
    at loci.formats.FormatWriter.saveBytes(FormatWriter.java:124)
    at loci.plugins.out.Exporter.run(Exporter.java:767)
    at loci.plugins.LociExporter.run(LociExporter.java:75)

Exception is the same using both release (Bio-Formats 5.1.7) and dev version (Bio-Formats 5.1.7-DEV 62c4dd8, 9 February 2016).

Reported by @tischi on this ImageJ forum thread.

Problems opening new SlideBook files

Originally reported as Fiji bug #1161.

When opening newer SlideBook files with an up-to-date Fiji as of this writing, the following exception is thrown:

(Fiji Is Just) ImageJ 2.0.0-rc-39/1.50b; Java 1.8.0_45 [64-bit]; Mac OS X 10.10.5; 36MB of 4175MB (<1%)

java.lang.ArrayIndexOutOfBoundsException: 0
    at loci.formats.in.SlidebookReader.initFile(SlidebookReader.java:634)
    at loci.formats.FormatReader.setId(FormatReader.java:1426)
    at loci.plugins.in.ImportProcess.initializeFile(ImportProcess.java:505)
    at loci.plugins.in.ImportProcess.execute(ImportProcess.java:143)
    at loci.plugins.in.Importer.showDialogs(Importer.java:137)
    at loci.plugins.in.Importer.run(Importer.java:75)
    at loci.plugins.LociImporter.run(LociImporter.java:78)
    at ij.IJ.runUserPlugIn(IJ.java:212)
    at ij.IJ.runPlugIn(IJ.java:176)
    at ij.IJ.runPlugIn(IJ.java:165)
    at HandleExtraFileTypes.openImage(HandleExtraFileTypes.java:499)
    at HandleExtraFileTypes.run(HandleExtraFileTypes.java:72)
    at ij.IJ.runUserPlugIn(IJ.java:212)
    at ij.IJ.runPlugIn(IJ.java:176)
    at ij.IJ.runPlugIn(IJ.java:165)
    at ij.io.Opener.openWithHandleExtraFileTypes(Opener.java:503)
    at ij.io.Opener.openImage(Opener.java:369)
    at ij.io.Opener.openImage(Opener.java:243)
    at ij.io.Opener.open(Opener.java:110)
    at ij.io.Opener.openAndAddToRecent(Opener.java:292)
    at ij.plugin.DragAndDrop.openFile(DragAndDrop.java:181)
    at ij.plugin.DragAndDrop.run(DragAndDrop.java:152)
    at java.lang.Thread.run(Thread.java:745)

On my Windows 7 VM, the error is different, due to the use of the SlideBook6Reader native linkage:

(Fiji Is Just) ImageJ 2.0.0-rc-39/1.50b; Java 1.8.0_05 [64-bit]; Windows 7 6.1; 32MB of 1527MB (2%)

java.lang.AssertionError: Failed with exception: The file is not a valid SlideBook document. Z plane index too large.

    at loci.formats.in.SlideBook6Reader.getZPosition(Native Method)
    at loci.formats.in.SlideBook6Reader.initFile(SlideBook6Reader.java:277)
    at loci.formats.FormatReader.setId(FormatReader.java:1426)
    at loci.plugins.in.ImportProcess.initializeFile(ImportProcess.java:505)
    at loci.plugins.in.ImportProcess.execute(ImportProcess.java:143)
    at loci.plugins.in.Importer.showDialogs(Importer.java:137)
    at loci.plugins.in.Importer.run(Importer.java:75)
    at loci.plugins.LociImporter.run(LociImporter.java:78)
    at ij.IJ.runUserPlugIn(IJ.java:212)
    at ij.IJ.runPlugIn(IJ.java:176)
    at ij.Executer.runCommand(Executer.java:132)
    at ij.Executer.run(Executer.java:65)
    at java.lang.Thread.run(Thread.java:745)

The original reporter, Glen MacDonald, states that the file works on his Windows 7 systems—probably because those machines have a different/newer version of the SlideBook native library?

Sample file:

Micromanager tiff files

Hello,

I have 2 questions regarding micromanager tiff files support.

  1. It looks like all the "StageLabel" informations could be retrieved from the JSON file with the properties XPositionUm, YPositionUm and ZPositionUm.
    Is there any reason I don't know about to not support it?
  2. The JSON file is stored inside the TIFF file in the tags 50838 and 50839. Those are ImageJ tags and the implementation is available here:
    https://github.com/imagej/ImageJA/blob/v1.48e/src/main/java/ij/io/TiffDecoder.java#L43
    Would it be possible to support reading the JSON file from there if the metadata.txt file is missing?
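Reading those private tags comes down to a plain IFD walk. Below is a minimal sketch (Python standard library only; little-endian, single-IFD files, simplified type handling) of the kind of lookup a fallback JSON reader would need — an illustration, not the Bio-Formats or ImageJ implementation:

```python
# Minimal sketch of reading a private TIFF tag (e.g. the ImageJ metadata
# tags 50838/50839 mentioned above). Simplified: handles only the first
# IFD of a little-endian TIFF and a few common field types.
import struct

def read_tiff_tags(data):
    """Return {tag: raw bytes} for the first IFD of a little-endian TIFF."""
    assert data[:4] == b"II*\x00", "only little-endian TIFF handled here"
    (ifd_offset,) = struct.unpack_from("<I", data, 4)
    (n_entries,) = struct.unpack_from("<H", data, ifd_offset)
    tags = {}
    type_sizes = {1: 1, 2: 1, 3: 2, 4: 4, 5: 8}  # BYTE, ASCII, SHORT, LONG, RATIONAL
    for i in range(n_entries):
        entry = ifd_offset + 2 + 12 * i
        tag, ftype, count = struct.unpack_from("<HHI", data, entry)
        size = type_sizes.get(ftype, 1) * count
        if size <= 4:  # small values are stored inline in the offset field
            value = data[entry + 8 : entry + 8 + size]
        else:
            (offset,) = struct.unpack_from("<I", data, entry + 8)
            value = data[offset : offset + size]
        tags[tag] = value
    return tags
```

With such a walk in place, checking `tags.get(50839)` for the embedded JSON when metadata.txt is absent would be straightforward.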

Split Channels setting is persistent when it shouldn't be

Hello,
I've encountered the following difference in behavior between calling Bio-Formats manually and from a macro. If someone runs Bio-Formats from the plugins menu and checks 'split_channels', that option is not cleared when Bio-Formats is subsequently called from a macro that is not intended to split channels.

for example, run from a macro to split channels:
run("Bio-Formats Importer", "open=[bunchastuff] color_mode=Grayscale split_channels view=Hyperstack stack_order=default"); -> splits channels

then run from a macro to NOT split channels:
run("Bio-Formats Importer", "open=[bunchastuff] color_mode=Grayscale view=Hyperstack stack_order=default"); -> does not split channels

Now, open Bio-Formats from the plugins menu and manually select 'split channels' -> splits channels

then run from a macro to NOT split channels:
run("Bio-Formats Importer", "open=[bunchastuff] color_mode=Grayscale view=Hyperstack stack_order=default"); -> still splits channels

Is there a way around this?

Add JPEG Exif read support

As far as I can tell, the JPEGReader cannot read Exif data contained within a JPEG; nothing is added to the Original Metadata.
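For context on what a reader would need to locate: the Exif payload lives in a JPEG APP1 segment tagged "Exif\0\0" and contains an embedded TIFF structure. A minimal sketch (Python, simplified: assumes every segment before the scan data carries a length field) of finding that payload:

```python
# Sketch of locating the Exif payload in a JPEG stream. This is an
# illustration of the container format, not the JPEGReader's code.
import struct

def find_exif(jpeg_bytes):
    """Return the raw Exif payload (a TIFF structure) or None."""
    assert jpeg_bytes[:2] == b"\xff\xd8", "not a JPEG"
    pos = 2
    while pos + 4 <= len(jpeg_bytes):
        marker, length = struct.unpack_from(">HH", jpeg_bytes, pos)
        if marker == 0xFFE1 and jpeg_bytes[pos + 4 : pos + 10] == b"Exif\x00\x00":
            return jpeg_bytes[pos + 10 : pos + 2 + length]
        if marker == 0xFFDA:  # start of scan: no more metadata segments
            return None
        pos += 2 + length
    return None
```

The returned payload could then be parsed with the usual TIFF/IFD machinery and surfaced as original metadata.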

matlab package: incorrect use of inputParser

In the MATLAB functions bfGetPlane and bfsave, MATLAB's inputParser is used to check the required input arguments. Required arguments do not go into varargin, which causes an error when they are missing.

>> bfGetPlane()
??? Input argument "r" is undefined.

Error in ==> bfGetPlane at 45
    ip.parse(r);

Note that this error does not come from inputParser reporting the missing argument r.

Opening CZI in Fiji causes an ArrayIndexOutOfBoundsException

Hi all,

I'm not sure who I should address with this issue, so I just ask here:

Opening this CZI image with Fiji (Bio-Formats 5...) causes an AIOOBE:

java.lang.ArrayIndexOutOfBoundsException: 1
at loci.formats.in.ZeissCZIReader.translateExperiment(ZeissCZIReader.java:2190)
at loci.formats.in.ZeissCZIReader.translateMetadata(ZeissCZIReader.java:1214)
at loci.formats.in.ZeissCZIReader.initFile(ZeissCZIReader.java:699)
at loci.formats.FormatReader.setId(FormatReader.java:1315)
at loci.plugins.in.ImportProcess.initializeFile(ImportProcess.java:494)
at loci.plugins.in.ImportProcess.execute(ImportProcess.java:144)
at loci.plugins.in.Importer.showDialogs(Importer.java:141)
at loci.plugins.in.Importer.run(Importer.java:79)
at loci.plugins.LociImporter.run(LociImporter.java:81)
at ij.IJ.runUserPlugIn(IJ.java:201)
at ij.IJ.runPlugIn(IJ.java:165)
at ij.IJ.runPlugIn(IJ.java:154)
at HandleExtraFileTypes.openImage(HandleExtraFileTypes.java:421)
at HandleExtraFileTypes.run(HandleExtraFileTypes.java:57)
at ij.IJ.runUserPlugIn(IJ.java:201)
at ij.IJ.runPlugIn(IJ.java:165)
at ij.IJ.runPlugIn(IJ.java:154)
at ij.io.Opener.openWithHandleExtraFileTypes(Opener.java:454)
at ij.io.Opener.openImage(Opener.java:311)
at ij.io.Opener.openImage(Opener.java:334)
at ij.io.Opener.open(Opener.java:144)
at ij.io.Opener.open(Opener.java:71)
at ij.plugin.Commands.run(Commands.java:27)
at ij.IJ.runPlugIn(IJ.java:171)
at ij.Executer.runCommand(Executer.java:131)
at ij.Executer.run(Executer.java:64)
at java.lang.Thread.run(Thread.java:662)

Any ideas?

Thank you very much in advance!

Best,
Martin

ND2 mixup between image index/series in offset calculation?

When loading a series of images from an ND2 file with pims in conjunction with Bio-Formats, I find that the DeltaT values for a series of planes actually look like a linear set of acquisition times across series, as if the offset into the stream of doubles were being calculated as timestamp_offset = (this_series_number * images_per_series) + this_time_index rather than timestamp_offset = (this_time_index * series_count) + this_series_number. For example, with an ND2 containing 27 FOVs and one image acquired approximately once per minute, I see the following DeltaT values for a single series:
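The two offset formulas from the report, written out (function names are mine, not from the Bio-Formats source):

```python
def offset_suspected(series, time_index, images_per_series):
    # Layout the reported DeltaT values suggest: all timestamps of
    # series 0, then all timestamps of series 1, and so on.
    return series * images_per_series + time_index

def offset_expected(series, time_index, series_count):
    # Layout a round-robin acquisition would need: one timestamp per
    # series per time point, interleaved.
    return time_index * series_count + series
```

Under the first formula, reading one series walks consecutive entries of the global timestamp stream; under the second, it reads every series_count-th entry, which is what a single field's time course should look like.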

[0.9219700636267663,
 1.9037450135946274,
 3.0288364636301996,
 3.95556303858757,
 4.868468838632107,
 5.9362832885980605,
 6.8355563386082645,
 7.827144663631916,
 9.475698738574982,
 10.656013988614083,
 11.653529838621616,
 12.668974513590335,
 13.80917391359806,
 14.803709763586522,
 15.842455963611602,
 16.894236913621427,
 18.006129413604736,
 19.104716288626193,
 20.13068478858471,
 21.167891713619234,
 22.123279888629913,
 23.425835913598537,
 24.38865488862991,
 25.379765988588332,
 26.335722938597204,
 27.349123038589955,
 28.729651288628578,
 60.8162427136302,
 61.46171876358986,
 62.1168248385787,
 62.87410923862457,
 63.655140763580796,
 64.27967733860015,
 64.84931998860836,
 65.45493756359815,
 66.06824346357584,
 66.6852229385972,
 67.31949451363087,
 67.98029468858242,
 68.71559001362324,
 69.374028588593,
 70.02721916359663,
 70.73407038861514,
 71.42024456357956,
 72.31681051361561,
 73.107865688622,
 73.74822493863105,
 74.37522806358338,
 74.97352471357584,
 75.87358053863049,
 76.53868406361342,
 77.42659081357718,
 78.12829901361465,
 79.1507378885746,
 120.96144261360169,
 121.99119918859004,
 123.02214441359042,
 124.00523436361551,
 125.23334946358204,
 126.18084891360998,
 127.14750571358204,
 128.29976698857547,
 129.5479872136116,
 130.77767661362887,
 131.89068833857775,
 133.1268966885805,
 134.13226691359282,
 135.29994918859006,
 136.3480593636036,
 137.45545488858224,
 138.45935091358425,
 139.71330966359378,
 140.73377966362239,
 141.84502403861285,
 142.83436748862266,
 143.79208716362714,
 144.79577266359328,
 145.92105828857422,
 146.87656981360914,
 147.89340736359358,
 149.2373079636097,
 181.21744368863105,
 182.08611658859252,
 182.87250106358528,
 183.5572058135867,
 184.37544556361436,
 185.0481586636305,
 185.94679756361245,
 186.62292186361552,
 187.60025431358815,
 188.35336656361818,
 189.12251141357422,
 189.83873028862476,
 190.55731078863144,
 191.34162231361867,
 192.0975947136283,
 192.86485026359557,
 193.6300066385865,
 194.45286463862658,
 195.28603488862515]

I haven't yet figured out how to correct the issue in the Bio-Formats source code, but if any of the primary developers know what might be happening here, I'd be happy to have some direction toward correcting it.

CZI: autoselect series for file from multi-file dataset

When opening one file from a multi-file dataset with Bioformats from ImageJ, the series selection window shows the available series and selects the first series by default.

Would it be possible to make the series of the opened file the default selection?

NDPI Reader: Image cropped

As part of my project, I use the library to convert files. (Thanks for this awesome tool!)

With version 5, the images are cropped. A black area is also noticeable in the image.

I reproduced the issue using the bfconvert tool and the image called "CMU-1.ndpi"
http://openslide.cs.cmu.edu/download/openslide-testdata/Hamamatsu/

Version 4:
Command: ./bfconvert -series 2 /tmp/CMU-1.ndpi /tmp/bf4/test.jpg
Result: (screenshot "version4" attached)

Version 5:
Command: ./bfconvert -series 2 /tmp/CMU-1.ndpi /tmp/bf5/test.jpg
Result: (screenshot "test" attached)

'ant tools' failing when cloning latest bioformats.git

I just cloned bioformats.git and 'ant tools' fails:

metakit.jar:
      [jar] Building jar: /Users/thomas_deschamps/Dev/bioformats/artifacts/metakit.jar

ome-xml-src:

generate-source:
     [exec] Traceback (most recent call last):
     [exec]   File "/Users/thomas_deschamps/Dev/bioformats/components/xsd-fu/xsd-fu", line 37, in <module>
     [exec]     from genshi.template import NewTextTemplate
     [exec] ImportError: No module named genshi.template

BUILD FAILED
/Users/thomas_deschamps/Dev/bioformats/ant/toplevel.xml:973: The following error occurred while executing this line:
/Users/thomas_deschamps/Dev/bioformats/components/ome-xml/build.xml:26: The following error occurred while executing this line:
/Users/thomas_deschamps/Dev/bioformats/ant/xsd-fu.xml:25: exec returned: 1

incorrect reading of indexed bmp files

This image (I don't have Flash installed, so I could not upload it to the QA tracker) is read incorrectly by Bio-Formats. Gnome image viewer, GraphicsMagick, and ImageMagick all handle it correctly.

$ gm identify -verbose /tmp/foo.bmp 
Image: /tmp/foo.bmp
  Format: BMP (Microsoft Windows bitmap image)
  Geometry: 256x100
  Class: PseudoClass
  Type: palette
  Depth: 8 bits-per-pixel component
  Channel Depths:
    Red:      8 bits
    Green:    8 bits
    Blue:     8 bits
  Colors: 256
    0: (255,  0,  0)      red
    1: (255,  2,  0)      #FF0200
    2: (255,  5,  0)      #FF0500
    [...]
    255: (170,  0,255)    #AA00FF
  Resolution: 29.25x29.25 pixels/centimeter
  Filesize: 51.2Ki
  Interlace: No
  Orientation: Unknown
  Background Color: white
  Border Color: #DFDFDF
  Matte Color: #BDBDBD
  Page geometry: 256x100+0+0
  Compose: Over
  Dispose: Undefined
  Iterations: 0
  Compression: Undefined
  Signature: 0a4fffb86e1d6dc8c67a8e04e2c3a63ca3c1576d9f59bb268f6a3a877e1b6ed6
  Tainted: False

Question regarding bfGetPlane function

Hi guys,

I would like to use this piece of code:

% Get OME Metainformation
MetaData = GetOMEData(filename);
% Initialize reader
reader = bfGetReader(filename);

% Preallocate array with size (Series, SizeX, SizeY, SizeC, SizeZ, SizeT)
image6d = zeros(MetaData.SeriesCount, MetaData.SizeX, MetaData.SizeY, MetaData.SizeC, MetaData.SizeZ, MetaData.SizeT);

% read image data
for series =1: MetaData.SeriesCount

for timepoint = 1: MetaData.SizeT
    for zplane = 1: MetaData.SizeZ
        for channel = 1: MetaData.SizeC

            % get frame for 1st series
            image6d(series, :, :, channel, zplane, timepoint) = bfGetPlane(reader, ???);

        end
    end
end

end

reader.close();

But my data set has:
Series = 2
Timepoints = 5
Z-Planes = 3
Channels = 2

So the problem is that I cannot address a specific plane using bfGetPlane? I already did the same in Python using:

img[seriesID, timepoint, zplane, channel, :, :] = rdr.read(series=seriesID, c=channel, z=zplane, t=timepoint, rescale=False)

Do you have a hint how to do the same in MATLAB? Or must I use bfopen to read all the data at once?
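If it helps: Bio-Formats linearizes Z/C/T into a single plane index according to the reader's dimension order, and the usual pattern shown in the Bio-Formats MATLAB docs is reader.setSeries(series - 1), then iPlane = reader.getIndex(zplane - 1, channel - 1, timepoint - 1) + 1, then bfGetPlane(reader, iPlane). The arithmetic behind getIndex, sketched here assuming the common XYZCT order (real code should call getIndex, which respects the file's actual dimension order):

```python
def plane_index_xyzct(z, c, t, size_z, size_c):
    """Zero-based plane index for dimension order XYZCT (Z varies fastest)."""
    return z + size_z * (c + size_c * t)
```

So for the dataset above (SizeZ=3, SizeC=2), the plane at z=2, c=1, t=4 sits at zero-based index 29 within the current series.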

Cheers,

Sebi

Micromanager: reader selection issue

Hello,

Following your fixes to parse config-specific metadata in MM datasets, I realised that I really don't understand why you rely on the *_metadata.txt in order to parse the config-specific metadata…

In fact (and as we discussed at large), the exact same metadata is present in the TIF tag 51123 of the tif files… Did I get it right that you want to keep the OME-TIF reader as the default one for full MM datasets, i.e. when the input path is the first file of the dataset? If this is the case, this comes with 2 major drawbacks:

  • the config-specific metadata are not parsed for datasets without *_metadata.txt files although they're valid MM datasets
  • there is no way to use the MM reader for the first position and hence to open this position only using the same syntax as another position

Could we come up with a filename pattern criterion that would allow the MM reader to be used for MM files (and the TIF tag 51123 to be parsed)?
Maybe this was already discussed but because I faced this other issue with the new reader and large datasets spread over several files, I couldn't even make sense of this basic choice anymore…

Thank you for your attention. Best,
Thomas
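A sketch of one possible shape for the criterion asked about above. The pattern is hypothetical, based on the prefix_MMStack[_n].ome.tif naming described elsewhere in this tracker; it is not something Bio-Formats implements:

```python
import re

# Hypothetical filename criterion: match Micro-Manager stack files named
# like prefix_MMStack.ome.tif or prefix_MMStack_1.ome.tif, so the MM
# reader could claim them even when no *_metadata.txt file is present.
MM_STACK = re.compile(r".*_MMStack(_\d+)?\.ome\.tiff?$")

def looks_like_mm_stack(filename):
    return bool(MM_STACK.match(filename))
```

A reader's isThisType check could apply such a pattern before falling back to the generic OME-TIFF or TIFF readers.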

Messed up XMLAnnotations in OMEXML metadata

The output when converting an Olympus OIB file (either to ome.tiff or ome) has invalid annotations, and the file does not pass validation. The xmlvalid output contains a bunch of these:

cvc-complex-type.2.4.a: Invalid content was found starting with element 'Value'. One of '{"http://www.openmicroscopy.org/Schemas/SA/2013-06":Description, "http://www.openmicroscopy.org/Schemas/SA/2013-06":AnnotationRef, "http://www.openmicroscopy.org/Schemas/SA/2013-06":Value}' is expected.

In the OME-XML the annotation elements look like this:

<XMLAnnotation ID="Annotation:1" Namespace="openmicroscopy.org/OriginalMetadata">
<Value xmlns="">
<OriginalMetadata>
<Key>[Channel 1 Parameters] PMTDetectingMode</Key>
<Value>0</Value></OriginalMetadata></Value></XMLAnnotation>

Instead of what the schema expects:

<XMLAnnotation Annotator="" ID="" Namespace="" xmlns="http://www.openmicroscopy.org/Schemas/SA/2013-06">
  <Description>{0,1}</Description>
  <AnnotationRef ID="">{0,unbounded}</AnnotationRef>
  <Value>{1,1}</Value>
</XMLAnnotation>

Actual xml is here
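A sketch (Python ElementTree; the schema URI is taken from the validator message above, and the inner OriginalMetadata namespace details are simplified) of serializing the annotation so the outer Value element stays in the SA namespace instead of being reset with xmlns="":

```python
import xml.etree.ElementTree as ET

SA = "http://www.openmicroscopy.org/Schemas/SA/2013-06"

# Build every element with an explicit SA-namespace qualified name, so no
# child falls back to the empty namespace during serialization.
ann = ET.Element(f"{{{SA}}}XMLAnnotation",
                 ID="Annotation:1",
                 Namespace="openmicroscopy.org/OriginalMetadata")
value = ET.SubElement(ann, f"{{{SA}}}Value")
om = ET.SubElement(value, f"{{{SA}}}OriginalMetadata")
ET.SubElement(om, f"{{{SA}}}Key").text = "[Channel 1 Parameters] PMTDetectingMode"
ET.SubElement(om, f"{{{SA}}}Value").text = "0"

serialized = ET.tostring(ann, encoding="unicode")
# every element now carries the SA namespace; no xmlns="" reset appears
```

The point is only the namespace handling: when Value is created in the SA namespace, the validator's "Invalid content" complaint about the un-namespaced Value goes away.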

ImageJ plugin import remote with https

When trying to import a remote URL over https, it fails. Presumably this is because the protocol is not recognised and Bio-Formats defaults to trying to open it as a file:

java.io.FileNotFoundException: https:/s3.amazonaws.com/dpwr/s3test/bus.png (No such file or directory)
    at java.io.RandomAccessFile.open0(Native Method)
    at java.io.RandomAccessFile.open(RandomAccessFile.java:316)
    at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
    at loci.common.NIOFileHandle.<init>(NIOFileHandle.java:123)
    at loci.common.NIOFileHandle.<init>(NIOFileHandle.java:139)
    at loci.common.NIOFileHandle.<init>(NIOFileHandle.java:148)
    at loci.common.Location.getHandle(Location.java:321)
    at loci.common.Location.getHandle(Location.java:291)
    at loci.common.Location.getHandle(Location.java:281)
    at loci.common.Location.getHandle(Location.java:271)
    at loci.common.Location.checkValidId(Location.java:346)
    at loci.formats.ImageReader.getReader(ImageReader.java:173)
    at loci.plugins.in.ImportProcess.createBaseReader(ImportProcess.java:628)
    at loci.plugins.in.ImportProcess.initializeReader(ImportProcess.java:486)
    at loci.plugins.in.ImportProcess.execute(ImportProcess.java:139)
    at loci.plugins.in.Importer.showDialogs(Importer.java:137)
    at loci.plugins.in.Importer.run(Importer.java:75)
    at loci.plugins.LociImporter.run(LociImporter.java:78)
    at ij.IJ.runUserPlugIn(IJ.java:199)
    at ij.IJ.runPlugIn(IJ.java:163)
    at ij.Executer.runCommand(Executer.java:132)
    at ij.Executer.run(Executer.java:65)
    at java.lang.Thread.run(Thread.java:745)

ImageJ's File -> Import -> URL does manage to open from https URLs.
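The fix presumably belongs at the handle dispatch point (loci.common.Location in the trace above). The idea, sketched in Python with hypothetical handle names:

```python
from urllib.parse import urlparse

def pick_handle(id_string):
    """Route by scheme the way a Location.getHandle-style dispatcher would.

    Simplified sketch: a real check must also exclude Windows drive
    letters (urlparse reports "c" as the scheme of C:\\data\\img.tif).
    """
    scheme = urlparse(id_string).scheme
    if scheme in ("http", "https"):
        return "url-handle"
    return "file-handle"
```

With https routed to a URL handle, the RandomAccessFile path in the trace would never see the mangled "https:/..." string.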

Image Dimension problem with CZIReader

Hi guys,

I just realized a serious problem with the CZIReader of Bio-Formats which puzzles me. Here is what I acquired in ZEN:

4 Scenes with a 2x2 TileRegion per scene --> 16 image series
3 Timepoints
5 Z-Planes
2 Channels
Camera ROI = 640x640
Overlap = 10%

When I open this one in Fiji, the XML MetaInfo shows an image size of 1216x1216, which is the size of one 2x2 TileRegion with 10% overlap using a 640x640 camera ROI. But the result in Fiji is strange, since the single tiles are always placed wrongly.

I expected to see an image size of 640 x 640 and a stack of 16 image series. See the attached screenshot.

But when I address the single image series using python-bioformats, the size of a single tile is shown correctly as 640x640.

I uploaded the CZI to my dropbox account: https://dl.dropboxusercontent.com/u/623476/20160425_BF_CZI.czi

Any help or clarification is greatly appreciated, since those black empty areas are not really useful. The CZI was acquired with the latest upcoming ZEN Blue 2.3 release, which is about to be announced officially.

Cheers,

Sebi (from Zeiss)

(screenshot: bf5_1_9_czi_dimproblem)

Bio-Formats Exporter can deadlock on OS X

When attempting to reproduce #2237 on OS X, the Bio-Formats Exporter dialog never appears on my OS X 10.11 system. Instead, I get a deadlock:

Found one Java-level deadlock:
=============================
"Bio-Formats Exporter":
  waiting to lock monitor 0x000000011a806048 (object 0x00000006444d46d8, a java.lang.Object),
  which is held by "AWT-EventQueue-0"
"AWT-EventQueue-0":
  waiting to lock monitor 0x000000011a804e68 (object 0x000000066aa10ff8, a java.awt.Component$AWTTreeLock),
  which is held by "Bio-Formats Exporter"

Java stack information for the threads listed above:
===================================================
"Bio-Formats Exporter":
    at com.apple.laf.AquaFileSystemModel.getRowCount(AquaFileSystemModel.java:194)
    - waiting to lock <0x00000006444d46d8> (a java.lang.Object)
    at javax.swing.JTable.getRowCount(JTable.java:2662)
    at javax.swing.plaf.basic.BasicTableUI.createTableSize(BasicTableUI.java:1692)
    at javax.swing.plaf.basic.BasicTableUI.getPreferredSize(BasicTableUI.java:1733)
    at javax.swing.JComponent.getPreferredSize(JComponent.java:1661)
    at javax.swing.ScrollPaneLayout.preferredLayoutSize(ScrollPaneLayout.java:495)
    at java.awt.Container.preferredSize(Container.java:1788)
    - locked <0x000000066aa10ff8> (a java.awt.Component$AWTTreeLock)
    at java.awt.Container.getPreferredSize(Container.java:1773)
    at javax.swing.JComponent.getPreferredSize(JComponent.java:1663)
    at java.awt.BorderLayout.preferredLayoutSize(BorderLayout.java:719)
    - locked <0x000000066aa10ff8> (a java.awt.Component$AWTTreeLock)
    at java.awt.Container.preferredSize(Container.java:1788)
    - locked <0x000000066aa10ff8> (a java.awt.Component$AWTTreeLock)
    at java.awt.Container.getPreferredSize(Container.java:1773)
    at javax.swing.JComponent.getPreferredSize(JComponent.java:1663)
    at java.awt.BorderLayout.preferredLayoutSize(BorderLayout.java:719)
    - locked <0x000000066aa10ff8> (a java.awt.Component$AWTTreeLock)
    at java.awt.Container.preferredSize(Container.java:1788)
    - locked <0x000000066aa10ff8> (a java.awt.Component$AWTTreeLock)
    at java.awt.Container.getPreferredSize(Container.java:1773)
    at javax.swing.JComponent.getPreferredSize(JComponent.java:1663)
    at javax.swing.BoxLayout.checkRequests(BoxLayout.java:483)
    at javax.swing.BoxLayout.layoutContainer(BoxLayout.java:424)
    - locked <0x0000000644741298> (a javax.swing.BoxLayout)
    at java.awt.Container.layout(Container.java:1503)
    at java.awt.Container.doLayout(Container.java:1492)
    at java.awt.Container.validateTree(Container.java:1688)
    at java.awt.Container.validateTree(Container.java:1697)
    at java.awt.Container.validateTree(Container.java:1697)
    at java.awt.Container.validateTree(Container.java:1697)
    at java.awt.Container.validateTree(Container.java:1697)
    at java.awt.Container.validate(Container.java:1623)
    - locked <0x000000066aa10ff8> (a java.awt.Component$AWTTreeLock)
    at java.awt.Container.validateUnconditionally(Container.java:1660)
    - locked <0x000000066aa10ff8> (a java.awt.Component$AWTTreeLock)
    at java.awt.Window.pack(Window.java:818)
    at javax.swing.JFileChooser.createDialog(JFileChooser.java:805)
    at javax.swing.JFileChooser.showDialog(JFileChooser.java:732)
    at javax.swing.JFileChooser.showSaveDialog(JFileChooser.java:664)
    at loci.plugins.out.Exporter.run(Exporter.java:236)
    at loci.plugins.LociExporter.run(LociExporter.java:75)
    at ij.plugin.filter.PlugInFilterRunner.processOneImage(PlugInFilterRunner.java:263)
    at ij.plugin.filter.PlugInFilterRunner.<init>(PlugInFilterRunner.java:112)
    at ij.IJ.runUserPlugIn(IJ.java:214)
    at ij.IJ.runPlugIn(IJ.java:176)
    at ij.Executer.runCommand(Executer.java:136)
    at ij.Executer.run(Executer.java:65)
    at java.lang.Thread.run(Thread.java:745)
"AWT-EventQueue-0":
    at java.awt.Component.invalidate(Component.java:2920)
    - waiting to lock <0x000000066aa10ff8> (a java.awt.Component$AWTTreeLock)
    at java.awt.Container.invalidate(Container.java:1580)
    at javax.swing.JComponent.revalidate(JComponent.java:4862)
    at javax.swing.JTable.tableRowsInserted(JTable.java:4482)
    at javax.swing.JTable.tableChanged(JTable.java:4407)
    at javax.swing.table.AbstractTableModel.fireTableChanged(AbstractTableModel.java:296)
    at javax.swing.table.AbstractTableModel.fireTableRowsInserted(AbstractTableModel.java:231)
    at com.apple.laf.AquaFileSystemModel$DoChangeContents.run(AquaFileSystemModel.java:458)
    - locked <0x00000006444d46d8> (a java.lang.Object)
    - locked <0x000000063fc952f0> (a java.lang.Object)
    at java.awt.event.InvocationEvent.dispatch(InvocationEvent.java:312)
    at java.awt.EventQueue.dispatchEventImpl(EventQueue.java:745)
    at java.awt.EventQueue.access$300(EventQueue.java:103)
    at java.awt.EventQueue$3.run(EventQueue.java:706)
    at java.awt.EventQueue$3.run(EventQueue.java:704)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.security.ProtectionDomain$1.doIntersectionPrivilege(ProtectionDomain.java:76)
    at java.awt.EventQueue.dispatchEvent(EventQueue.java:715)
    at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:242)
    at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:161)
    at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:150)
    at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:146)
    at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:138)
    at java.awt.EventDispatchThread.run(EventDispatchThread.java:91)

Happens with Oracle Java 1.7.0_80 as well as Oracle Java 1.8.0_66. And it happens with both the release and 5.1 dev versions of Bio-Formats.

AggregateMetadata.java generated file has '\' at top of file and fails to compile

I seem to be doing something wrong when I try to build the project on my Windows system. I cloned the repo and used Ant to build it after installing Python 2.7 and Genshi. I also updated /etc/gitconfig to set autocrlf = input.

It looks like the AggregateMetadata.java is generated incorrectly because the first 17 characters in the file are: \

Any tips on how I can fix this so my build will complete? My command to build is 'ant jars' or just 'mvn'. Both give the same error (see below).

Thanks!

c:\Users\Richard\Documents\GitHub\bioformats\components\ome-xml\build\classes
[javac] warning: [options] bootstrap class path not set in conjunction with -source 1.6
[javac] c:\Users\Richard\Documents\GitHub\bioformats\components\ome-xml\build\src\ome\xml\meta\AggregateMetadata.java

:1: error: illegal character: ''
[javac]
[javac] ^

portable pixmap (*.ppm) unsupported but read incorrectly

I was going to report an error when reading PPM images, but while trying to upload the image via your QA system I noticed the list of supported formats, and it turns out PPM is not listed as supported. However, these files are being read anyway, which I find kind of strange (maybe they are being read as some other format?). I noticed the incorrect reading of the image in OMERO and in Fiji via the Bio-Formats plugin (when using the normal ImageJ reader, the image is read correctly).

Here's a small PPM of Lena I just found on the internet, if you need it for testing.

New warning message about SlideBook6Reader.dll with 5.1.9

Hi guys,

when I use the bioformats_package.jar from 5.1.9 from Python I get the warnings below, but when I switch back to version 5.1.8 they do not pop up. In both cases my CZI file is read correctly, but I am just wondering...
When I open the file directly from within Fiji, no warning appears no matter which version (5.1.8 or 5.1.9) I use.

Sebi

C:\Anaconda\python.exe C:/Users/M1SRH/Documents/Python_Projects/BioFormatsRead/test_get_image6d.py
09:36:35.054 [Thread-0] DEBUG loci.common.NIOByteBufferProvider - Using mapped byte buffer? false
Apr 25, 2016 9:36:35 AM org.scijava.nativelib.BaseJniExtractor extractJni
INFORMATION: Couldn't find resource META-INF/lib/windows_64/ SlideBook6Reader.dll
Apr 25, 2016 9:36:35 AM org.scijava.nativelib.NativeLibraryUtil loadNativeLibrary
WARNUNG: IOException creating DefaultJniExtractor
java.io.IOException: Couldn't find resource META-INF/lib/windows_64/ SlideBook6Reader.dll
at org.scijava.nativelib.BaseJniExtractor.extractJni(BaseJniExtractor.java:144)
at org.scijava.nativelib.NativeLibraryUtil.loadNativeLibrary(NativeLibraryUtil.java:265)
at loci.formats.in.SlideBook6Reader.(SlideBook6Reader.java:93)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Unknown Source)
at loci.formats.ClassList.(ClassList.java:127)

Analyze images are always assumed to be Big Endian

As the title says, images stored in the Analyze format (hdr/img) are always assumed to be in big-endian order.
This happens both when using the Bio-Formats importer in ImageJ and in Icy.
It appears that calling isLittleEndian() on the AnalyzeReader always returns false.

Best,
Christian
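A common remedy in Analyze/NIfTI readers, offered here as a sketch rather than a statement about the Bio-Formats code: the first header field, sizeof_hdr, is defined to be 348, so whichever byte order decodes it as 348 is the file's byte order.

```python
import struct

def analyze_endianness(header_bytes):
    """Detect the byte order of an Analyze 7.5 header.

    The int32 sizeof_hdr field at offset 0 is always 348; try both
    byte orders and keep the one that yields that value.
    """
    if struct.unpack_from("<i", header_bytes)[0] == 348:
        return "little"
    if struct.unpack_from(">i", header_bytes)[0] == 348:
        return "big"
    raise ValueError("not an Analyze header")
```

An equivalent check in the reader's initFile would let isLittleEndian() return the correct value per file.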

CZI files with bitdepths 12/14/36/42 can not be loaded

There's a class of CZI images, stemming from cameras, with a BitDepth of 12 or 14 in B/W, resp. 36 and 42 in color. Trying to read them with the current plugin causes a post-mortem in ImageJ (see below). To my untrained eye it always looks pretty much like one and the same error: I see the "2.5" in the first line every time it happens.

If this kind of image is cast in ZEN to 16-bit (resp. 48-bit for color) it can be read OK; I am sending a pair of them, before and after casting, via e-mail.

You will see that, judged by their pure physical size, the two files are for all practical purposes one and the same image. Could the reason for the failure be of a semantic nature, with BitDepth mistakenly understood, or suggested, as the pixel size on disk? In simple terms, BitDepth just means the number of significant bits, not the pixel size on disk or in memory. Note, however, that the cast image has a BitDepth of 36 bits as well, but it does not cause any problems.

Of course I could have a look at the differences between the two images; it could be inconsistent or broken metadata in the original 36-bit file or something similarly prosaic... On the other hand, I am concerned about all the CZI files created so far with this signature. The users should be spared the roadblock, whatever the cause.
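The distinction drawn above, significant bits (BitDepth) versus storage size, in a small sketch. The formula is mine, assuming components are padded to 16 bits, which matches the numbers in the report:

```python
import math

def storage_bits(significant_bits, component_bits=16):
    """Bits actually used on disk when components are padded to 16 bits.

    12- and 14-bit mono data occupy one 16-bit component; 36- and 42-bit
    color means three components of 12/14 significant bits each, padded
    to 3 x 16 = 48 bits.
    """
    return math.ceil(significant_bits / component_bits) * component_bits
```

This reproduces the observation that casting a 12/14-bit image yields 16-bit (48-bit for color) files of essentially the same physical size.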

Sincere regards

Vito

PS: "Unfortunately we dont support that file type yet": I can't attach the two images I am mentioning above. I will send them to Curtis via PM. Sorry.


java.lang.NumberFormatException: For input string: "2.5"
at java.lang.NumberFormatException.forInputString(Unknown Source)
at java.lang.Integer.parseInt(Unknown Source)
at java.lang.Integer.(Unknown Source)
at loci.formats.in.ZeissCZIReader.translateInformation(ZeissCZIReader.java:1010)
at loci.formats.in.ZeissCZIReader.translateMetadata(ZeissCZIReader.java:758)
at loci.formats.in.ZeissCZIReader.initFile(ZeissCZIReader.java:444)
at loci.formats.FormatReader.setId(FormatReader.java:1178)
at loci.plugins.in.ImportProcess.initializeFile(ImportProcess.java:482)
at loci.plugins.in.ImportProcess.execute(ImportProcess.java:146)
at loci.plugins.in.Importer.showDialogs(Importer.java:141)
at loci.plugins.in.Importer.run(Importer.java:79)
at loci.plugins.LociImporter.run(LociImporter.java:81)
at ij.IJ.runUserPlugIn(IJ.java:184)
at ij.IJ.runPlugIn(IJ.java:151)
at ij.Executer.runCommand(Executer.java:127)
at ij.Executer.run(Executer.java:64)
at ij.IJ.run(IJ.java:250)
at ij.macro.Functions.doRun(Functions.java:580)
at ij.macro.Functions.doFunction(Functions.java:83)
at ij.macro.Interpreter.doStatement(Interpreter.java:219)
at ij.macro.Interpreter.doBlock(Interpreter.java:542)
at ij.macro.Interpreter.runFirstMacro(Interpreter.java:641)
at ij.macro.Interpreter.doStatement(Interpreter.java:246)
at ij.macro.Interpreter.doStatements(Interpreter.java:207)
at ij.macro.Interpreter.run(Interpreter.java:104)
at ij.macro.Interpreter.run(Interpreter.java:74)
at ij.macro.Interpreter.run(Interpreter.java:85)
at ij.plugin.Macro_Runner.runMacro(Macro_Runner.java:105)
at ij.plugin.Macro_Runner.runMacroFile(Macro_Runner.java:90)
at ij.IJ.runMacroFile(IJ.java:118)
at ij.OtherInstance$Implementation.sendArgument(OtherInstance.java:62)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at sun.rmi.server.UnicastServerRef.dispatch(Unknown Source)
at sun.rmi.transport.Transport$1.run(Unknown Source)
at sun.rmi.transport.Transport$1.run(Unknown Source)
at java.security.AccessController.doPrivileged(Native Method)
at sun.rmi.transport.Transport.serviceCall(Unknown Source)
at sun.rmi.transport.tcp.TCPTransport.handleMessages(Unknown Source)
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(Unknown Source)
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)

FlowSightReader error

I have a .cif file from a user that fails to read with the following error:

Channel count (1) does not match number of channel names (6) in string "C0|C1|C21|C3|C4|C5

This is an error in interpreting one of the TIF tags in the file; I will contribute a patch that fixes it today.

Metadata skew in Bioformats for LightsheetZ1 CZI data

Hi,

I realized that different versions of Bioformats store the individual stack size for the views under different tags. I implemented a workaround here fiji/SPIM_Registration@d10db40, but it seems that this is not right.

The current Fiji ships Bio-Formats 5.1.1, which stores the z-stack size metadata under the key:
SizeZ|View|V|Image|Information #1
SizeZ|View|V|Image|Information #2
SizeZ|View|V|Image|Information #3
...

In Eclipse I still use Bio-Formats 5.1.0, where the key is
Information|Image|V|View|SizeZ #1
Information|Image|V|View|SizeZ #2
Information|Image|V|View|SizeZ #3
...

Or is this change on purpose?

Thanks a lot, Stephan

Problem loading Picoquant .bin files

I'm having some problems loading a Picoquant .bin file using bioformats. The file is here:
https://www.dropbox.com/s/k03jdxgpxhxs9ui/VLDLR_mGFP_with%20ligand%20%20.zip?dl=0

The file is 128x128 pixels with 3125 channels.
I've checked that it conforms to the format specified by PicoQuant (http://www.tcspc.com/doku.php/glossary:pre-histogrammed_image) and I'm able to read it correctly in MATLAB.

Bioformats however returns a width of 0px and a height of 204,816,160px.

The first 16 bytes of the data are: 80 00 00 00 | 80 00 00 00 | 00 00 20 3f | 35 0c 00 00 which correspond to the width (int32), height(int32), px res (float) and number of channels (int32) respectively.

The returned height, 204,816,160, is 20 3f 35 0c in hex. It looks to me like the first 6 bytes might somehow be getting discarded. Looking at PQBinReader.java, though, it's not obvious to me how that might be happening.
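The reported header layout and the 6-byte-shift hypothesis check out numerically. Here they are decoded with Python's struct, using the byte values copied from the report:

```python
import struct

# The 16 header bytes from the report: little-endian int32 width,
# int32 height, float32 pixel resolution, int32 channel count.
header = bytes.fromhex("80000000" "80000000" "0000203f" "350c0000")
width, height, px_res, channels = struct.unpack("<iifi", header)
# width=128, height=128, px_res=0.625, channels=3125

# Dropping the first 6 bytes reproduces the bad values exactly:
shifted = header[6:] + bytes(6)
bad_w, bad_h = struct.unpack_from("<ii", shifted)
# bad_w=0, bad_h=204816160 -- the width/height Bio-Formats returns
```

So whatever PQBinReader is doing, its effect is indistinguishable from starting the header read 6 bytes late.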

Any ideas would be greatly appreciated!
@imunro have you seen anything like this before?

Cheers
Sean

octave package: `bfopen` overrides logging settings and is too verbose

The function bfopen in the Octave package overrides the logging settings. The logging level should be restored to its previous state afterwards. In Octave there is an unwind_protect block for this, but I'm not sure how to do the same in Matlab.

But is this change of logging level really necessary?

The function is also quite verbose by default, printing to stdout in real time as it reads the images. While some users may prefer this on an interactive session, this is not very nice in other cases.

Exporting single plane PNG and TIFF files with bfconvert get corrupted

I tested this with the latest release of the Bio-Formats jar file (5.14 at the time of writing).

My goal was to export all timepoints of a lsm file to single RGB16 PNG images.

Steps to reproduce:

  1. I used a .lsm file with 1000 timepoints, 3 channels and an XYZ resolution of 512x512x1.
  2. Command line for bfconvert was:
    ./bfconvert -merge 'input.lsm' 'output_%t.png'
  3. I used pngcheck, ImageMagick and GraphicsMagick to test the resulting PNGs (APNG images, to be precise), which all showed me that they were corrupted.

I then tested it with the .tif extension, so I could eventually convert the TIFF images to PNG in a second step, but they get corrupted too: something about an illegal entry in the TIFF directory.

I hope that I provided enough information. If you need any more just let me know.

./bfconvert failed to launch

I just noticed this using bftools.zip from http://downloads.openmicroscopy.org/bio-formats/5.0.0/. It appears that bfconvert does not work, while the other scripts do.

$ ./bfconvert /home/test/test.ome.tif
Error: Could not find or load main class loci.formats.tools.ImageConverter

I also looked inside bioformats_package.jar, and loci.formats.tools.ImageConverter does not exist there.
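For anyone wanting to reproduce the check, the presence of a class in a jar can be verified with a few lines of stdlib Java (this is just a diagnostic, equivalent to listing the archive contents):

```java
import java.util.jar.JarFile;

public class JarClassCheck {
    // True if the jar at jarPath contains the compiled class for className.
    static boolean hasClass(String jarPath, String className) throws Exception {
        String entry = className.replace('.', '/') + ".class";
        try (JarFile jar = new JarFile(jarPath)) {
            return jar.getEntry(entry) != null;
        }
    }

    public static void main(String[] args) throws Exception {
        // e.g. java JarClassCheck bioformats_package.jar
        System.out.println(hasClass(args[0], "loci.formats.tools.ImageConverter"));
    }
}
```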

Maybe an incompatibility with BF 4 ?

Tested on Ubuntu.

Thank you

[bug] Micromanager: reader crashes with stack format and large datasets

Hello,

Following your fixes to parse config-specific metadata in MM datasets, I realised that I had focused on providing small datasets (for obvious practical reasons). As a result, the current MM reader (5.1.8) crashes when a dataset is split over several TIFF files.

In more detail:

  • when MM datasets are stored in stack format and the dataset size exceeds 4.7 GB, the data is split over several files (e.g. prefix_1_MMStack.ome.tif, prefix_1_MMStack_1.ome.tif, etc.)
  • because MM can save one TIFF file (or set of TIFF files if > 4.7 GB) per position, it is sometimes useful to use Bio-Formats to open a single PosXX (in fact this would be my primary use case!)
  • with BF 5.1.7, the following readers were used depending on the input path:
    • first TIFF file of the dataset: OME-TIFF reader
    • any other TIFF file: TIFF reader
  • with the current BF 5.1.8, the following readers are used depending on the input path:
    • first TIFF file of the dataset: OME-TIFF reader
    • any other TIFF file with an associated *_metadata.txt file: MM reader (OK provided the position dataset isn't > 4.7 GB)
    • any other TIFF file (without a *_metadata.txt file): TIFF reader
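To make the naming convention above concrete, here is a small sketch that classifies MM stack file names. The pattern is my reading of the example names in this report, not the reader's actual selection logic:

```java
import java.util.regex.Pattern;

public class MMStackNames {
    // Assumed convention from the report: prefix_1_MMStack.ome.tif is the
    // first file; prefix_1_MMStack_1.ome.tif etc. are continuation files.
    static final Pattern STACK_FILE = Pattern.compile(".*_MMStack(_\\d+)?\\.ome\\.tif");

    static boolean isStackFile(String name) {
        return STACK_FILE.matcher(name).matches();
    }

    static boolean isFirstFile(String name) {
        // First file has no numeric suffix after _MMStack
        return isStackFile(name) && !name.matches(".*_MMStack_\\d+\\.ome\\.tif");
    }

    public static void main(String[] args) {
        System.out.println(isFirstFile("prefix_1_MMStack.ome.tif"));   // true
        System.out.println(isFirstFile("prefix_1_MMStack_1.ome.tif")); // false
    }
}
```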

I would be happy to provide you with a large demo dataset, or you can easily create one using MM's demo config and the Multi-Dimensional Acquisition window (select e.g. enough time points that the dataset size exceeds 4.7 GB).

Also, as far as I understand it, this means there is no easy way to open only the first position of a dataset: when that path is given as input, the OME-TIFF reader is triggered and loads the full dataset. Although I don't fully understand how reader selection works (and I see that you want to keep a way to trigger the OME-TIFF reader for such datasets), I find this very cumbersome! What other designs would be possible?

Thank you in advance for your attention. Best,
Thomas
