
rawspeed's Introduction

WARNING: rawspeed is now maintained in the darktable-org/rawspeed repository! Do NOT open new issues in this old repo! Do NOT send new pull requests to this old repo!


# RawSpeed Developer Information

## What is RawSpeed?

RawSpeed…

  • is capable of decoding various images in RAW file format.
  • is intended to provide the fastest decoding speed possible.
  • supports the most common DSLR and similar-class brands.
  • supplies unmodified RAW data, optionally scaled to 16 bit, or normalized to 0→1 floating-point data.
  • supplies the CFA layout for all known cameras.
  • provides automatic black level calculation for cameras that supply such information.
  • optionally crops off “junk” areas of images, which contain no valid image information.
  • can gain support for new cameras through definitions added to an xml file.
  • is extensively crash-tested on broken files.
  • decodes images from memory, not a file stream. You can use a memory-mapped file, but it is rarely faster.
  • is currently tested on more than 500 unique cameras.
  • is open source under the LGPL v2 license.

RawSpeed does NOT…

  • read metadata information, besides white balance information.
  • do any color correction or white balance correction.
  • de-mosaic the image.
  • supply a viewable image or thumbnail.
  • crop the image to the same size as the manufacturer does, but instead supplies the biggest possible image.

So RawSpeed is not intended to be a complete RAW file display library; it only acts as the first decoding stage, delivering the RAW data to your application.

## Version 2, new cameras and features

  • Support for Sigma Foveon cameras.
  • Support for Fuji cameras.
  • Support for old Minolta, Panasonic, and Sony cameras (contributed by Pedro Côrte-Real).
  • Arbitrary CFA definition sizes.
  • Use of pugixml for xml parsing, to avoid depending on libxml.

## Getting Source Code

You can get the latest version from here. You will need to include the “RawSpeed” and “data” folders in your own project.

This includes a Microsoft Visual Studio project to build a test application. The test application uses libgfl to output 16 bit images. This library is not required for your own implementation though.

To see a GCC-based implementation, you can check out this directory, which is the implementation Rawstudio uses to load images. This also includes an automake file for setup. You can also have a look at the darktable implementation, for which there is a CMake-based build file.

## Background of RawSpeed

So my main objectives were to make a very fast loader that worked for 75% of the cameras out there, and was able to decode a RAW file at close to optimal speed. The remaining 25% of cameras could be serviced by a more generic loader, or by converting their images to DNG – which, as a side note, usually compresses better than your camera does.

RawSpeed is not at the moment a separate library, so you have to include it in your project directly.

## Include files

All needed headers are available by including “RawSpeed-API.h”. You must have the pthread library and headers installed and available.

RawSpeed uses pthreads and libxml2, which are the only external requirements besides the standard C/C++ libraries. As of v2, libxml is no longer required.

You must implement a single function, “int rawspeed_get_number_of_processor_cores();”, which should return the maximum number of threads to use for decoding, if multithreaded decoding is possible.

Everything is encapsulated in a “RawSpeed” namespace. To avoid clutter, the examples below assume you have a “using namespace RawSpeed;” before the code.
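A minimal sketch of that function, assuming C++11's std::thread is available; the fallback to 1 is this example's own choice, since hardware_concurrency() may return 0 when the core count cannot be determined:

```cpp
#include <thread>

// Sketch: the single function RawSpeed requires the host application
// to provide. Returns the number of hardware threads, or 1 when the
// standard library cannot determine it.
int rawspeed_get_number_of_processor_cores() {
  unsigned n = std::thread::hardware_concurrency();
  return n > 0 ? static_cast<int>(n) : 1;
}
```

Returning 1 here effectively disables multithreaded decoding, which is a safe default for a host application that has its own threading scheme.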

## The Camera Definition file

This file describes basic information about different cameras, so new cameras can be supported without code changes. See the separate documentation on the Camera Definition File.

The camera definitions are read into a CameraMetaData object, which you can retain for later re-use. You initialize this data by doing:

static CameraMetaData *metadata = NULL;
if (NULL == metadata)
{
  try {
    metadata = new CameraMetaData("path_to_cameras.xml");
  } catch (CameraMetadataException &e) {
    // Reading metadata failed. e.what() will contain error message.
  }
}

The memory impact of this object is quite small, so you don’t have to free it every time. You can, however, delete and re-create it if you know the metadata file has been updated.

You can disable specific cameras in the xml file, or if you would want to do it in code, you can use:

    // Disable a specific camera:
    metadata->disableCamera("Canon", "Canon EOS 100D");

    // Disable all cameras from a maker:
    metadata->disableCamera("Fuji");

## Using RawSpeed

You need to have the file data in a FileMap object. This can either be created by supplying the file content in memory using FileMap(buffer_pointer, size_of_buffer), or by using a “FileReader” object to read the content of a file, like this:

FileReader reader(filename);
FileMap* map = NULL;
try {
  map = reader.readFile();
} catch (FileIOException &e) {
  // Handle errors
}

The next step is decoding. First, get a decoder:

RawParser parser(map);
RawDecoder *decoder = parser.getDecoder();

This will do basic parsing of the file and return a decoder that is capable of decoding the image. If no decoder can be found, or another error occurs, a “RawDecoderException” will be thrown. The next step is to determine whether the specific camera is supported:

decoder->failOnUnknown = false;
decoder->checkSupport(metadata);

The “failOnUnknown” property indicates whether the decoder should refuse to decode unknown cameras. When it is false, RawSpeed will only refuse to decode the image if it is confirmed that the camera type cannot be decoded correctly. If the image isn’t supported, a “RawDecoderException” will be thrown.

Reaching this point should be very cheap in terms of CPU time, so the support check is quick if the file data is readily available. Next we decode the image:

decoder->decodeRaw();
decoder->decodeMetaData(metadata);
RawImage raw = decoder->mRaw;

This will decode the image and apply the metadata information. At this point the RawImage contains completely untouched Raw data, except that the image has been cropped to the active image area in decodeMetaData. If a fatal error occurs, a RawDecoderException is thrown.

Non-fatal errors are pushed into a vector in the decoder object called "errors". With these types of errors there WILL be a raw image available, but it will likely contain junk sections in the undecodable parts; as much as could be decoded will be available. So treat these messages as warnings.

Another thing to note here is that the RawImage object is automatically refcounted, so you can pass the object around without worrying about the image being freed before all instances are out of scope. Do however keep this in mind if you pass the pointer to image data to another part of your application.

raw->scaleBlackWhite();

This will apply black/white scaling to the image, so the data is normalized into the 0→65535 range no matter what the sensor range is (for 16 bit images). This function does not throw any errors. Now you can retrieve information about the image:

int components_per_pixel = raw->getCpp();
RawImageType type = raw->getDataType();
bool is_cfa = raw->isCFA;

Components per pixel indicates how many components are present per pixel. Common values are 1 for CFA images, and 3, found in some DNG images for instance. Do note that you cannot assume an image is CFA just because it is 1 cpp – greyscale dng images from things like scanners can be saved like that.

The RawImageType can be TYPE_USHORT16 (most common), which indicates unsigned 16 bit data, or TYPE_FLOAT32 (found in some DNGs).

isCFA indicates whether the image has all components in every pixel, or whether it was taken with a color filter array. This usually corresponds to the number of components per pixel (1 for CFA, 3 for non-CFA).

The ColorfilterArray contains information about the placement of colors in the CFA:

if (is_cfa) {
  ColorFilterArray cfa = raw->cfa;
  int dcraw_filter = cfa.getDcrawFilter();
  int cfa_width = cfa.size.x;
  int cfa_height = cfa.size.y;
  CFAColor c = cfa.getColorAt(0,0);
}

To get this information as dcraw-compatible filter information, you can use the getDcrawFilter() function.

You can also use getColorAt(x, y) to get the color at a single position. Note that unlike dcraw, RawSpeed only supports 2×2 patterns, so you can reuse this information across the whole image. CFAColor can be CFA_RED, CFA_GREEN, or CFA_BLUE, for instance.
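Because the pattern repeats every 2×2 pixels, the color of any pixel can be derived from the top-left 2×2 block. A self-contained sketch of the lookup (the CFAColor enum here is a minimal stand-in for illustration, not RawSpeed's full definition):

```cpp
// Minimal stand-in for RawSpeed's CFAColor, for illustration only.
enum CFAColor { CFA_RED, CFA_GREEN, CFA_BLUE };

// The CFA pattern repeats every 2x2 pixels, so the color at any (x, y)
// is determined purely by the parity of the coordinates.
CFAColor colorAt(const CFAColor pattern[2][2], int x, int y) {
  return pattern[y & 1][x & 1];
}
```

With the real API, the same idea amounts to querying only the four colors of the top-left 2×2 block via getColorAt and indexing by (x & 1, y & 1).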

Finally information about the image itself:

unsigned char* data = raw->getData(0,0);
int width = raw->dim.x;
int height = raw->dim.y;
int pitch_in_bytes = raw->pitch;

The getData(x, y) function will give you a pointer to the Raw data at pixel x, y. This is the coordinate after crop, so you can start copying data right away. Don’t call this function for every pixel; instead increment the pointer yourself. The width and height give you the size of the image in pixels – again after crop.

Pitch is the number of bytes between lines; it is usually NOT width * components_per_pixel * bytes_per_component. So to calculate a pointer to line y, use &data[y * raw->pitch].
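To make the pitch arithmetic concrete, here is a self-contained sketch (the helper name and signature are this example's own) that copies a pitched buffer, as getData would return it, into a tightly packed one:

```cpp
#include <cstddef>
#include <cstring>
#include <vector>

// Copy a pitched image into a tightly packed buffer. "pitch" is the
// number of bytes between line starts and may exceed
// width * bytesPerPixel because of alignment padding.
std::vector<unsigned char> unpitch(const unsigned char* data,
                                   int width, int height,
                                   int pitch, int bytesPerPixel) {
  const int rowBytes = width * bytesPerPixel;
  std::vector<unsigned char> out(static_cast<std::size_t>(rowBytes) * height);
  for (int y = 0; y < height; ++y)
    std::memcpy(&out[static_cast<std::size_t>(y) * rowBytes],
                &data[static_cast<std::size_t>(y) * pitch], rowBytes);
  return out;
}
```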

Finally to clean up, use:

delete map;
delete decoder;

Actually, the map and decoder can be deallocated as soon as the metadata has been decoded. The RawImage will automatically be deallocated when it goes out of scope and the decoder has been deallocated. After that, all data pointers that have been retrieved will no longer be usable.

## Tips & Tricks

You will most likely find that a relatively long time is spent actually reading the file. The biggest trick to speeding up raw reading is to have some sort of prefetching going on while the file is being decoded. This is the main reason why RawSpeed decodes from memory, and doesn’t use direct file reads while decoding.

The simplest solution is to start a thread that simply reads the file, and rely on the system cache to cache the data. This is fairly simple and works in 99% of all cases. So if you are doing batch processing, simply start a task reading the next file when the current image starts decoding. This ensures your file is read linearly, which gives the highest possible throughput.
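A sketch of that idea, assuming C++11 (the reader function is this example's own; the bytes can be handed to RawSpeed afterwards, or simply discarded if you only want the OS cache warmed):

```cpp
#include <fstream>
#include <future>
#include <iterator>
#include <string>
#include <vector>

// Read a whole file into memory, linearly, warming the OS cache.
std::vector<char> read_whole_file(const std::string& path) {
  std::ifstream f(path, std::ios::binary);
  return std::vector<char>(std::istreambuf_iterator<char>(f),
                           std::istreambuf_iterator<char>());
}

// While decoding the current image, prefetch the next one:
//   auto next = std::async(std::launch::async, read_whole_file, nextPath);
//   ... decode current image ...
//   std::vector<char> data = next.get();  // usually ready by now
```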

A more complex option is to read the file to a memory portion, which is then given to RawSpeed to decode. This might be a few milliseconds faster in the best case, but I have found no practical difference between that and simply relying on system caching.

You might also want to try memory-mapped files. However, in practical tests this approach has proven just as fast at best (when the file is cached), or slower (for uncached files).

## Bad pixel elimination

A few cameras mark bad pixels within their RAW files in various ways. For the cameras where we know how, this is picked up by RawSpeed. By default these pixels are eliminated by 4-way interpolation from the closest valid pixels, found in an on-axis search from the current pixel.

If you want to do bad pixel interpolation yourself, you can set:

decoder->interpolateBadPixels = false;

before calling the decoder. This disables the automatic interpolation of bad pixels. You can retrieve the bad pixels through:

raw->mBadPixelPositions;  // a std::vector<uint32>

This vector contains the positions of the detected bad pixels in the image. The positions are stored as x | (y << 16), so the maximum pixel position is 65535, which also corresponds to the image size limit within RawSpeed. You can loop through all bad pixels like this:

for (vector<uint32>::iterator i = raw->mBadPixelPositions.begin(); i != raw->mBadPixelPositions.end(); ++i) {
    uint32 pos_x = (*i) & 0xffff;
    uint32 pos_y = (*i) >> 16;
    ushort16* pix = (ushort16*)raw->getDataUncropped(pos_x, pos_y);
}

This, however, may not be the most optimal format for you. You can also call raw->transferBadPixelsToMap(). This creates a bit mask with all bad pixels. Each byte corresponds to 8 pixels, with the least significant bit for the leftmost pixel. To set position x,y this operation is used:

raw->mBadPixelMap[(x >> 3) + y * raw->mBadPixelMapPitch] |= 1 << (x & 7);

This enables you to search through the array quickly. If you for instance read the array 32 bits at a time, you can check 32 pixels at a time.
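A sketch of that scanning idea (the helper is this example's own; reading through memcpy sidesteps the strict-aliasing pitfalls of a plain pointer cast):

```cpp
#include <cstdint>
#include <cstring>

// Scan a bad-pixel bitmap 32 pixels (4 bytes) at a time. Returns the
// index of the first byte containing a set bit, or -1 if the map is
// clean. For simplicity, mapBytes is assumed to be a multiple of 4.
int firstBadByte(const uint8_t* map, int mapBytes) {
  for (int i = 0; i < mapBytes; i += 4) {
    uint32_t word;
    std::memcpy(&word, map + i, 4);  // check 32 pixels at once
    if (word != 0) {
      for (int j = 0; j < 4; ++j)
        if (map[i + j] != 0)
          return i + j;
    }
  }
  return -1;
}
```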

Note that all positions are uncropped image positions. Also note that if you keep the interpolation enabled you can still retrieve the mBadPixelMap, but the mBadPixelPositions will be cleared.

## Updating Camera Support

If you implement an auto-update feature, you can simply update “cameras.xml” and delete and re-create the CameraMetaData object.

There might of course be some specific cameras that require code changes to work properly. However, there is a versioning check in place, whereby cameras requiring a specific code version to decode properly are marked as such.

That means you should be able to safely update cameras.xml to a newer version; cameras requiring a code update will then simply refuse to open.

## Format Specific Notes

### Canon sRaw/mRaw

Canon reduced-resolution Raws (mRaw/sRaw) are returned as RGB with 3 components per pixel, without white balance compensation, so color balance should match ordinary CR2 images. The subsampled color components are linearly interpolated.

This is complicated further by the fact that Canon has changed the way they store the sRaw white balance values. On newer cameras you might therefore have to specify "invert_sraw_wb" as a hint to properly decode the white balance. To see examples of this, search cameras.xml for "invert_sraw_wb".

### Sigma Foveon Support

Sigma Foveon (x3f-based) images are delivered as raw image values. dcraw offers a "cleanup" function that reduces noise in Foveon images. RawSpeed has no equivalent function, so if you want to use RawSpeed as a drop-in replacement you will either have to port dcraw's "foveon_interpolate", or implement similar noise reduction, if you want it.

### Fuji Rotated Support

By default RawSpeed delivers Fuji SuperCCD images as 45 degree rotated images.

RawSpeed does however use two camera hints to do this. The first hint is "fuji_rotate": When this is specified in cameras.xml, the images are rotated.

To check whether an image has been rotated, check raw->fujiWidth after calling decoder->decodeMetaData(...). If it is > 0, the image has been rotated, and you can use this value to calculate the un-rotated image size. See here for an example of how to rotate the image back after de-mosaic.

If you do NOT want your images to be delivered rotated, you can disable this when decoding:

decoder->fujiRotate = false;

Do however note that the CFA colors still refer to the rotated color positions.

## Other options

### RawDecoder->uncorrectedRawValues

If you enable this on the decoder before calling decodeRaw(), you will get completely unscaled values. Some cameras have a "compressed" mode, where a non-linear compression curve is applied to the image data. If you enable this parameter, the compression curve will not be applied to the image. Currently there is no way to retrieve the compression curve, so this option is only useful for diagnostics.

### RawImage.mDitherScale

This option determines whether dither is applied when values are scaled to 16 bits. Dither is applied as a random value between ±scalefactor/2. This keeps images with fewer bits per pixel from showing a strong tendency toward posterization, since values close to each other are spaced out a bit.

Another way of putting it: if your camera saves 12 bits per pixel, then when RawSpeed upscales this to 16 bits, the 4 "new" bits will be random instead of always the same value.
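As an illustration of that idea (this is not RawSpeed's actual scaling code), widening a 12-bit sample to 16 bits with random low bits looks like this:

```cpp
#include <cstdint>
#include <random>

// Upscale a 12-bit sample to 16 bits. Instead of leaving the 4 new
// low bits at zero, fill them with noise so adjacent input values
// overlap slightly in the output, reducing posterization.
uint16_t upscale12to16(uint16_t v12, std::mt19937& rng) {
  const uint16_t base = static_cast<uint16_t>(v12 << 4); // scale by 16
  return static_cast<uint16_t>(base | (rng() & 0xF));    // dithered bits
}
```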

## Memory Usage

RawSpeed will need:

  • The size of the Raw file.
  • Image width * image height * 2 bytes for ordinary Raw images with 16 bit output.
  • Image width * image height * 4 bytes for floating-point images with floating-point output.
  • Image width * image height * 6 bytes for ordinary Raw images with floating-point output.
  • Image width * image height / 8 bytes for images with bad pixels.
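As a worked example of the 16-bit and bad-pixel entries above (the 6000×4000 resolution is hypothetical):

```cpp
#include <cstdint>

// Image-buffer bytes for an ordinary Raw with 16-bit output:
// width * height * 2 (the raw file itself is needed on top of this).
uint64_t bytes16(uint64_t w, uint64_t h) { return w * h * 2; }

// Extra bytes for the bad-pixel bitmap: one bit per pixel.
uint64_t bytesBadPixelMap(uint64_t w, uint64_t h) { return w * h / 8; }
```

For a 24-Mpix image this comes to about 48 MB for the 16-bit buffer and 3 MB for the bad-pixel map.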

## Submitting Requests and Patches

Please go to the github page and submit your (pull)requests and issues there.

rawspeed's People

Contributors

abrander, aferrero2707, berb, dabbill, dtorop, hanatos, klauspost, lebedevri, mazhe, mgehre, nijel, pedrocr, pmjdebruijn, schenlap, serval2412, stloeffler, xen2


rawspeed's Issues

DNG linearization curve is applied to 'Default Crop', not to entire image.

In DngDecoder.cpp the linearization curve is applied to the image cropped to 'DefaultCrop', but not to pixels outside the cropped area.
This is incorrect: the default crop is usually within ActiveArea. The extra pixels (outside DefaultCrop, but within ActiveArea) are there for the interpolation (demosaic) step, so they should be linearized the same way as pixels within the default crop.

Also, according to DNG specs, Linearization Table should be applied to 'raw values'. All values, not only within ActiveArea. This difference may be very significant for cameras with BlackLevel calculation based on MaskedPixels.

Here is the patch (sorry for not using GitHub tools, I operate with my local repo):

https://gist.github.com/LibRaw/d61c11d9cb5aff0b78bf

Missing Decode16BitRawUnpacked() method

Hi, the following error occurs while compiling:

/home/.../RawSpeed/RafDecoder.cpp:98: error: 'Decode16BitRawUnpacked' was not declared in this scope
Decode16BitRawUnpacked(input, width*2, height);

I haven't found Decode16BitRawUnpacked() anywhere, only Decode12BitRawUnpacked(). Other methods are for big-endian only.

Add a canonical name to cameras

To make it easier to reference cameras in lists of color matrices, WB presets, etc it would be nice to have a single canonical name reported by rawspeed. Here's my suggestion on how to do it:

  • Start reporting canonical_name and canonical_model instead of just make and model. Default those two values to whatever is in the definition. For cameras without aliases the values will be the same, for cameras with aliases it will be a consistent name
  • Create a new tag so that the sometimes ugly and repetitive names can be replaced with something sensible. This avoids reporting the Nikon D800 as make "NIKON CORPORATION" and model "NIKON D800", and instead reports "Nikon" and "D800"

If this sounds ok I'll go ahead and prepare a PR.

Data dithering should be configurable

For precise raw data inspection it is vital to have data dithering configurable, because unaltered RAW data is very informative in many cases:

  • regular histogram holes indicate the camera's real 'bit count'
  • less regular holes indicate different kinds of 'data cooking' within the camera (e.g. Nikon's WB preconditioning, or several kinds of lens corrections, etc.)

White balance coefficients are not returned for Canon G1X Mark II

The white balance coefficients that RawSpeed returns via RawDecoder are nan. The canonical make and model for the Canon G1X Mark II are correct.

The offset into ColorData7 seems to be incorrect in Cr2Decoder::decodeMetaDataInternal, as cam_mul ends up as an array of zeros.

I checked raw files from two different sources and both had this problem. Here is a link to a raw that has this problem http://www.imaging-resource.com/PRODS/canon-g1x-ii/YIMG_0035.CR2.HTM

Add thumbnail and whitebalance extraction

For rawspeed to be able to completely replace libraw in apps at least two things seem to be needed:

  • Extracting white balance: should be relatively easy, it's just extra metadata that needs to be extracted in each format and added to mRaw
  • Extracting the embedded thumbnail: this should be easy as well but not very high performance as rawspeed reads in the whole file at once. The only option I see here is to mmap the file instead of actually reading it. Should actually be better in the general case if the OS is smart enough.

Comments?

Error when handling DNG files larger than 256 MB

In LJpegPlain::decodeScanLeft3Comps (and, very likely, in other DNG code paths too), the offsets table is 32-bit:
offset = new uint32[slices+1];
The low 28 bits hold the offset into the data, while the upper 4 bits are the slice number:
offset[slice] = ((t_x + offX) * mRaw->getBpp() + ((offY + t_y) * mRaw->pitch)) | (t_s << 28);

This effectively disables offsets larger than 256 MB (2^28).

Unfortunately, such offsets may exist. For example, this file:
https://www.dropbox.com/s/zteexbtcqmmvq4h/r021f07.dng?dl=0
The file was converted by Adobe DNG Converter from a Nikon Scan .NEF (a 6x6 medium-format scan by a Nikon Coolscan 9000). The image size is ~81 Mpix (8900x8900) at 6 bytes per pixel, so offsets up to ~500 MB are possible.

Even for bayer images (2 bytes per pixel), 28-bit offsets will limit resolution to 128 Mpix, while 200 Mpix images are possible today (Hasselblad 200-MS). The situation is worse for linear (demosaiced) DNGs, where the limit is about 42 Mpix, while single-shot cameras are up to 80 Mpix today.

Possible solution: change offset[] to 64 bit and use the full lower 32 bits as the offset. This limits image size to 4 GB, i.e. 2 Gpix for bayer images and ~0.7 Gpix for linear RGB images.
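A sketch of the proposed packing (the helper names are this example's own):

```cpp
#include <cstdint>

// Proposed fix: 64-bit table entries with the byte offset in the
// lower 32 bits and the slice number in the upper bits, replacing
// the current 28-bit offset / 4-bit slice split.
inline uint64_t packOffset(uint64_t byteOffset, uint64_t slice) {
  return byteOffset | (slice << 32);
}
inline uint64_t offsetOf(uint64_t packed) { return packed & 0xffffffffULL; }
inline uint64_t sliceOf(uint64_t packed)  { return packed >> 32; }
```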

Two plane DNGs seem to be broken

A user submitted some RAF converted DNGs from Fuji cameras with EXR sensors (they output two images per file to do HDR). The current RAF decoder just ignores this second image whereas the DNG decoder seems to be broken with these files (totally black output):

33ed64a

I had a look around and it seems the DngDecoder already tries to work with these so it must be a bug. Looking at it some more I wonder if we should output 2 RawImage files from these instead of just one where each of them is a normal cpp=1 CFA output so raw developers can treat these the same as any other.

gcc warning with strict-aliasing [-Wstrict-aliasing]

JFTR, it only happens in 3 places
RawSpeed/BitPumpMSB.h:51
RawSpeed/BitPumpMSB.h:86
RawSpeed/BitPumpMSB.h:124

rawspeed/RawSpeed/BitPumpMSB.h: In member function 'RawSpeed::uint32 RawSpeed::BitPumpMSB::peekBitsNoFill(RawSpeed::uint32)':
rawspeed/RawSpeed/BitPumpMSB.h:51:53: warning: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing]
uint32 ret = *(uint32*)&current_buffer[shift>>3];
^
rawspeed/RawSpeed/BitPumpMSB.h: In member function 'RawSpeed::uint32 RawSpeed::BitPumpMSB::peekByteNoFill()':
rawspeed/RawSpeed/BitPumpMSB.h:86:52: warning: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing]
uint32 ret = *(uint32*)&current_buffer[shift>>3];
^
rawspeed/RawSpeed/BitPumpMSB.h: In member function 'unsigned char RawSpeed::BitPumpMSB::getByte()':
rawspeed/RawSpeed/BitPumpMSB.h:124:54: warning: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing]
uint32 ret = *(uint32*)&current_buffer[shift>>3];

Support for Canon 5DS/5DSR files

Hi,

Canon 5DS/5DSR files are different from all previous Canon files seen: the LJPEG color (component) count (equal to 4) contributes not only to the image width (as in previous Canons); the width/height must also be multiplied by 2.

The dcraw patch is trivial (it is ugly because it is based on the camera model name, not on other metadata, but it works):

--- dcraw.c Wed Feb 25 21:18:18 2015
+++ dcraw.1.473_5DS.c   Wed Mar 25 19:05:33 2015
@@ -934,12 +939,14 @@

 void CLASS lossless_jpeg_load_raw()
 {
-  int jwide, jrow, jcol, val, jidx, i, j, row=0, col=0;
+  int jwide, jhigh, jrow, jcol, val, jidx, i, j, row=0, col=0;
   struct jhead jh;
   ushort *rp;

   if (!ljpeg_start (&jh, 0)) return;
   jwide = jh.wide * jh.clrs;
+   jhigh = jh.high;
+   if (!strncmp(model, "EOS 5DS", 7)) jhigh *= 2;

   for (jrow=0; jrow < jh.high; jrow++) {
     rp = ljpeg_row (jrow, &jh);
@@ -949,13 +956,14 @@
       val = curve[*rp++];
       if (cr2_slice[0]) {
    jidx = jrow*jwide + jcol;
-   i = jidx / (cr2_slice[1]*jh.high);
+   i = jidx / (cr2_slice[1]*jhigh);
    if ((j = i >= cr2_slice[0]))
         i  = cr2_slice[0];
-   jidx -= i * (cr2_slice[1]*jh.high);
+   jidx -= i * (cr2_slice[1]*jhigh);
    row = jidx / cr2_slice[1+j];
    col = jidx % cr2_slice[1+j] + i*cr2_slice[1];
       }
+
       if (raw_width == 3984 && (col -= 2) < 0)
    col += (row--,raw_width);
       if ((unsigned) row < raw_height) RAW(row,col) = val;
@@ -5720,8 +5728,16 @@
        tiff_ifd[ifd].height  = jh.high;
        tiff_ifd[ifd].bps     = jh.bits;
        tiff_ifd[ifd].samples = jh.clrs;
-       if (!(jh.sraw || (jh.clrs & 1)))
+       if (!(jh.sraw || (jh.clrs & 1))) {
          tiff_ifd[ifd].width *= jh.clrs;
+               
+       if (!strncmp(model, "Canon EOS 5DS", 13))
+       {
+           tiff_ifd[ifd].width  = jh.wide * 2;
+           tiff_ifd[ifd].height = jh.high * 2;
+       }
+      }
+           
        i = order;
        parse_tiff (tiff_ifd[ifd].offset + 12);
        order = i;

I'm not familiar enough with RawSpeed to implement this patch into it myself; I hope for your help.

Raw samples are available at Imaging Resource: http://www.imaging-resource.com/PRODS/canon-5ds-r/canon-5ds-rA7.HTM

ARW2 files may need extra dithering

ARW2 files use a strange encoding where groups of 16 pixels are encoded as 16 bytes: the maximum and minimum pixels are stored with 11 bits each, and the other 14 pixels as 7-bit differences from the min. When the difference between min and max is too large, those 7 bits need to be shifted left, so the encoding loses resolution in the low-order bits. When the max-min difference is very large and the image is smooth, this can lead to noticeable artifacts. Here's an article detailing this, including a sample image with the issue:

http://www.rawdigger.com/howtouse/sony-craw-arw2-posterization-detection

I was wondering if this can be improved upon by using dithering. I tried this patch with very limited success:

--- a/src/external/rawspeed/RawSpeed/ArwDecoder.cpp
+++ b/src/external/rawspeed/RawSpeed/ArwDecoder.cpp
@@ -399,6 +399,9 @@ void ArwDecoder::decodeThreaded(RawDecoderThread * t) {
     bits.setAbsoluteOffset((w*8*y) >> 3);
     uint32 random = bits.peekBits(24);

+    // Initialize random state so we always return the same data
+    uint32 randstate = bits.peekBits(32);
+
     // Process 32 pixels (16x2) per loop.
     for (uint32 x = 0; x < w - 30;) {
       bits.checkPos();
@@ -413,7 +416,8 @@ void ArwDecoder::decodeThreaded(RawDecoderThread * t) {
         if (i == _imax) p = _max;
         else if (i == _imin) p = _min;
         else {
-          p = (bits.getBits(7) << sh) + _min;
+          uint32 r = rand_r(&randstate) & ~(0xffffffff << sh);
+          p = ((bits.getBits(7) << sh) | r) + _min;
           if (p > 0x7ff)
             p = 0x7ff;
         }

This just sets the least significant bits to random values instead of 0. At least for the artifact around the star trails on the example image this looks like an improvement but only very slightly.

Any other ideas on how to improve this?

Leaf LJpeg cameras don't work with the current decoder

Just so we have an issue to track the state of the Leaf Ljpeg cameras. The current issues are the following:

  • Aptus 75: LJpegDecompressor: Quantized components not supported
  • Aptus II-7: Out of buffer read
  • Aptus II-8: Corrupt JPEG data: bad Huffman code:17
  • Aptus II-10: Out of buffer read
  • Aptus II-10R: LJpegDecompressor: Quantized components not supported
  • Aptus II-12: LJpegDecompressor: Quantized components not supported

I may be launching the ljpeg decoder improperly though.

New 5Ds R sRaw/mRaw files are broken

It seems the new 5Ds has once again tweaked the sRaw format so it doesn't work again. The code is failing with:

LJpegPlain::decodeScanLeft: Ran out of slices

I'm guessing there's something wrong with the calculation of dimensions, possibly due to the double height thing these cameras do. Could you have a look and point me in the right direction?

There's a sample file here:

http://scratch.corujas.net/_E5A0430.CR2

Incorrect output from the RawImageDataU16::scaleValues SSE code path

While tracking down a difference in output between rawspeed's scaled output and darktable's own scaling I found out RawImageDataU16::scaleValues produces apparently different output between the sse and non-sse code paths. I need to debug this more but this seems to happen even without dithering enabled so it's not just a matter of having different random values.

Nikon sNEF files are too saturated

Hi klaus, your new sNEF code looks great. Unapplying the WB is definitely the way to go. Unfortunately it seems all the colors in those files are way too saturated when compared to identically processed normal NEF files. I've tested this with both D810 and D4S files. Any idea what could be happening?

the zero_is_bad flag should perhaps be reversed

I just noticed that the Panasonic FZ1000 doesn't have zero_is_bad set so isn't properly handling broken pixels. It would probably make sense to reverse the flag to zero_is_ok so it's only set for whatever files actually need it, which in the RW2 format should be few if any of them.

C++ standard version?

Hi.

I'm aware of some leaks in rawspeed.
I'd like to fix them.

Question: currently, what is the maximum C++ standard version that can be used in rawspeed?

I'd highly prefer to use std::unique_ptr, or at least an 'official' statement that C++11 cannot be used.

FujiFilm compressed RAF support (X-Pro2, more cameras expected)

We just open-sourced the X-Pro2 compressed decoder, created/reverse-engineered by Alexey Danilchenko and contributed to LibRaw.

Here is the code:

The code is written 'as separate from LibRaw as possible' to ease adaptation in other software.
There are four points of interaction:

  • parse_xtrans_header() is called if the data size is not width*height*2 (so, not the uncompressed format); it checks fields at the start of the compressed data block and sets four variables (libraw_internal_data.unpacker_data.fuji_*) for further re-use.
  • xtrans_compressed_load_raw() is called after the raw buffer allocation and performs the decoding. It calls
    • xtrans_decode_loop(), which in turn calls
    • xtrans_decode_strip().
      xtrans_decode_loop() may be converted to parallel calls (if locking is implemented in file reading to make sure seek/read pairs work as expected)
  • File reads are performed in fuji_fill_buffer()
  • Copying the decoded data into place happens in copy_line_to_xtrans()

The code works faster than Fuji's SDK (used by Adobe, Iridient, Silkypix), even in single-thread mode.

So, feel free to include this into RawSpeed.

LJpegDecoder doesn't support Hasselblad style files

I've been trying to get the .3FR format supported by rawspeed. Here's my current effort so far:

https://github.com/pedrocr/darktable/compare/rawspeed-mamiya-support...rawspeed-hasselblad-support?expand=1

I've created a new decoder and hooked it into TiffParser. I've also fixed ByteStream::skipToMarker() to correctly identify the starting marker of the DHT section in the Hasselblad files (ignoring 0xffff markers, which apparently are FILL sections that we don't care about; the DHT section was being confused with a FILL section). I've also accepted predictor number 8, as apparently that's what is in the Hasselblad files.

So right now I just need to actually do the decoding. I guess I need to implement a decodeScanSomething() function. The relevant dcraw code is this one:

void CLASS hasselblad_load_raw()
{
  struct jhead jh;
  int row, col, pred[2], len[2], diff, c;

  if (!ljpeg_start (&jh, 0)) return;
  order = 0x4949;
  ph1_bits(-1);                              /* reset the bit pump */
  for (row=0; row < raw_height; row++) {
    pred[0] = pred[1] = 0x8000;
    for (col=0; col < raw_width; col+=2) {
      FORC(2) len[c] = ph1_huff(jh.huff[0]); /* huffman-coded bit lengths */
      FORC(2) {
        diff = ph1_bits(len[c]);
        if ((diff & (1 << (len[c]-1))) == 0) /* sign-extend the difference */
          diff -= (1 << len[c]) - 1;
        if (diff == 65535) diff = -32768;
        RAW(row,col+c) = pred[c] += diff;
      }
    }
  }
  ljpeg_end (&jh);
  maximum = 0xffff;
}

It seems pretty straightforward. But before I try to reimplement this blindly: do you know how it relates to the code already there? Is it a special case? Any tips on how to go about it?
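The only non-obvious step in the loop above is the sign extension of the Huffman-coded difference. A standalone version of just that step might look like the following (the function name extendSign is mine, not from rawspeed or dcraw):

```cpp
// Interpret the low `len` bits of `diff` as a lossless-JPEG signed
// difference: values with the top bit clear map to the negative range.
// This mirrors the `diff -= (1 << len) - 1` step in hasselblad_load_raw().
int extendSign(int diff, int len) {
  if (len == 0)
    return 0; // no bits read, difference is zero
  if ((diff & (1 << (len - 1))) == 0)
    diff -= (1 << len) - 1;
  return diff;
}
```

For example, with len = 2 the four possible bit patterns 0..3 decode to -3, -2, 2, 3; the prediction then accumulates these differences per column pair.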

Nikon, compressed raws: an issue with whitelevel

@klauspost
Hi.

I have noticed a strange thing.

Sample raw file is attached to https://redmine.darktable.org/issues/11135

When converting it to DNG, the metadata says WhiteLevel=3880.
If I load the DNG via rawspeed, the white level used is 3880.

But if I load the original raw file, rawspeed reports its white level as 4095
(despite the fact that in cameras.xml I set the white level to 3880).

And the histograms/brightness for these two versions of the same image do not match.

Offending lines:

mRaw->whitePoint = curve[_max-1];
mRaw->blackLevel = curve[0];

If I manually set the white level for that NEF to 3880, or delete those two lines, the histograms and brightness do match
(not counting slight differences in crop).

Which one is wrong: the DNG metadata, or those overrides?

Rawspeed considered as copylib

Hi, I am the darktable Fedora package co-maintainer.
Fedora packaging policies forbid shipping packages with bundled libraries, but there are a few exceptions, for example when a library is a copylib.
darktable ships RawSpeed, and I am trying to ask the Fedora Packaging Committee to consider darktable's RawSpeed a copylib (ticket URL: https://fedorahosted.org/fpc/ticket/550).
A member of the Fedora Packaging Committee asked me to answer the following two questions (taken from https://fedoraproject.org/wiki/Packaging:No_Bundled_Libraries#Copylibs):

  1. Does the upstream library make actual releases? If they do, then it is likely not a copylib.
  2. Does upstream define what they put together as a library or as reusable code snippets that are to be modified and incorporated as source in individual packages? If the latter, it's more likely that the library is a copylib under this definition.

I thought it was better to ask you directly.
Thank you for your time.

Pentax K110D support

Hi, I noticed that an older PEF decompressor was added and that the Pentax K100D is now supported by rawspeed.

The Pentax K110D is a K100D without sensor stabilization, so adding support boils down to copying and pasting the relevant sections. Would you consider committing the following patch?
(Tested with rawspeed in darktable.)

diff --git a/data/cameras.xml b/data/cameras.xml
index 1d2f04f..0c80300 100644
--- a/data/cameras.xml
+++ b/data/cameras.xml
@@ -3604,6 +3604,16 @@
                <Crop x="0" y="0" width="3040" height="2024"/>
                <Sensor black="127" white="3950"/>
        </Camera>
+       <Camera make="PENTAX Corporation" model="PENTAX K110D">
+               <CFA width="2" height="2">
+                       <Color x="0" y="0">RED</Color>
+                       <Color x="1" y="0">GREEN</Color>
+                       <Color x="0" y="1">GREEN</Color>
+                       <Color x="1" y="1">BLUE</Color>
+               </CFA>
+               <Crop x="0" y="0" width="3040" height="2024"/>
+               <Sensor black="127" white="3950"/>
+       </Camera>
        <Camera make="PENTAX Corporation" model="PENTAX K100D Super">
                <CFA width="2" height="2">
                        <Color x="0" y="0">RED</Color>
@@ -3624,6 +3634,16 @@
                <Crop x="0" y="0" width="3040" height="2024"/>
                <Sensor black="127" white="3950"/>
        </Camera>
+       <Camera make="PENTAX" model="PENTAX K110D">
+               <CFA width="2" height="2">
+                       <Color x="0" y="0">RED</Color>
+                       <Color x="1" y="0">GREEN</Color>
+                       <Color x="0" y="1">GREEN</Color>
+                       <Color x="1" y="1">BLUE</Color>
+               </CFA>
+               <Crop x="0" y="0" width="3040" height="2024"/>
+               <Sensor black="127" white="3950"/>
+       </Camera>
        <Camera make="PENTAX Corporation" model="PENTAX *ist D">
                <CFA width="2" height="2">
                        <Color x="0" y="0">RED</Color>

Thank you very much for your work!

Why is the D1X explicitly disabled?

The Nikon D1X is set as supported="no" in cameras.xml. I had a look, and it seems to work fine. The only problem appears to be that the sensor has non-square pixels, so to get a proper image it needs to be scaled 2× in the y dimension:

http://www.dpreview.com/reviews/nikond1x/

That scaling needs to happen after all the raw processing and demosaicing, so from the point of view of raw processing rawspeed is already doing the right thing. Indeed, in darktable the image already appears properly (but unscaled) with rawspeed, and as a broken purple image with libraw. Shouldn't it be the app's responsibility to scale the image after processing? I guess rawspeed could help by providing the metadata for the needed scaling, but not much more.

Implement a safer TiffEntry/CiffEntry way of accessing data

After the work in PR #141 I've come up with a sketch for a further API change to achieve two things at once:

  1. Make the TiffEntry/CiffEntry APIs safer against out of bounds access
  2. Finally remove the TiffIFD/TiffIFDBE split, as discussed in PR #27

So here's the basic sketch:

  • Remove the getIntArray()/getShortArray() functions and instead make getInt()/getShort()/getData() take an optional integer argument that is the array offset (and, in the case of getData(), a count as well)
  • Make TiffEntry/CiffEntry no longer hold a pointer to data, but instead just keep a reference to the underlying FileMap
  • Move the get4BE() and friends macros into the FileMap API
  • Have TiffEntry/CiffEntry know the host and file endianness (pushing this into FileMap is also possible, but sometimes there are BE bits inside an LE file and vice versa)
  • Now, when doing getInt()/getShort()/getData(), just call into FileMap with getNBE()/getNLE()/getData(), ensuring proper bounds checks. It is also endian-clean, so there's no need to keep the swapped versions of TiffIFD/TiffEntry around. It also avoids the memory copying in TiffEntryBE.

Since the TiffEntry API is used for metadata only the performance implications should be minimal if any. How does the plan sound?
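A minimal sketch of the bounds-checked, endian-aware accessor described above (class and method names are illustrative, not the actual rawspeed API):

```cpp
#include <cstdint>
#include <stdexcept>
#include <vector>

// Illustrative stand-in for FileMap: owns the bytes and performs the
// bounds check plus byte-order conversion in one place.
class ByteMap {
  std::vector<uint8_t> data_;

public:
  explicit ByteMap(std::vector<uint8_t> d) : data_(std::move(d)) {}

  // Read a 32-bit value at `offset`, little- or big-endian.
  uint32_t get4(size_t offset, bool bigEndian) const {
    if (offset + 4 > data_.size())
      throw std::out_of_range("read past end of file map");
    const uint8_t* p = data_.data() + offset;
    return bigEndian ? (uint32_t(p[0]) << 24) | (uint32_t(p[1]) << 16) |
                           (uint32_t(p[2]) << 8) | p[3]
                     : (uint32_t(p[3]) << 24) | (uint32_t(p[2]) << 16) |
                           (uint32_t(p[1]) << 8) | p[0];
  }
};
```

A TiffEntry::getInt(offset) would then forward to get4() with the entry's recorded endianness, so a single entry class serves both byte orders and every access is bounds-checked.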

Rawspeed build fails, missing token in OpenMP directive

imageio_rawspeed.cc:253 has the following OpenMP directive:

#ifdef _OPENMP
  #pragma omp parallel for default(none) schedule(static)
#endif   

but the loop body then uses the variables raw_height, raw_width and so on, which
results in these errors:

[ 29%] Building CXX object
src/CMakeFiles/lib_darktable.dir/common/imageio_rawspeed.cc.o
/home/cgwork/OSS/Repos/DT/darktable/src/common/imageio_rawspeed.cc: In function
'dt_imageio_retval_t dt_imageio_open_rawspeed_sraw(dt_image_t*,
RawSpeed::RawImage, dt_mipmap_cache_allocator_t)':
/home/cgwork/OSS/Repos/DT/darktable/src/common/imageio_rawspeed.cc:255:3:
error: 'raw_height' not specified in enclosing parallel
   for(size_t row = 0; row < raw_height; row++)
   ^
/home/cgwork/OSS/Repos/DT/darktable/src/common/imageio_rawspeed.cc:253:11:
error: enclosing parallel
   #pragma omp parallel for default(none) schedule(static)
           ^
/home/cgwork/OSS/Repos/DT/darktable/src/common/imageio_rawspeed.cc:257:110:
error: 'raw_img' not specified in enclosing parallel
     const uint16_t *in = ((uint16_t *)raw_img) +
(size_t)(img->cpp*(dimUncropped.x*(row+cropTL.y) + cropTL.x));

                              ^
/home/cgwork/OSS/Repos/DT/darktable/src/common/imageio_rawspeed.cc:253:11:
error: enclosing parallel
   #pragma omp parallel for default(none) schedule(static)
           ^
/home/cgwork/OSS/Repos/DT/darktable/src/common/imageio_rawspeed.cc:257:67:
error: 'img' not specified in enclosing parallel
     const uint16_t *in = ((uint16_t *)raw_img) +
(size_t)(img->cpp*(dimUncropped.x*(row+cropTL.y) + cropTL.x));
                                                                   ^
/home/cgwork/OSS/Repos/DT/darktable/src/common/imageio_rawspeed.cc:253:11:
error: enclosing parallel
   #pragma omp parallel for default(none) schedule(static)
           ^
/home/cgwork/OSS/Repos/DT/darktable/src/common/imageio_rawspeed.cc:257:83:
error: 'dimUncropped' not specified in enclosing parallel
     const uint16_t *in = ((uint16_t *)raw_img) +
(size_t)(img->cpp*(dimUncropped.x*(row+cropTL.y) + cropTL.x));

   ^
/home/cgwork/OSS/Repos/DT/darktable/src/common/imageio_rawspeed.cc:253:11:
error: enclosing parallel
   #pragma omp parallel for default(none) schedule(static)
           ^
/home/cgwork/OSS/Repos/DT/darktable/src/common/imageio_rawspeed.cc:257:88:
error: 'cropTL' not specified in enclosing parallel
     const uint16_t *in = ((uint16_t *)raw_img) +
(size_t)(img->cpp*(dimUncropped.x*(row+cropTL.y) + cropTL.x));

        ^
/home/cgwork/OSS/Repos/DT/darktable/src/common/imageio_rawspeed.cc:253:11:
error: enclosing parallel
   #pragma omp parallel for default(none) schedule(static)
           ^
/home/cgwork/OSS/Repos/DT/darktable/src/common/imageio_rawspeed.cc:258:33:
error: 'buf' not specified in enclosing parallel
     float *out = ((float *)buf) + (size_t)4*row*raw_width;
                                 ^
/home/cgwork/OSS/Repos/DT/darktable/src/common/imageio_rawspeed.cc:253:11:
error: enclosing parallel
   #pragma omp parallel for default(none) schedule(static)
           ^
/home/cgwork/OSS/Repos/DT/darktable/src/common/imageio_rawspeed.cc:258:33:
error: 'raw_width' not specified in enclosing parallel
     float *out = ((float *)buf) + (size_t)4*row*raw_width;
                                 ^
/home/cgwork/OSS/Repos/DT/darktable/src/common/imageio_rawspeed.cc:253:11:
error: enclosing parallel
   #pragma omp parallel for default(none) schedule(static)
           ^
/home/cgwork/OSS/Repos/DT/darktable/src/common/imageio_rawspeed.cc:271:36:
error: 'black' not specified in enclosing parallel
           out[k] = (((float)(*in)) - black) / scale;
                                    ^
/home/cgwork/OSS/Repos/DT/darktable/src/common/imageio_rawspeed.cc:253:11:
error: enclosing parallel
   #pragma omp parallel for default(none) schedule(static)
           ^
make[2]: *** [src/CMakeFiles/lib_darktable.dir/common/imageio_rawspeed.cc.o]
Error 1
make[1]: *** [src/CMakeFiles/lib_darktable.dir/all] Error 2
make: *** [all] Error 2

It seems that, according to the OpenMP specification, raw_height, raw_img, img, buf,
raw_width, dimUncropped, black and cropTL need to be listed in a shared()
clause of the parallel for directive:

"The default(none) clause requires that each variable that is referenced in the
construct, and that does not have a predetermined data-sharing attribute, must
have its data-sharing attribute explicitly determined by being listed in a
data-sharing attribute clause."

#ifdef _OPENMP
  #pragma omp parallel for default(none)
shared(raw_height,raw_img,img,buf,raw_width,dimUncropped,black,cropTL)
schedule(static)
#endif 

seems to address the issue, but I am no expert in OpenMP.
To see this in action, check out darktable:

git clone https://github.com/darktable-org/darktable.git

set CFLAGS/CXXFLAGS to "-fopenmp" (darktable bundles RawSpeed in
src/external/rawspeed/RawSpeed), and use CMake to make a Release build;
it will trigger the error.

Consider changing to Unix line endings uniformly

The code uses a mix of Unix and Windows line endings. This doesn't make much sense, and Unix-style line endings would generally be preferable. Running dos2unix on all the files should be enough to do the conversion in one go.
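A self-contained sketch of the one-shot conversion, using sed rather than dos2unix so it works even where dos2unix isn't installed (assumes GNU sed; the temporary directory and demo file are just for illustration):

```shell
# Demonstrate CRLF -> LF conversion over a source tree.
tmpdir=$(mktemp -d)
printf 'int main() {\r\n  return 0;\r\n}\r\n' > "$tmpdir/demo.cpp"

# The actual conversion: strip the trailing CR from every line.
find "$tmpdir" \( -name '*.cpp' -o -name '*.h' -o -name '*.xml' \) -print0 |
  xargs -0 sed -i 's/\r$//'

# Verify: no carriage returns remain.
if grep -q "$(printf '\r')" "$tmpdir/demo.cpp"; then
  echo "conversion failed"
else
  echo "all LF"
fi
rm -rf "$tmpdir"
```

Run against the real tree, the find pattern would cover whatever extensions the repository actually contains.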

Incorrect decoding of some DNG files

RawSpeed (master branch; I have not tested the develop branch) decodes some DNG files incorrectly.
I was unable to quickly find the problem, hence this bug report.

Sample file: https://www.dropbox.com/s/t08h2djogfcti06/DR100.dng
It decodes this way: https://www.dropbox.com/s/cjy1987itfeq2ga/Screenshot%202014-08-20%2014.25.51.png
While LibRaw/dcraw do it right: https://www.dropbox.com/s/cie45bwudlr5ksx/Screenshot%202014-08-20%2014.25.26.png

All the files I have are from Fuji cameras, converted from RAF to DNG by Adobe DNG Converter 8.x.
These files are 'normal Bayer', not X-Trans or 'rotated' SuperCCD.

getCamera() with prefix-search issue

I added that API in a45724e and e1b5cca.
And now I have just discovered how it backfires.
With the raw file from https://redmine.darktable.org/issues/11354, from a Canon EOS Rebel T6 (which currently has a broken alias), it matches the Canon EOS Rebel T6i.
That happens because internally "make model mode" is a single string:

map<string,Camera*> cameras;

@klauspost, would you be OK with changing this to something that stores all three elements separately, so this faulty match cannot happen?

Canon 80D Support

Here are some sample images from the Canon 80D: two RAW files straight from the camera and two DNG files made with Adobe DNG Converter 9.5.1. I tried to upload more photos but kept getting an error; below is a link to a public Dropbox folder with more samples. I am running darktable-git release.2.1.0.r1031.gdf4eb00-1 from the Arch AUR.

ISO range:
100,125,160,200,250,320,400,500,640,800,1000,1250,1600,2000,2500,3200,4000,5000,6400,8000,10000,12800,16000
"H" (expanded equivalent) 25600

https://www.dropbox.com/sh/17yxb36piaok77c/AADicWPIyZbSjIMT105mYfHma?dl=0

RawSpeed:Unable to find camera in database: Canon Canon EOS 80D
[rawspeed] Camera 'Canon' 'Canon EOS 80D', mode '' not supported, and not allowed to guess. Sorry.
[temperature] failed to read camera white balance information from `IMG_0003.CR2'!
[colorin] `Canon Canon EOS 80D' color matrix not found!

libgphoto2 has been updated on git to have basic support for the 80D

Add support for Nikon Coolpix P340

With darktable 1.6.6 I get the following output:

$ darktable
RawSpeed:Unable to find camera in database: NIKON COOLPIX P340 
[rawspeed] Camera 'NIKON' 'COOLPIX P340', mode '' not supported, and not allowed to guess. Sorry.

Can I provide more info to help solve the problem?

White balance coefficients are not returned for Canon EOS M10

The white balance coefficients that RawSpeed returns via RawDecoder are NaN. The canonical make and model for this camera are correct.

The offset into ColorData5 seems to be incorrect in Cr2Decoder::decodeMetaDataInternal, as cam_mul ends up an array of zeros.

Here is a link to a raw file that has this problem: http://www.imaging-resource.com/PRODS/canon-eos-m10/YIMG_1794.CR2.HTM

This is probably the same issue as #159, but I thought I would file it separately for the sake of thoroughness.

clang 3.8 throws up some warnings in the ljpeg big table code

@LebedevRI found that when compiling rawspeed with clang 3.8 we get these warnings:

rawspeed/RawSpeed/LJpegDecompressor.cpp:507:37: warning: shifting a negative signed value is undefined [-Wshift-negative-value]
        htbl->bigTable[i] = (-32768 << 8) | (16 + l);
                             ~~~~~~ ^
rawspeed/RawSpeed/LJpegDecompressor.cpp:509:37: warning: shifting a negative signed value is undefined [-Wshift-negative-value]
        htbl->bigTable[i] = (-32768 << 8) | l;
                             ~~~~~~ ^

We think we can just change (-32768 << 8) to -(32768 << 8) and get the same effect. Do you see any possible way this could be wrong?
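A quick sanity check of the proposed change. -32768 << 8 is undefined by the letter of the standard, but on the two's-complement targets rawspeed runs on it evaluates to the same bit pattern as the well-defined negation, namely -8388608 (the names kSentinel and packEntry below are mine, for illustration):

```cpp
#include <cstdint>

// The table entry packs the 16-bit sentinel value (-32768) into the high
// bits. Computed with well-defined arithmetic, that is simply:
constexpr int32_t kSentinel = -(32768 << 8); // == -32768 * 256 == -8388608

// ORing in the low bits behaves identically for both spellings, because
// the low 8 bits of the shifted value are zero either way.
constexpr int32_t packEntry(int32_t low) { return kSentinel | low; }
```

So the rewrite keeps both the packed high half and the ORed-in low byte bit-for-bit identical, while removing the -Wshift-negative-value warning.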

develop branch compiler warning signed/unsigned comparison

GCC 4.8 seems to be pedantic about signed/unsigned comparisons by default:

RawSpeed/Camera.cpp:186:33: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
  if (strlen(key) != cfa.size.x) {

cfa.size.x/y are defined as int, while strlen() returns a size_t.
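A minimal fix, assuming the dimension is known to be non-negative at that point, is an explicit cast (the helper name lengthMatches is mine, for illustration):

```cpp
#include <cstring>

// Compare a size_t length against an int dimension without triggering
// -Wsign-compare; assumes the dimension is non-negative when valid.
bool lengthMatches(const char* key, int dimX) {
  return dimX >= 0 && std::strlen(key) == static_cast<size_t>(dimX);
}
```

In Camera.cpp the equivalent one-line change would be casting cfa.size.x to size_t in the comparison.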

Strange if statement in RawDecoder.cpp

Line 541 of RawDecoder.cpp contains this if statement:

    if (me->taskNo >= 0)
      me->parent->decodeThreaded(me);
    else
      me->parent->decodeThreaded(me);

This looks obviously wrong, as both branches do the same thing. Is the if statement useless, or is one of the branches wrong?
