kyamagu / js-segment-annotator

JavaScript image annotation tool based on image segmentation.

License: BSD 3-Clause "New" or "Revised" License


js-segment-annotator's Introduction

JS Segment Annotator

JavaScript image annotation tool based on image segmentation.

  • Label image regions with the mouse.
  • Written in vanilla JavaScript, with a require.js dependency (packaged).
  • Pure client-side implementation of image segmentation.

Your browser must support the HTML canvas element to use this tool.

There is an online demo.

Importing data

Prepare a JSON file that looks like the following. The required fields are labels and imageURLs. The annotationURLs field points to existing annotation data and can be omitted. Place the JSON file inside the data/ directory.

{
  "labels": [
    "background",
    "skin",
    "hair",
    "dress",
    "glasses",
    "jacket",
    "skirt"
  ],
  "imageURLs": [
    "data/images/1.jpg",
    "data/images/2.jpg"
  ],
  "annotationURLs": [
    "data/annotations/1.png",
    "data/annotations/2.png"
  ]
}

Then edit main.js to point to this JSON file. Open index.html in a Web browser.
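
If you have many images, the JSON file can also be generated with a short script. Below is a minimal Python sketch, assuming images live under data/images/ and existing annotations (if any) under data/annotations/; the directory layout, label list, and output name data/example.json are illustrative, not required by the tool.

import glob
import json
import os

# Hypothetical layout: adjust the paths and the label list to your own dataset.
image_paths = sorted(glob.glob('data/images/*.jpg'))
annotation_paths = sorted(glob.glob('data/annotations/*.png'))

dataset = {
    'labels': ['background', 'skin', 'hair', 'dress', 'glasses', 'jacket', 'skirt'],
    'imageURLs': [path.replace(os.sep, '/') for path in image_paths],
}
# annotationURLs is optional; include it only when pre-existing data is available.
if annotation_paths:
    dataset['annotationURLs'] = [path.replace(os.sep, '/') for path in annotation_paths]

with open('data/example.json', 'w') as f:
    json.dump(dataset, f, indent=2)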

Known issues

Browser incompatibility

Segmentation results can differ greatly across Web browsers due to differences in their JavaScript implementations. The difference stems from the numerical precision of floating-point operations, and there is no easy way to produce exactly the same result in every browser.

Python tips

Annotation PNG

The annotation PNG file contains the label map encoded in its RGB values. Do the following to decode and encode an index map.

import numpy as np
from PIL import Image

# Decode: label = R | (G << 8) | (B << 16)
encoded = np.array(Image.open('data/annotations/1.png'))
annotation = np.bitwise_or(np.bitwise_or(
    encoded[:, :, 0].astype(np.uint32),
    encoded[:, :, 1].astype(np.uint32) << 8),
    encoded[:, :, 2].astype(np.uint32) << 16)

print(np.unique(annotation))

# Encode
Image.fromarray(np.stack([
    np.bitwise_and(annotation, 255),
    np.bitwise_and(annotation >> 8, 255),
    np.bitwise_and(annotation >> 16, 255),
    ], axis=2).astype(np.uint8)).save('encoded.png')
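
Because label indices are small integers, the raw annotation PNG usually looks almost black in an image viewer even when it is correctly encoded. The following is a minimal sketch, purely for visual inspection, that colors the decoded index map with an arbitrary color per label; the random color table is made up for illustration and is not part of the tool.

import numpy as np
from PIL import Image

# Decode the index map as shown above.
encoded = np.array(Image.open('data/annotations/1.png')).astype(np.uint32)
annotation = encoded[:, :, 0] | (encoded[:, :, 1] << 8) | (encoded[:, :, 2] << 16)

# Assign an arbitrary color to each label, just to make the labels visible.
rng = np.random.RandomState(0)
colors = rng.randint(0, 256, size=(int(annotation.max()) + 1, 3), dtype=np.uint8)
colors[0] = 0  # keep the background black
Image.fromarray(colors[annotation]).save('preview.png')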

JSON

Use JSON module.

import json

with open('data/example.json', 'r') as f:
    dataset = json.load(f)

Using dataURL

Do the following to convert between a dataURL and PIL image data. A sketch that goes all the way to and from a NumPy index map follows the snippet.

from PIL import Image
import base64
import io

# Encode (`encoded` is a PIL image of the RGB-packed annotation, e.g. Image.open('encoded.png'))
with io.BytesIO() as buffer:
    encoded.save(buffer, format='png')
    data_url = b'data:image/png;base64,' + base64.b64encode(buffer.getvalue())

# Decode
binary = base64.b64decode(data_url.replace(b'data:image/png;base64,', b''))
encoded = Image.open(io.BytesIO(binary))
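
Putting the pieces together, the following sketch (under the same assumptions as above) converts a NumPy index map all the way to a dataURL and back; the function names are just illustrative. For example, data_url_to_annotation(annotation_to_data_url(annotation)) should reproduce the original map.

import base64
import io

import numpy as np
from PIL import Image

def annotation_to_data_url(annotation):
    # Pack the index map into the RGB-encoded PNG and wrap it in a dataURL.
    rgb = np.stack([
        np.bitwise_and(annotation, 255),
        np.bitwise_and(annotation >> 8, 255),
        np.bitwise_and(annotation >> 16, 255),
    ], axis=2).astype(np.uint8)
    with io.BytesIO() as buffer:
        Image.fromarray(rgb).save(buffer, format='png')
        return b'data:image/png;base64,' + base64.b64encode(buffer.getvalue())

def data_url_to_annotation(data_url):
    # Decode the dataURL back into a NumPy index map.
    binary = base64.b64decode(data_url.replace(b'data:image/png;base64,', b''))
    encoded = np.array(Image.open(io.BytesIO(binary))).astype(np.uint32)
    return encoded[:, :, 0] | (encoded[:, :, 1] << 8) | (encoded[:, :, 2] << 16)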

Matlab tips

Annotation PNG

The annotation PNG file contains the label map encoded in its RGB values. Do the following to decode and encode an index map.

% Decode

X = imread('data/annotations/0.png');
annotation = uint32(X(:, :, 1));
annotation = bitor(annotation, bitshift(uint32(X(:, :, 2)), 8));
annotation = bitor(annotation, bitshift(uint32(X(:, :, 3)), 16));

% Encode

X = cat(3, bitand(annotation, 255), ...
           bitand(bitshift(annotation, -8), 255), ...
           bitand(bitshift(annotation, -16), 255));
imwrite(uint8(X), 'data/annotations/0.png');

JSON

Use the matlab-json package.

Using dataURL

Get the byte encoding tools.

Do the following to convert between dataURL and Matlab format.

% Decode

dataURL = 'data:image/png;base64,...';
png_data = base64decode(strrep(dataURL, 'data:image/png;base64,', ''));
annotation = imdecode(png_data, 'png');

% Encode

png_data = imencode(annotation, 'png');
dataURL = ['data:image/png;base64,', base64encode(png_data)];

Citation

We would appreciate it if you cite the following article when using this tool in an academic paper. The tool was originally developed for this work.

@article{tangseng2017looking,
Author        = {Pongsate Tangseng and Zhipeng Wu and Kota Yamaguchi},
Title         = {Looking at Outfit to Parse Clothing},
Eprint        = {1703.01386v1},
ArchivePrefix = {arXiv},
PrimaryClass  = {cs.CV},
Year          = {2017},
Month         = {Mar},
Url           = {http://arxiv.org/abs/1703.01386v1}
}

js-segment-annotator's People

Contributors

jopasserat, kyamagu, matt3o, nicolaiharich, ofk


js-segment-annotator's Issues

The getAnnotation() method returns an entirely black png file

I cloned the repo and ran the example in Firefox, but when exporting the base64 annotation, I can only see a black image. This also happens in the online demo.

Here's an example of the exported data:
data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAcIAAAJYCAYAAADxDeb9... (long base64 payload truncated)

Tool crashes on loading Pre-Annotated Images

Hi @kyamagu ,

Thank you for sharing this project.

I'm trying to use custom annotation images and I've created the JSON file accordingly. The tool loads up with the relevant classes color-coded over the image. But when I try to correct some segmentation, the tool stops working and the following error is printed in the console.

A browser console dump (localhost-1505715475195.txt), a sample JSON, and a browser screenshot were attached.

Allow contiguous pixels only in superpixel tool

Is there a way to prevent the superpixel tool from associating non-contiguous pixels (aside from reducing the size)? For example, the red shapes below should be kept separate instead of associated as they currently are.

(screenshot attached)

Any way to debug when finding boundaries goes wrong?

I love your work here and greatly appreciate you sharing it. I'm not at all overly familiar with some of the concepts but wanted to use this in a proof of concept.

I was using most of the default values and simply porting the code over to not be AMD modules. I have everything working except for the most important part - I've introduced a bug in the segmentation logic. What I end up with is almost perfect squares -

(screenshot attached)

I've been combing over settings and options and trying to see what is different, but nothing stands out. I know this isn't specific to your work on this library, but I wanted to see if you had any tips for debugging this part of the code or any ideas about what could be going wrong.

can not load cross-origin data with canvas.getImageData()

When I load an image from another site (I have to do that), the canvas throws an error like "Uncaught SecurityError: Failed to execute 'getImageData' on 'CanvasRenderingContext2D': the canvas has been tainted by cross-origin data."
After doing some research, I understand that I should not set image.src = URL before canvas.drawImage, but computeSegmentation needs the real imageData.

What can I do about this issue?

P.S. I must use the images from this site and cannot save them locally.

Denoising button / Change color

Hi kyamagu

Thank you for your cool label tool! I did not understand the function of the denoising button. What does it do?

Can you please tell me in which file I can change the colors for the labels?
Thank you in advance!

Best
Timo

cant compile paperdoll

Hi,
sorry to write it here; I was not able to find a repo for paperdoll. I cannot compile paperdoll. It seems I have some problem with gco; if I comment it out, everything else works. Then when I type in

result = feature_calculator.apply(config, input_sample)
I will get
Invalid MEX-file '/BS/wild-search-gaze/work/paperdoll-v1.0/paperdoll-v1.0/lib/pose/+pose/private/shiftdt.mexa64': /usr/lib/x86_64-linux-gnu/libstdc++.so.6: version `CXXABI_1.3.8' not found (required by /BS/wild-search-gaze/work/paperdoll-v1.0/paperdoll-v1.0/lib/pose/+pose/private/shiftdt.mexa64)

Error in detect_fast>passmsg (line 128)
[score0(:,:,k),Ix0(:,:,k),Iy0(:,:,k)] = shiftdt(child.score(:,:,k), child.w(1,k), child.w(2,k), child.w(3,k), child.w(4,k),child.startx(k),child.starty(k),Nx,Ny,child.step);

Error in detect_fast (line 43)
[msg,parts(k).Ix,parts(k).Iy,parts(k).Ik] = passmsg(parts(k),parts(par));

Error in pose.estimate>process (line 38)
box = detect_fast(im, model, detector_threshold);

Error in pose.estimate (line 21)
boxes{i} = process(model, samples{i}, scale, nms_threshold);

Error in pose_calculator.apply (line 24)
samples(i).(config.output) = pose.estimate(config.model, ...

Error in feature_calculator.apply (line 25)
sample = calculators{j}(config{j}, sample, varargin{:}, 'Encode', false);

result =

 []

Can you help me with it, please?

Erase functionality

I want to add an Erase button which erases any selection previously made.
I used a button which, when clicked, calls a function very similar to the function responsible for the Brush, which is:

for (var y = -1 * brushSize; y <= brushSize; y++) {
    for (var x = -1 * brushSize; x <= brushSize; x++) {
      if (x * x + y * y > brushLimit) continue;
      var offset =
        4 *
        ((pos[1] + y) * this.layers.visualization.canvas.width + (pos[0] + x));
      offsets.push(offset);
      labels.push(label);
    }
  }
  annotator._updateAnnotation(offsets, labels);

with one difference:

 labels.push(label);

For the last line inside the for loop, I've hardcoded 0 being pushed into labels:

 labels.push(0);

0 represents the background (white), so after clicking the Erase button and selecting any area within a selection, it looks like stuff is being erased, since the color of the label gets replaced by the background.
But in truth this isn't actually erasing anything; it just keeps adding the label with the white background. So when I export it, the erased area expectedly still shows up.

Any pointers on how to delete previously selected area?

Load a labelled image again

I labelled some images, exported them, and then realized some mistakes. So, is there an option to load a labelled image again and correct some labels instead of labelling it completely anew? Thank you in advance!

Image export

I cannot export the labeled image. I guess it is a browser incompatibility. Could you please suggest the right browser to use? Thanks in advance.

SL

Polygon is difficult to close

I find it a bit difficult to close and complete a polygon. I checked the code but I don't understand where the relevant part is. How is the polygon closure checked?

the dimension of label picture?

@kyamagu Hello!
I have used this tool to make my label picture successfully. But when I use this picture to train my model in Caffe, I get an error: 'Check failed: outer_num_ * inner_num_ == bottom[1]->count() (250000 vs. 1000000) Number of labels must match number of predictions; e.g., if softmax axis == 1 and prediction shape is (N, C, H, W), label count (number of labels) must be NHW, with integer values in {0, 1, ..., C-1}.'
Then I looked at my training details and found there are some problems with my label picture.
I0303 09:36:03.119379 4399 layer_factory.hpp:77] Creating layer data
I0303 09:36:03.119549 4399 net.cpp:86] Creating Layer data
I0303 09:36:03.119555 4399 net.cpp:382] data -> data
I0303 09:36:03.119561 4399 net.cpp:382] data -> label
I0303 09:36:03.131530 4399 net.cpp:124] Setting up data
I0303 09:36:03.131564 4399 net.cpp:131] Top shape: 1 3 500 500 (750000)
I0303 09:36:03.131567 4399 net.cpp:131] Top shape: 1 1 500 500 4 (1000000)
I0303 09:36:03.131570 4399 net.cpp:139] Memory required for data: 7000000
I0303 09:36:03.131573 4399 layer_factory.hpp:77] Creating layer data_data_0_split
Why is my label picture "1 1 500 500 4"? What is the meaning of the '4'?
Looking forward to your reply. Thank you!

can't export annotation image with more than 2 images

When I put more than 2 images in the imageURLs field in the JSON file, I can only view the first two images when I visit index.html. I can view all the other images by clicking "Next" though.

The problem is: I can annotate the first two images and export the annotation pictures. But as for the other images, I am able to annotate the image, but not able to export it: There is no response when I click the "export" button.

How can I solve this issue?

P.S. The browser I use is Firefox (v42.0, Mac). index.html appeared to be blank with Chrome (v47.0, Mac) and Safari (v9.0).

Thanks,
john

The annotation masks disappear when clicked on Next button

I am able to export the annotated masks, but as soon as I click on "Next" so that I can annotate the other images, the previously made masks are gone. We have to annotate the images again.

It seems as if as soon as we hit "next", the app is removing the overlaid mask that has just been made and I am left with nothing. I have just cloned the library and started working.

The Upload image size is limited

Hi, I just hit a problem when trying to upload images larger than 50 KB: in Firefox these images all show up grey with nothing to see. Maybe image compression is OK; besides that, any solutions? Thank you in advance.

Annotated image are mostly zero pixel value

@kyamagu I used this tool for annotating my images for caffe-segnet. I first saw that all images are darker (you can hardly notice the edges of annotated objects), but I thought it might be different at the pixel level. After checking in Matlab, however, the whole image is almost all zero pixels. I would be happy if you could point out what I am doing wrong. Thanks.

How to code RGB and index information (8b)

Hi @kyamagu

thank you for such a nice tool to annotate pictures. I'd like to ask if it is possible to do the following:

I already customized your tool and added 3 classes (Dress, Trousers, Jacket), as can be seen in the first attached screenshot.
The next attached picture is an annotated image from Pascal VOC 2012 for segmentation, containing the RGB information plus an index for each object, and the same for the background.

However, the output from the JS segmenter looks like the third attached screenshot.

If it is possible, could you please show on this example how to convert the image from 32-bit to 8-bit and how to encode the demanded information into the image, as in the attached annotated Pascal VOC image above?
E.g.
for background --> index 0 and RGB{0 0 0}
for jacket --> index 3 and RGB{192 128 0}.

Thank you in advance, much appreciated!
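
Not an official answer, but one way to obtain a Pascal VOC-style 8-bit paletted PNG is to decode the tool's RGB-packed annotation into an index map (as in the Python tips above) and save it back as a paletted image with PIL. A minimal sketch, assuming the label-to-color table (e.g. index 3 → RGB {192, 128, 0}) is something you define yourself and that there are at most 256 labels:

import numpy as np
from PIL import Image

# Decode the RGB-packed annotation into an index map (see the Python tips above).
encoded = np.array(Image.open('data/annotations/1.png')).astype(np.uint32)
annotation = encoded[:, :, 0] | (encoded[:, :, 1] << 8) | (encoded[:, :, 2] << 16)

# Hypothetical color table: index 0 = background, index 3 = jacket, and so on.
colors = {0: (0, 0, 0), 1: (128, 0, 0), 2: (0, 128, 0), 3: (192, 128, 0)}

# Build an 8-bit paletted (mode 'P') image in the Pascal VOC style.
indexed = Image.fromarray(annotation.astype(np.uint8), mode='P')
palette = [0] * 768  # 256 flattened RGB triplets
for index, (r, g, b) in colors.items():
    palette[3 * index:3 * index + 3] = [r, g, b]
indexed.putpalette(palette)
indexed.save('voc_style.png')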

Export function

When I export the results of segmentation via the "Export" button, it seems that there are no labels on the mask image (the whole image is black), even when using the provided demo.

I'm using Firefox if it matters.

Document minor error

"X = cat(3, bitand(annotation, 255), ...
bitand(bitshift(annotation, -8), 255), ...
bitand(bitshift(annotation, -16)), 255));"

The last line should be
bitand(bitshift(annotation, -16), 255));

Better annotation display

Hi,
Thanks for this awesome tool. May I propose an improvement in order to speed up the annotation process? I saw this in other tools and it was really efficient: the possibility to display the raw image only in already annotated zones. I attached two examples of vegetation/soil annotation, one showing only vegetation and one showing only soil.

I checked your code, but I'm not so familiar with image overlays in JavaScript and I failed to display annotations with alpha = 1. Could you give me some advice?

Amazon MTurk

Thanks for this great implementation!
Is it possible to use it in Amazon MTurk?
