sharib-vision / ead2019

GitHub Pages

Home Page: https://sharibox.github.io/EAD2019/

License: MIT License

Languages: Python 85.79%, Shell 12.57%, Dockerfile 1.64%
Topics: machine-learning, deep-learning, classification-detection, segmentation, video-endocopy, aretefacts, viaannotation, all2vocfileconverter

ead2019's Introduction

EAD2019 Challenge


About:

Endoscopic Artefact Detection (EAD) is a core challenge in facilitating diagnosis and treatment of diseases in hollow organs. Precise detection of specific artefacts like pixel saturations, motion blur, specular reflections, bubbles and debris is essential for high-quality frame restoration and is crucial for realising reliable computer-assisted tools for improved patient care. The challenge is sub-divided into three tasks:

  • Multi-class artefact detection: Localization of bounding boxes and class labels for 7 artefact classes for given frames.
  • Region segmentation: Precise boundary delineation of detected artefacts.
  • Detection generalization: Detection performance independent of specific data type and source.

Check out the updates of this challenge here: EAD2019-Challenge

What will you find here?

Artefact detection (updated!)

  • 7 classes for artefact detection; the distribution of samples in each artefact class is shown in the figure below.
  • Total labels: 17818
  • Labels per class: [5835, 1122, 5096, 675, 1558, 3079, 453]

[Figure: distribution of bounding-box labels across the 7 artefact classes]
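For convenience, a minimal sketch pairing these counts with class names. The class order here is an assumption inferred from the per-class lists discussed in the issues below (specularity, saturation, artifact, blur, contrast, bubbles, instrument); the repository's annotation files define the canonical order.

    # Hypothetical pairing of the seven artefact classes with the label
    # counts reported above; the class order is assumed, not confirmed.
    CLASS_NAMES = ["specularity", "saturation", "artifact", "blur",
                   "contrast", "bubbles", "instrument"]
    LABELS_PER_CLASS = [5835, 1122, 5096, 675, 1558, 3079, 453]

    counts = dict(zip(CLASS_NAMES, LABELS_PER_CLASS))
    assert sum(LABELS_PER_CLASS) == 17818  # matches the reported total

    for name, n in counts.items():
        print(f"{name:12s} {n:5d}")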

Semantic segmentation (New)

  • For semantic segmentation we use only 5 classes: {'Instrument', 'Specularity', 'Artefact', 'Bubbles', 'Saturation'}

    [Figure: example segmentation mask file]

ead2019's People

Contributors

fyz11, sharib-vision


ead2019's Issues

How to compute mAP for segmentation

The script 'compute_mAP_IoU.py' is used to compute mAP for detection. According to the method, a 'confidence' value is needed to compute mAP. How should it be obtained for segmentation?
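One common workaround, sketched below as an assumption rather than the organisers' prescribed method, is to derive a detection-style confidence for each predicted region from the model's per-pixel probabilities, e.g. their mean inside the region:

    import numpy as np

    # Heuristic sketch: score a predicted region by the mean per-pixel
    # foreground probability inside it. This is one plausible way to get
    # a 'confidence' for segmentation outputs, not the official metric.
    def region_confidence(prob_map, region_mask):
        # prob_map:    HxW float array of per-pixel class probabilities
        # region_mask: HxW bool array marking one predicted region
        if not region_mask.any():
            return 0.0
        return float(prob_map[region_mask].mean())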

Bounding box

Thank you for arranging a great challenge and a valuable repo. I am just wondering if you can provide an example of the bounding box in (xmin, ymin, xmax, ymax) format.

I am confused about whether it is really given as <x, y, width, height>, as the documentation says. When I convert the normalized version to the absolute (xmin, ymin, xmax, ymax) version, the ROIs seem to point to different things.
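For reference, a minimal conversion sketch, assuming the annotations follow the YOLO convention in which <x, y, width, height> are normalised and x, y denote the box centre (if x, y were the top-left corner instead, the half-width/half-height offsets below would be dropped):

    def yolo_to_corners(x_c, y_c, w, h, img_w, img_h):
        # Convert a normalised YOLO box (centre x/y, width, height) to
        # absolute corner coordinates (xmin, ymin, xmax, ymax).
        xmin = (x_c - w / 2) * img_w
        ymin = (y_c - h / 2) * img_h
        xmax = (x_c + w / 2) * img_w
        ymax = (y_c + h / 2) * img_h
        return xmin, ymin, xmax, ymax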

BTW, I am a participant of the challenge. Thank you in advance.

Cheers,

  • Azam.

Total number of bbox per class

Dear organizer,
As per the information on GitHub, for task 1 the number of labels per class is as follows: 5511 (specularity), 1099 (saturation), 5015 (artifact), 658 (blur), 1532 (contrast), 2904 (bubbles), 437 (instrument). However, I found different numbers: for phase I, specularity (4074), saturation (511), artifact (1609), blur (327), contrast (686), bubbles (1738), instrument (407); if we consider both phase 1 and phase 2, the figures are: specularity (7596), saturation (1733), artifact (8583), blur (1023), contrast (2430), bubbles (4420), instrument (499). I am just wondering if you can check and update the statistics. Thank you in advance.

Best wishes,

  • Azam.

EAD Challenge

Hello, I'd like to ask: are you also taking part in the EAD challenge? Could we connect on QQ to discuss and learn from you? Thanks. My QQ is 2932221844.

Is the second task an instance segmentation task or a semantic segmentation one?

'Instance segmentation' is mentioned in some places, and some settings suggest it is instance segmentation; for instance, mAP is used in the second task, yet mAP is never used to evaluate semantic segmentation.
According to the ground truth of the data, however, it is a semantic segmentation task.
So, should we treat it as an instance segmentation task?

Bbox annotation mistake in image '00047'

The last line in 00047.txt shows:

5 0.7379540400296516 0.010194624652455977 0.06597479614529281 0.0

It might be a mistake. It could be solved by simply removing that line.
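A minimal sketch of the suggested fix, filtering degenerate (zero-width or zero-height) boxes out of a YOLO-format annotation file:

    def drop_degenerate_boxes(path):
        # Keep only annotation lines whose width and height are positive.
        with open(path) as f:
            lines = [l for l in f.read().splitlines() if l.strip()]
        kept = [l for l in lines
                if float(l.split()[3]) > 0 and float(l.split()[4]) > 0]
        with open(path, "w") as f:
            f.write("\n".join(kept) + "\n")

    drop_degenerate_boxes("00047.txt")  # the file named in this issue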

Submission Format for the challenge

A couple of questions and answers of interest to participants.

1-) Is the results submission system online? If not, do you plan to make it online after the release of the test data?
--> You will need to submit online and all evaluations will be done online. We will set up a leaderboard by 16th February, which is also the test data release date.

2-) In which format should the test results be submitted (e.g. csv files containing bounding box locations and class probability scores)?

--> For detection and generalisation, you will need to submit the results as .txt files in YOLO format (please see our GitHub examples). You will find some test samples there, along with the evaluation code that we will be using. This might help you. If you have any problem exploring them, just let us know.

3-) Will there be some test data completely from a different centre (looking very different from the training set)?
--> Yes, it will be from a different data centre, and it might also look somewhat different.

4-) For the segmentation challenge, should we consider the initial bounding boxes provided in the training data phase I or generate bounding boxes using the segmentation masks?
--> You can use the previous bounding boxes if you want to do segmentation independent of classification.

5-) Is it possible to use two different algorithms for segmentation and classification?
--> Yes, absolutely. But you will need to provide us with bounding boxes for classification as well for those images, as '.txt' files (see our answer to 2).

6-) About the .txt file we should submit: I checked the GitHub examples but I do not see any confidence scores there. I guess the submission should include confidence?
--> You have to provide class_name, confidence, x1, y1, x2, y2, as in the "prediction" folder of our test detection samples. Of course, ground truths are considered to have confidence 1 for all the labels provided. Which confidence scores you provide is up to you, e.g. thresholded at 0.25.
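A hedged sketch of writing one such detection file; the space-separated layout and the file name are assumptions to be checked against the "prediction" folder of the test detection samples:

    # Each line: class_name confidence x1 y1 x2 y2 (values illustrative).
    detections = [
        ("specularity", 0.91, 102, 54, 180, 131),
        ("bubbles", 0.42, 10, 220, 96, 300),
    ]
    with open("00001.txt", "w") as f:  # hypothetical frame name
        for cls, conf, x1, y1, x2, y2 in detections:
            f.write(f"{cls} {conf:.4f} {x1} {y1} {x2} {y2}\n")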

7-) What is the submission format for masks? Should we provide a .tif file similar to the ground truth?
--> Yes, the submission format for masks will be a 5 channel ".tif" file.
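A minimal sketch of writing such a mask with the tifffile package; the channel order and the binary 0/255 encoding are assumptions based on the five classes listed earlier on this page:

    import numpy as np
    import tifffile  # pip install tifffile

    # One binary channel per class, in an assumed order.
    CLASSES = ["Instrument", "Specularity", "Artefact", "Bubbles", "Saturation"]
    h, w = 512, 512
    mask = np.zeros((len(CLASSES), h, w), dtype=np.uint8)  # fill with 0/255
    tifffile.imwrite("00001_mask.tif", mask)  # hypothetical file name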

Issue in downloading test data

I am not able to download the test data. After running the Python code to download the test data, I got this error: BadZipFile: File is not a zip file.
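When this happens, the downloaded file is often an HTML error page rather than the archive itself; a quick check (file name illustrative):

    import zipfile

    path = "ead2019_test.zip"  # whatever the download script saved
    if not zipfile.is_zipfile(path):
        with open(path, "rb") as f:
            print(f.read(200))  # often reveals an HTML/permission error page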

Can't download test data

Hi, I have registered for the challenge. I want to test my segmentation model but I can't download the dataset by running 'python downloads/test_data_download.py'.

Question of F2-score

The new metric, F2-score, has been released; however, I am confused by it.
According to the code, we need the 'confusion_matrix' to get fp, tp and fn. If the input 'prediction and ground_truth' has 5 channels, how is the label of each channel defined? If the labels of the 5 classes are set to 255 while the background is 0, the 'confusion_matrix' is not right because only the bins for 0 and 1 have values.
As a result, I guess the confusion_matrix should be computed on a single channel and 'nr_labels' should be 2.
Thanks for your attention!
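A sketch of that single-channel reading, treating each of the 5 channels as its own binary (background/foreground) problem; this is one interpretation of the metric, not the organisers' confirmed implementation:

    import numpy as np

    def f2_per_channel(pred, gt):
        # pred, gt: (5, H, W) arrays, foreground encoded as nonzero.
        # F2 = 5*tp / (5*tp + 4*fn + fp), i.e. F-beta with beta = 2.
        scores = []
        for c in range(pred.shape[0]):
            p, g = pred[c] > 0, gt[c] > 0
            tp = np.logical_and(p, g).sum()
            fp = np.logical_and(p, ~g).sum()
            fn = np.logical_and(~p, g).sum()
            denom = 5 * tp + 4 * fn + fp
            scores.append(5 * tp / denom if denom else 0.0)
        return scores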
