Comments (13)
Hi @kkirtac, the .list file is a text file; each line has the form "ISIC_0000000.jpg ISIC_0000000_Segmentation.png". Both the training part and the testing part use this form; I have tried it and it works. The 480*480 image is obtained by deconvolution rather than by resizing. I hope this helps you.
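For example, a couple of lines of the .list file look like this (the second filename is just illustrative):

```
ISIC_0000000.jpg ISIC_0000000_Segmentation.png
ISIC_0000001.jpg ISIC_0000001_Segmentation.png
```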
Okay, it seems that feeding the network with cropped images (without resizing to a canonical size) works well. But I still do not understand at which point we get 480x480. The output size of the network should be the same as the input size.
- I want to understand whether every cropped lesion is resized to 480x480 before being fed into the network. `crop_size` is set to 480 in the training prototxt. How does this operate during training? (For reference, a sketch of how these settings typically appear in a data layer follows this list.)
- In an earlier post, Yu noted that no resampling is applied in the segmentation network, but the training prototxt sets `mirror: true`.
- The training prototxt also sets `mirror: true` and `crop_size: 480` for the validation set. Do you apply the same preprocessing steps (lesion cropping and resizing) to the validation samples? What is your training/validation split percentage? I assume no resizing or cropping is applied in the testing phase (to the test samples released by the challenge organizers), so wouldn't using validation samples without resizing or cropping make much more sense?
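For reference, these fields live inside the data layer's `transform_param`; this is only a generic sketch (the mean values are placeholders, not necessarily what the authors used):

```
transform_param {
  mirror: true       # random horizontal flip
  crop_size: 480     # random 480x480 crop in TRAIN, central crop in TEST
  mean_value: 104    # placeholder per-channel means
  mean_value: 117
  mean_value: 123
}
```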
Hi @kkirtac, I think the author resized the image to 480 before it was sent to the network; only 480 matches the training prototxt. If possible, I hope @yulequan can give an explicit, detailed explanation.
Hi @kkirtac, @muyulin,
In the segmentation task, we don't do any resampling operations. For each training image, we first find the bounding box from the annotation ground truth. If the bounding box is smaller than 480*480, we enlarge it. After that, we crop the bounding box from the training image as a subimage (you can see this subimage mainly contains the skin lesion). In order to include the background, we also crop another subimage of the same size from a random location in the whole image (it contains the background).
In summary, we crop two subimages from each training image. These subimages are listed in the .list file. When training the network, we use the Caffe data layer to randomly crop 480*480 patches from these subimages as the network input. In the testing phase, the network input size is also 480*480. We use a sliding-window strategy and tile these sub-segmentation results.
Btw, the cropped subimages are a little larger than the annotation bounding boxes.
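In code, that cropping step could look roughly like this (a NumPy sketch with my own function names and an assumed padding margin, not the authors' actual implementation; it assumes the ground-truth mask is a binary array and the image is at least 480 pixels per side):

```python
import numpy as np

def crop_training_subimages(image, mask, min_size=480, margin=20):
    """Crop a lesion-centered subimage plus a random background subimage.

    Sketch of the procedure described above: take the bounding box of the
    binary ground-truth mask, pad it a little, enlarge it to at least
    min_size x min_size, crop that region, and crop a second region of the
    same size at a random location to capture background appearance.
    """
    h, w = mask.shape[:2]
    ys, xs = np.nonzero(mask)                      # lesion pixel coordinates
    y0, y1 = ys.min(), ys.max()
    x0, x1 = xs.min(), xs.max()

    # Pad the bounding box a little and make it at least min_size per side.
    box_h = max(y1 - y0 + 1 + 2 * margin, min_size)
    box_w = max(x1 - x0 + 1 + 2 * margin, min_size)
    cy, cx = (y0 + y1) // 2, (x0 + x1) // 2
    top = int(np.clip(cy - box_h // 2, 0, max(h - box_h, 0)))
    left = int(np.clip(cx - box_w // 2, 0, max(w - box_w, 0)))
    lesion_crop = image[top:top + box_h, left:left + box_w]

    # Random same-size crop anywhere in the image for background context.
    rand_top = np.random.randint(0, max(h - box_h, 0) + 1)
    rand_left = np.random.randint(0, max(w - box_w, 0) + 1)
    background_crop = image[rand_top:rand_top + box_h, rand_left:rand_left + box_w]
    return lesion_crop, background_crop
```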
Thank you @yulequan, @muyulin, it is much clearer now.
I understand from the Caffe DataTransformer that in the test phase (on validation samples) a central crop and random mirroring are applied.

> In the testing phase, the network input size is also 480*480. We use a sliding-window strategy and tile these sub-segmentation results.
Just two questions about this:
- How do you handle the borders when an image dimension is not a multiple of 480?
- Do you also apply the same tiling strategy to the validation samples?
Besides that:
- Do you perform mean subtraction (channel-wise) on the test samples?
- What was your training/validation split percentage?
Thanks.
- When using the sliding window, there is overlap between different windows. If the image dimension is not a multiple of 480, we adjust the overlap of the last sliding window.
- The validation loss only reflects the segmentation performance on a single 480*480 subimage.
- If I remember correctly, we perform the same mean subtraction (RGB values) for training and test samples.
- I forget the exact training/validation split percentage. It may be 20%.
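The last-window adjustment could be implemented along these lines (my own sketch with an assumed stride value, not the authors' code):

```python
def window_starts(dim, win=480, stride=240):
    """Start offsets of sliding windows along one image dimension.

    Windows are placed every `stride` pixels; if `dim` is not a multiple of
    the window size, the last start is moved back so the final window ends
    exactly at the border (its overlap with the previous window grows).
    The stride value here is just an assumed example.
    """
    if dim <= win:
        return [0]
    starts = list(range(0, dim - win, stride))
    if starts[-1] != dim - win:
        starts.append(dim - win)   # adjusted last window
    return starts
```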
Hi @yulequan ,
- What was the overlap ratio between consecutive sliding windows (or the stride between consecutive windows)? Were you simply averaging pixels in overlapping regions while merging two subimage results?
I forget the specific overlap ratio. Yes, we use simple averaging of the probabilities.
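Combining that with the probability averaging, full-image inference could look roughly like this (again a sketch: `predict_patch` is an assumed function that returns per-pixel lesion probabilities for one 480x480 patch, it reuses the `window_starts` helper above, and it assumes the image is at least 480 pixels per side):

```python
import numpy as np

def predict_full_image(image, predict_patch, win=480, stride=240):
    """Tile the image with overlapping windows and average the probabilities.

    Each window is fed through the network; overlapping regions accumulate
    probabilities and are divided by the number of windows covering them.
    """
    h, w = image.shape[:2]
    prob_sum = np.zeros((h, w), dtype=np.float64)
    counts = np.zeros((h, w), dtype=np.float64)
    for y in window_starts(h, win, stride):
        for x in window_starts(w, win, stride):
            patch = image[y:y + win, x:x + win]
            prob_sum[y:y + win, x:x + win] += predict_patch(patch)
            counts[y:y + win, x:x + win] += 1
    return prob_sum / counts
```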
Hi @yulequan ,
I have implemented the same segmentation pipeline using Keras-TF. I left a random 10% portion of my training data as validation data, then prepared the training samples with the same method you explained. I ended up with 1399 training samples (including the resampled background images) and 90 validation samples.
I am experiencing overfitting issues. Please see my validation error when I fine-tune only the final layers versus fine-tuning all layers as your training prototxt suggests. While fine-tuning only the final layers, I skipped the multi-scale feature aggregation and just performed deconvolution on the output of the final convolution layer with stride 32. How did you overcome overfitting while fine-tuning all layers?
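By "fine-tuning only the final layers" I mean freezing the pretrained backbone and training just the newly added layers, roughly like this in Keras (the layer-name prefixes and optimizer settings are placeholders from my own setup, not from the released prototxt):

```python
from tensorflow import keras

def freeze_backbone(model, trainable_prefixes=("deconv", "score")):
    """Freeze every layer except those whose names start with the given prefixes.

    The prefixes stand in for the newly added deconvolution / scoring layers;
    all pretrained backbone layers stay frozen.
    """
    for layer in model.layers:
        layer.trainable = layer.name.startswith(trainable_prefixes)
    # Recompile so the trainable flags take effect.
    model.compile(optimizer=keras.optimizers.SGD(1e-3, momentum=0.9),
                  loss="binary_crossentropy")
    return model
```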
Hi @yulequan,
Can you please share with us how you augmented the training and testing data?
hi @yulequan
how do you get the bounding box from the annotation ground truth?
There are only mask files in the ground truth zip file, aren't there? Do you draw the bounding boxes with a labeling tool yourself?
@yulequan @kkirtac
I found `ignore_label: 255` in the prototxt, so the mask files should be binarized. Am I right?
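For instance, converting the 0/255 grayscale masks to 0/1 labels would be something like this (my own sketch, assuming the ISIC masks are plain grayscale PNGs):

```python
import numpy as np
from PIL import Image

def binarize_mask(mask_path, out_path, threshold=128):
    """Convert a 0/255 grayscale segmentation mask into 0/1 labels.

    With ignore_label: 255 in the prototxt, pixels left at 255 would be
    excluded from the loss, so the lesion must be mapped to label 1.
    """
    mask = np.array(Image.open(mask_path).convert("L"))
    labels = (mask >= threshold).astype(np.uint8)   # background -> 0, lesion -> 1
    Image.fromarray(labels).save(out_path)
```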