woodman718 / FixCaps
FixCaps: An Improved Capsules Network for Diagnosis of Skin Cancer, DOI: 10.1109/ACCESS.2022.3181225
License: MIT License
(Referencing line 8 in commit 16d5adc.)
How does the dataset offered there differ from the original? (Is it possible to set that page to English?) Could you also provide the code itself, so that I can apply the augmentations to the original dataset myself and maintain comparability and verifiability?
First, thanks for the good research; I'm interested in the skin cancer lesion classification task.
I tried to reproduce your experiments but only got 94% accuracy on the HAM10000 dataset.
I executed 01_Skin_Distinction.ipynb, 02_Aug_img.ipynb, and 03_Svd_Blend.ipynb to build the datasets, producing 54,292 training images and 828 validation images.
I then used FixCaps_HAM_24.ipynb to train and validate the FixCaps model, but still got 94% accuracy.
Are there any additional training methods, or did I make a mistake somewhere?
My other question is about the random seed. I tried to fix the seeds using
np.random.seed(10)
torch.manual_seed(10)
torch.cuda.manual_seed(10)
but these three calls do not make the results reproducible. Is there any other method?
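For what it's worth, those three calls alone usually aren't enough in PyTorch: Python's own `random` module and cuDNN's autotuned, nondeterministic kernels also affect results. A minimal seeding helper (a generic sketch, not code from this repo) would look like:

```python
import random

import numpy as np
import torch


def set_seed(seed: int = 10) -> None:
    """Seed every RNG that commonly affects a PyTorch training run."""
    random.seed(seed)                  # Python's built-in RNG
    np.random.seed(seed)               # NumPy RNG
    torch.manual_seed(seed)            # CPU RNG
    torch.cuda.manual_seed_all(seed)   # all GPU RNGs (no-op without CUDA)
    # cuDNN autotunes and may pick nondeterministic kernels; disable that.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
```

Even with all of this, `DataLoader` worker processes need their own `worker_init_fn` seeding, and some CUDA ops remain nondeterministic unless `torch.use_deterministic_algorithms(True)` is set.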
Thank you
The number of parameters reported for each FixCaps model appears to come from torchsummary (see cell 6 in the FixCaps-DS notebook).
However, torchsummary's parameter count is inaccurate for custom modules, in particular when a parameter's name is not "weight" or "bias" (see this issue). The count does not include the network.digits.W parameters, which account for the majority of the FixCaps-DS model's parameters.
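A count that doesn't depend on parameter names is to sum over `model.parameters()` directly, which includes any `nn.Parameter` regardless of what it is called. A minimal sketch (the `TinyCaps` module below is a hypothetical stand-in for a capsule layer whose weight tensor is registered under the name `W`, as in FixCaps-DS):

```python
import torch
import torch.nn as nn


class TinyCaps(nn.Module):
    # Hypothetical stand-in: the weight is registered as "W",
    # not "weight"/"bias", which is the case torchsummary miscounts.
    def __init__(self):
        super().__init__()
        self.W = nn.Parameter(torch.randn(8, 16, 10))


model = TinyCaps()
# Counts every registered nn.Parameter, whatever its name.
total = sum(p.numel() for p in model.parameters())
print(total)  # 8 * 16 * 10 = 1280
```

Comparing this number against the torchsummary output would show exactly how many parameters the summary misses.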
@Woodman718 I see that the other tested datasets have their respective checkpoint (.pth) files.
Do you have the checkpoint files for the HAM10000 dataset? I would like to replicate your results.
Sir, I am currently working on your FixCaps project, but I can't find some of the directories, such as 525_train and 501_train, that are referenced in the augmentation code for HAM10000.
Waiting for your response. Thank you, sir.
Hello, I just wanted to confirm that the reported results (96% accuracy) were obtained on the HAM10000 test set, as opposed to a custom split of the training set?
Thanks,
Brett
Hello! I am replicating your model on the HAM10000 dataset for a course project. I had a question about the dynamic routing implementation inside your Digit Caps module.
In model.py, line 120, you initialize the b_ij routing logits with shape (1, features, num_units, 1). Here the routing logits have a batch dimension of 1, implying that one set of routing logits is shared across the entire batch. Similarly, on line 142 you take the mean of u_vj1 across dimension 0 (the batch dimension).
It is my understanding that these routing logits should be different for each image in the batch, so b_ij should actually have shape (batch, features, num_units, 1), and the mean(dim=0) on u_vj1 should not be performed.
This averaging is a problem for me because, when I run the test dataset with batch size 1 (T_size = 1), accuracy drops to ~30%. This is generally not expected at test time, because ideally each image's inference should be independent.
Question: could you share some details about why you opted to average the dynamic routing logits across the batch?
Note that we had no issue recreating your reported accuracy with the correct batch-size settings, based on FixCaps_HAM-29.ipynb and your comments in issue #1.
Thank you for the well-commented code and the detailed explanation of how to run your notebooks!
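For reference, a per-sample dynamic-routing sketch along the lines suggested above (a separate b_ij per batch element, no mean over dimension 0) could look like the following. This is a generic capsule-routing illustration under assumed tensor shapes, not the repository's model.py code:

```python
import torch
import torch.nn.functional as F


def squash(s: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Standard capsule squashing nonlinearity along the last dim."""
    n2 = (s ** 2).sum(dim=-1, keepdim=True)
    return (n2 / (1.0 + n2)) * s / torch.sqrt(n2 + eps)


def dynamic_routing(u_hat: torch.Tensor, num_iters: int = 3) -> torch.Tensor:
    """Per-sample dynamic routing (hypothetical sketch).

    u_hat: prediction vectors, shape (batch, in_caps, out_caps, dim).
    Returns output capsules, shape (batch, out_caps, dim).
    """
    # Routing logits carry a full batch dimension: (batch, in_caps, out_caps).
    b = torch.zeros(u_hat.shape[:3], device=u_hat.device)
    for _ in range(num_iters):
        c = F.softmax(b, dim=2)                    # couplings per input capsule
        s = (c.unsqueeze(-1) * u_hat).sum(dim=1)   # (batch, out_caps, dim)
        v = squash(s)
        # Agreement updates b per sample; no mean over the batch dimension.
        b = b + (u_hat * v.unsqueeze(1)).sum(dim=-1)
    return v
```

With per-sample logits, running an image alone or inside a larger batch yields the same output, so accuracy no longer depends on the test batch size.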