Comments (20)
I think the gains lie in the pre-processing step transforms.ColorJitter.
DeepAll without transforms.ColorJitter reaches 70% accuracy when the test domain is sketch. I see that in your run_PACS_photo.sh you set --jitter 0, so the reported results themselves are consistent.
But after I add transforms.ColorJitter, plain DeepAll reaches 76% when the test domain is sketch.
I think current DG evaluation should use a fair pre-processing pipeline, otherwise the gains of current DG methods are misleading: a simple pre-processing step like ColorJitter can significantly improve performance without any DG strategy.
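For concreteness, the brightness component of torchvision's transforms.ColorJitter can be sketched in pure Python as follows. This is a simplified illustration of the mechanism, not the repo's actual pipeline; the function name and flat-list image representation are made up for the example.

```python
import random

def brightness_jitter(pixels, strength=0.4, rng=None):
    """Scale all pixel values by one random factor drawn from
    [1 - strength, 1 + strength], mimicking the brightness part of
    torchvision.transforms.ColorJitter (simplified sketch).
    `pixels` is a flat list of floats in [0, 1]."""
    rng = rng or random.Random()
    factor = rng.uniform(1.0 - strength, 1.0 + strength)
    # Clamp back into the valid range after scaling.
    return [min(1.0, max(0.0, p * factor)) for p in pixels]

rng = random.Random(0)  # fixed seed, for a reproducible illustration
aug = brightness_jitter([0.2, 0.5, 0.8], strength=0.4, rng=rng)
```

Because one factor is shared across the whole image, relative pixel intensities are preserved (up to clamping); applying a fresh random factor every epoch is what makes the augmentation effective.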
from jigendg.
I have the same problem. I used the same parameters suggested in 'run_PACS_photo.sh', and the test accuracy is estimated using the model that performs best on the validation set. As far as I know, this is the protocol proposed by Li et al. However, when I followed this protocol, the accuracy was almost 3~4% higher than the reported results.
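The selection protocol described above (report the test accuracy at the epoch where validation accuracy peaks) can be sketched as follows; the function name and the toy accuracy numbers are made up for illustration.

```python
def select_by_validation(history):
    """history: list of (val_acc, test_acc) pairs, one per epoch.
    Return the test accuracy at the epoch with the best validation
    accuracy -- the protocol described above."""
    best_val, selected_test = -1.0, None
    for val_acc, test_acc in history:
        if val_acc > best_val:
            best_val, selected_test = val_acc, test_acc
    return selected_test

# Toy numbers: epoch 2 has the best val acc (0.78), so its test acc
# (0.71) is reported, even though a later epoch reaches 0.74 on test.
history = [(0.70, 0.65), (0.78, 0.71), (0.75, 0.74)]
```

Note that this can report a lower number than the best test accuracy seen during training, which is one source of discrepancies between reruns and reported results.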
from jigendg.
Are you sure you used the "requirements.txt" in the folder, so that the library versions are correct?
from jigendg.
I also have the same problem. DeepAll performance reaches 80%.
from jigendg.
Do you reach 80% with ResNet18?
from jigendg.
@silvia1993 I think I made a mistake: I was looking at the results in the terminal. Should I check the results in the tflog instead? Did you report the val_test result or the max_test result? Did you run the experiment 5 times? Thanks.
from jigendg.
You should check in the terminal, but even if you look in the tflog the results should be the same.
We report the val_test result, running the experiments 3 times.
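The val_test vs. max_test distinction, with averaging over repeated runs as described above, can be sketched like this (function name and data layout are assumptions for the example):

```python
import statistics

def report(runs):
    """runs: one list of (val_acc, test_acc) epoch pairs per run.
    val_test: test acc at the best-validation epoch of each run;
    max_test: best test acc over all epochs of each run.
    Returns both, averaged over runs (the thread says val_test,
    averaged over runs, is what the paper reports)."""
    val_tests, max_tests = [], []
    for history in runs:
        best_epoch = max(history, key=lambda h: h[0])  # best val acc
        val_tests.append(best_epoch[1])
        max_tests.append(max(t for _, t in history))
    return statistics.mean(val_tests), statistics.mean(max_tests)
```

max_test is always at least as high as val_test, so comparing a rerun's max_test against a paper's reported val_test inflates the apparent gap.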
from jigendg.
Hi,
I also have the same issue: the DeepAll baseline reaches at least 80% accuracy on PACS, which is even better than the proposed method.
So may I confirm whether the trained DG model is tested on the test split or on all data (train, val, and test splits) of the unseen domain?
Thanks.
from jigendg.
Hi,
we test the DG model on all the data of the unseen domain
from jigendg.
Thanks.
I also tested on all the data, but my DeepAll implementation still reaches close to 80% accuracy when the unseen domain is sketch (the hardest one), based on ResNet-18.
May I ask whether you use a ResNet-18 pre-trained on ImageNet or random initialization?
from jigendg.
We used a ResNet-18 pre-trained on ImageNet.
from jigendg.
> Are you sure you used the "requirements.txt" in the folder, so that the library versions are correct?

Why can't I successfully pip install the requirements? It says the versions can't be found.
from jigendg.
Thanks. It's very strange: I also use a ResNet-18 pre-trained on ImageNet as the DeepAll backbone, with the same datasets and pre-processing as yours, yet the accuracy on sketch can reach 80% compared to the reported 70%. So I really doubt where the performance gains of current DG papers actually come from.
from jigendg.
DeepAll with simple data augmentation and a smaller batch size with more epochs will beat JigenDG by 2 or 3 percent.
How exactly did you choose the smaller batch size and more epochs?
In DG you cannot look at the target domain to set those.
The point is, it is possible to have DeepAll outperform Jigen on a single setting if you spend enough time tweaking hyper-parameters, but that will not generalize at all to other settings.
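The constraint above (hyper-parameters must be chosen without looking at the target domain) can be sketched as a selection over source-domain validation accuracy only; the function names, config tuples, and scores here are made up for illustration.

```python
def pick_hparams(configs, source_val_acc):
    """Pick the config with the best *source-domain* validation
    accuracy. In DG the target domain must never inform this choice.
    `source_val_acc(cfg)` is a hypothetical callable that trains with
    `cfg` on the sources and returns validation accuracy."""
    return max(configs, key=source_val_acc)

# Illustration with made-up source-validation scores (not real results):
configs = [(32, 30), (64, 30), (128, 100)]   # (batch_size, epochs)
scores = {(32, 30): 0.80, (64, 30): 0.85, (128, 100): 0.82}
best = pick_hparams(configs, scores.get)
```

If instead the batch size and epoch count were tuned against target-domain accuracy, the comparison would leak target information and stop being a DG result.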
from jigendg.
Thanks. For me, I did not change any of the default settings and parameters in your code, but DeepAll still reached close to 80% accuracy.
from jigendg.
> Thanks. For me, I did not change any default settings and parameters used in your codes. But still, DeepAll could reach close to 80% Accuracy.
It's possible the pretrained model got updated, or different library versions lead to slightly different results. Did you try running DeepAll on the whole setting to see if the average result is the same?
from jigendg.
Apologies for the late reply; I'm pretty confident we used the same augmentation protocol for both DeepAll and Jigen.
One thing to note: on a single setting it is pretty easy to see weird effects, so it's always important to run the whole setting.
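"Running the whole setting" on PACS means holding out each domain as the unseen target in turn and averaging; a minimal sketch, where `run_experiment` is a hypothetical callable standing in for a full training run.

```python
import statistics

DOMAINS = ["photo", "art_painting", "cartoon", "sketch"]  # PACS domains

def run_whole_setting(run_experiment):
    """Leave each domain out as the unseen target in turn and average
    the accuracies -- the 'whole setting' the reply above recommends,
    rather than judging from a single target domain.
    `run_experiment(sources, target)` trains on the source domains and
    returns target accuracy (hypothetical interface)."""
    accs = {}
    for target in DOMAINS:
        sources = [d for d in DOMAINS if d != target]
        accs[target] = run_experiment(sources, target)
    return accs, statistics.mean(accs.values())
```

A high number on one target (e.g. sketch alone) can be noise; the averaged number across all four targets is what the paper-style comparison uses.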
from jigendg.
Hi,
I am a beginner in DG and I wanted to try all the existing DG methods on my own dataset, but whenever I read references I find that DeepAll is the baseline, so first I should run DeepAll on my data. My question is: what is DeepAll, and how can I get the source code of this model?
I would be grateful to anyone who could give me some information.
Kind regards,
from jigendg.
Hi, DeepAll simply means training on all the source datasets together; it's the simplest baseline. You just need to combine the data and then train as usual.
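The data-preparation side of that answer can be sketched in a few lines; the function name and (x, y)-pair representation are made up for the example.

```python
import random

def deep_all_dataset(domain_datasets, seed=0):
    """DeepAll baseline data preparation: concatenate all source-domain
    datasets into a single pool and shuffle it. A standard classifier
    is then trained on the pool as usual, ignoring domain labels.
    Each dataset is a list of (x, y) pairs (sketch)."""
    pool = [example for ds in domain_datasets for example in ds]
    random.Random(seed).shuffle(pool)
    return pool
```

After pooling, training is an ordinary supervised loop; nothing DG-specific is involved, which is exactly why DeepAll serves as the baseline.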
from jigendg.