
Comments (20)

rshaojimmy avatar rshaojimmy commented on July 29, 2024 2

I think the gains lie in the pre-processing transforms.ColorJitter.

DeepAll without transforms.ColorJitter reaches 70% accuracy when the test domain is sketch. I see that in your run_PACS_photo.sh you set --jitter 0, so I think the reported results themselves are fine.

But after I add transforms.ColorJitter, simple DeepAll can reach 76% when the test domain is sketch.

I think current DG work should use a fair pre-processing pipeline; otherwise the gains of current DG methods are misleading. A simple pre-processing step like ColorJitter can significantly improve performance without any DG strategy.
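
As a hedged illustration of the kind of pipeline being discussed (torchvision-style, with illustrative names; this is not the repo's exact code), the jitter flag can simply toggle ColorJitter in the training transform:

```python
# Sketch of a PACS-style training transform where a single --jitter value
# toggles ColorJitter; run_PACS_photo.sh uses --jitter 0, which disables it.
from torchvision import transforms

def make_train_transform(jitter=0.0, crop_size=222):
    ops = [
        transforms.RandomResizedCrop(crop_size, scale=(0.8, 1.0)),
        transforms.RandomHorizontalFlip(),
    ]
    if jitter > 0:
        ops.append(transforms.ColorJitter(brightness=jitter, contrast=jitter,
                                          saturation=jitter, hue=min(0.5, jitter)))
    ops += [
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ]
    return transforms.Compose(ops)
```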


postBG avatar postBG commented on July 29, 2024

I have the same problem. I've used the same parameters suggested in 'run_PACS_photo.sh', and the test accuracy is estimated using the model that shows the best performance on the validation set. As far as I know, this is the protocol proposed by Li et al. However, when I followed the protocol, the accuracy was almost 3-4% higher than the reported results.
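
For readers unfamiliar with that protocol, here is a minimal sketch of validation-based model selection versus the oracle "max_test" number discussed later in this thread; train_one_epoch, evaluate, and the loaders are hypothetical placeholders, not the repo's actual functions:

```python
# Select the checkpoint by source-domain validation accuracy and report its
# target-domain accuracy (val_test); max_test is shown only for comparison.
def select_by_validation(model, train_loader, val_loader, test_loader,
                         num_epochs, train_one_epoch, evaluate):
    best_val = val_test = max_test = 0.0
    for _ in range(num_epochs):
        train_one_epoch(model, train_loader)
        val_acc = evaluate(model, val_loader)    # held-out split of the source domains
        test_acc = evaluate(model, test_loader)  # unseen target domain
        max_test = max(max_test, test_acc)       # oracle selection: not a valid DG protocol
        if val_acc > best_val:
            best_val, val_test = val_acc, test_acc
    return val_test, max_test
```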


silvia1993 avatar silvia1993 commented on July 29, 2024

Are you sure you used the "requirements.txt" in the folder to install the correct versions of the libraries?


neouyghur avatar neouyghur commented on July 29, 2024

I also have the same problem. DeepAll performance reaches 80%.


silvia1993 avatar silvia1993 commented on July 29, 2024

Do you reach 80% with ResNet18?


neouyghur avatar neouyghur commented on July 29, 2024

@silvia1993 I think I made a mistake. I was looking at the results in the terminal. Should I check the results in the tflog instead? Did you report the val_test result or max_test? Did you run the experiment 5 times? Thanks.


silvia1993 avatar silvia1993 commented on July 29, 2024

You should check the terminal, but actually even if you look at the tflog the results should be the same.
We report the val_test result, running the experiments 3 times.


rshaojimmy avatar rshaojimmy commented on July 29, 2024

Hi,

I also have the same issue: the DeepAll baseline reaches at least 80% accuracy on PACS, which is even better than the proposed method.

So may I confirm whether the trained DG model is tested on the test split only, or on all data (including the train, val, and test splits) of the unseen domain?

Thanks.


silvia1993 avatar silvia1993 commented on July 29, 2024

Hi,

we test the DG model on all data of the unseen domain.
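
For anyone reproducing this, that means the target-domain loader is built from every image of the unseen domain rather than from a held-out split. A hedged sketch, assuming an ImageFolder-style layout and a plain evaluation transform (the path is illustrative):

```python
# Evaluate on all images of the unseen domain (here: sketch), no split.
import torch
from torchvision import datasets, transforms

eval_tf = transforms.Compose([
    transforms.Resize((222, 222)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
target = datasets.ImageFolder("PACS/sketch", transform=eval_tf)  # illustrative path
target_loader = torch.utils.data.DataLoader(target, batch_size=128, shuffle=False)
```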


rshaojimmy avatar rshaojimmy commented on July 29, 2024

Thanks.

I also tested on all data, but still, my implementation of DeepAll can reach close to 80% accuracy when the unseen domain is sketch (the hardest one), using ResNet-18.

May I know whether you used a ResNet-18 pre-trained on ImageNet or random initialization?


silvia1993 avatar silvia1993 commented on July 29, 2024

We used a ResNet-18 pre-trained on ImageNet.
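
For completeness, a minimal sketch of that backbone setup (the repo's actual head and hyper-parameters may differ):

```python
# ImageNet-pretrained ResNet-18 with a fresh 7-way classifier for PACS.
import torch.nn as nn
from torchvision import models

model = models.resnet18(pretrained=True)       # ImageNet weights
model.fc = nn.Linear(model.fc.in_features, 7)  # PACS has 7 object classes
```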


gshuangchun avatar gshuangchun commented on July 29, 2024

Are you sure you used the "requirements.txt" in the folder to install the correct versions of the libraries?
Why can't I successfully pip install the requirements? It says that the version can't be found.


rshaojimmy avatar rshaojimmy commented on July 29, 2024

Thanks. That's very strange: I also use a ResNet-18 pre-trained on ImageNet as the backbone of DeepAll, with the same datasets and preprocessing as yours, but the accuracy on Sketch can reach 80% compared to the reported 70%. So I really doubt where the performance gains of current DG papers come from.


neouyghur avatar neouyghur commented on July 29, 2024


fmcarlucci avatar fmcarlucci commented on July 29, 2024

DeepAll with simple data augmentation, a smaller batch size, and more epochs will beat JigenDG by 2 or 3 percent.


How exactly did you choose the smaller batch size and more epochs?
In DG you cannot look at the target domain to set those.

The point is, it is possible to have DeepAll outperform Jigen on a single setting if you spend enough time tweaking hyper-parameters, but that will not generalize at all to other settings.


rshaojimmy avatar rshaojimmy commented on July 29, 2024

Thanks. For my part, I did not change any of the default settings or parameters used in your code, but still DeepAll could reach close to 80% accuracy.


fmcarlucci avatar fmcarlucci commented on July 29, 2024

Thanks. For my part, I did not change any of the default settings or parameters used in your code, but still DeepAll could reach close to 80% accuracy.

It's possible the pretrained model got updated, or different library versions lead to slightly different results. Did you try running DeepAll on the whole setting to see if the average result is the same?


fmcarlucci avatar fmcarlucci commented on July 29, 2024

Apologies for the late reply; I'm pretty confident we used the same augmentation protocol for both DeepAll and Jigen.
One thing to note is that on a single setting it is pretty easy to see weird effects; it's always important to run the whole setting.
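
Concretely, "running the whole setting" means repeating the leave-one-domain-out experiment for every PACS domain and averaging, along these lines (hedged sketch; run_experiment is a hypothetical helper standing in for a full training run):

```python
# Leave-one-domain-out over all of PACS, averaged over repeated runs.
DOMAINS = ["photo", "art_painting", "cartoon", "sketch"]

def whole_setting_average(run_experiment, repeats=3):
    per_domain = {}
    for target in DOMAINS:
        sources = [d for d in DOMAINS if d != target]
        accs = [run_experiment(sources, target) for _ in range(repeats)]
        per_domain[target] = sum(accs) / len(accs)
    overall = sum(per_domain.values()) / len(per_domain)
    return per_domain, overall
```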


DrraBL avatar DrraBL commented on July 29, 2024

Hi,
I am a beginner in DG and I wanted to try all the existing DG methods on my own dataset, but whenever I read references I find that DeepAll is the baseline, so first I should run DeepAll on my data. My question is: what is DeepAll, and how can I get the source code for this model?
I will be grateful to anyone who could give me some information.
Kind regards,


fmcarlucci avatar fmcarlucci commented on July 29, 2024

Hi, I am a beginner in DG and I wanted to try all the existing DG methods on my own dataset, but whenever I read references I find that DeepAll is the baseline, so first I should run DeepAll on my data. My question is: what is DeepAll, and how can I get the source code for this model? I will be grateful to anyone who could give me some information. Kind regards,

Hi, DeepAll simply means training on all the source datasets together - it's the simplest baseline. You just need to combine the data and then train as usual.
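
As a hedged illustration (assuming per-domain ImageFolder directories; paths and the transform are not from the repo), a DeepAll data setup can be as simple as concatenating the source-domain datasets and training a single classifier on the result:

```python
# DeepAll: pool all source domains into one training set, then train as usual.
import torch
from torchvision import datasets, transforms

train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
sources = ["photo", "art_painting", "cartoon"]   # e.g. when "sketch" is the target
train_set = torch.utils.data.ConcatDataset(
    [datasets.ImageFolder(f"PACS/{d}", transform=train_tf) for d in sources])
train_loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True)
```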

