
chen742 / pipa

Stars: 85 · Watchers: 3 · Forks: 15 · Size: 4.23 MB

Official implementation of PiPa: Pixel- and Patch-wise Self-supervised Learning for Domain Adaptive Semantic Segmentation

Home Page: https://arxiv.org/abs/2211.07609

Python 99.71% Shell 0.29%
contrastive-learning deep-learning domain-adaptation semantic-segmentation transformer

pipa's People

Contributors

chen742, layumi


pipa's Issues

out of memory

May I ask the authors of PiPa how much GPU memory this work requires? Why does a 24 GB RTX 3090 report an out-of-memory error?

patch loss

Why does your patch loss use features from the teacher network? And how is self.classifier in dacs.py trained?

About reproducing the performance.

Great project! I am very interested in your work, and thanks for the release.

However, after reproducing the experiment with the code (HRDA + PiPa), I was not able to achieve the result reported in the paper. Specifically, I reproduced the experiment by directly running

python run_experiments.py --config configs/pipa/gtaHR2csHR_hrda.py

with random seeds 0, 1, and 2, and achieved mIoU scores of 74.52, 74.34, and 74.73, respectively. Here are my logs:

I don't know the reason for the performance drop. Could you please suggest a possible cause or any hints for reproducing the results?

Thanks in advance.

Best,
Yuanbing
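For context, the mIoU figures quoted above are the mean of per-class intersection-over-union scores. A minimal NumPy sketch of how such a score is computed (this is not the repository's evaluation code; the toy label maps are made up):

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean intersection-over-union across classes present in either map."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:  # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy 2x2 label maps with two classes.
pred = np.array([[0, 0], [1, 1]])
gt   = np.array([[0, 1], [1, 1]])
print(mean_iou(pred, gt, num_classes=2))  # class IoUs 0.5 and 2/3, averaged
```

Note that real segmentation benchmarks accumulate the intersection and union counts over the whole validation set before dividing, rather than averaging per-image scores.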

About GPU device

Dear authors,

Thanks for your impressive work and code for the community, I am a student studying your paper and your work inspires me a lot.

I want to learn more about your work by running the code. Could I train it on my laptop RTX 3080 GPU, or do I need multiple GPUs for distributed training?

Questions about embedding features for contrastive learning

Hi, thanks for sharing the code; it's really awesome work for UDA.

I have noticed that the embedding features for “Source Pixel Contrast” and "Target Patch Contrast" are not the same.

Concretely, embedding features for “Source Pixel Contrast” are obtained by concatenating four feature maps from the backbone:

feat1 = x[0]
feat2 = F.interpolate(x[1], size=(h, w), mode="bilinear", align_corners=True)
feat3 = F.interpolate(x[2], size=(h, w), mode="bilinear", align_corners=True)
feat4 = F.interpolate(x[3], size=(h, w), mode="bilinear", align_corners=True)
feats = torch.cat([feat1, feat2, feat3, feat4], 1)
out = self.cls_head(feats)
emb = self.proj_head(feats)

However, embedding features for “Target Patch Contrast” are obtained directly from the fuse_layer:

x = self.fuse_layer(torch.cat(list(_c.values()), dim=1))
fuse_x = x
x = self.cls_seg(x)
return x, fuse_x

Why not use the same embedding features?
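For readers comparing the two paths: the first snippet upsamples the backbone's multi-scale features to a common resolution, concatenates them along the channel axis, and feeds the same tensor to both the classification and projection heads. A shape-only NumPy sketch of that first path (nearest-neighbour upsampling stands in for F.interpolate, and the channel sizes are made up, not taken from the repository):

```python
import numpy as np

def upsample(x, scale):
    """Nearest-neighbour upsampling of an (N, C, H, W) array."""
    return x.repeat(scale, axis=2).repeat(scale, axis=3)

# Made-up multi-scale backbone features, finest first (batch size 1).
x = [np.random.rand(1, 64, 16, 16),
     np.random.rand(1, 128, 8, 8),
     np.random.rand(1, 256, 4, 4),
     np.random.rand(1, 512, 2, 2)]

# Bring every stage to the resolution of x[0], then concatenate channels.
feats = np.concatenate([x[0],
                        upsample(x[1], 2),
                        upsample(x[2], 4),
                        upsample(x[3], 8)], axis=1)
print(feats.shape)  # (1, 960, 16, 16): both heads consume this one tensor
```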

About reproducing the performance about pixel-wise contrast

Hi, I ran the pixel-wise contrast experiment on GTA → Cityscapes, but I can only reach 69.8 mIoU. I only deleted the code for the patch-wise contrast and did not modify anything else. What is wrong with my experiment? Looking forward to your reply, thanks.
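As a point of reference for what a pixel-wise contrastive term typically computes, here is a generic InfoNCE sketch in NumPy: each anchor embedding is pulled toward a positive (same class) and pushed from negatives (other classes). This is illustrative only, not the repository's PixelContrastLoss; the temperature and embeddings are made up:

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.1):
    """Generic InfoNCE: low when the positive's similarity dominates."""
    def sim(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    logits = np.array([sim(anchor, positive)] +
                      [sim(anchor, n) for n in negatives]) / temperature
    logits -= logits.max()  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])  # cross-entropy with the positive as the target

rng = np.random.default_rng(0)
anchor = rng.normal(size=16)
# Easy case: positive identical to the anchor, random negatives.
loss_easy = info_nce(anchor, anchor, [rng.normal(size=16) for _ in range(8)])
# Hard case: positive opposed to the anchor, negatives identical to it.
loss_hard = info_nce(anchor, -anchor, [anchor for _ in range(8)])
```

In pixel-contrast losses the anchors, positives, and negatives are per-pixel projection-head embeddings selected by their (pseudo-)labels.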

Can I train PiPa with more than one GPU?

Hi,

Thanks for your great work!
I am doing a research project and would like to adopt your model as my baseline, but I would like to know whether this model can only be trained on a single GPU.

how to run this code on CPU only

Dear Author,

This is an excellent piece of work, and thanks a lot for sharing it. I was trying to run your code for experimentation. Could you please let me know how to run it on a CPU-only device?

--
Best Regards,
Dinesh

Code of DAFormer + PiPa

Hi, thanks for your awesome code.

I noticed that the released code is designed for HRDA; could you please provide the code for DAFormer?

patch loss + pixel loss

How is self.classifier in dacs.py trained?
And how is self.cls_head in encoder_decoder.py trained?
I did not find a related loss such as cross-entropy.
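For reference, auxiliary classifier heads in segmentation codebases are usually supervised with plain per-pixel cross-entropy; whether PiPa does the same is exactly what this issue asks. A generic NumPy sketch (made-up shapes, not the repository's loss code):

```python
import numpy as np

def pixel_cross_entropy(logits, labels):
    """Per-pixel softmax cross-entropy; logits (C, H, W), labels (H, W)."""
    logits = logits - logits.max(axis=0, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=0, keepdims=True))
    h, w = labels.shape
    # Pick each pixel's log-probability at its ground-truth class.
    return -log_probs[labels, np.arange(h)[:, None], np.arange(w)].mean()

logits = np.zeros((3, 2, 2))
logits[1] = 5.0                      # head is confident every pixel is class 1
labels = np.ones((2, 2), dtype=int)  # ground truth agrees
print(pixel_cross_entropy(logits, labels))  # small loss when predictions match
```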

Loss problem

I only found PixelContrastLoss but did not find a regional (patch-wise) loss. The code here is the same as that of Exploring Cross-Image Pixel Contrast for Semantic Segmentation, so what the article describes conflicts with the code implementation. There is also no memory bank storing positive and negative pixels for contrast.

Running Error

Hi authors,

I installed the codebase following your instructions but got the error below.
Could you help me? Any suggestion is welcome.

"RuntimeError: nms is not compiled with GPU support"

Thanks

Have you used imagenet feature distance?

From the code you released, the ImageNet feature distance is used in your work, but in your paper the FD loss is not included in the total loss. Can you explain this? Thank you very much!

Questions about ablation study


Thank you for your outstanding work! I would like to ask whether the ablation results on DAFormer use the original DAFormer settings (40k iterations and a 512x512 crop size) or keep PiPa's 60k iterations and 640x640 crop size. I'd appreciate a reply.

about performance

I would like to know whether the reported results come from the best checkpoint or the last checkpoint.
