
(NeurIPS2019) Category Anchor-Guided Unsupervised Domain Adaptation for Semantic Segmentation

License: MIT License

Languages: Python 92.05%, C 0.27%, C++ 2.73%, CUDA 4.94%
Topics: segmentation, uda

cag_uda's Introduction

Category Anchor-Guided Unsupervised Domain Adaptation for Semantic Segmentation

Qiming Zhang*, Jing Zhang*, Wei Liu, Dacheng Tao

Paper: https://arxiv.org/abs/1910.13049


Introduction

This repository contains the CAG-UDA method described in the NeurIPS 2019 paper "Category Anchor-Guided Unsupervised Domain Adaptation for Semantic Segmentation".

Requirements

The code is implemented with PyTorch 0.4.1, CUDA 9.0, and Python 3.6.7. The models were trained on an NVIDIA Tesla V100 with 16 GB of memory. Please see 'requirements.txt' for the other requirements.

Usage

The following assumes you are in the CAG-UDA root folder.

  1. Preparation
  • Download the GTA5 dataset as the source domain and the Cityscapes dataset as the target domain.
  • Put them into a folder (e.g., dataset/GTA5). Please check carefully that the folder path contains no invalid characters.
  • Note that images in GTA5 have slightly different resolutions; this is handled in our code.
  • Download the pretrained models here and put them in the 'pretrained/' folder. There are four models, for warm-up, stage 1, stage 2, and stage 3 respectively.
  2. Set up the config file 'config/adaptation_from_city_to_gta.yml'.
  • Set the dataset paths in the config file (data:source:rootpath and data:target:rootpath).
  • Set the pretrained model paths in 'training:resume' and 'training:Pred_resume'. The 'Pred_resume' model is used to assign pseudo-labels.
  • To better understand each parameter in the config file, please see 'config/readme'. A minimal sketch of these fields appears after this list.
  3. Training
  • To run the code:
python train.py
  • During training, the generated files (e.g., the log file) are written to the 'runs/..' folder.
  4. Evaluation
  • Set the config file for testing (configs/test_from_city_to_gta.yml): (1) set the dataset paths as illustrated before; (2) set the model path in 'test:path:'.
  • Run:
python test.py

to see the results.

  5. Constructing anchors
  • Set the config file 'configs/CAC_from_gta_to_city.yml' as illustrated before.
  • Run:
python cac.py
  • The anchor file will be written to 'run/cac_from_gta_to_city/..'.
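
For reference, here is a minimal sketch of the config fields mentioned in step 2 of 'config/adaptation_from_city_to_gta.yml'. The values below are placeholders, not the repository's actual defaults; see 'config/readme' for the authoritative parameter list.

```yaml
data:
  source:
    rootpath: dataset/GTA5              # placeholder: path to the GTA5 dataset
  target:
    rootpath: dataset/cityscapes        # placeholder: path to the Cityscapes dataset
training:
  resume: pretrained/stage1.pth         # placeholder: model to resume training from
  Pred_resume: pretrained/warmup.pth    # placeholder: model used to assign pseudo-labels
```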

License

MIT

The code borrows heavily from the repository https://github.com/meetshah1995/pytorch-semseg.

If you use this code and find it useful, please cite:

@inproceedings{zhang2019category,
  title={Category Anchor-Guided Unsupervised Domain Adaptation for Semantic Segmentation},
  author={Zhang, Qiming and Zhang, Jing and Liu, Wei and Tao, Dacheng},
  booktitle={Advances in Neural Information Processing Systems},
  pages={433--443},
  year={2019}
}

Notes

The category anchors are stored in the file 'category_anchors'. They are calculated as the mean of the features of each category over the source domain.
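
Conceptually, the computation looks like the following sketch (this is not the repository's actual cac.py; the tensor shapes and the 19-class default are assumptions):

```python
import torch

def compute_category_anchors(features, labels, num_classes=19):
    # features: (N, C, H, W) decoder features for source-domain images
    # labels:   (N, H, W) ground-truth category ids at the same spatial size
    #           (ignored pixels are assumed to carry an id outside 0..num_classes-1)
    c = features.shape[1]
    feats = features.permute(0, 2, 3, 1).reshape(-1, c)  # flatten to (N*H*W, C)
    labs = labels.reshape(-1)
    anchors = torch.zeros(num_classes, c)
    for k in range(num_classes):
        mask = labs == k
        if mask.any():
            anchors[k] = feats[mask].mean(dim=0)          # mean feature of category k
    return anchors
```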

Contact: [email protected] / [email protected]

cag_uda's People

Contributors

rogerzhangzz


cag_uda's Issues

about 'category_anchor'

I got a category anchor file after running cac.py, but when using it for training, I encountered an error:

File "train.py", line 87, in train
model.objective_vectors = objective_vectors['objective_vectors']
IndexError: only integers, slices (:), ellipsis (...), numpy.newaxis (None) and integer or boolean arrays are valid indices

Can you please point out what's wrong? Thanks!
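
For what it's worth, a quick way to see what the saved file actually contains (a debugging sketch, assuming the anchors were saved with torch.save; the error above is what indexing a raw tensor with the string 'objective_vectors' produces):

```python
import torch

# Inspect the saved anchor file: a dict exposes the 'objective_vectors' key,
# whereas a plain tensor cannot be indexed with a string.
objective_vectors = torch.load('category_anchors', map_location='cpu')
print(type(objective_vectors))
if isinstance(objective_vectors, dict):
    print(list(objective_vectors.keys()))
else:
    print(objective_vectors.shape)
```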

Warm up

Hi!

First of all, thanks for releasing the code.
In Algorithm 1 of the paper, it says only X_s and Y_s are used. However, in "Pixel-level Adversarial and Constraint-based Adaptation" they use X_s, Y_s, and X_t in their warm-up, at least that's how I understand it.

Is this a typo or did you adapt the warmup technique?

Thanks :)

about category anchors

I used CycleGAN to style-transfer the GTA5 dataset.
Should I run cac.py to generate a new category anchor file using the transferred dataset?
However, when applying the generated category_anchors I got an error:
File "train.py", line 87, in train
model.objective_vectors = objective_vectors['objective_vectors']
IndexError: only integers, slices (:), ellipsis (...), numpy.newaxis (None) and integer or boolean arrays are valid indices

Some typos in the final version of the paper

We apologize for the typos in the final version of the paper at
http://papers.nips.cc/paper/8335-category-anchor-guided-unsupervised-domain-adaptation-for-semantic-segmentation.pdf

The correct formulations of Equations 5 and 10 are given in the attached images (screenshots omitted here), instead of the versions printed in the paper.

The reviewed version of the paper is correct; this mistake is due to an unintended 'find and replace' operation while we were editing the final version.

This has been corrected on arXiv.org, and we apologize again for this mistake.

Thanks.

GTA5/split.mat

Thanks for your work.
When I tried to reproduce it by running 'train.py', I could not find the file 'GTA5/split.mat'. It is not included in the GTA5 dataset; could you share this file?

What is the feature transformation net?

You say in the paper that "the decoder can be further divided into a feature transformation net and a classifier".

As I understand it, the ASPP module is the classifier, which is layer5 in this code, but I don't know what the feature transformation net is.


Inconsistency between paper and code

Thanks for releasing the code!
I found several differences between the paper and the released code:

  1. The batch size: in the paper the batch size of the segmentation model is set to 1, while in the code the default batch size is 4.
  2. The ASPP module differs from the common design: you use concatenation followed by a convolution instead of a sum (see the sketch below):
    x = torch.cat((x1, x2, x3, x4, x5), dim=1)
  3. The decoder takes low-level features as input:
    def forward(self, x, low_level_feat):

I want to know: if I set the batch size to 1 as described in the paper, is there a significant performance drop? What is the reason for the decoder design? Is it necessary to use the low-level features?
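
To make the second point concrete, here is a minimal sketch (not the repository's exact module) contrasting the two ASPP fusion styles, assuming five parallel branch outputs with equal channel counts:

```python
import torch
import torch.nn as nn

# (a) Common DeepLab v2-style fusion: element-wise sum of the branch outputs.
def fuse_by_sum(x1, x2, x3, x4, x5):
    return x1 + x2 + x3 + x4 + x5

# (b) Fusion as in this code (and DeepLab v3+): concatenate along channels,
#     then project back with a 1x1 convolution.
class ConcatFusion(nn.Module):
    def __init__(self, branch_channels, out_channels):
        super().__init__()
        self.project = nn.Conv2d(5 * branch_channels, out_channels, kernel_size=1)

    def forward(self, x1, x2, x3, x4, x5):
        x = torch.cat((x1, x2, x3, x4, x5), dim=1)
        return self.project(x)
```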

Warmup for deeplabv3plus

I want to know how you deal with the batch normalization layers in the DeepLabv3+ head. As far as I know, AdaSeg freezes the BN layers pretrained on ImageNet in the backbone, and there are no BN layers in the DeepLabv2 head.
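
For reference, freezing BatchNorm layers in a pretrained backbone is commonly done along these lines (a generic PyTorch sketch, not the authors' confirmed warm-up recipe):

```python
import torch.nn as nn

def freeze_bn(model):
    # Keep running statistics fixed and stop updating the affine parameters.
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.eval()
            for p in m.parameters():
                p.requires_grad = False

# Note: model.train() switches BN modules back to training mode, so freeze_bn
# is typically re-applied (or train() overridden) after entering train mode.
```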

Inconsistent results in Table 3 (SYNTHIA->Cityscapes)

Hello, thanks for sharing this great work.
I found some inconsistent results in Table 3. Why are the reported results of the CAG-UDA model under the two mIoU metrics (13- and 16-class subsets) different? Previous SOTA methods, like AdvEnt, report the results on the 13-class subset based on those of the 16-class subset. Could you please explain?

(Screenshot of Table 3 omitted; from the [arxiv](https://arxiv.org/abs/1910.13049) version of the paper.)

Is the model DeepLab v3+ ?

Hello authors, thanks for the nice work and the published code. I have a question regarding your segmentation framework. As stated in your paper, it is DeepLab v2; however, when I look at your implementation of the ASPP module and the decoder, it seems to be DeepLab v3+ (which generally performs better than DeepLab v2, https://arxiv.org/pdf/1802.02611.pdf). Could you please confirm this point?

warm-up strategy

Hi authors, thanks for the nice work and the published code. Since the warm-up strategy is very important, could you release the warm-up code? Thanks.

Training with a custom dataset

I want to train your network on my own dataset. Could you please let me know how I can train on a custom dataset?
