feipanir / IntraDA
Unsupervised Intra-domain Adaptation for Semantic Segmentation through Self-Supervision (CVPR 2020 Oral)
Home Page: https://arxiv.org/pdf/2004.07703.pdf
License: MIT License
Sorry to bother you. Will your team release the code for the above paper?
Hey, the pretrained and evaluation model links are broken.
I would also like to ask how to obtain the "Without adaptation" result on page 5 of your paper (table a).
Hey, how do you initialize the model in the intra-DA stage: do you train from scratch, or resume from where the inter-DA stage stopped?
Resuming from the inter-DA step seems like the more reasonable approach.
By the way, how does the quality of the easy-sample masks influence the intra-DA step? Will poor masks lead to worse results?
Hello, thank you for sharing your code.
Could you tell me the initial parameters of the model in step 3, please?
Are they loaded from DeepLab_resnet_pretrained_imagenet.pth, or retrained by ADVENT?
Thank you in advance.
I used the published ADVENT model to generate color masks and then trained the IntraDA model, but I got 45.56% mIoU. How can I reach the reported 47% mIoU?
(uda) brcao@dawn:~/Repos/fork/IntraDA/intrada$ python train.py --cfg ./intrada.yml
Called with args:
Namespace(cfg='./intrada.yml', random_train=False, tensorboard=False, viz_every_iter=None, exp_suffix=None)
Using config:
{'DATA_DIRECTORY_SOURCE': '../ADVENT/data/Cityscapes',
'DATA_DIRECTORY_TARGET': '/home/brcao/Repos/fork/IntraDA/ADVENT/data/Cityscapes',
'DATA_LIST_SOURCE': '../entropy_rank/easy_split.txt',
'DATA_LIST_TARGET': '../entropy_rank/hard_split.txt',
'EXP_NAME': 'CityscapesEasy2CityscapesHard_DeepLabv2_AdvEnt',
'EXP_ROOT': PosixPath('/home/brcao/Repos/fork/IntraDA/ADVENT/experiments'),
'EXP_ROOT_LOGS': '/home/brcao/Repos/fork/IntraDA/ADVENT/experiments/logs',
'EXP_ROOT_SNAPSHOT': '/home/brcao/Repos/fork/IntraDA/ADVENT/experiments/snapshots',
'GPU_ID': 0,
'NUM_CLASSES': 19,
'NUM_WORKERS': 1,
'SOURCE': 'CityscapesEasy',
'TARGET': 'CityscapesHard',
'TEST': {'BATCH_SIZE_TARGET': 1,
'IMG_MEAN': array([104.00699, 116.66877, 122.67892], dtype=float32),
'INFO_TARGET': '/home/brcao/Repos/fork/IntraDA/ADVENT/advent/dataset/cityscapes_list/info.json',
'INPUT_SIZE_TARGET': [1024, 512],
'MODE': 'best',
'MODEL': ['DeepLabv2'],
'MODEL_WEIGHT': [1.0],
'MULTI_LEVEL': [True],
'OUTPUT_SIZE_TARGET': [2048, 1024],
'RESTORE_FROM': [''],
'SET_TARGET': 'val',
'SNAPSHOT_DIR': [''],
'SNAPSHOT_MAXITER': 120000,
'SNAPSHOT_STEP': 2000,
'WAIT_MODEL': True},
'TRAIN': {'BATCH_SIZE_SOURCE': 1,
'BATCH_SIZE_TARGET': 1,
'DA_METHOD': 'AdvEnt',
'EARLY_STOP': 120000,
'IGNORE_LABEL': 255,
'IMG_MEAN': array([104.00699, 116.66877, 122.67892], dtype=float32),
'INFO_SOURCE': '',
'INFO_TARGET': '/home/brcao/Repos/fork/IntraDA/ADVENT/advent/dataset/cityscapes_list/info.json',
'INPUT_SIZE_SOURCE': [1024, 512],
'INPUT_SIZE_TARGET': [1024, 512],
'LAMBDA_ADV_AUX': 0.0002,
'LAMBDA_ADV_MAIN': 0.001,
'LAMBDA_ENT_AUX': 0.0002,
'LAMBDA_ENT_MAIN': 0.001,
'LAMBDA_SEG_AUX': 0.1,
'LAMBDA_SEG_MAIN': 1.0,
'LEARNING_RATE': 0.00025,
'LEARNING_RATE_D': 0.0001,
'MAX_ITERS': 250000,
'MODEL': 'DeepLabv2',
'MOMENTUM': 0.9,
'MULTI_LEVEL': True,
'POWER': 0.9,
'RANDOM_SEED': 1234,
'RESTORE_FROM': '../ADVENT/pretrained_models/DeepLab_resnet_pretrained_imagenet.pth',
'SAVE_PRED_EVERY': 2000,
'SET_SOURCE': 'all',
'SET_TARGET': 'train',
'SNAPSHOT_DIR': '/home/brcao/Repos/fork/IntraDA/ADVENT/experiments/snapshots/CityscapesEasy2CityscapesHard_DeepLabv2_AdvEnt',
'TENSORBOARD_LOGDIR': '',
'TENSORBOARD_VIZRATE': 100,
'WEIGHT_DECAY': 0.0005}}
Model loaded
0%| | 0/120000 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/home/brcao/Repos/fork/IntraDA/intrada/train.py", line 149, in <module>
main()
File "/home/brcao/Repos/fork/IntraDA/intrada/train.py", line 145, in main
train_domain_adaptation(model, easy_loader, hard_loader, cfg)
File "/home/brcao/Repos/fork/IntraDA/intrada/train_UDA.py", line 360, in train_domain_adaptation
train_advent(model, trainloader, targetloader, cfg)
File "/home/brcao/Repos/fork/IntraDA/intrada/train_UDA.py", line 115, in train_advent
_, batch = trainloader_iter.__next__()
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/brcao/Apps/miniconda3/envs/uda/lib/python3.11/site-packages/torch/utils/data/dataloader.py", line 633, in __next__
data = self._next_data()
^^^^^^^^^^^^^^^^^
File "/home/brcao/Apps/miniconda3/envs/uda/lib/python3.11/site-packages/torch/utils/data/dataloader.py", line 1345, in _next_data
return self._process_data(data)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/brcao/Apps/miniconda3/envs/uda/lib/python3.11/site-packages/torch/utils/data/dataloader.py", line 1371, in _process_data
data.reraise()
File "/home/brcao/Apps/miniconda3/envs/uda/lib/python3.11/site-packages/torch/_utils.py", line 644, in reraise
raise exception
FileNotFoundError: Caught FileNotFoundError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/home/brcao/Apps/miniconda3/envs/uda/lib/python3.11/site-packages/torch/utils/data/_utils/worker.py", line 308, in _worker_loop
data = fetcher.fetch(index)
^^^^^^^^^^^^^^^^^^^^
File "/home/brcao/Apps/miniconda3/envs/uda/lib/python3.11/site-packages/torch/utils/data/_utils/fetch.py", line 51, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/brcao/Apps/miniconda3/envs/uda/lib/python3.11/site-packages/torch/utils/data/_utils/fetch.py", line 51, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
~~~~~~~~~~~~^^^^^
File "/home/brcao/Repos/fork/IntraDA/intrada/cityscapes.py", line 42, in __getitem__
label = self.get_labels(label_file)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/brcao/Repos/fork/IntraDA/intrada/../ADVENT/advent/dataset/base_dataset.py", line 44, in get_labels
return _load_img(file, self.labels_size, Image.NEAREST, rgb=False)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/brcao/Repos/fork/IntraDA/intrada/../ADVENT/advent/dataset/base_dataset.py", line 48, in _load_img
img = Image.open(file)
^^^^^^^^^^^^^^^^
File "/home/brcao/Apps/miniconda3/envs/uda/lib/python3.11/site-packages/PIL/Image.py", line 3218, in open
fp = builtins.open(filename, "rb")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: '../entropy/color_masks/weimar_000053_000019_leftImg8bit.png'
Hey, I want to try your model with a ResNet-50 backbone. Where should I download the pretrained model? Thanks!
Your research has inspired me very much, so I am trying to reproduce this experiment. Is the batch size 1 per GPU?
I followed your steps:
python train.py --cfg ./configs/advent.yml
python entropy.py --best_iter BEST_ID --normalize False --lambda1 0.67
python train.py --cfg ./intrada.yml
But I cannot reproduce your performance.
For the first stage, I got 41.98 mIoU.
For the second stage, I got 44.57 mIoU.
Could you help me?
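For reference, the middle command performs the paper's entropy-based ranking: target images are sorted by the mean entropy of the inter-domain model's predictions, and the lowest-entropy fraction `lambda1` (0.67 above) becomes the easy split. A minimal sketch of that split, under the assumption that per-image mean entropies have already been computed (the function name is hypothetical, not from the repo):

```python
def split_easy_hard(entropy_scores, lambda1=0.67):
    """Rank images by mean prediction entropy (ascending); the first
    lambda1 fraction is the easy split, the remainder the hard split."""
    order = sorted(entropy_scores, key=entropy_scores.get)
    cut = int(len(order) * lambda1)
    return order[:cut], order[cut:]
```

With `lambda1 = 0.67`, roughly two thirds of the target images end up in `easy_split.txt` and the rest in `hard_split.txt`, matching the file names used in the config above.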
Hi, could you provide the trained model from STEP 1 (ADVENT)?
We reproduced the first step using your code with the default settings and got mIoU = 40.9.
I believe many researchers have struggled with the same problem.
If you can provide the trained model of the first phase (ADVENT), we would be grateful.
Thanks.
Can you upload the complete training code of IntraDA? The training code here is the code of ADVENT.
Thanks for your great work, but I am a little confused about loading the Synscapes and SYNTHIA datasets. Could you please provide some suggestions?
By the way, in the first stage, can I directly use the provided evaluation models of ADVENT and AdaptSegNet?
Hello! How do you draw the entropy maps in Figure 4(b) of your paper? Will you update your code? Or do you simply visualize "pred_trg_entropy" at line 167 of entropy_rank.py? Do you use any special tools?
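On the entropy-map question: such figures are typically just the per-pixel Shannon entropy of the softmax output, rendered with a colormap. A minimal sketch (not the authors' code; normalization by log2(C) is an assumption to keep values in [0, 1]):

```python
import numpy as np

def entropy_map(prob):
    """Per-pixel normalized Shannon entropy from softmax probabilities.
    prob: (C, H, W) array summing to 1 over the class axis."""
    c = prob.shape[0]
    # small epsilon avoids log(0); divide by log2(C) to normalize to [0, 1]
    return -np.sum(prob * np.log2(prob + 1e-30), axis=0) / np.log2(c)
```

The resulting (H, W) map can then be displayed with e.g. `plt.imshow(entropy_map(prob), cmap='viridis')`; no special tooling is required.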
Hi, if the target domain has labels from the start, the code runs, but doesn't unsupervised domain adaptation require the target domain to be unlabeled?
If the target domain has no labels at the beginning, the code cannot run.
If the target domain starts with pseudo-labels, how are those initial pseudo-labels generated?
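On the pseudo-label question: in this pipeline the initial pseudo-labels come from the inter-domain (ADVENT) model itself, as the per-pixel argmax of its softmax predictions on the easy-split images, so no ground-truth target labels are needed. A minimal sketch (the confidence threshold is an optional assumption, not necessarily what the repo does):

```python
import numpy as np

def make_pseudo_label(prob, threshold=None):
    """Pseudo-label = per-pixel argmax of the model's softmax output.
    prob: (C, H, W). Optionally mark low-confidence pixels as ignore (255),
    matching the IGNORE_LABEL used in the Cityscapes config above."""
    label = np.argmax(prob, axis=0).astype(np.uint8)
    if threshold is not None:
        conf = np.max(prob, axis=0)
        label[conf < threshold] = 255  # ignored by the segmentation loss
    return label
```

These argmax maps are what get saved as the `color_masks` PNGs that the intra-DA training stage reads as target "labels".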
Hi, this is good work. I want to ask: how can I train the model on my own custom dataset rather than official datasets like Cityscapes or GTA5? Thank you in advance.