
relationformer's People

Contributors

bwittmann, rajatkoner08, suprosanna


relationformer's Issues

debug_relationformer

Hi! Is it possible to share the debug_relationformer notebook for the road_network and road_network_rgb branches? Thanks!

Problems running the code

Hello, I am very interested in your open-source work, but I ran into several problems when running your code. Could you help me with them?
First, there is an error when computing the evaluation metrics: it raises a matrix-dimension mismatch, as shown below.
[screenshot of the evaluation error]
I modified the code just enough to make it run, but is there a deeper problem here?
Another issue is shown below. You seem to have inherited some practices from DETR; do these lead to the index-out-of-range error, or am I overlooking something?
[screenshot of the index error]
Second, I see that one epoch takes about 7 hours on three Titan GPUs. Is a total training time of 175 hours for 25 epochs reasonable?
[screenshot of the training log]
I look forward to your reply. Thank you!

Question about hid_dim when using pretrained Deformable DETR

Hi there, thank you for providing your work as open source. Your repo helps me understand the work more thoroughly.
I am curious about HIDDEN_DIM: https://github.com/suprosanna/relationformer/blob/scene_graph/configs/scene_2d.yaml#L40
I believe you must have used pretrained weights from https://github.com/fundamentalvision/Deformable-DETR, because I cannot find any mention of pretraining in your paper.
However, as far as I know, all Deformable DETR pretrained weights use a hidden_dim of 256, while your configuration specifies 512.
Was there a trick for loading the pretrained weights, or is there a publicly available Deformable DETR model with a hidden_dim of 512?
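
For reference, here is a minimal sketch of partially loading a 256-dim Deformable DETR checkpoint by skipping shape-mismatched tensors; the checkpoint path is a placeholder and `model` stands for the RelationFormer instance being trained, so this is an illustration rather than the authors' confirmed procedure:

    import torch

    # Placeholder checkpoint path; official Deformable DETR checkpoints store weights under "model".
    checkpoint = torch.load("r50_deformable_detr-checkpoint.pth", map_location="cpu")
    pretrained_state = checkpoint["model"]

    model_state = model.state_dict()  # `model` = the RelationFormer model being trained (assumed in scope)

    # Keep only tensors whose names and shapes match the current model; anything tied to
    # the 256-dim hidden size is skipped and stays randomly initialized.
    filtered = {k: v for k, v in pretrained_state.items()
                if k in model_state and v.shape == model_state[k].shape}

    missing, unexpected = model.load_state_dict(filtered, strict=False)
    print(f"loaded {len(filtered)} tensors, skipped {len(pretrained_state) - len(filtered)}")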

Problem compiling the CUDA operators

Hello, thank you very much for publishing this excellent work. I ran into some problems when compiling the CUDA operators. Could you help me resolve them?
[screenshots of the compilation errors]
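
When the CUDA operators fail to build, one common culprit is a mismatch between the installed CUDA toolkit and the CUDA version PyTorch was compiled with; here is a small diagnostic sketch (not part of the repo's own tooling):

    import shutil
    import torch

    # Versions the extension build will compile against.
    print("torch:", torch.__version__)
    print("torch built with CUDA:", torch.version.cuda)
    print("CUDA available at runtime:", torch.cuda.is_available())
    print("nvcc on PATH:", shutil.which("nvcc"))
    if torch.cuda.is_available():
        print("device:", torch.cuda.get_device_name(0))
        print("compute capability:", torch.cuda.get_device_capability(0))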

RuntimeError: DataLoader worker (pid(s) 411948) exited unexpectedly

Hello,

I get an error when trying to run the vesselformer code (train.py) on my university's GPU cluster. I tried reducing the batch size (from 50 to 2), but it is still not able to train properly.

2023-08-28 12:02:44,306 ignite.distributed.launcher.Parallel INFO: Initialized processing group with backend: 'nccl'
2023-08-28 12:02:44,306 ignite.distributed.launcher.Parallel INFO: - Run '<function main at 0x7f2cd2855670>' in 1 processes
2023-08-28 12:02:44,379 ignite.distributed.auto.auto_dataloader INFO: Use data loader kwargs for dataset '<dataset_vessel3d.ve':
        {'batch_size': 50, 'shuffle': True, 'num_workers': 16, 'collate_fn': <function image_graph_collate at 0x7f2cd2a1b040>, 'pin_memory': True}
/home/guests/pascual_cervera/miniconda3/envs/lvs/lib/python3.8/site-packages/torch/utils/data/dataloader.py:554: UserWarning: This DataLoader will create 16 worker processes in total. Our suggested max number of worker in current system is 8, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
  warnings.warn(_create_warning_msg(
2023-08-28 12:02:44,397 ignite.distributed.auto.auto_dataloader INFO: Use data loader kwargs for dataset '<dataset_vessel3d.ve':
        {'batch_size': 50, 'shuffle': False, 'num_workers': 16, 'collate_fn': <function image_graph_collate at 0x7f2cd2a1b040>, 'pin_memory': True}
2023-08-28 12:02:46,254 ignite.distributed.auto.auto_model INFO: Apply torch DataParallel on model
2023-08-28 12:02:46,254 ignite.distributed.auto.auto_model INFO: Apply torch DataParallel on model
Current run is terminating due to exception: DataLoader worker (pid(s) 411948) exited unexpectedly
Exception: DataLoader worker (pid(s) 411948) exited unexpectedly
Traceback (most recent call last):
  File "/home/guests/pascual_cervera/miniconda3/envs/lvs/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1120, in _try_get_data
    data = self._data_queue.get(timeout=timeout)
  File "/home/guests/pascual_cervera/miniconda3/envs/lvs/lib/python3.8/queue.py", line 179, in get
    self.not_empty.wait(remaining)
  File "/home/guests/pascual_cervera/miniconda3/envs/lvs/lib/python3.8/threading.py", line 306, in wait
    gotit = waiter.acquire(True, timeout)
  File "/home/guests/pascual_cervera/miniconda3/envs/lvs/lib/python3.8/site-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler
    _error_if_any_worker_fails()
RuntimeError: DataLoader worker (pid 411948) is killed by signal: Killed.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/guests/pascual_cervera/miniconda3/envs/lvs/lib/python3.8/site-packages/ignite/engine/engine.py", line 807, in _run_once_on_dataset
    self.state.batch = next(self._dataloader_iter)
  File "/home/guests/pascual_cervera/miniconda3/envs/lvs/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 628, in __next__
    data = self._next_data()
  File "/home/guests/pascual_cervera/miniconda3/envs/lvs/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1316, in _next_data
    idx, data = self._get_data()
  File "/home/guests/pascual_cervera/miniconda3/envs/lvs/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1272, in _get_data
    success, data = self._try_get_data()
  File "/home/guests/pascual_cervera/miniconda3/envs/lvs/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1133, in _try_get_data
    raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str)) from e
RuntimeError: DataLoader worker (pid(s) 411948) exited unexpectedly
Engine run is terminating due to exception: DataLoader worker (pid(s) 411948) exited unexpectedly
Exception: DataLoader worker (pid(s) 411948) exited unexpectedly
Traceback (most recent call last):
  File "/home/guests/pascual_cervera/miniconda3/envs/lvs/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1120, in _try_get_data
    data = self._data_queue.get(timeout=timeout)
  File "/home/guests/pascual_cervera/miniconda3/envs/lvs/lib/python3.8/queue.py", line 179, in get
    self.not_empty.wait(remaining)
  File "/home/guests/pascual_cervera/miniconda3/envs/lvs/lib/python3.8/threading.py", line 306, in wait
    gotit = waiter.acquire(True, timeout)
  File "/home/guests/pascual_cervera/miniconda3/envs/lvs/lib/python3.8/site-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler
    _error_if_any_worker_fails()
RuntimeError: DataLoader worker (pid 411948) is killed by signal: Killed.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/guests/pascual_cervera/miniconda3/envs/lvs/lib/python3.8/site-packages/ignite/engine/engine.py", line 753, in _internal_run
    time_taken = self._run_once_on_dataset()
  File "/home/guests/pascual_cervera/miniconda3/envs/lvs/lib/python3.8/site-packages/ignite/engine/engine.py", line 854, in _run_once_on_dataset
    self._handle_exception(e)
  File "/home/guests/pascual_cervera/miniconda3/envs/lvs/lib/python3.8/site-packages/ignite/engine/engine.py", line 464, in _handle_exception
    self._fire_event(Events.EXCEPTION_RAISED, e)
  File "/home/guests/pascual_cervera/miniconda3/envs/lvs/lib/python3.8/site-packages/ignite/engine/engine.py", line 421, in _fire_event
    func(*first, *(event_args + others), **kwargs)
  File "/home/guests/pascual_cervera/miniconda3/envs/lvs/lib/python3.8/site-packages/monai/handlers/stats_handler.py", line 148, in exception_raised
    raise e
  File "/home/guests/pascual_cervera/miniconda3/envs/lvs/lib/python3.8/site-packages/ignite/engine/engine.py", line 807, in _run_once_on_dataset
    self.state.batch = next(self._dataloader_iter)
  File "/home/guests/pascual_cervera/miniconda3/envs/lvs/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 628, in __next__
    data = self._next_data()
  File "/home/guests/pascual_cervera/miniconda3/envs/lvs/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1316, in _next_data
    idx, data = self._get_data()
  File "/home/guests/pascual_cervera/miniconda3/envs/lvs/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1272, in _get_data
    success, data = self._try_get_data()
  File "/home/guests/pascual_cervera/miniconda3/envs/lvs/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1133, in _try_get_data
    raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str)) from e
RuntimeError: DataLoader worker (pid(s) 411948) exited unexpectedly
2023-08-28 12:03:30,077 ignite.distributed.launcher.Parallel INFO: Finalized processing group with backend: 'nccl'
Traceback (most recent call last):
  File "/home/guests/pascual_cervera/miniconda3/envs/lvs/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1120, in _try_get_data
    data = self._data_queue.get(timeout=timeout)
  File "/home/guests/pascual_cervera/miniconda3/envs/lvs/lib/python3.8/queue.py", line 179, in get
    self.not_empty.wait(remaining)
  File "/home/guests/pascual_cervera/miniconda3/envs/lvs/lib/python3.8/threading.py", line 306, in wait
    gotit = waiter.acquire(True, timeout)
  File "/home/guests/pascual_cervera/miniconda3/envs/lvs/lib/python3.8/site-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler
    _error_if_any_worker_fails()
RuntimeError: DataLoader worker (pid 411948) is killed by signal: Killed.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "train.py", line 205, in <module>
    parallel.run(main, args)
  File "/home/guests/pascual_cervera/miniconda3/envs/lvs/lib/python3.8/site-packages/ignite/distributed/launcher.py", line 316, in run
    func(local_rank, *args, **kwargs)
  File "train.py", line 196, in main
    trainer.run()
  File "/home/guests/pascual_cervera/miniconda3/envs/lvs/lib/python3.8/site-packages/monai/engines/trainer.py", line 56, in run
    super().run()
  File "/home/guests/pascual_cervera/miniconda3/envs/lvs/lib/python3.8/site-packages/monai/engines/workflow.py", line 250, in run
    super().run(data=self.data_loader, max_epochs=self.state.max_epochs)
  File "/home/guests/pascual_cervera/miniconda3/envs/lvs/lib/python3.8/site-packages/ignite/engine/engine.py", line 704, in run
    return self._internal_run()
  File "/home/guests/pascual_cervera/miniconda3/envs/lvs/lib/python3.8/site-packages/ignite/engine/engine.py", line 783, in _internal_run
    self._handle_exception(e)
  File "/home/guests/pascual_cervera/miniconda3/envs/lvs/lib/python3.8/site-packages/ignite/engine/engine.py", line 464, in _handle_exception
    self._fire_event(Events.EXCEPTION_RAISED, e)
  File "/home/guests/pascual_cervera/miniconda3/envs/lvs/lib/python3.8/site-packages/ignite/engine/engine.py", line 421, in _fire_event
    func(*first, *(event_args + others), **kwargs)
  File "/home/guests/pascual_cervera/miniconda3/envs/lvs/lib/python3.8/site-packages/monai/handlers/stats_handler.py", line 148, in exception_raised
    raise e
  File "/home/guests/pascual_cervera/miniconda3/envs/lvs/lib/python3.8/site-packages/ignite/engine/engine.py", line 753, in _internal_run
    time_taken = self._run_once_on_dataset()
  File "/home/guests/pascual_cervera/miniconda3/envs/lvs/lib/python3.8/site-packages/ignite/engine/engine.py", line 854, in _run_once_on_dataset
    self._handle_exception(e)
  File "/home/guests/pascual_cervera/miniconda3/envs/lvs/lib/python3.8/site-packages/ignite/engine/engine.py", line 464, in _handle_exception
    self._fire_event(Events.EXCEPTION_RAISED, e)
  File "/home/guests/pascual_cervera/miniconda3/envs/lvs/lib/python3.8/site-packages/ignite/engine/engine.py", line 421, in _fire_event
    func(*first, *(event_args + others), **kwargs)
  File "/home/guests/pascual_cervera/miniconda3/envs/lvs/lib/python3.8/site-packages/monai/handlers/stats_handler.py", line 148, in exception_raised
    raise e
  File "/home/guests/pascual_cervera/miniconda3/envs/lvs/lib/python3.8/site-packages/ignite/engine/engine.py", line 807, in _run_once_on_dataset
    self.state.batch = next(self._dataloader_iter)
  File "/home/guests/pascual_cervera/miniconda3/envs/lvs/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 628, in __next__
    data = self._next_data()
  File "/home/guests/pascual_cervera/miniconda3/envs/lvs/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1316, in _next_data
    idx, data = self._get_data()
  File "/home/guests/pascual_cervera/miniconda3/envs/lvs/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1272, in _get_data
    success, data = self._try_get_data()
  File "/home/guests/pascual_cervera/miniconda3/envs/lvs/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1133, in _try_get_data
    raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str)) from e
RuntimeError: DataLoader worker (pid(s) 411948) exited unexpectedly
slurmstepd: error: Detected 9 oom-kill event(s) in StepId=20449.batch cgroup. Some of your processes may have been killed by the cgroup out-of-memory handler.

Any help you can provide would be appreciated.
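
For reference, here is a minimal sketch of a DataLoader configured to respect the CPUs actually allocated by SLURM, which is one common way to avoid the cgroup OOM kill shown above; the dataset object and collate function are assumed to come from the repo:

    import os
    from torch.utils.data import DataLoader

    # Cap workers at the CPUs allocated to this job instead of a fixed 16.
    allocated_cpus = len(os.sched_getaffinity(0))  # Linux-only
    num_workers = min(allocated_cpus, 8)

    loader = DataLoader(
        train_dataset,                   # vessel dataset instance from the repo (assumed in scope)
        batch_size=2,                    # the reduced batch size mentioned above
        shuffle=True,
        num_workers=num_workers,
        pin_memory=True,
        collate_fn=image_graph_collate,  # collate function from the repo (assumed in scope)
    )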

Branch road_network_rgb currently does not support DDP

Hi,

Thanks for sharing your great work.
When I try to train on the 20_cities dataset, the command in the README does not work (it says "nproc_per_node" is not supported).

Will you add DDP support in the near future?

Input type and weight type error in scene graph code

Hi, I have installed the code with Python 3.8, PyTorch 1.8.0, and CUDA 11. The debug_relationformer.ipynb notebook runs fine for the Debug Dataloader and Debug Model parts.
However, when I run train.py with "nohup python3 train.py --config configs/scene_2d.yaml --cuda_visible_device 0 1 2 --exp_name VGtest1 --nproc_per_node 3 --b 16 &> log/Muti.out&", I get the following error:

*** Config file
configs/scene_2d.yaml
Experiment Name : VGtest1
Batch size : 16
Running Distributed: True ; GPU: 0 ; RANK: 0
Number of parameters : 92944451
ERROR:ignite.engine.engine.RelationformerTrainer:Current run is terminating due to exception: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
ERROR:ignite.engine.engine.RelationformerTrainer:Current run is terminating due to exception: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
ERROR:ignite.engine.engine.RelationformerTrainer:Current run is terminating due to exception: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
ERROR:ignite.engine.engine.RelationformerTrainer:Engine run is terminating due to exception: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
ERROR:ignite.engine.engine.RelationformerTrainer:Engine run is terminating due to exception: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
ERROR:ignite.engine.engine.RelationformerTrainer:Engine run is terminating due to exception: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
Traceback (most recent call last):
  File "train.py", line 292, in <module>
    parallel.run(main, args)
  File "/data/anaconda3/envs/ymf_rel38/lib/python3.8/site-packages/ignite/distributed/launcher.py", line 275, in run
    idist.spawn(self.backend, func, args=args, kwargs_dict=kwargs, **self._spawn_params)
  File "/data/anaconda3/envs/ymf_rel38/lib/python3.8/site-packages/ignite/distributed/utils.py", line 323, in spawn
    comp_model_cls.spawn(
  File "/data/anaconda3/envs/ymf_rel38/lib/python3.8/site-packages/ignite/distributed/comp_models/native.py", line 304, in spawn
    start_processes(
  File "/data/anaconda3/envs/ymf_rel38/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 188, in start_processes
    while not context.join():
  File "/data/anaconda3/envs/ymf_rel38/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 150, in join
    raise ProcessRaisedException(msg, error_index, failed_process.pid)
torch.multiprocessing.spawn.ProcessRaisedException:

-- Process 0 terminated with the following error:
Traceback (most recent call last):
  File "/data/anaconda3/envs/ymf_rel38/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 59, in _wrap
    fn(i, *args)
  File "/data/anaconda3/envs/ymf_rel38/lib/python3.8/site-packages/ignite/distributed/comp_models/native.py", line 272, in _dist_worker_task_fn
    fn(local_rank, *args, **kw_dict)
  File "/home/ymf/dockerFile/relationformer/train.py", line 282, in main
    trainer.run()
  File "/data/anaconda3/envs/ymf_rel38/lib/python3.8/site-packages/monai/engines/trainer.py", line 56, in run
    super().run()
  File "/data/anaconda3/envs/ymf_rel38/lib/python3.8/site-packages/monai/engines/workflow.py", line 250, in run
    super().run(data=self.data_loader, max_epochs=self.state.max_epochs)
  File "/data/anaconda3/envs/ymf_rel38/lib/python3.8/site-packages/ignite/engine/engine.py", line 702, in run
    return self._internal_run()
  File "/data/anaconda3/envs/ymf_rel38/lib/python3.8/site-packages/ignite/engine/engine.py", line 775, in _internal_run
    self._handle_exception(e)
  File "/data/anaconda3/envs/ymf_rel38/lib/python3.8/site-packages/ignite/engine/engine.py", line 469, in _handle_exception
    raise e
  File "/data/anaconda3/envs/ymf_rel38/lib/python3.8/site-packages/ignite/engine/engine.py", line 745, in _internal_run
    time_taken = self._run_once_on_dataset()
  File "/data/anaconda3/envs/ymf_rel38/lib/python3.8/site-packages/ignite/engine/engine.py", line 850, in _run_once_on_dataset
    self._handle_exception(e)
  File "/data/anaconda3/envs/ymf_rel38/lib/python3.8/site-packages/ignite/engine/engine.py", line 469, in _handle_exception
    raise e
  File "/data/anaconda3/envs/ymf_rel38/lib/python3.8/site-packages/ignite/engine/engine.py", line 833, in _run_once_on_dataset
    self.state.output = self._process_function(self, self.state.batch)
  File "/home/ymf/dockerFile/relationformer/trainer.py", line 40, in _iteration
    h, out = self.network(images)
  File "/data/anaconda3/envs/ymf_rel38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/data/anaconda3/envs/ymf_rel38/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 711, in forward
    output = self.module(*inputs, **kwargs)
  File "/data/anaconda3/envs/ymf_rel38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/ymf/dockerFile/relationformer/models/relationformer_2D.py", line 108, in forward
    features, pos = self.backbone(samples)
  File "/data/anaconda3/envs/ymf_rel38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/ymf/dockerFile/relationformer/models/deformable_detr_backbone.py", line 117, in forward
    xs = self[0](tensor_list)
  File "/data/anaconda3/envs/ymf_rel38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/ymf/dockerFile/relationformer/models/deformable_detr_backbone.py", line 84, in forward
    xs = self.body(tensor_list.tensors)
  File "/data/anaconda3/envs/ymf_rel38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/data/anaconda3/envs/ymf_rel38/lib/python3.8/site-packages/torchvision/models/_utils.py", line 63, in forward
    x = module(x)
  File "/data/anaconda3/envs/ymf_rel38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/data/anaconda3/envs/ymf_rel38/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 399, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/data/anaconda3/envs/ymf_rel38/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 395, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same

Do you have any clue about this error and how to fix it? Thanks!
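
For context, this error usually means some module's parameters were never moved to the GPU before the forward pass; here is a minimal, generic illustration of the fix using a stand-in torchvision model rather than the repo's exact code path:

    import torch
    import torchvision

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # Any module works for illustration; a torchvision backbone stands in for the model here.
    model = torchvision.models.resnet50(pretrained=False)
    model.to(device)  # move *all* parameters, including the backbone, to the GPU first

    # For distributed training, wrap only after the parameters are on the right device:
    # model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[device.index])

    images = torch.randn(2, 3, 224, 224, device=device)  # inputs on the same device as the weights
    out = model(images)
    print(out.shape)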

Inference code for scene_graph branch

run_batch_inference.py isn't complete. Could you add code for running a trained model on new test images that have no ground truth? Thanks~
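
For reference, here is a generic PyTorch inference loop of the kind being requested; the builder function, checkpoint path, and post-processing are placeholders rather than the repo's confirmed API:

    import torch
    import torchvision.transforms as T
    from PIL import Image

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    model = build_relationformer(config)   # hypothetical builder; use the repo's model construction
    state = torch.load("checkpoint.pth", map_location="cpu")
    model.load_state_dict(state["model"] if "model" in state else state)
    model.to(device).eval()

    transform = T.Compose([T.Resize((512, 512)), T.ToTensor(),
                           T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

    with torch.no_grad():
        img = transform(Image.open("test.jpg").convert("RGB")).unsqueeze(0).to(device)
        h, out = model(img)   # object tokens and relation outputs, as in the training code
        # post-process `out` into boxes, classes, and relations as done in the evaluation code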

Issue about the evaluation

Hi, I would first like to thank you for your contribution. However, I think there are some issues with the evaluation.

        if self.mode=='sgdet':
            iou_overlap = bbox_overlaps(gt_boxes, pred_boxes)
            if self.use_gt_filter:
                idx = torch.where(iou_overlap >= 0.5)
                valid_rels_idx = np.asarray([i for i, rel in enumerate(pred_rels) if (rel[0] in idx[1]) and (rel[1] in idx[1])]) #filter the junk detections
                if len(valid_rels_idx)>=1:
                    pred_rels = pred_rels[valid_rels_idx,:]
                    predicate_scores = predicate_scores[valid_rels_idx]
            if self.sort_only_by_rel:
                sorted_rel_idx = np.argsort(predicate_scores)[::-1]
                pred_rels = pred_rels[sorted_rel_idx]
                predicate_scores = predicate_scores[sorted_rel_idx]
        pred_to_gt, pred_5ples, rel_scores, sort_idx = evaluate_recall(
                    gt_rels, gt_boxes, gt_classes,
                    pred_rels, pred_boxes, pred_classes,
                    predicate_scores, obj_scores, phrdet= self.mode=='phrdet', vis=vis,
                    **kwargs)

For "SGDet" mode, you use "use_gt_filter" to filter out those boxes with less IoU overlaps . However, we have no ground truths during test. This is unfair to calculate the recall. And I have no found other codes use this method (https://github.com/rowanz/neural-motifs/blob/master/lib/evaluation/sg_eval.py, https://github.com/KaihuaTang/Scene-Graph-Benchmark.pytorch/blob/master/maskrcnn_benchmark/data/datasets/evaluation/vg/sgg_eval.py).

Could you release the checkpoint? The code cannot reproduce the results reported in the paper.
I suspect the evaluation is unfair and incorrect, and that the true performance is not as good as reported.
I have run experiments with your code and fixed some bugs, but that still does not address my concerns above.
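
To make the comparison concrete, here is a simplified sketch of the conventional SGDet recall@K protocol (in the spirit of the evaluators linked above), where predicted triplets are matched to ground truth only by class, predicate, and IoU, with no GT-based pre-filtering; the data layout is a common convention, not the repo's exact interface:

    import numpy as np

    def box_iou(a, b):
        # IoU of two [x1, y1, x2, y2] boxes.
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        area = lambda bb: (bb[2] - bb[0]) * (bb[3] - bb[1])
        return inter / (area(a) + area(b) - inter + 1e-9)

    def sgdet_recall_at_k(gt_rels, gt_boxes, gt_classes,
                          pred_rels, pred_boxes, pred_classes, rel_scores,
                          k=50, iou_thr=0.5):
        # gt_rels / pred_rels: (N, 3) arrays of (subject_idx, object_idx, predicate).
        # Rank predicted relations by score and keep the top k; no ground truth is used to filter.
        order = np.argsort(rel_scores)[::-1][:k]
        top = pred_rels[order]

        hits = 0
        for s_gt, o_gt, p_gt in gt_rels:
            for s_pr, o_pr, p_pr in top:
                if (p_pr == p_gt
                        and pred_classes[s_pr] == gt_classes[s_gt]
                        and pred_classes[o_pr] == gt_classes[o_gt]
                        and box_iou(pred_boxes[s_pr], gt_boxes[s_gt]) >= iou_thr
                        and box_iou(pred_boxes[o_pr], gt_boxes[o_gt]) >= iou_thr):
                    hits += 1
                    break
        return hits / max(len(gt_rels), 1)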

The lack of proposals.h5

In the scene_graph branch, it seems that no download link is provided for the "proposals.h5" file? Or perhaps the intended source is visualgenome.org, which is currently unavailable (an account is needed to visit the website)?
stanford_filtered/
├── image_data.json
├── proposals.h5
├── VG-SGG-dicts.json
└── VG-SGG.h5

Questions about road_network_rgb

Hi authors, thanks for sharing the excellent work! After going through the code of road_network_rgb, I have some questions and hope you could give some hints or suggestions:

  1. Are there any pretrained checkpoints available? I would really appreciate it if you could provide some!
  2. Is the evaluation conducted on large stitched image tiles or on a set of small patches? For example, Sat2Graph ultimately outputs 2048 $\times$ 2048 image tiles covering a large area, but in your testing script I cannot find any code for sliding windows or patch stitching, so I wonder whether you evaluate RelationFormer on 128 $\times$ 128 image patches (see the patch-extraction sketch below). Apologies in advance if I missed any information.
  3. If both models (i.e., Sat2Graph and RelationFormer) are evaluated on small image patches, note that they use different patch sizes: 128 $\times$ 128 for RelationFormer and 352 $\times$ 352 for Sat2Graph.

Thank you so much for your time and kind help! Looking forward to your reply!
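
As an aside on point 2, here is a minimal sketch of cutting a large tile into non-overlapping 128 $\times$ 128 patches for patch-level evaluation; the 2048 $\times$ 2048 tile size and the stride are assumptions, not values taken from the repo:

    import numpy as np

    def extract_patches(tile, patch=128, stride=128):
        # tile: (H, W, C) array, e.g. a 2048 x 2048 region of the kind Sat2Graph outputs.
        patches, coords = [], []
        h, w = tile.shape[:2]
        for y in range(0, h - patch + 1, stride):
            for x in range(0, w - patch + 1, stride):
                patches.append(tile[y:y + patch, x:x + patch])
                coords.append((y, x))  # kept so patch-level graphs could be stitched back later
        return np.stack(patches), coords

    tile = np.zeros((2048, 2048, 3), dtype=np.uint8)  # placeholder tile
    patches, coords = extract_patches(tile)
    print(patches.shape)  # (256, 128, 128, 3)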

Question about a detail (TOPO propagation distance)

This is my first time working with GPS coordinates. The propagation distance in the topo function is set to 300 meters, with r = 0.00300. If my input image is 512*512 pixels, how many pixels does this propagation distance correspond to?

Looking forward to your reply
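
For a rough sanity check, the conversion below assumes the graph coordinates are in degrees of latitude/longitude and that the aerial imagery has a ground sampling distance of roughly 1 m per pixel; both assumptions should be verified against the actual dataset:

    # Rough conversion of the TOPO propagation radius to pixels (all values approximate).
    METERS_PER_DEGREE_LAT = 111_320               # ~length of one degree of latitude in meters

    r_degrees = 0.00300                           # radius used in the topo function
    r_meters = r_degrees * METERS_PER_DEGREE_LAT
    print(f"r is about {r_meters:.0f} m")         # ~334 m, i.e. roughly the stated 300 m

    ground_resolution_m_per_px = 1.0              # assumed GSD; replace with the dataset's real value
    r_pixels = r_meters / ground_resolution_m_per_px
    print(f"r is about {r_pixels:.0f} px on a 512*512 patch at {ground_resolution_m_per_px} m/px")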

Question about Vessel VTK

I tried replicating the results from the paper on the DeepVesselNet synthetic dataset, but my predictions look weird. Even what I assume is the ground-truth graph for a volume does not seem to match when I visualize them. My first point of uncertainty concerns data loading:

The VTK archive once unpacked looks like this:

vtk
    ├── 0
    │   ├── 1_arteries_final_1.vtk
    │   ├── 1_arteries_final_2.vtk
    │   ├── 1_arteries_final_3.vtk
    │   ├── 1_arteries_final_4.vtk
    │   ├── 1_arteries_final_5.vtk
    │   ├── 1_arteries_final_6.vtk
    │   ├── 1_arteries_final_7.vtk
    │   └── 1_arteries_final_8.vtk
    ├── 1
    │   ├── 1_arteries_final_10.vtk
    │   ├── 1_arteries_final_11.vtk
    │   ├── 1_arteries_final_12.vtk
    │   ├── 1_arteries_final_13.vtk
    │   ├── 1_arteries_final_14.vtk
    │   ├── 1_arteries_final_15.vtk
    │   ├── 1_arteries_final_16.vtk
    │   ├── 1_arteries_final_1.vtk
    │   ├── 1_arteries_final_2.vtk
    │   ├── 1_arteries_final_3.vtk
    │   ├── 1_arteries_final_4.vtk
    │   ├── 1_arteries_final_5.vtk
    │   ├── 1_arteries_final_6.vtk
    │   ├── 1_arteries_final_7.vtk
    │   ├── 1_arteries_final_8.vtk
    │   └── 1_arteries_final_9.vtk
    ├── 2
    │   ├── 1_arteries_final_10.vtk
    │   ├── 1_arteries_final_11.vtk
    │   ├── 1_arteries_final_12.vtk
    │   ├── 1_arteries_final_13.vtk
    │   ├── 1_arteries_final_14.vtk
    │   ├── 1_arteries_final_15.vtk
    │   ├── 1_arteries_final_16.vtk
    │   ├── 1_arteries_final_1.vtk
    │   ├── 1_arteries_final_2.vtk
    │   ├── 1_arteries_final_3.vtk
    │   ├── 1_arteries_final_4.vtk
    │   ├── 1_arteries_final_5.vtk
    │   ├── 1_arteries_final_6.vtk
    │   ├── 1_arteries_final_7.vtk
    │   ├── 1_arteries_final_8.vtk
    │   └── 1_arteries_final_9.vtk
    ├── 3
    │   ├── .
    │   ├── . 
    │   ├── .

The raw and seg files, however, don't follow the same nested folder structure; they are unpacked like this:

├── raw
│   ├── 100.nii.gz
│   ├── 101.nii.gz
│   ├── 102.nii.gz
│   ├── 103.nii.gz
│   ├── 104.nii.gz
│   ├── .
│   ├── . 
│   ├── .

Owing to this, the dataset preparation code in generate_data.py

    import os

    DATA_PATH = "./data/vessel_data/"

    img_folder = os.path.join(DATA_PATH, "raw")
    seg_folder = os.path.join(DATA_PATH, "seg")
    vtk_folder = os.path.join(DATA_PATH, "vtk")

    raw_files = []
    seg_files = []
    vtk_files = []

    for file_ in os.listdir(seg_folder):
        file_ = file_[:-7]  # strip the ".nii.gz" extension
        raw_files.append(os.path.join(img_folder, file_+'.nii.gz'))
        seg_files.append(os.path.join(seg_folder, file_+'.nii.gz'))
        print(file_)
        vtk_files.append(os.path.join(vtk_folder, file_+'.vtk'))  # assumes a flat vtk/ folder

fails for the VTK files since it doesn't account for the nested folder structure.

How I got around this is by doing:

    from glob import glob
    from natsort import natsorted

    raw_files = natsorted(glob(f"{os.path.join(DATA_PATH, 'raw')}/*.nii.gz"))
    seg_files = natsorted(glob(f"{os.path.join(DATA_PATH, 'seg')}/*.nii.gz"))
    vtk_files = natsorted(
        glob(f"{os.path.join(DATA_PATH, 'vtk')}/*/*.vtk", recursive=True)
    )

What this gives me is:

[
    ('./data/vessel_data/raw/1.nii.gz', './data/vessel_data/seg/1.nii.gz', './data/vessel_data/vtk/0/1_arteries_final_1.vtk'),
    ('./data/vessel_data/raw/2.nii.gz', './data/vessel_data/seg/2.nii.gz', './data/vessel_data/vtk/0/1_arteries_final_2.vtk'),
    ('./data/vessel_data/raw/3.nii.gz', './data/vessel_data/seg/3.nii.gz', './data/vessel_data/vtk/0/1_arteries_final_3.vtk'),
    ('./data/vessel_data/raw/4.nii.gz', './data/vessel_data/seg/4.nii.gz', './data/vessel_data/vtk/0/1_arteries_final_4.vtk'),
    ('./data/vessel_data/raw/5.nii.gz', './data/vessel_data/seg/5.nii.gz', './data/vessel_data/vtk/0/1_arteries_final_5.vtk'),
    ('./data/vessel_data/raw/6.nii.gz', './data/vessel_data/seg/6.nii.gz', './data/vessel_data/vtk/0/1_arteries_final_6.vtk'),
    ('./data/vessel_data/raw/7.nii.gz', './data/vessel_data/seg/7.nii.gz', './data/vessel_data/vtk/0/1_arteries_final_7.vtk'),
    ('./data/vessel_data/raw/8.nii.gz', './data/vessel_data/seg/8.nii.gz', './data/vessel_data/vtk/0/1_arteries_final_8.vtk'),
    ('./data/vessel_data/raw/9.nii.gz', './data/vessel_data/seg/9.nii.gz', './data/vessel_data/vtk/1/1_arteries_final_1.vtk'),
    ('./data/vessel_data/raw/10.nii.gz', './data/vessel_data/seg/10.nii.gz', './data/vessel_data/vtk/1/1_arteries_final_2.vtk'),
    .
    .
    .
]

I wanted to check whether these are the expected triplets of files.
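
A small sanity check, under the same assumptions as the snippet above, is to assert that the three natsorted lists line up one-to-one before generating the dataset:

    import os
    from glob import glob
    from natsort import natsorted

    DATA_PATH = "./data/vessel_data/"
    raw_files = natsorted(glob(os.path.join(DATA_PATH, "raw", "*.nii.gz")))
    seg_files = natsorted(glob(os.path.join(DATA_PATH, "seg", "*.nii.gz")))
    vtk_files = natsorted(glob(os.path.join(DATA_PATH, "vtk", "*", "*.vtk")))

    # The three lists should be the same length if every volume has exactly one graph.
    assert len(raw_files) == len(seg_files) == len(vtk_files), \
        (len(raw_files), len(seg_files), len(vtk_files))

    # Spot-check the first few triplets against the pairing printed above.
    for raw, seg, vtk in list(zip(raw_files, seg_files, vtk_files))[:5]:
        print(os.path.basename(raw), os.path.basename(seg), os.path.relpath(vtk, DATA_PATH))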

Pretrained weights

Hi,
Thanks for the great work!
It would be really helpful if you could upload the pre-trained weights for the scene graph model.

Graphics memory keeps changing during training on scene_graph branch

Hi, authors. I am trying to use your framework for scene graph generation (python3 train.py --config configs/scene_2d.yaml --cuda_visible_device 0 --nproc_per_node 1 -b 6), and I find that the GPU memory usage keeps changing during training. I tried removing gc.collect() and torch.cuda.empty_cache(), but it didn't help. Is there any way to keep the memory usage stable? Thanks.
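
In case it helps with debugging, here is a minimal sketch for logging GPU memory per iteration with PyTorch's built-in counters, which can distinguish allocator caching from genuine growth; where to hook it into the ignite trainer is left as an assumption:

    import torch

    def log_gpu_memory(step, device=0):
        # allocated = live tensors; reserved = what the caching allocator holds from the driver
        allocated = torch.cuda.memory_allocated(device) / 2**20
        reserved = torch.cuda.memory_reserved(device) / 2**20
        peak = torch.cuda.max_memory_allocated(device) / 2**20
        print(f"step {step}: allocated {allocated:.0f} MiB | "
              f"reserved {reserved:.0f} MiB | peak {peak:.0f} MiB")

    # e.g. call log_gpu_memory(engine.state.iteration) from an ITERATION_COMPLETED handler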
