jamycheung / matchformer
Repository of MatchFormer
License: Apache License 2.0
Hi,
could you provide a small demo to just run the feature matching on a single image pair and provide the correspondences?
Thanks in advance!
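To make the request concrete, here is a minimal sketch of the kind of demo I mean, assuming MatchFormer follows LoFTR's interface (a batch dict with grayscale `image0`/`image1` tensors; the actual model call is commented out since it needs the repo, and the output key names are my guess):

```python
import numpy as np

def load_pair(h=480, w=640):
    """Stand-in for reading two grayscale images (use cv2.imread in practice)."""
    rng = np.random.default_rng(0)
    img0 = rng.random((h, w), dtype=np.float32)
    img1 = rng.random((h, w), dtype=np.float32)
    return img0, img1

img0, img1 = load_pair()
# LoFTR-style batch: shape (batch, 1, H, W), values in [0, 1]
batch = {
    "image0": img0[None, None],
    "image1": img1[None, None],
}
# With the repo available (hypothetical usage, untested):
# matcher = Matchformer(config=...)
# matcher(batch)  # would fill batch['mkpts0_f'] / batch['mkpts1_f'] if LoFTR-style
print(batch["image0"].shape)  # (1, 1, 480, 640)
```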
Thank you for your outstanding work! I would like to ask how you obtained the experimental numbers for ClusterGNN: Cluster-based Coarse-to-Fine Graph Neural Network for Efficient
Feature Matching (CVPR 2022) in your paper. As far as I know, the original authors have not released their code. Did you reproduce it yourself? If so, could you share that part of your work with me? Or do you know of any related open-source implementation? Many thanks! My email is [email protected] or [email protected]
Hi! Thanks for your excellent work!
I now have some problems when running the test code. It may be caused by a wrong file structure.
In fact, I'm not familiar with the structure of the ScanNet and MegaDepth datasets, and I don't know where to download the megadepth_test_1500.txt file.
Could you give me some help on downloading the datasets and organizing their directory structure? Or update the README to make the data format configuration easier?
Looking forward to your reply, and have a nice day!
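For reference, this is my current guess at the expected layout, pieced together from LoFTR's documentation (unverified; the exact subfolder names may be wrong):

```text
data/
├── scannet/
│   ├── index/
│   │   ├── scannet_test.txt
│   │   └── ...            # per-scene .npz index files
│   └── test/              # extracted ScanNet test scenes
└── megadepth/
    ├── index/
    │   └── megadepth_test_1500.txt
    └── test/              # undistorted images and depth maps
```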
I want to use the matcher outside the test script, but I'm unable to build the model :/
The function Matchformer_SEA_lite() requires a config as its first argument (line),
but the build_backbone function does not pass any config along (line).
config = get_cfg_defaults()
# adjust lite SEA model config:
config.MATCHFORMER.BACKBONE_TYPE = 'litesea'
config.MATCHFORMER.SCENS = 'outdoor'
config.MATCHFORMER.RESOLUTION = (8,4)
model = PL_LoFTR(config, pretrained_ckpt="./pretrained/indoor-lite-LA.ckpt", dump_dir="/tmp")
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
/tmp/ipykernel_38/1928295852.py in <module>
6 config.MATCHFORMER.RESOLUTION = (8,4)
7
----> 8 model = PL_LoFTR(config, pretrained_ckpt="./pretrained/indoor-lite-LA.ckpt", dump_dir="/tmp")
/kaggle/working/MatchFormer/model/lightning_loftr.py in __init__(self, config, pretrained_ckpt, profiler, dump_dir)
27
28 # Matcher: LoFTR
---> 29 self.matcher = Matchformer(config=_config['matchformer'])
30
31 # Pretrained weights
/kaggle/working/MatchFormer/model/matchformer.py in __init__(self, config)
14 # Misc
15 self.config = config
---> 16 self.backbone = build_backbone(config)
17 self.coarse_matching = CoarseMatching(config['match_coarse'])
18 self.fine_preprocess = FinePreprocess(config)
/kaggle/working/MatchFormer/model/backbone/__init__.py in build_backbone(config)
11 return Matchformer_LA_large()
12 elif config['backbone_type'] == 'litesea':
---> 13 return Matchformer_SEA_lite()
14 elif config['backbone_type'] == 'largesea':
15 return Matchformer_SEA_large()
TypeError: __init__() missing 1 required positional argument: 'config'
Am I doing something terribly wrong?
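To illustrate the mismatch, here is a stripped-down reproduction (the names mirror the repo, but the class bodies are stand-ins, not the real implementation):

```python
# Minimal reproduction of the reported TypeError: the factory calls the
# constructor without the config it requires.
class Matchformer_SEA_lite:
    def __init__(self, config):  # constructor requires a positional config
        self.config = config

def build_backbone(config):
    if config['backbone_type'] == 'litesea':
        return Matchformer_SEA_lite()  # config is not forwarded here

try:
    build_backbone({'backbone_type': 'litesea'})
except TypeError as e:
    print(e)  # ... missing 1 required positional argument: 'config'
```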
The dataset resources provided by LoFTR do not seem to include scannet/index/scannet_test.txt and scannet/index/test.npz. How can I obtain these files?
Hi, I'm looking to use MatchFormer, and I can't quite figure out how to test the model on my own images. Could you include some instructions on how to do so?
Thanks for your work, but this test demo simply does not work. Taking ScanNet as an example: where do scannet_test.txt and test.npz come from? The original LoFTR does not have these files at all. Have you tested this yourself?
Hi, thank you for your great work. But where is the training code? Why is it not released?
Hi! While trying to run the test in a jupyter notebook, I was able to initialise the model with
pl.seed_everything(config.TRAINER.SEED) # reproducibility
model = PL_LoFTR(config, pretrained_ckpt="../input/matchformer/MatchFormer/model/weights/outdoor-lite-SEA.ckpt", dump_dir="dump/")
# lightning data
data_module = MultiSceneDataModule(args, config)
# lightning trainer
trainer = pl.Trainer.from_argparse_args(args, logger=False, accelerator="gpu", gpus=1)
However, upon running
trainer.test(model, datamodule=data_module, verbose=True)
I am getting
RuntimeError Traceback (most recent call last)
/tmp/ipykernel_33/3924240025.py in <module>
----> 1 trainer.test(model, datamodule=data_module, verbose=True)
/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py in test(self, model, dataloaders, ckpt_path, verbose, datamodule)
934 """
935 self.strategy.model = model or self.lightning_module
--> 936 return self._call_and_handle_interrupt(self._test_impl, model, dataloaders, ckpt_path, verbose, datamodule)
937
938 def _test_impl(
/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py in _call_and_handle_interrupt(self, trainer_fn, *args, **kwargs)
719 return self.strategy.launcher.launch(trainer_fn, *args, trainer=self, **kwargs)
720 else:
--> 721 return trainer_fn(*args, **kwargs)
722 # TODO: treat KeyboardInterrupt as BaseException (delete the code below) in v1.7
723 except KeyboardInterrupt as exception:
/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py in _test_impl(self, model, dataloaders, ckpt_path, verbose, datamodule)
981
982 # run test
--> 983 results = self._run(model, ckpt_path=self.ckpt_path)
984
985 assert self.state.stopped
/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py in _run(self, model, ckpt_path)
1170 self.__setup_profiler()
1171
-> 1172 self._call_setup_hook() # allow user to setup lightning_module in accelerator environment
1173
1174 # check if we should delay restoring checkpoint till later
/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py in _call_setup_hook(self)
1488
1489 if self.datamodule is not None:
-> 1490 self.datamodule.setup(stage=fn)
1491 self._call_callback_hooks("setup", stage=fn)
1492 self._call_lightning_module_hook("setup", stage=fn)
/kaggle/input/matchformer/MatchFormer/model/data.py in setup(self, stage)
114
115 try:
--> 116 self.world_size = dist.get_world_size()
117 self.rank = dist.get_rank()
118 logger.info(f"[rank:{self.rank}] world_size: {self.world_size}")
/opt/conda/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py in get_world_size(group)
865 return -1
866
--> 867 return _get_group_size(group)
868
869
/opt/conda/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py in _get_group_size(group)
323 """
324 if group is GroupMember.WORLD or group is None:
--> 325 default_pg = _get_default_group()
326 return default_pg.size()
327 return group.size()
/opt/conda/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py in _get_default_group()
428 if not is_initialized():
429 raise RuntimeError(
--> 430 "Default process group has not been initialized, "
431 "please make sure to call init_process_group."
432 )
RuntimeError: Default process group has not been initialized, please make sure to call init_process_group.
It seems to be an error related to DistributedDataParallel, but I am running on only 1 GPU.
Any help will be appreciated, thank you!
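For what it's worth, one workaround I would try (an assumption, not verified against this repo): initialize a single-process group before calling trainer.test, so that dist.get_world_size() succeeds even on one GPU:

```python
import os
import torch.distributed as dist

# Single-process "gloo" group so torch.distributed queries succeed on 1 GPU.
# (Assumption: the repo's data module calls dist.get_world_size() unconditionally.)
if not dist.is_initialized():
    os.environ.setdefault("MASTER_ADDR", "localhost")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group(backend="gloo", rank=0, world_size=1)

print(dist.get_world_size())  # 1
```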
Hello, where did you get the data['spv_b_ids'] in coarse matching from? I've looked at the dataset file and found that there is no such key value
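For context, my current understanding (possibly wrong, based on LoFTR, which MatchFormer builds on): data['spv_b_ids'] is not stored in the dataset files at all; it is filled in by the coarse supervision step during training, roughly as the nonzero indices of the ground-truth coarse confidence matrix. A sketch of the idea, not the repo's actual code:

```python
import numpy as np

# conf_matrix_gt marks ground-truth coarse matches: (batch, hw0_coarse, hw1_coarse).
conf_matrix_gt = np.zeros((2, 4, 4), dtype=bool)
conf_matrix_gt[0, 1, 2] = True
conf_matrix_gt[1, 0, 3] = True

# The spv_* indices are simply the coordinates of the GT matches.
spv_b_ids, spv_i_ids, spv_j_ids = np.nonzero(conf_matrix_gt)
print(spv_b_ids.tolist())  # [0, 1]
```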