
superpoint-pytorch's Introduction

SuperPoint-Pytorch (A Pure Pytorch Implementation)

SuperPoint: Self-Supervised Interest Point Detection and Description

Thanks

This work is based on rpautrat/SuperPoint (the TensorFlow implementation of SuperPoint).

About the comment "TODO: comment the following line if you want the same result as tf version"

Please ignore the code marked with such comments. When I reproduced rpautrat's code, I examined the outputs of almost all of its key functions and tried to reproduce them exactly in torch. However, the two frameworks (torch and tf) behave differently. For example, matrix inversion gives different results in torch and tf; you can test this yourself. If I remember correctly,

homography = np.linalg.inv(homography)  
homography = torch.inverse(homography)  

neither of these two lines is completely consistent with tf (I don't know why; if you know, please tell me).
Therefore, if you do not need results identical to the tf version and only care about the inverse operation itself, torch meets all your needs: it satisfies the basic identity (A^-1)^-1 = A.
Of course, some operations, such as convolution, cannot be made exactly consistent between tf and torch; this is also where this project differs from the tf project.
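
As a quick check, the identity can be verified with a small sketch (not repo code): torch.inverse recovers A from its own inverse up to floating-point error, while its raw output agrees with numpy's (and tf's) inverse only up to round-off, not bit-for-bit.

import numpy as np
import torch

# Well-conditioned random matrix (float64 keeps round-off small)
A = torch.rand(3, 3, dtype=torch.float64) + 3 * torch.eye(3, dtype=torch.float64)

inv_torch = torch.inverse(A)
inv_numpy = np.linalg.inv(A.numpy())

# (A^-1)^-1 recovers A up to floating-point error
print(torch.allclose(torch.inverse(inv_torch), A))   # True
# The two libraries' inverses agree only up to round-off
print(np.abs(inv_torch.numpy() - inv_numpy).max())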

Finished (12/09/2021)

Welcome to star this repository!

Performance

  • Detector repeatability: 0.67
  • Homography estimation on HPatches images with viewpoint changes: 0.698
    • The corresponding result reported in rpautrat's repository is 0.712.
    • Much better performance (0.725) can be achieved by using the magic points generated by rpautrat's model.
    • A possible way to improve performance is to tune hyper-parameters such as det_thresh, nms, and topk.

New Update (09/04/2021)

  • Converted the model released by rpautrat to torch format
  • Usage:
    • 1 Construct the network with superpoint_bn.py (refer to train.py for more details)
    • 2 Set eps=1e-3 for all the BatchNormalization layers in model/modules/cnn/*.py
    • 3 Set momentum=0.01 (not tested)
    • 4 Load the pretrained model superpoint_bn.pth and run forward propagation
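
    For example, loading the converted model might look like the following sketch. The SuperPointBNNet constructor call follows the snippet quoted in the issues below; the import path and config file name are assumptions.

    import torch
    import yaml
    from model.superpoint_bn import SuperPointBNNet  # assumed module path

    with open('./config/superpoint_train.yaml', 'r') as f:
        config = yaml.safe_load(f)

    device = 'cuda:0' if torch.cuda.is_available() else 'cpu'
    net = SuperPointBNNet(config['model'], device=device,
                          using_bn=config['model']['using_bn'])
    net.load_state_dict(torch.load('superpoint_bn.pth', map_location=device))
    net.to(device).eval()

    with torch.no_grad():
        dummy = torch.rand(1, 1, 240, 320, device=device)  # grayscale 240x320 input
        output = net(dummy)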

Usage

  • 0 Update your repository to the latest version (if you have pulled it before)

  • 1 Prepare your data. Create the directories data and export. The data directory should look like:

    data
    |-- coco
    |   |-- train2017
    |   |   |-- a.jpg
    |   |   |-- ...
    |   |-- test2017
    |   |   |-- b.jpg
    |   |   |-- ...
    |-- hpatches
    |   |-- i_ajustment
    |   |   |-- 1.ppm
    |   |   |-- ...
    |   |   |-- H_1_2
    |   |-- ...
    

    Create soft links if you already have the coco and hpatches data sets; the commands are like:

    cd data
    ln -s dir_to_coco ./coco
    ln -s dir_to_hpatches ./hpatches
    
  • 2 The training steps are similar to rpautrat/Superpoint. However, we strongly suggest you read the scripts before training

    • 2.0 Modify the following line in train.py, if necessary, to control when models are saved:
      if (i%118300==0 and i!=0) or (i+1)==len(dataloader['train']):
    • 2.1 Set proper values for the number of training epochs in *.yaml.
    • 2.2 Train MagicPoint (>1 hour):
      python train.py ./config/magic_point_syn_train.yaml
      (Note that you have to delete the directory ./data/synthetic_shapes whenever you want to regenerate it)
    • 2.3 Export the coco labels data set v1 (>50 hours):
      python homo_export_labels.py #run with your data path
    • 2.4 Train MagicPoint on the coco labels data set v1 (exported in step 2.3):
      python train.py ./config/magic_point_coco_train.yaml #run with your data path
    • 2.5 Export the coco labels data set v2 using the MagicPoint model trained in step 2.4
    • 2.6 Train SuperPoint on the coco labels data set v2 (>12 hours):
      python train.py ./config/superpoint_train.yaml #run with your data path
    • others. Validate detection repeatability or descriptors:
      python export_detections_repeatability.py #(very fast)  
      python compute_repeatability.py  #(very fast)
      ## or
      python export_descriptors.py #(> 5.5 hours) 
      python compute_desc_eval.py #(> 1.5 hours)
      

    Descriptions of some important hyper-parameters in the YAML files:

    model:
        name: superpoint # superpoint or magicpoint
        pretrained_model: None # or path to a pretrained model to load
        using_bn: true # apply batch normalization in the model
        det_thresh: 0.001 # point confidence threshold (default 1/65)
        nms: 4 # nms window size
        topk: -1 # number of points to keep; -1 keeps all
        ...
    data:
        name: coco # or synthetic
        image_train_path: ['./data/mp_coco_v2/images/train2017',] # several data sets can be listed here
        label_train_path: ['./data/mp_coco_v2/labels/train2017/',]
        image_test_path: './data/mp_coco_v2/images/test2017/'
        label_test_path: './data/mp_coco_v2/labels/test2017/'
        ...
        data_dir: './data/hpatches' # path to the hpatches dataset
        export_dir: './data/repeatibility/hpatches/sp' # dir where the output data are saved
    solver:
        model_name: sp # prefix for the saved model files
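
    To illustrate how det_thresh, nms and topk interact, here is a simplified sketch of keypoint extraction from a probability map. The repo itself uses a box-based NMS in solver/nms.py; the max-pool suppression below is only a stand-in with assumed semantics.

    import torch

    def extract_points(prob, det_thresh=0.001, nms=4, topk=-1):
        # prob: (H, W) keypoint probability map
        pooled = torch.nn.functional.max_pool2d(
            prob[None, None], kernel_size=2 * nms + 1, stride=1, padding=nms)
        # keep local maxima above the confidence threshold
        keep = (prob == pooled[0, 0]) & (prob >= det_thresh)
        ys, xs = torch.nonzero(keep, as_tuple=True)
        scores = prob[ys, xs]
        if topk > 0 and scores.numel() > topk:
            scores, idx = torch.topk(scores, topk)
            ys, xs = ys[idx], xs[idx]
        return torch.stack([ys, xs], dim=1), scores  # (N, 2) coords and scores

    points, scores = extract_points(torch.rand(240, 320), det_thresh=1/65., nms=4, topk=300)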
    

superpoint-pytorch's Issues

Are there a few mistakes in the Usage section of README.md?

First, thank you for your work.
I am a deep-learning beginner.
While reading README.md I noticed what may be some typos; they do not affect running the code, but they confused me. For example:
Usage 2.0 refers to "in train.py, line 61", but due to code changes that statement no longer seems to be on that line.
In 2.4 and 2.6, the commands are given as python train.py ./config/magic_point_coco_train.py #with correct data dirs and python train.py ./config/superpoint_train.py #with correct data dirs; should ./config/*.py be ./config/*.yaml?

Out of GPU memory at inference after training

Hi! After training, when I run SuperPoint inference, GPU memory overflows as soon as I infer two images in a row, even with with torch.no_grad(): set. It is also slow: one 480p image takes 1 s on the GPU.
device = 'cuda:0' if torch.cuda.is_available() else 'cpu'
# device = 'cpu'
net = SuperPointBNNet(config['model'], device=device, using_bn=config['model']['using_bn'])
net.load_state_dict(torch.load(config['model']['pretrained_model'], map_location=device))
net.to(device).eval()

example = cv2.imread("/home/xiefei/COCO_test2014_000000000001.jpg", 0)
img_tensor = torch.as_tensor(example.copy(), dtype=torch.float, device=device)
img_tensor = img_tensor / 255.
img_tensor = img_tensor.unsqueeze(0).unsqueeze(0)

with torch.no_grad():
    t1 = time.time()
    output = net(img_tensor)
    t2 = time.time()
    print(t2 - t1)

inference time

Hello, have you measured the inference time? Is it consistent with the 70 FPS reported in the paper?

coco image size when training superpoint

Hi, thanks for your great work. If I train MagicPoint with rpautrat's project to get COCO ground-truth points, and then use your version to train SuperPoint, will resizing the COCO images to 240x320 with the ratio_preserving_resize function have a big impact on the results?

Hyper-parameter settings when training SuperPoint

Hi, I noticed that you set lambda_d in the descriptor loss to 0.05 when training SuperPoint. Isn't lambda_d the weight applied to the positive samples? (Positive samples are rare, so a larger weight would seem appropriate.) The SuperPoint authors state in the paper that they use 250; why do you use 0.05? Is it a better value that you found during actual training?
https://github.com/shaofengzeng/SuperPoint-Pytorch/blob/master/config/superpoint_train.yaml

I also did not get good results when I put a large weight on the positive samples during training; seeing your setting, may I ask the reason for it?
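
For reference, the per-pair descriptor hinge loss in the SuperPoint paper has the form sketched below; this shows where lambda_d enters and is not the repo's loss.py (which works on the full tensor of cell pairs).

import torch

def desc_hinge_loss(dot, s, lambda_d=250.0, m_pos=1.0, m_neg=0.2):
    # dot: descriptor dot products d^T d' between cell pairs
    # s:   1 for corresponding cell pairs, 0 otherwise
    # lambda_d weights the (rare) positive pairs against the many negatives
    pos = lambda_d * s * torch.clamp(m_pos - dot, min=0)
    neg = (1.0 - s) * torch.clamp(dot - m_neg, min=0)
    return (pos + neg).mean()

dot = torch.rand(1000)                   # toy dot products
s = (torch.rand(1000) > 0.98).float()    # sparse positives
print(desc_hinge_loss(dot, s))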

Training speed and results

Sorry to bother you; I have two questions about this project. For training I use multiple GPUs, and the speed is about 1.79 it/s for the first two epochs, which takes one day; after that the speed becomes much faster, and I don't know why this happens. Regarding results: the detector repeatability is better than rpautrat/Superpoint's, but for the descriptors, hpatches-i is 0.90 and hpatches-v is 0.55; compared with rpautrat/Superpoint, that is not good.

About magic_point_coco_train.yaml

Hi, thanks for your great work.
I want to train MagicPoint on KITTI; do I need to use magic_point_coco_train.yaml? I noticed it contains a path to labels, but how can I ensure the labels are correct enough to train MagicPoint? If the labels generated in the previous step are mediocre, what is the point of this MagicPoint training step?

License

Thanks for this implementation. Would you consider adding an explicit license to the repo?

relationship between superpoint_v1.pth and superpoint_bn.pth

I guess superpoint_v1.pth is the PyTorch model file converted from the TF version. Does superpoint_bn.pth have the same weights as the former, except for the modified parts (I see that in the BN version you made some modifications to the architecture)?

Training

How to train with Superpoint instead of Superpoint_BN?

det_thresh in yaml files.

Thanks for sharing! I am confused about the setting of det_thresh in the superpoint_train.yaml and magic_point_coco_train.yaml files. Why is det_thresh 0.001 rather than 0.015?

Matching

How can the trained SuperPoint model be combined with SuperGlue to achieve image matching?

multi GPU train

How should I modify the code to do multi-GPU training? At the end of the train.py file, I added the following code before the train_eval(model, data_loaders, config) call.

if torch.cuda.device_count() > 1:
    model = torch.nn.DataParallel(model, device_ids=list(range(torch.cuda.device_count())))
model.to(device)

After that, when I train, I get the following message:

Traceback (most recent call last):
  File "train.py", line 212, in <module>
    train_eval(model, data_loaders, config)
  File "train.py", line 84, in train_eval
    prob_warp, desc_warp, device)
  File "/root/Source/gtr/SuperPoint_pytorch/temp/SuperPoint-Pytorch-master/solver/loss.py", line 16, in loss_func
    device=device)
  File "/root/Source/gtr/SuperPoint_pytorch/temp/SuperPoint-Pytorch-master/solver/loss.py", line 65, in detector_loss
    ce_loss = F.cross_entropy(logits, labels, reduction='none',)
  File "/root/PersonalData/miniconda/envs/superpoint/lib/python3.7/site-packages/torch/nn/functional.py", line 3026, in cross_entropy
    return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)
ValueError: Expected input batch_size (16) to match target batch_size (64).

It seems to be a problem with how the data are split across GPUs during multi-GPU training, but I don't know how to fix it.
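
A common workaround for this class of error, sketched below under the assumption that the loss from solver/loss.py can be moved inside the wrapped module (this is not a tested repo patch): nn.DataParallel only scatters the inputs of the module it wraps, so computing the loss inside the module keeps each shard's predictions aligned with its labels.

import torch

class ModelWithLoss(torch.nn.Module):
    """Wrap model + loss so each GPU shard computes its own loss."""
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x, target):
        logits = self.model(x)                      # shapes match per shard
        return torch.nn.functional.cross_entropy(
            logits, target, reduction='none')       # per-sample loss

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64, 10))
wrapped = torch.nn.DataParallel(ModelWithLoss(model))
x, target = torch.rand(16, 1, 8, 8), torch.randint(0, 10, (16,))
loss = wrapped(x, target).mean()                    # average the gathered losses
loss.backward()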

Training Speed Quite Slow!

Hi, thanks for this implementation! Training MagicPoint took about 8-10 minutes per epoch, but training SuperPoint takes about 5 hours per epoch. I am running with PyTorch Lightning, distributed over 8 server-grade GPUs. Can you help me?

Descriptor loss train

Hi, first, thank you for this great work; it has really helped me a lot!
I want to train the SuperPoint model on my own data. The detector loss looks normal, but the descriptor loss oscillates and does not converge.
The inputs are 256x256 images and their homography-warped counterparts, and the model and loss function are the same as in your repo.
The detector and descriptor losses over 300 epochs are shown below.
Detector loss
(image)
Descriptor loss
(image)

Can you give me some advice?

CUDA: out of memory

Hi, while following your training steps recently, I found that if I change the input image size to [480, 640], the code no longer works and reports insufficient GPU memory. My device is a 2080 Ti with 8 GB of memory. Given this network architecture, I don't think [480, 640] images should exceed 8 GB, and I still get the out-of-memory error even with batch_size set to 1. So may I ask: why does the network run out of memory on [480, 640] images even with batch_size 1? Looking forward to your reply.

Inference or demo codes using the pretrained models?

Hi, fantastic work done! Applause for you!!

May I know if there is any instruction or codes for a quick demo using pre-trained models? Thank you in advance!

Also, may I know: between the weights released by MagicLeap and the weights released by rpautrat's SuperPoint, which performs better?

how to test

Thank you for your code, but how can I test the feature point matching of a pair of images? :D

Crop in augmentation_legacy.py

In the elastic_transform function of dataset/utils/augmentation_legacy.py, why are min_row, max_row, min_col and max_col defined as follows?
min_row = int(math.ceil(np.max(dy))) + padding
max_row = int(math.floor(np.min(dy))) + shape[0] - padding
min_col = int(math.ceil(np.max(dx))) + padding
max_col = int(math.floor(np.min(dx))) + shape[1] - padding

color input

Hello, thanks for your promising work. Do you plan to implement RGB color input instead of grayscale?

some envs problem

I installed torch==1.9.0 as required, but when I run train.py I get "No module named 'torch'".

About pre-trained model

Thank you for your great work!
Is there any difference between the superpoint_bn.pth and superpoint_v1.pth you provide? I would like the MagicPoint parameters trained on the synthetic dataset, so that I can train on my own dataset rather than COCO. Could you share them?
Thanks!

RuntimeError: dets should have the same type as scores

The environment was installed as required.

0it [00:00, ?it/s]D:\Users\sichu\anaconda3\envs\point-torch\lib\site-packages\kornia\utils\helpers.py:96: UserWarning: torch.solve is deprecated in favor of torch.linalg.solve and will be removed in a future PyTorch release.
torch.linalg.solve has its arguments reversed and does not return the LU factorization.
To get the LU factorization see torch.lu, which can be used with torch.lu_solve or torch.lu_unpack.
X = torch.solve(B, A).solution
should be replaced with
X = torch.linalg.solve(A, B) (Triggered internally at ..\aten\src\ATen\native\BatchLinearAlgebra.cpp:760.)
out1, out2 = torch.solve(input.to(dtype), A.to(dtype))
D:\Users\sichu\anaconda3\envs\point-torch\lib\site-packages\torch\nn\functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change.
Please do not use them for anything important until they are released as stable. (Triggered internally at ..\c10/core/TensorImpl.h:1156.)
return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)

Traceback (most recent call last):
  File "train.py", line 139, in <module>
    train_eval(model, data_loaders, config)
  File "train.py", line 30, in train_eval
    raw_outputs = model(data['raw'])
  File "D:\Users\sichu\anaconda3\envs\point-torch\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\experments\SuperPoint-Pytorch-master\SuperPoint-Pytorch-master\model\magic_point.py", line 42, in forward
    keep_top_k=self.topk).squeeze(dim=0) for p in prob]
  File "D:\experments\SuperPoint-Pytorch-master\SuperPoint-Pytorch-master\model\magic_point.py", line 42, in <listcomp>
    keep_top_k=self.topk).squeeze(dim=0) for p in prob]
  File "D:\experments\SuperPoint-Pytorch-master\SuperPoint-Pytorch-master\solver\nms.py", line 45, in box_nms
    indices = torchvision.ops.nms(boxes=boxes, scores=scores, iou_threshold=iou)
  File "D:\Users\sichu\anaconda3\envs\point-torch\lib\site-packages\torchvision\ops\boxes.py", line 35, in nms
    return torch.ops.torchvision.nms(boxes, scores, iou_threshold)
RuntimeError: dets should have the same type as scores

Superglue matching

Hello, can a model trained with this network be used for SuperGlue matching?

box_nms: Boxes are negative values

Hi,

Thank you so much for this implementation. I was trying to train the MagicPoint network on Linux, and I encountered this issue in NMS:

RuntimeError: Trying to create tensor with negative dimension -1459651072: [-1459651072]

On investigating, I see that the boxes have negative values. How do I correct this?

File /workspace/SuperPoint/models/magicpoint.py:39, in MagicPoint.forward(self, x)
     37 prob = output['prob']
     38 if self.nms is not None:
---> 39     prob = [box_nms(p.unsqueeze(dim=0),
     40                     self.nms,
     41                     min_prob=self.threshold,
     42                     keep_top_k=self.top_k).squeeze(dim=0) for p in prob]
     43     prob = torch.stack(prob)
     45     pred = prob[prob>=self.threshold]

File /workspace/SuperPoint/models/magicpoint.py:39, in <listcomp>(.0)
     37 prob = output['prob']
     38 if self.nms is not None:
---> 39     prob = [box_nms(p.unsqueeze(dim=0),
     40                     self.nms,
     41                     min_prob=self.threshold,
     42                     keep_top_k=self.top_k).squeeze(dim=0) for p in prob]
     43     prob = torch.stack(prob)
     45     pred = prob[prob>=self.threshold]

File /workspace/SuperPoint/utils/nms.py:70, in box_nms(prob, size, iou, min_prob, keep_top_k)
     67 if boxes.nelement() == 0 or scores.nelement() == 0:
     68     print("Error: One of the tensors is empty")
---> 70 indices = torchvision.ops.nms(boxes=boxes, scores=scores, iou_threshold=iou)
     71 pts = pts[indices,:]
     72 scores = scores[indices]

File /usr/local/lib/python3.8/dist-packages/torchvision/ops/boxes.py:41, in nms(boxes, scores, iou_threshold)
     39     _log_api_usage_once(nms)
     40 _assert_has_ops()
---> 41 return torch.ops.torchvision.nms(boxes, scores, iou_threshold)

File /usr/local/lib/python3.8/dist-packages/torch/_ops.py:442, in OpOverloadPacket.__call__(self, *args, **kwargs)
    437 def __call__(self, *args, **kwargs):
    438     # overloading __call__ to ensure torch.ops.foo.bar()
    439     # is still callable from JIT
    440     # We save the function ptr as the `op` attribute on
    441     # OpOverloadPacket to access it here.
--> 442     return self._op(*args, **kwargs or {})

RuntimeError: Trying to create tensor with negative dimension -1707356657: [-1707356657]

Why is the homography inverted when training SuperPoint?

    homography = torch.inverse(homography)#inverse here to be consistent with tf version

Hi, in SuperPoint-Pytorch/tree/master/dataset/utils/homographic_augmentation.py, why is the matrix inverted once when the projective transformation matrix is generated? Because of this, the meaning of the parameters used for data augmentation is not very intuitive, and tuning them is not easy.

Also, what does this mean? (The first line says to comment the line out, but the comments below say the inverse must be applied.) If I want to stay consistent with the tf version, do I need to comment out the inverse code?

    #TODO: comment the following line if you want the same result as tf version
    # since if we use the homography directly for an opencv function, for example warpPerspective,
    # the result we get is different from the tf version. In order to get the same result, we have to
    # apply the inverse operation, like this
    #homography = np.linalg.inv(homography)
    homography = torch.inverse(homography)#inverse here to be consistent with tf version

In addition, I found that the warp_points function in keypoint_op.py has a comment saying that consistency requires an inverse, but that H is not the same one used for the image warp above, so the resulting ground truth would not correspond either, right?

    ##TODO: uncomment the following line to get same result as tf version
    # homographies = torch.linalg.inv(homographies)
    points = torch.cat((points, torch.ones((points.shape[0], 1),device=device)),dim=1)
    ##each row dot each column of points.transpose
    warped_points = torch.tensordot(homographies, points.transpose(1,0),dims=([2], [0]))#batch dot

I hope the author can clarify these correspondences for me. Many thanks.
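
For what it's worth, the two conventions can be seen in a small sketch (not repo code): cv2.warpPerspective(img, H) places a source pixel x at H*x in the output, whereas TensorFlow's image transform op expects the inverse (output-to-input) map, which is one reason an inverse appears when reproducing the tf version.

    import cv2
    import numpy as np

    H = np.array([[1.0, 0.1, 5.0],
                  [0.0, 1.0, 3.0],
                  [0.0, 0.0, 1.0]])

    img = np.zeros((100, 100), np.uint8)
    img[40, 20] = 255                        # one bright source pixel at (x=20, y=40)
    warped = cv2.warpPerspective(img, H, (100, 100))

    # Warping the point with the same H lands on the bright output pixel
    p = H @ np.array([20.0, 40.0, 1.0])
    x, y = p[:2] / p[2]                      # -> (29.0, 43.0)
    print(warped[int(round(y)), int(round(x))])  # 255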
