Comments (14)
You mean the pure FCOS can already achieve 39.2 AP in Detectron2? The result of FCOS in the original paper and in MMDetection is around 38.5-38.6, so there may be some extra tricks in the Detectron2 implementation. By the way, QFL is supposed to use IoU labels (not centerness labels), and you need to adjust the related code accordingly.
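For reference, QFL with a continuous IoU label can be sketched as follows. This is a minimal PyTorch sketch based on the QFL formulation in the GFL paper; `quality_focal_loss` and its arguments are illustrative names, not this repo's actual API:

```python
import torch
import torch.nn.functional as F

def quality_focal_loss(pred_logits, iou_targets, beta=2.0):
    """Sketch of Quality Focal Loss with soft IoU targets.

    pred_logits: raw classification logits for the target class, shape (N,).
    iou_targets: IoU between each predicted box and its gt box, in [0, 1].
    """
    sigma = pred_logits.sigmoid()
    # Modulating factor |y - sigma|^beta focuses training on samples whose
    # predicted quality is far from the IoU label.
    scale = (iou_targets - sigma).abs().pow(beta)
    # BCE against the soft IoU label (BCE supports continuous targets).
    loss = scale * F.binary_cross_entropy_with_logits(
        pred_logits, iou_targets, reduction='none')
    return loss.sum()
```

A prediction close to its IoU label contributes almost nothing, while a confident wrong prediction is penalized heavily, which is the intended focal behavior.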
We will update the code with FCOS later. In the meantime, feel free to check your code against this repo. Thanks a lot for your interest~!
from gfocal.
Thanks for your kind reply. I think it's just the Detectron2 standard horizontal flip and min-edge 640-800 resize. So you recommend that I use IoU labels to generate the soft class label. But FCOS is anchor-free, so how can I calculate the IoU? Just move the center point with the ground-truth box?
Thanks!
IoU label has nothing to do with "anchor-free" or "anchor-based". It is calculated by the predicted box and its corresponding gt box. See line 176 in https://github.com/implus/GFocal/blob/master/mmdet/models/anchor_heads/gfl_head.py for reference.
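In other words, the soft label is just the plain IoU between each positive sample's decoded predicted box and its assigned gt box, regardless of whether the boxes came from anchors. A minimal sketch (the helper name is illustrative, not the repo's actual code, which uses mmdet's `bbox_overlaps`):

```python
import torch

def iou_labels(pred_boxes, gt_boxes):
    """Row-wise IoU between decoded predicted boxes and their assigned
    gt boxes, used as the soft classification target.
    Boxes are (x1, y1, x2, y2), shape (N, 4), paired per row.
    """
    lt = torch.max(pred_boxes[:, :2], gt_boxes[:, :2])   # intersection top-left
    rb = torch.min(pred_boxes[:, 2:], gt_boxes[:, 2:])   # intersection bottom-right
    wh = (rb - lt).clamp(min=0)                          # zero if disjoint
    inter = wh[:, 0] * wh[:, 1]
    area_p = (pred_boxes[:, 2] - pred_boxes[:, 0]) * (pred_boxes[:, 3] - pred_boxes[:, 1])
    area_g = (gt_boxes[:, 2] - gt_boxes[:, 0]) * (gt_boxes[:, 3] - gt_boxes[:, 1])
    return inter / (area_p + area_g - inter).clamp(min=1e-6)
```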
Oh, I see. So you use the IoU between the predicted box and the gt box as the score to generate the soft label. I will try it.
Thanks!
If I use the IoU between the predicted box and the gt box as the score to generate the soft label, the score is very low at the beginning of training. Does this matter?
It doesn't matter, as the provided models are all trained under this scheme.
I only got a 0.022 mAP boost using GFL with FCOS, using this FCOS repo (https://github.com/aim-uofa/AdelaiDet) with the default config and DIoU loss. Maybe I will try modifying the weights of the classification loss and regression loss.
without QFL:
AP | AP50 | AP75 | APs | APm | APl |
---|---|---|---|---|---|
39.280 | 58.088 | 42.697 | 23.840 | 43.023 | 49.951 |
with QFL:
AP | AP50 | AP75 | APs | APm | APl |
---|---|---|---|---|---|
39.302 | 58.302 | 42.604 | 23.317 | 43.682 | 50.606 |
You mean using both QFL and DFL in this version of FCOS? The results are quite different from our experiments... You are suggested to go through the code in this repo carefully, as it may not be as simple as you thought; e.g., the weight needs to be normalized or detached for the IoU loss and DFL loss. You are also suggested to try this repo to reproduce the 40.2 AP of GFL_R50_1x.
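To make the "normalized or detached" point concrete, here is a hedged sketch of that weighting scheme (names and shapes are illustrative placeholders; the actual implementation is in `gfl_head.py`): the per-sample weight for the box losses is the predicted quality score, detached so no gradient flows into the classification branch through the weight, and the losses are normalized by the weight sum rather than the sample count.

```python
import torch

# Hypothetical positive-sample classification logits, (num_pos, num_classes).
cls_logits = torch.randn(8, 80, requires_grad=True)

# Detached quality weight: max class probability per sample. detach() keeps
# the weight out of the classification branch's gradient.
weight = cls_logits.detach().sigmoid().max(dim=1)[0]

giou_loss = torch.rand(8)   # placeholder per-sample IoU-style loss values
dfl_loss = torch.rand(8)    # placeholder per-sample DFL values

# Normalize by the total weight instead of the number of positives.
avg_factor = weight.sum().clamp(min=1e-6)
loss_bbox = (giou_loss * weight).sum() / avg_factor
loss_dfl = (dfl_loss * weight).sum() / avg_factor
```

Skipping either the detach or the normalization changes the effective loss balance, which may explain small or missing gains.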
Sorry, I only used QFL; it was a typo. I know the devil is in the details, so I will debug the code. Really, thanks for your help. Looking forward to your FCOS GFL implementation code.
I used the implementation of the original ATSS with QFL, where the IoU is used as the label and the loc_weight for the GIoU loss is set to the cls_score, and I only get 39.25 AP...
In both this repo and the official mmdetection, we have pretrained models, logs, and code. You are welcome to check whether you have missed some important details. For ATSS with QFL, I suggest you try GFL_R50_1x with the DFL weight set to 0. Thank you!
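For illustration, such an ablation might look like the following mmdet-style config fragment. The key names here are assumptions modeled on typical GFL configs, so check them against the repo's actual config files before use:

```python
# Hypothetical mmdet-style config fragment: keep the GFL R50 1x setup but
# zero out the DFL term, leaving effectively ATSS + QFL.
model = dict(
    bbox_head=dict(
        type='GFLHead',
        loss_cls=dict(
            type='QualityFocalLoss',
            use_sigmoid=True,
            beta=2.0,
            loss_weight=1.0),
        # Setting loss_weight to 0.0 disables the DFL contribution.
        loss_dfl=dict(type='DistributionFocalLoss', loss_weight=0.0),
    ))
```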
Thanks for your reply. Don't take it the wrong way; it is elegant work and I like it.
I also wrote a paper last year that just used an MSE loss to shorten the distance between the cls score and the loc score (IoU). After that, I proposed a warpage loss which is very similar to your QFL, but it seems you finished QFL earlier than the warpage loss~ So I guess it would be hard to publish my recent work now...
Recently I have been reading your code, but I have not reimplemented QFL in my own repo yet. I already tried weight = 0 and it gave the same performance (39.15).
However, I reused the IoUs from the IoU loss and called .detach_() on them, whereas I noticed you recalculate the IoU for QFL. Do you think that matters? Or should I use detach() instead of detach_()?
Thanks for your attention~ It is common to find similar results and methods during research; it has happened to me as well. It doesn't matter as long as you believe in yourself: you always have the right to keep iterating, improving, and innovating, until you discover more new techniques.
For the detach() problem, it is suggested to use the form `B = A.detach()` to get a detached copy of A (i.e., B) alongside the undetached A, because you might want the gradient of A in one loss but not in another loss (or loss weight).
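A small plain-PyTorch example of the difference (the tensors and losses are illustrative):

```python
import torch

a = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)

# detach() returns a detached COPY: b carries no gradient, a still does.
b = a.detach()
assert a.requires_grad and not b.requires_grad

# b can now act as a constant weight on a without feeding gradient back.
loss = (a * b).sum()
loss.backward()
assert torch.equal(a.grad, b)  # d(loss)/da = b; gradient flows only via a

# By contrast, a.detach_() would detach `a` itself in place, killing the
# gradient you still need for the other loss term.
```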
Good Luck~!
@fangchengji Hello, I notice that you have reimplemented ATSS and GFocal in your forked detectron2. Did they get the expected boost?