cvpr-2021-papers's Issues
add an accepted paper
GDR-Net: Geometry-Guided Direct Regression Network for Monocular 6D Object Pose Estimation. (CVPR 2021)
http://arxiv.org/abs/2102.12145
code: https://git.io/GDR-Net
One paper in your list cannot be found
One paper, "Learning to Aggregate and Personalize 3D Face from In-the-Wild Photo Collection" under 3D Face Reconstruction, cannot be found online. Could you please check whether the title is wrong? Thanks!
Could you please add the code link for paper FedDG
The code for paper "FedDG: Federated Domain Generalization on Medical Image Segmentation via Episodic Learning in Continuous Frequency Space" is released at "https://github.com/liuquande/FedDG-ELCFS".
Many thanks!
A small suggestion
Hello, and thank you for compiling the accepted papers so everyone can follow the latest work in the field. One suggestion: when you update each day, please list that day's newly added papers in a separate section, so readers refreshing the page don't have to hunt for what's new.
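A "newly added today" section could be generated mechanically by diffing yesterday's and today's paper lists. A minimal sketch (my own illustration, not the maintainer's actual workflow; the entries below are placeholders):

```python
def new_entries(yesterday: list[str], today: list[str]) -> list[str]:
    """Return entries present in today's list but not yesterday's,
    preserving today's order."""
    seen = set(yesterday)
    return [line for line in today if line not in seen]

# Hypothetical example: two snapshots of the paper list.
yesterday = ["GDR-Net", "FedDG"]
today = ["GDR-Net", "FedDG", "Generalized Focal Loss V2"]
print(new_entries(yesterday, today))  # ['Generalized Focal Loss V2']
```

The set lookup keeps the comparison linear even for a long README, and order is taken from today's file so the new section reads the same way as the main list.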
Could you add code link for "Reformulating HOI Detection as Adaptive Set Prediction"
Thanks for the great repo! The code for the paper "Reformulating HOI Detection as Adaptive Set Prediction" has been released here.
“Learning the Superpixel in a Non-iterative and Lifelong Manner (CVPR'21)” official implementation
How do you find the papers accepted by CVPR?
Hello, and thanks for sharing this useful information!
I'd like to ask: so far CVPR has only released the accepted paper index, so how do you collect the papers corresponding to those indices?
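One way such index-to-paper matching could be automated (an assumption on my part, not the maintainer's stated method) is querying the public arXiv export API by title. The helper name `build_arxiv_query` is hypothetical:

```python
import urllib.parse

ARXIV_API = "http://export.arxiv.org/api/query"

def build_arxiv_query(title: str, max_results: int = 3) -> str:
    """Build an arXiv export-API URL that searches for a paper by title.

    The `ti:` prefix restricts the search to the title field; quoting the
    title keeps the words together as a phrase.
    """
    params = {
        "search_query": f'ti:"{title}"',
        "max_results": str(max_results),
    }
    return f"{ARXIV_API}?{urllib.parse.urlencode(params)}"

# Example: look up one of the accepted titles mentioned in this thread.
print(build_arxiv_query("Generalized Focal Loss V2"))
```

Fetching that URL (e.g. with `urllib.request`) returns an Atom feed whose entries carry the abstract link and PDF link for each match; titles with no arXiv preprint simply come back empty, which matches the "cannot be found online" cases reported above.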
One VOS paper
One paper accepted by CVPR 2021.
Learning Position and Target Consistency for Memory-based Video Object Segmentation (https://arxiv.org/abs/2104.04329)
This is an extension of our solution in the CVPR 2020 DAVIS competition, which won first prize.
Please also categorize this 3D shape completion paper under point cloud completion
Please also put the following paper under point cloud completion, thanks!
3D shape completion:
Unsupervised 3D Shape Completion through GAN Inversion
code | project
Update a paper
Hi, thanks for your awesome collection! Could you please update the paper "Towards Unified Surgical Skill Assessment":
Paper: https://arxiv.org/abs/2106.01035
Code: https://github.com/Finspire13/Towards-Unified-Surgical-Skill-Assessment
Category: Medical Imaging(医学影像)
Thanks!
Add one new paper in object detection~
Generalized Focal Loss V2: Learning Reliable Localization Quality Estimation for Dense Object Detection
Paper: https://arxiv.org/abs/2011.12885
Code: https://github.com/implus/GFocalV2
Zhihu: https://zhuanlan.zhihu.com/p/147691786
Please add an accepted paper
Hi,
Could you add our CVPR2021 paper to "Object Detection" topic:
Title: End-to-End Object Detection with Fully Convolutional Network
Paper: https://arxiv.org/abs/2012.03544
Code: https://github.com/Megvii-BaseDetection/DeFCN
Zhihu: https://zhuanlan.zhihu.com/p/332281368
Thanks!
please add one new paper~
Hi, please add Deformed Implicit Field: Modeling 3D Shapes with Learned Dense Correspondence (https://arxiv.org/abs/2011.13650), a CVPR 2021 poster paper.
It can be classified as a 3D Vision paper.
Thanks.
Add one CVPR2021 paper, Domain Consensus Clustering for Universal Domain Adaptation
Hello, here is the code and paper.
Paper: http://reler.net/papers/guangrui_cvpr2021.pdf
Code: https://github.com/Solacex/Domain-Consensus-Clustering
Thank you~
Add one accepted paper
Topic: 3D object detection, 3D pose estimation, vehicle pose estimation
Title: Exploring intermediate representation for monocular vehicle pose estimation
Code: https://github.com/Nicholasli1995/EgoNet
add one accepted paper
Hi,
Here is the paper: "Domain Adaptation with Auxiliary Target Domain-Oriented Classifier" with the code link "https://github.com/tim-learn/ATDOC".
Thanks~
Add one oral paper (video person re-id)
Hi,
Thanks for your great collection! Could you please add this paper: Spatial-Temporal Correlation and Topology Learning for Person Re-Identification in Videos, Jiawei Liu, Zheng-Jun Zha, Wei Wu, Kecheng Zheng, Qibin Sun (https://arxiv.org/pdf/2104.08241.pdf), which appears at CVPR 2021 as an oral paper.
Thanks.
add one CVPR21 paper
Hi,
Thanks for your repo! Could you please add the CVPR 2021 poster paper S3: Neural Shape, Skeleton, and Skinning Fields for 3D Human Modeling (https://arxiv.org/abs/2101.06571)? It can be tagged as a 3D Vision paper.
Thanks.
Please add one CVPR paper
Thanks for your nice collection! Could you please update the following paper.
Video Prediction Recalling Long-term Motion Context via Memory Alignment Learning (CVPR 2021 ORAL)
Paper link: https://arxiv.org/abs/2104.00924
Code link: https://github.com/sangmin-git/LMC-Memory
Thank you.
Add one CVPR paper
Hi,
Thanks for your amazing collection of CVPR papers! Could you please add this paper: AdvSim: Generating Safety-Critical Scenarios for Self-Driving Vehicles (https://arxiv.org/abs/2101.06549)? It can be tagged as a simulation, adversarial example, or self-driving / robotics paper.
Thanks.
Wrong paper link
The link for the paper "Riggable 3D Face Reconstruction via In-Network Optimization" is wrong.