amusi / eccv2022-papers-with-code
A collection of ECCV 2022 papers with open-source code. Issues are welcome; feel free to share ECCV 2022 open-source projects.
Paper name: Efficient Spatio-Temporal Recurrent Neural Network for Video Deblurring
Paper link: https://www.ecva.net/papers/eccv_2020/papers_ECCV/html/5116_ECCV_2020_paper.php
Code link: https://github.com/zzh-tech/ESTRNN
Spatial-Angular Interaction for Light Field Image Super-Resolution
PDF: https://arxiv.org/pdf/1912.07849.pdf
Code: https://github.com/YingqianWang/LF-InterNet
[Issue submission format]
Paper name:
Paper link:
Code link:
[Paper name]: Bi-directional Cross-Modality Feature Propagation with Separation-and-Aggregation Gate for RGB-D Semantic Segmentation
[Paper link]: https://arxiv.org/abs/2007.09183
[Code link]: https://github.com/charlesCXK/RGBD_Semantic_Segmentation_PyTorch
Thank you very much!
Paper: Contextual Diversity for Active Learning
Paper link: https://arxiv.org/pdf/2008.05723.pdf
Code: https://github.com/sharat29ag/CDAL
Paper name: Actions as Moving Points
Paper link: https://arxiv.org/abs/2001.04608
Code link: https://github.com/MCG-NJU/MOC-Detector
Paper name: Learning Flow-based Feature Warping For Face Frontalization with Illumination Inconsistent Supervision
Paper link: https://arxiv.org/pdf/2008.06843.pdf
Code: https://github.com/csyxwei/FFWM
Open source code: https://github.com/d-li14/SAN
The paper will appear on arXiv soon.
Thanks!
When will the 2022 list be updated?
title: Deep Decomposition Learning for Inverse Imaging Problems
paper: https://arxiv.org/pdf/1911.11028.pdf
Github: https://github.com/edongdongchen/DDN
Paper name: Consensus-Aware Visual-Semantic Embedding for Image-Text Matching
Paper link: https://arxiv.org/abs/2007.08883
Code link: https://github.com/BruceW91/CVSE
Paper name: ScanRefer: 3D Object Localization in RGB-D Scans using Natural Language
Paper link: https://arxiv.org/abs/1912.08830
Code link: https://github.com/daveredrum/ScanRefer
Paper name: ReferIt3D: Neural Listeners for Fine-Grained 3D Object Identification in Real-World Scenes
Paper link: http://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123460409.pdf
Code link: https://referit3d.github.io/
Paper name: SoftPoolNet: Shape Descriptor for Point Cloud Completion and Classification
Paper link: http://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123480069.pdf
Code link: https://github.com/wangyida/softpool
All papers are in 2020, not 2022!
Oral: Self-Challenging Improves Cross-Domain Generalization https://arxiv.org/abs/2007.02454
Learning to Generate Novel Domains for Domain Generalization https://arxiv.org/abs/2007.03304
Paper name: Dynamic R-CNN: Towards High Quality Object Detection via Dynamic Training
Paper link: https://arxiv.org/abs/2004.06002
Code link: https://github.com/hkzhang95/DynamicRCNN
Paper name: Towards Fast, Accurate and Stable 3D Dense Face Alignment
Paper link: https://guojianzhu.com/assets/pdfs/3162.pdf
Supp link: https://guojianzhu.com/assets/pdfs/3162-supp.pdf
Code: https://github.com/cleardusk/3DDFA_V2
Paper name: SNE-RoadSeg: Incorporating Surface Normal Information into Semantic Segmentation for Accurate Freespace Detection
Paper link: http://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123750341.pdf
Code link: https://github.com/hlwang1124/SNE-RoadSeg
Paper name: P2Net: Patch-match and Plane-regularization for Unsupervised Indoor Depth Estimation
Paper link: https://arxiv.org/pdf/2007.07696.pdf
Code link: https://github.com/svip-lab/Indoor-SfMLearner
Paper name: Dense RepPoints: Representing Visual Objects with Dense Point Sets
Paper link: https://arxiv.org/pdf/1912.11473.pdf
Code link: https://github.com/justimyhxu/Dense-RepPoints
https://github.com/Scalsol/RepPointsV2
Thanks!
Structured3D: A Large Photo-realistic Dataset for Structured 3D Modeling
Webpage: http://structured3d-dataset.org
Paper link: https://arxiv.org/abs/1908.00222
Code link: https://github.com/bertjiazheng/Structured3D
Paper name: Learning Enriched Features for Real Image Restoration and Enhancement
Paper link: https://arxiv.org/abs/2003.06792
Code link: https://github.com/swz30/MIRNet
Paper name: GSNet: Joint Vehicle Pose and Shape Reconstruction with Geometrical and Scene-aware Supervision
Paper link: https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123600511.pdf
Code link: https://github.com/lkeab/gsnet
Please add the work:
code: https://github.com/Colin97/Point2Mesh
paper: https://arxiv.org/pdf/2007.09267.pdf
Paper name: Learning from Extrinsic and Intrinsic Supervisions for Domain Generalization.
Paper link: https://arxiv.org/pdf/2007.09316.pdf
Code link: https://github.com/EmmaW8/EISNet
Paper name: Asynchronous Interaction Aggregation for Action Detection
Paper link: https://arxiv.org/abs/2004.07485
Code link: https://github.com/MVIG-SJTU/AlphAction