xdxie / wordart
The official code of CornerTransformer (ECCV 2022, Oral), built on top of MMOCR.
License: Apache License 2.0
Hello, I'm very interested in your corner_transformer work and have recently been reading the source code to learn from it, but there is one point I don't understand. In the paper "Toward Understanding WordArt: Corner-Guided Transformer for Scene Text Recognition", cc_loss and ce_loss are computed and then added together. In the code, however, `forward_train` in the `corner_transformer` recognizer computes cc_loss and ce_loss separately and returns them as a dict, and I don't see the step where cc_loss and ce_loss are merged. Where does that step happen?
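For context, if the code follows the usual MMOCR 0.x / MMCV pattern, the summation happens outside `forward_train`: the runner calls the model's `train_step`, which passes the returned dict through `_parse_losses`, and that helper sums every entry whose key contains "loss". A minimal standalone sketch of that summing behavior (a re-implementation for illustration, not the actual MMCV code):

```python
def parse_losses(losses):
    """Sum every entry of the loss dict whose key contains 'loss',
    mirroring how MMCV-style runners combine per-branch losses
    (other entries, e.g. accuracy metrics, are only logged)."""
    return sum(v for k, v in losses.items() if "loss" in k)

# Hypothetical loss dict as returned by forward_train:
losses = {"ce_loss": 1.5, "cc_loss": 0.5}
print(parse_losses(losses))  # 2.0
```

So the dict returned by `forward_train` is enough; cc_loss and ce_loss are added together automatically before backpropagation.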
I can't find the config file or the outputs folder in the repository. Could you please guide me on this?
Hello, could you provide a new link to download the CornerTransformer training weights? This link: https://drive.google.com/file/d/11FsyGxtvGPPvh9TXOVZDHqr6HlGdWDmZ/view?usp=sharing no longer works. Thank you.
I am Vansin, the technical operator of OpenMMLab. In September of last year, we announced the release of OpenMMLab 2.0 at the World Artificial Intelligence Conference in Shanghai. We invite you to upgrade your algorithm library to OpenMMLab 2.0 using MMEngine, which can be used for both research and commercial purposes. If you have any questions, please feel free to join us on the OpenMMLab Discord at https://discord.gg/amFNsyUBvm or add me on WeChat (van-sin) and I will invite you to the OpenMMLab WeChat group.
Here are the OpenMMLab 2.0 repo branches:

| Repo | OpenMMLab 1.0 branch | OpenMMLab 2.0 branch |
|---|---|---|
| MMEngine | | 0.x |
| MMCV | 1.x | 2.x |
| MMDetection | 0.x, 1.x, 2.x | 3.x |
| MMAction2 | 0.x | 1.x |
| MMClassification | 0.x | 1.x |
| MMSegmentation | 0.x | 1.x |
| MMDetection3D | 0.x | 1.x |
| MMEditing | 0.x | 1.x |
| MMPose | 0.x | 1.x |
| MMDeploy | 0.x | 1.x |
| MMTracking | 0.x | 1.x |
| MMOCR | 0.x | 1.x |
| MMRazor | 0.x | 1.x |
| MMSelfSup | 0.x | 1.x |
| MMRotate | 1.x | 1.x |
| MMYOLO | | 0.x |
Attention: please create a new virtual environment for OpenMMLab 2.0.
Hello! I am very interested in CornerTransformer. Could you provide me with a way to download the CornerTransformer training weights?
Hello Xudong Xie,
I am Vansin (闻星), the community operator of OpenMMLab.
Sorry to bother you. We noticed that your latest work, "Toward Understanding WordArt: Corner-Guided Transformer for Scene Text Recognition", is excellent, and we would like to get in touch with you by email.
Our community is currently preparing a series of top-conference talks on text recognition. Since your paper uses the MMOCR algorithm library from OpenMMLab, we would very much like to invite you to take part and give a related talk.
Let me briefly introduce our community and its online sharing activities.
OpenMMLab was founded in 2018 and is among the most comprehensive and influential open-source algorithm systems for computer vision in the deep-learning era. It aims to provide academia and industry with a unified algorithm toolbox that spans research directions, is well structured, and is easy to reproduce. To date, OpenMMLab has open-sourced more than 30 algorithm libraries covering classification, detection, segmentation, video understanding, and many other areas, with over 300 algorithm implementations and more than 2,400 pre-trained models. It has earned over 75,000 stars on GitHub, attracted more than 1,600 community contributors, and serves users in over 110 countries and regions, including top universities, research institutes, and companies worldwide.
"Community Open Mic" is a livestream series launched by OpenMMLab, open to everyone in the AI field and intended as a stage for knowledge sharing. Its content is varied, including top-conference talks, source-code walkthroughs, and round-table discussions. It has run for more than 40 episodes so far, with over 40,000 live viewers and more than 420,000 replay views. Recently, OpenMMLab has partnered with ReadPaper, 将门创投, and 白云兰开源 to launch academic-topic sessions, inviting top-conference authors to share frontier progress and hot research topics.
Looking forward to your reply.
My WeChat ID is van-sin.
Hi xdxie, thank you. Your paper is great and has helped me a lot in developing ideas for my problem.
I wonder how I could change parts of your code to apply your ideas to another language (e.g. Vietnamese).
We already have an OCR model for our language (VietOCR). How and where should I plug it into or modify your code so that it runs without bugs?
For example, in the picture below, I want the word to be recognized as "nguyễn" (which VietOCR handles), but it is recognized as "nguyen".
How can I fix this, and how can I use another (non-English) OCR in your code? Please guide me, many thanks!
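One likely cause of "nguyễn" collapsing to "nguyen" is the recognizer's character dictionary: MMOCR 0.x label convertors default to an English charset (e.g. `DICT90`), which simply cannot emit diacritics such as "ễ". A hedged sketch of building a custom dictionary file that includes Vietnamese characters (the file name, the short character sample, and the exact convertor fields are assumptions; check them against the MMOCR version you are using):

```python
import string

# Base Latin letters and digits, as in the default English dict.
base = list(string.ascii_lowercase + string.ascii_uppercase + string.digits)

# A few Vietnamese letters for illustration; extend to the full alphabet
# (all tone/diacritic combinations) for real training.
vietnamese = list("ăâđêôơưàáảãạằắẳẵặềếểễệ")
chars = base + [c for c in vietnamese if c not in base]

# One character per line, the format MMOCR 0.x dict files use.
with open("vietnamese_dict.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(chars))

# Hypothetical config fragment pointing the label convertor at it:
label_convertor = dict(
    type="AttnConvertor", dict_file="vietnamese_dict.txt", with_unknown=True
)
print("ễ" in chars)  # True
```

You would also need training data whose labels actually contain the diacritics; swapping in VietOCR wholesale is a separate integration question, since CornerTransformer's decoder predicts directly over this dictionary.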
When I train your model I run into this problem: `lmdb.Error: UniformConcatDataset: OCRDataset: AnnFileLoader: data/mixture/Syn90k/label.lmdb: No such file or directory`.
Could you help me fix the problem? Thanks.
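That error usually just means the annotation LMDB configured for Syn90k has not been downloaded or converted to the path the config expects. A small check for the expected layout (the path comes from the error message; the `data_root` default is an assumption, so adjust it to your setup):

```python
import os

def missing_paths(data_root="data/mixture"):
    """Return the expected dataset files that are absent on disk."""
    expected = [
        os.path.join(data_root, "Syn90k", "label.lmdb"),
    ]
    return [p for p in expected if not os.path.exists(p)]

# Anything printed here still needs to be downloaded or converted,
# or the corresponding paths in the config need to be edited.
print(missing_paths())
```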
Hello author, while setting up the environment I ran into incompatibilities among the mmcv-family packages (mmcv-full, mmdet, mmengine, mmocr), which is a real headache. Could you please share a set of compatible version numbers?
Hello, I read your paper on artistic-text recognition and am very interested in the corner-guided approach you proposed for segmenting artistic text. Have you experimented with this method on Chinese artistic text? I would like to build on this work to recognize Chinese artistic and handwritten text; do you have any suggestions?
Do you only use the two synthetic datasets for training, or is the dataset you proposed added as well? Thank you.
Hello author, would it be convenient to release the code for the data-augmentation part?
Hello author, how much memory is needed at minimum to run this code? My cloud host has 14 GB of RAM, and both the step that generates cropped image annotations for the SynthText dataset and the training run get killed.
Hello! The decoder output feature-map visualizations in the paper look very good. Could you release the relevant code, or explain how those figures were generated? Thanks!
When reproducing the results, I found that cross-attention did not help: running the code directly gives 64.2%, while replacing the cross-attention with self-attention gives 64.7%. What could be the reason for this? I hope you can provide some suggestions.
It was a great paper that really inspired me, but I have a question about a detail.
In the paper there is a self-attention module before the cross-attention, but I can't find it in this code. Is it unimportant for the result?