
yoloonme / ema-attention-module


Implementation code for the ICASSP 2023 paper "Efficient Multi-Scale Attention Module with Cross-Spatial Learning", available at: https://arxiv.org/abs/2305.13563v2

ema-attention-module's People

Contributors

yoloonme


ema-attention-module's Issues

Inquiry about Availability of Code

Hi,
I hope this message finds you well. I noticed that the code for this project has not been released yet. Do you have any plans to make it available in the future?
Thank you for your attention; I look forward to your response.

Best regards

ONNX conversion fails

EMA contains AdaptiveAvgPool2d operations, which prevent the model from being converted to ONNX. How can this problem be solved?
Many thanks!
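One common workaround, sketched below under the assumption that the failing operators are the directional pools `AdaptiveAvgPool2d((None, 1))` / `((1, None))`: an adaptive average pool with output size 1 along one axis is just a mean over that axis, and `torch.mean` exports to ONNX as a plain `ReduceMean`:

```python
import torch

# Hypothetical drop-in replacements for the directional pools used in EMA.
# An AdaptiveAvgPool2d whose output size is 1 along one axis is exactly a
# mean over that axis; torch.mean exports to ONNX as ReduceMean.

def pool_h(x):  # equivalent to nn.AdaptiveAvgPool2d((None, 1))
    return x.mean(dim=3, keepdim=True)

def pool_w(x):  # equivalent to nn.AdaptiveAvgPool2d((1, None))
    return x.mean(dim=2, keepdim=True)
```

Swapping these functions in before export avoids the unsupported-operator error; a global `AdaptiveAvgPool2d((1, 1))` can likewise be replaced by `x.mean(dim=(2, 3), keepdim=True)`.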

ResNet convolution kernel size

Hello, in the ResNet-50 pre-trained weights you provide, the stem uses a single 3×3 convolution rather than the official 7×7 convolution, and there is only one such 3×3 layer. Won't the resulting difference in receptive field affect the final performance?

inference formula

What is the complete inference formula for the EMA attention mechanism?
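For reference, a minimal PyTorch sketch of the EMA forward pass following the structure described in the paper (grouped channels, a 1×1 branch with directional pooling, a 3×3 branch, and cross-spatial matmul aggregation). The `factor` parameter and variable names are assumptions based on commonly circulated implementations, not necessarily the authors' exact code:

```python
import torch
import torch.nn as nn

class EMA(nn.Module):
    """Efficient Multi-Scale Attention (sketch). Channels are split into
    `factor` groups; each group runs a 1x1 and a 3x3 branch whose outputs
    are fused by cross-spatial learning."""
    def __init__(self, channels, factor=8):
        super().__init__()
        self.groups = factor
        cg = channels // factor
        self.softmax = nn.Softmax(dim=-1)
        self.agp = nn.AdaptiveAvgPool2d(1)              # global average pool
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))   # pool along width
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))   # pool along height
        self.gn = nn.GroupNorm(cg, cg)
        self.conv1x1 = nn.Conv2d(cg, cg, kernel_size=1)
        self.conv3x3 = nn.Conv2d(cg, cg, kernel_size=3, padding=1)

    def forward(self, x):
        b, c, h, w = x.size()
        cg = c // self.groups
        g = x.reshape(b * self.groups, cg, h, w)
        # 1x1 branch: encode H and W directions jointly, then split back.
        x_h = self.pool_h(g)                            # (bg, cg, h, 1)
        x_w = self.pool_w(g).permute(0, 1, 3, 2)        # (bg, cg, w, 1)
        hw = self.conv1x1(torch.cat([x_h, x_w], dim=2))
        x_h, x_w = torch.split(hw, [h, w], dim=2)
        x1 = self.gn(g * x_h.sigmoid() * x_w.permute(0, 1, 3, 2).sigmoid())
        # 3x3 branch: local context at a different spatial scale.
        x2 = self.conv3x3(g)
        # Cross-spatial learning: each branch's pooled descriptor attends
        # over the other branch's spatial map.
        x11 = self.softmax(self.agp(x1).reshape(b * self.groups, cg, 1).permute(0, 2, 1))
        x12 = x2.reshape(b * self.groups, cg, -1)
        x21 = self.softmax(self.agp(x2).reshape(b * self.groups, cg, 1).permute(0, 2, 1))
        x22 = x1.reshape(b * self.groups, cg, -1)
        w_map = (x11 @ x12 + x21 @ x22).reshape(b * self.groups, 1, h, w)
        return (g * w_map.sigmoid()).reshape(b, c, h, w)
```

The module preserves the input shape, so it can be dropped between existing layers; `channels` must be divisible by `factor`.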

pre-trained weights

Hi, after inserting the attention module into the backbone, can the original YOLOv5 pre-trained weights still be used?
My understanding is that if the backbone structure is modified, the corresponding parameters also change, so the previously learned weights cannot be used and training must start from scratch. Is that correct? I would appreciate an answer from the author.
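A common middle ground, sketched here with a toy backbone and a hypothetical `DummyAttn` stand-in rather than YOLOv5 itself: load only the checkpoint entries whose names and shapes still match the modified model, and retrain the rest. Layers downstream of an inserted module typically get renumbered and so do fall back to random init, but unaffected layers can still be reused:

```python
import torch
import torch.nn as nn

class DummyAttn(nn.Module):
    """Hypothetical stand-in for an inserted attention block."""
    def __init__(self, c):
        super().__init__()
        self.fc = nn.Conv2d(c, c, kernel_size=1)
    def forward(self, x):
        return x * torch.sigmoid(self.fc(x.mean(dim=(2, 3), keepdim=True)))

# Toy "backbone" before and after inserting the attention block.
old = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.Conv2d(8, 8, 3, padding=1))
new = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), DummyAttn(8),
                    nn.Conv2d(8, 8, 3, padding=1))

ckpt, new_sd = old.state_dict(), new.state_dict()
# Keep only entries whose name AND shape still match the modified model.
compat = {k: v for k, v in ckpt.items() if k in new_sd and v.shape == new_sd[k].shape}
new.load_state_dict(compat, strict=False)  # stem reused; shifted layers retrain
```

Here only the first conv's weights survive (the second conv moved from index `1` to `2`), which illustrates why a full strict load fails but a partial load can still give a better starting point than training from scratch.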

add attention model

At which position in the network did you insert the attention module when conducting the experiments?

Multi-scale feature representation

The 3x3 branch of the model in the paper uses only a single 3x3 convolution kernel, so why does the paper say it is capable of capturing multi-scale feature representations? Shouldn't multiple stacked 3x3 kernels be needed to obtain multi-scale features? I don't quite understand this point.
