
elan's People

Contributors

xindongzhang


elan's Issues

Question about the reported metric scores of other methods

Thanks for your work, and congratulations on the ECCV 2022 acceptance.

I read your paper and have a question about the metrics reported for other models.

In Table 1 of your paper, I believe the PSNR/SSIM numbers reported for LatticeNet are wrong.

They differ from the numbers given in both the LatticeNet and SwinIR papers.

So I am wondering whether you re-ran LatticeNet yourself, or whether this is just a mistake.

Thanks again for your work.

New Super-Resolution Benchmarks

Hello,

MSU Graphics & Media Lab Video Group has recently launched two new Super-Resolution Benchmarks.

If you are interested in participating, you can add your algorithm by following the submission steps.

We would be grateful for your feedback on our work!

Tensor Shape size issue

Hello,
I am running the code and in second epoch, it says the in and out tensor size issue. The in is 3,256,256 and out is 3,248,258. I tried to use resize function. I have used the following configuration:
batch_size: 8
data_repeat: 80
data_augment: 1
epochs: 1000
lr: 0.00025

Any guidance would be appreciated!
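A common workaround for this kind of mismatch (a hedged sketch, not a confirmed fix for this repo) is to crop the prediction and the target to the size they share before comparing them; odd shapes like 248×258 often come from window-size padding inside the network. The helpers below are hypothetical and operate on plain nested lists for illustration.

```python
# Hypothetical sketch (not from the repo): crop two 2-D maps to the largest
# size they both share, so e.g. a 256x256 target and a 248x258 output can
# be compared elementwise.

def center_crop(grid, target_h, target_w):
    """Take the centered target_h x target_w window of a 2-D list."""
    h, w = len(grid), len(grid[0])
    top = (h - target_h) // 2
    left = (w - target_w) // 2
    return [row[left:left + target_w] for row in grid[top:top + target_h]]

def crop_to_common(a, b):
    """Crop both maps to their shared (min) height and width."""
    h = min(len(a), len(b))
    w = min(len(a[0]), len(b[0]))
    return center_crop(a, h, w), center_crop(b, h, w)

# Toy example: a 4x4 "target" and a 3x5 "output" become two 3x4 maps.
target = [[r * 4 + c for c in range(4)] for r in range(4)]
output = [[0] * 5 for _ in range(3)]
ta, tb = crop_to_common(target, output)
```

The same idea applies per channel to a 3×H×W tensor; in PyTorch one would slice the tensors directly instead of resizing, since resizing interpolates pixel values and distorts the metric.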

Shifted Window and Shared Attention Patterns in Consecutive GMSAs

Hello,

The paper is very interesting to me, since SwinIR suffers from high memory consumption and slow convergence. I have a few questions about the proposed framework.

First, two consecutive GMSAs share attention maps, yet the shifted window partitions different neighboring pixels together, which should produce different attention patterns. How is this handled? Is an interleaved sharing mechanism adopted?

Second, the results in Table 3 show that the shift mechanism reduces FLOPs and latency. How does this method reduce the computational footprint? Is it solely due to removing the masking and relative positional encoding used in SwinIR?

Finally, could you present the convergence behavior of ELAN compared with SwinIR and other CNN-based models? That would give a more comprehensive comparison and better show the advantages of ELAN.
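The savings from attention sharing can be sketched in a few lines (pure Python, not the authors' implementation): the second layer reuses the softmax(QK^T) weights computed by the first, so it skips its own Q/K projections, score matrix, and softmax entirely, and only pays for the value aggregation.

```python
# Minimal sketch of sharing one attention map across two consecutive
# attention layers. All names and shapes here are illustrative only.
import math

def softmax(row):
    m = max(row)
    exps = [math.exp(v - m) for v in row]
    s = sum(exps)
    return [e / s for e in exps]

def matmul(a, b):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def attention_weights(q, k):
    k_t = [list(col) for col in zip(*k)]     # K^T
    scores = matmul(q, k_t)                  # Q K^T
    return [softmax(row) for row in scores]

# Layer 1: compute attention from its own Q and K.
q = [[1.0, 0.0], [0.0, 1.0]]
k = [[1.0, 0.0], [0.0, 1.0]]
v1 = [[1.0, 2.0], [3.0, 4.0]]
attn = attention_weights(q, k)
out1 = matmul(attn, v1)

# Layer 2: reuse `attn` directly, paying only for its value projection
# and this aggregation -- the Q/K matmuls and softmax are skipped.
v2 = [[5.0, 6.0], [7.0, 8.0]]
out2 = matmul(attn, v2)
```

Whether the paper's FLOP reduction comes from this sharing, from dropping masking/RPE, or from both is exactly the question above; the sketch only shows where sharing removes work.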

Thanks a lot.

BTW, the neat model architecture is definitely appealing.

FLOPs computation problem

Is the code for computing the model's complexity (FLOPs) available? The numbers we compute differ from those in the paper; a reference would be appreciated.
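One frequent source of mismatched FLOP counts is the counting convention itself: some papers report multiply-accumulates (MACs) while others count each MAC as two FLOPs, which alone yields a factor-of-two gap. Below is a rough per-layer count for a standard convolution (a hypothetical helper, not the authors' script), using the 1 MAC = 2 FLOPs convention and ignoring the bias term.

```python
# Rough FLOP count for a Conv2d layer. Convention: one multiply-accumulate
# counts as 2 FLOPs; bias and activation costs are ignored.

def conv2d_flops(c_in, c_out, k, h_out, w_out, groups=1):
    """FLOPs of a k x k convolution producing an h_out x w_out map."""
    macs = (c_in // groups) * c_out * k * k * h_out * w_out
    return 2 * macs

# Example: a 3x3 conv mapping 64 -> 64 channels on a 256x256 feature map.
flops = conv2d_flops(64, 64, 3, 256, 256)
print(flops / 1e9)  # in GFLOPs
```

Tools such as fvcore's `FlopCountAnalysis` or `thop` automate this over a whole model, but they too differ in the MAC-vs-FLOP convention and in which ops they count, so the convention should be checked before comparing against the paper.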

onnx model precision

We converted the model to ONNX and found that its accuracy is not consistent with the original model.
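A first diagnostic step (a sketch under assumed data, not a fix) is to quantify the gap by running the same input through both models and reporting the maximum absolute difference of the outputs; differences near fp32 rounding (~1e-5) are usually benign export noise, while larger gaps point to an unsupported op or a precision change (e.g. fp16) in the export path. The helper and the numbers below are illustrative only.

```python
# Hypothetical helper (not from the repo): flatten two 2-D outputs and
# report the maximum absolute elementwise difference between them.

def max_abs_diff(a, b):
    flat_a = [x for row in a for x in row]
    flat_b = [x for row in b for x in row]
    return max(abs(x - y) for x, y in zip(flat_a, flat_b))

# Stand-in values for a PyTorch output and the corresponding ONNX output.
torch_out = [[0.501, 0.250], [0.125, 0.999]]
onnx_out = [[0.500, 0.251], [0.125, 0.998]]
print(max_abs_diff(torch_out, onnx_out))
```

In practice the two outputs would come from the original model in eval mode and from an ONNX Runtime `InferenceSession` run on the identical input tensor.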

Dataset Download issue

Hi,
I am unable to download the dataset because it requires a BaiduNetdisk account, which does not work outside China. Could you please share a Drive link?

Positional Encoding

Hello,

Thanks for your great work; I think an efficient and neat transformer framework is essential for low-level vision.

Following your work, I tried discarding the attention mask and positional encoding in SwinIR. Training and inference speed improved significantly, and the attention mask had only a slight effect on performance. However, performance dropped severely after removing the RPE from the original SwinIR.

Could you give me some hints on how to discard the RPE (and the attention mask) correctly? Is it enough to directly remove the code related to positional encoding, or does removing it require incorporating some other necessary component?

Looking forward to your reply, thanks~

How to understand these parameters in "shift-convolution"?

self.weight[0*g:1*g, 0, 1, 2] = 1.0 ## left
self.weight[1*g:2*g, 0, 1, 0] = 1.0 ## right
self.weight[2*g:3*g, 0, 2, 1] = 1.0 ## up
self.weight[3*g:4*g, 0, 0, 1] = 1.0 ## down
self.weight[4*g:, 0, 1, 1] = 1.0   ## identity

I think [1, 2] is down, [1, 0] is up, [2, 1] is right, and [0, 1] is left.
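One detail worth noting when reading those comments: `torch.nn.Conv2d` computes cross-correlation, not flipped convolution, so a weight at kernel column 2 samples the input one pixel to the *right*, which moves the content *left* in the output. A minimal 1-D pure-Python sketch (not the repo's code) of this sampling direction:

```python
# Why a tap at index 2 of a 3-tap kernel shifts content LEFT under
# cross-correlation (the operation torch.nn.Conv2d actually performs).

def correlate1d(signal, kernel):
    """Cross-correlation with zero padding; kernel has length 3, centered."""
    padded = [0] + signal + [0]
    return [sum(padded[i + k] * kernel[k] for k in range(3))
            for i in range(len(signal))]

signal = [1, 2, 3, 4]
left_shift = correlate1d(signal, [0, 0, 1])   # tap at index 2: out[i] = in[i+1]
right_shift = correlate1d(signal, [1, 0, 0])  # tap at index 0: out[i] = in[i-1]
print(left_shift)   # [2, 3, 4, 0]
print(right_shift)  # [0, 1, 2, 3]
```

The same reasoning applies per axis in 2-D, so whether a given tap reads as "left" or "right" depends on whether one describes where the output looks or where the content moves; the two descriptions are opposites.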

Thank you for your excellent work

I am reproducing the scores from your paper, and everything looks good. Your framework is the neatest one I have seen. Thank you.
