mcnn-pytorch's People

Contributors

CommissarMa


mcnn-pytorch's Issues

Label for custom dataset

Hi,
I want to create a custom dataset from satellite images. I have prepared the dataset, but it only includes the images. I want to label them. How can I do this? Is there an app or software for it? Finally, how can I learn the label format?
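For context on the label format used by crowd-counting models like MCNN: images are typically annotated with one (x, y) point per object head, and those points are converted into a density map whose integral equals the object count. A minimal sketch of that conversion (fixed `sigma` is an assumption here; the repo uses a k-nearest-neighbor adaptive kernel):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def points_to_density(points, shape, sigma=4.0):
    """Convert head-point annotations [(x, y), ...] into a density map.

    Each annotated point contributes a Gaussian blob whose integral is 1,
    so the whole density map sums to the number of annotated objects."""
    density = np.zeros(shape, dtype=np.float32)
    for x, y in points:
        col, row = int(round(x)), int(round(y))
        if 0 <= row < shape[0] and 0 <= col < shape[1]:
            density[row, col] += 1.0
    # Spread each unit impulse into a normalized Gaussian blob
    return gaussian_filter(density, sigma, mode="constant")

# Example: three annotated points in a 64x64 image
dmap = points_to_density([(20, 20), (32, 40), (44, 24)], (64, 64))
print(round(float(dmap.sum()), 2))  # 3.0 -- the count is preserved
```

Any point-annotation tool works for producing the (x, y) lists; the ShanghaiTech dataset ships them as MATLAB `.mat` files.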

How can we evaluate the results with the official MSE loss?

@CommissarMa Hi. I notice that in your implementation, the ground truth is down-sampled four times before being compared with the predicted density map. From my understanding, this down-sampling makes your MSE loss differ from an MSE loss computed against the original full-resolution ground truth. Would you mind explaining this in detail?
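For context: MCNN's output map is 1/4 the input resolution, so the ground truth must be reduced to match before the loss is computed. A count-preserving way to do that is sum-pooling, sketched below (this is a common choice, not necessarily the exact operation the repo uses), which also illustrates why the resulting MSE is a different quantity from a full-resolution MSE:

```python
import numpy as np

def downsample_density(d, factor=4):
    """Sum-pool a density map by `factor` so the total count is preserved."""
    h, w = d.shape
    h, w = h - h % factor, w - w % factor  # trim to a multiple of `factor`
    d = d[:h, :w]
    return d.reshape(h // factor, factor, w // factor, factor).sum(axis=(1, 3))

rng = np.random.default_rng(0)
gt = rng.random((16, 16)).astype(np.float32)
small = downsample_density(gt)
assert small.shape == (4, 4)
# The count is preserved...
assert np.isclose(small.sum(), gt.sum())
# ...but a pixelwise MSE over the 4x4 map averages over 16x fewer,
# 16x larger-valued cells than an MSE over the 16x16 map, so the two
# losses are not directly comparable -- the point raised in this issue.
```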

FileNotFoundError about IMG_XXX.npy file

I apologize for the inconvenience. After modifying the paths for the dataset in 'knearestgaussiankernel.py' and 'train.py', I encountered the following error upon running: FileNotFoundError: [Errno 2] No such file or directory: 'D:\zhiwei\MCNN-pytorch\data\ShanghaipartA\traindata\groundtruth\IMG161.npy'.
I'm unsure where the issue lies, and I hope you can provide assistance. Thank you.
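One likely cause (also mentioned in a later issue) is that the ground-truth generation script exits its loop early, so `.npy` files are only written for some images. A quick hypothetical check for which images are missing their ground truth (the filename convention is assumed, not taken from the repo):

```python
import os

def find_missing_gt(image_names, gt_dir):
    """Return image filenames that lack a matching .npy ground-truth file.

    Assumes ground-truth files share the image's base name, e.g.
    IMG_161.jpg -> IMG_161.npy."""
    return [name for name in image_names
            if not os.path.exists(
                os.path.join(gt_dir, os.path.splitext(name)[0] + ".npy"))]
```

Running this over the training image list before starting `train.py` shows at a glance whether the preparation step completed.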

Dataset

The dataset download link is broken. What should I do? Could someone kindly share the resource?

A question about training

Using the .npy files generated by your data_preparation script (by the way, it contains a redundant `break` that makes the program exit the loop early), I started training with train.py and got an error: GPU out of memory. I'm running on a GTX 1060 with 6 GB, but when I monitor it, the GPU doesn't actually seem to be using that much memory. What could be the cause? Or is it simply that you used a larger GPU? I haven't run into this problem before. Hoping for your reply, thanks.

Tried to allocate 18.07 GiB

Hi @CommissarMa

the Dropbox link to the Shanghai dataset is already broken, and I couldn't use the Baidu disk link to download the dataset since I don't have a Chinese account. Could you help fix this?
Because of that, I tried the model out with the UCF-QNRF_ECCV18 dataset, but training requires allocating 18.07 GiB of memory, while my machine only has 8 GiB. Could you give some advice on how to reduce this amount so that I can train on my local machine? If possible, could you also open-source the pre-trained models?

Thanks in advance.
Minh
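For context on why UCF-QNRF blows past 8 GiB: its images are far larger than ShanghaiTech's, and fully-convolutional models keep activations at (a fraction of) the input resolution, so activation memory scales with image area. A rough back-of-the-envelope estimate (the channel count below is illustrative, not MCNN's exact architecture):

```python
def feature_map_bytes(h, w, channels, dtype_bytes=4):
    """Rough memory for a single float32 activation tensor of size
    h x w x channels (one layer, one image, forward pass only)."""
    return h * w * channels * dtype_bytes

# A roughly 4000x3000 UCF-QNRF-sized image through a hypothetical
# 48-channel early layer kept at full resolution:
gb = feature_map_bytes(4000, 3000, 48) / 1024**3
print(f"{gb:.2f} GiB for one activation tensor")  # 2.15 GiB
```

Since backprop stores activations for every layer plus gradients, a handful of such tensors easily reaches tens of GiB, which matches the 18.07 GiB figure. Downscaling the images or training on random crops is the usual way to bring this under an 8 GiB budget.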
