recasens / saliency-sampler

The saliency sampler is a saliency-based distortion layer for convolutional neural networks that helps to improve the spatial sampling of input data for a given task.
I find the training loss does not decrease on CUB-200.
Could you share the training details, such as the learning rates for the task network and the saliency network, the pretraining dataset, the number of epochs, and so on?
Hello, I want to train on CUB-200-2011 using this model. Are the hyperparameters the same as in the ImageNet main.py?
As far as I can tell, this could all be reduced to:
add_pretraining_blur = epoch <= N_pretraining
...
output, image_output, hm = model(input_var, add_pretraining_blur)
...
if add_pretraining_blur:
We found it helpful to blur the resampled input image of the task
network for some epochs at the beginning of the training procedure. It forces
the saliency sampler to zoom deeper into the image in order to further magnify
small details otherwise destroyed by the consequent blur. This is beneficial even
for the final performance of the model with the blur removed.
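The pretraining-blur idea above can be sketched in plain NumPy. This is an illustrative stand-in, not the repo's actual implementation; `sigma` and `n_pretraining` are assumed hyperparameters:

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    """1-D Gaussian kernel, normalized to sum to 1."""
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def blur_image(img, sigma):
    """Separable Gaussian blur of a 2-D image (edge-padded)."""
    radius = int(3 * sigma)
    k = gaussian_kernel1d(sigma, radius)
    padded = np.pad(img, radius, mode="edge")
    # Blur along rows, then along columns.
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)

def maybe_blur(resampled, epoch, n_pretraining, sigma=2.0):
    """Blur the task-network input only during the first epochs,
    forcing the sampler to magnify details that the blur would destroy."""
    return blur_image(resampled, sigma) if epoch <= n_pretraining else resampled
```

After `n_pretraining` epochs the function passes the resampled image through untouched, matching the "blur removed" phase described above.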
Lines 193 to 196 and line 199 in 0557add
Saliency-Sampler/saliency_sampler.py, lines 117 to 120 in 0557add
Hi, I have a short question.
The paper says you use a Gaussian kernel with sigma set to one third of the width of the saliency map. Does the saliency map have the same size as the grid size defined in the saliency sampler class, or should I check the output size of the saliency network to set the sigma value?
Thank you in advance!
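For reference, the rule quoted from the paper can be written down directly. The sketch below assumes a square saliency map of side `map_size`; it is my reading of the paper's description, not the repo's code:

```python
import numpy as np

def saliency_gaussian(map_size):
    """2-D Gaussian filter whose sigma is one third of the
    saliency-map width (assumed square), normalized to sum to 1."""
    sigma = map_size / 3.0
    ax = np.arange(map_size, dtype=np.float64) - (map_size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return g / g.sum()
```

Under this reading, sigma is tied to the saliency map's own width, so if the map matches the sampler's grid size the two answers coincide; otherwise the saliency network's output size is what matters.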
Hi,
This looks very interesting. A very logical approach. I am hoping to use it with high resolution medical images. Since the code is 4 years old, I was wondering if you (or other people) have come up with improvements to this approach. Would you still recommend this or something else?
Thanks so much.
Hi again – I've been trying to grok the math behind create_grid, and am fairly confused. I've fiddled around with padding_size, grid_size, the kernel_size formula, and fwhm in various combinations, but nothing seems to result in deterministic warping.
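One relationship that may help when fiddling with `fwhm`: for a Gaussian, full width at half maximum and sigma differ only by a fixed constant, so the two parameterizations are interchangeable:

```python
import math

def fwhm_to_sigma(fwhm):
    """Convert full width at half maximum to Gaussian sigma:
    sigma = fwhm / (2 * sqrt(2 * ln 2)) ~= fwhm / 2.3548."""
    return fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))
```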
I'm doing some simple tests on 128x128 transformed MNIST data, where I'm attempting to utilize the saliency map of a pretrained network. Below are the input, the warped output, and then the saliency map:
which I can get to warp with specific seeds after 10 or so iterations:
Essentially, my goal is to isolate some function with the signature:
warp_sample(hires_source, lowres_heatmap, warp_factor) -> lowres_warped
that spreads resolution toward the most salient regions. I understand that was also one of the paper's goals.
Anyhow – any advice or comments on how to interpret the math would be greatly appreciated.
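A rough, self-contained sketch of such a `warp_sample` in NumPy: it is a hypothetical reimplementation of the paper's Gaussian-attraction sampling (coordinates pulled toward salient pixels by a saliency-weighted kernel average), not the repo's `create_grid`, and for brevity it uses nearest-neighbor sampling and ignores `warp_factor` and border padding:

```python
import numpy as np

def warp_sample(hires, saliency, sigma):
    """Warp a (H, W) source image into an (h, w) output whose sampling
    density follows the (h, w) saliency map. Each output pixel's source
    coordinate is a Gaussian- and saliency-weighted average of grid
    coordinates, so salient regions attract more samples."""
    h, w = saliency.shape
    H, W = hires.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    # Pairwise Gaussian distance kernel between all grid positions.
    dy = ys.reshape(-1, 1) - ys.reshape(1, -1)
    dx = xs.reshape(-1, 1) - xs.reshape(1, -1)
    k = np.exp(-(dx**2 + dy**2) / (2.0 * sigma**2))
    wgt = k * saliency.reshape(1, -1)          # weight neighbors by saliency
    denom = wgt.sum(axis=1)
    u = (wgt @ xs.reshape(-1)) / denom         # warped x-coordinates
    v = (wgt @ ys.reshape(-1)) / denom         # warped y-coordinates
    # Scale low-res grid coordinates into the high-res image and sample.
    ui = np.clip(np.rint(u / (w - 1) * (W - 1)), 0, W - 1).astype(int)
    vi = np.clip(np.rint(v / (h - 1) * (H - 1)), 0, H - 1).astype(int)
    return hires[vi, ui].reshape(h, w)
```

The pairwise kernel is O((h*w)^2), so this only works for small grids; the paper's formulation computes the same weighted averages with convolutions instead.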