
saliency-sampler's Issues

What are the training details for CUB-200?

I find that the training loss does not decrease on CUB-200.
Could you share the training details, such as the learning rates for taskNet and saliencyNet, the pretraining dataset, the number of epochs, and so on?

Is p just there to control the pretraining blur?

As far as I can tell, this could all be written as:

 add_pretraining_blur = epoch <= N_pretraining
...
 output, image_output, hm = model(input_var, add_pretraining_blur)
...
 if add_pretraining_blur:

We found it helpful to blur the resampled input image of the task network for some epochs at the beginning of the training procedure. This forces the saliency sampler to zoom deeper into the image in order to further magnify small details that would otherwise be destroyed by the subsequent blur. This is beneficial even for the final performance of the model with the blur removed.

Saliency-Sampler/main.py

Lines 193 to 196 in 0557add

if epoch > N_pretraining:
    p = 1
else:
    p = 0

output, image_output, hm = model(input_var, p)

Inside the model, p gates a random blur of the resampled image:

if random.random() > p:
    s = random.randint(64, 224)
    x_sampled = nn.AdaptiveAvgPool2d((s, s))(x_sampled)
    x_sampled = nn.Upsample(size=(self.input_size_net, self.input_size_net), mode='bilinear')(x_sampled)
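Put together, the blur step can be sketched as a standalone helper. This is my own illustration, not code from the repo: `maybe_blur` and `input_size_net` are names I chose, and the shapes are just an example.

```python
import random

import torch
import torch.nn as nn

def maybe_blur(x_sampled, p, input_size_net=224):
    """With probability (1 - p), downsample to a random resolution and
    upsample back, i.e. blur. p = 0 (pretraining) applies the blur on
    virtually every step; p = 1 disables it."""
    if random.random() > p:
        s = random.randint(64, 224)
        x_sampled = nn.AdaptiveAvgPool2d((s, s))(x_sampled)
        x_sampled = nn.Upsample(size=(input_size_net, input_size_net),
                                mode='bilinear', align_corners=False)(x_sampled)
    return x_sampled

x = torch.randn(2, 3, 224, 224)
blurred = maybe_blur(x, p=0)    # pretraining phase: blur applied
print(blurred.shape)            # torch.Size([2, 3, 224, 224])
```

So yes, as far as this reading goes, p only toggles the pretraining blur: a single boolean as proposed above would express the same thing.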

sigma for the Gaussian Kernel

Hi, I have a short question.
The paper says you use a Gaussian kernel with sigma set to one third of the width of the saliency map. Does the saliency map have the same size as the grid size defined in the saliency sampler class, or should I check the output size of the saliency network to set the sigma value?

Thank you in advance!
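For reference, the kernel the paper describes can be built as below; which map width to pass in is exactly the open question here, so it is left as a parameter. `gaussian_kernel` is an illustrative name, not a function from the repo.

```python
import numpy as np

def gaussian_kernel(width, sigma=None):
    """2-D Gaussian kernel normalized to sum to 1. By default sigma is
    one third of `width`, matching the paper's description."""
    if sigma is None:
        sigma = width / 3.0
    ax = np.arange(width) - (width - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

k = gaussian_kernel(31)    # e.g. a 31x31 saliency map -> sigma ~ 10.3
print(k.shape)             # (31, 31)
```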

Is this current?

Hi,

This looks very interesting. A very logical approach. I am hoping to use it with high resolution medical images. Since the code is 4 years old, I was wondering if you (or other people) have come up with improvements to this approach. Would you still recommend this or something else?

Thanks so much.

Help understanding create_grid

Hi again – I've been trying to grok the math behind create_grid, and am fairly confused. I've fiddled with padding_size, grid_size, the kernel_size formula, and fwhm in various combinations, but nothing seems to result in deterministic warping.

I'm doing some simple tests on 128x128 transformed MNIST data, where I'm attempting to use the saliency map of a pretrained network. Below are the input, the warped output, and the saliency map:

[images: input, warped output, saliency map]

which I can get to warp with specific seeds after 10 or so iterations:

Essentially, my goal is to isolate some function with the signature:

 warp_sample(hires_source, lowres_heatmap, warp_factor) -> lowres_warped

that spreads resolution toward the most salient regions. I understand that was also one of the paper's goals.

Anyhow – any advice or comments on how to interpret the math would be greatly appreciated.
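Not the author, but here is a from-scratch sketch of my reading of the paper's grid equation, with the signature asked for above: each grid entry is a saliency-weighted mean of nearby source coordinates, u(x) = Σ S(x') k(x, x') x' / Σ S(x') k(x, x'), computed via Gaussian convolution. It is deterministic by construction. This is not the repo's create_grid, and all names (including the warp_factor exponent) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def warp_sample(hires_source, lowres_heatmap, warp_factor=1.0):
    """Sketch of saliency-based resampling: grid = conv(S * coords) / conv(S),
    with a Gaussian kernel whose sigma is one third of the heatmap width,
    as the paper describes. warp_factor sharpens the saliency's pull."""
    B, _, H, W = lowres_heatmap.shape
    S = lowres_heatmap.clamp(min=1e-8) ** warp_factor
    # normalized coordinates of the heatmap grid, in [-1, 1]
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, H),
                            torch.linspace(-1, 1, W), indexing='ij')
    coords = torch.stack([xs, ys]).expand(B, 2, H, W)  # (x, y) channel order
    # Gaussian kernel, sigma = W / 3
    sigma = W / 3.0
    ksize = 2 * int(2 * sigma) + 1
    ax = (torch.arange(ksize) - ksize // 2).float()
    g = torch.exp(-ax ** 2 / (2 * sigma ** 2))
    g2 = torch.outer(g, g)
    g2 = (g2 / g2.sum()).view(1, 1, ksize, ksize)
    pad = ksize // 2
    num = F.conv2d((S * coords).reshape(B * 2, 1, H, W), g2, padding=pad)
    den = F.conv2d(S, g2, padding=pad)
    grid = (num.view(B, 2, H, W) / den).permute(0, 2, 3, 1)  # (B, H, W, 2)
    return F.grid_sample(hires_source, grid, align_corners=True)

src = torch.randn(1, 1, 128, 128)
heat = torch.rand(1, 1, 32, 32)
out = warp_sample(src, heat)
print(out.shape)  # torch.Size([1, 1, 32, 32])
```

Since every grid entry is a convex combination of coordinates in [-1, 1], the grid is always valid for grid_sample, and identical inputs always give identical warps.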
