
Comments (10)

cheeseheist commented on May 2, 2024

Was just about to ask a separate question, but I think it will fit in this discussion. I'm running into cases where say I want three parallel trials. Often, the generator provides three nearly identical recommendations. As @trevor-haas described, it will be negligibly different in a real experiment with noise in the inputs. Any best practices to get the BO to provide more uniqueness between subsequent parallel recommendations, such as the step-size approach mentioned? In my case, I'm just doing single objective optimization.

Thanks for letting me tag along to your question! If it doesn't seem related, feel free to ignore me and I can make a separate thread!


Balandat commented on May 2, 2024

@cheeseheist Ah, I see: when you are generating points in sequence, you are not properly accounting for these "pending points". Can you share some code showing how exactly you're using the API?

Note that in the Dev API, when you call ModelBridge.gen() it will just return a GeneratorRun with a parameter suggestion, and you are responsible for tracking those suggestions if you do want to make them candidate arms for a trial. The easiest way to do this is to call experiment.new_trial() and pass in the GeneratorRun. Then, in the next call to ModelBridge.gen(), Ax will know to account for these parameters as "pending". The reason this is done in the Dev API is that it gives flexibility to generate different suggestions with different settings (e.g. modified search spaces or optimization configs), from which the user can choose which ones to evaluate (rather than pre-committing to making everything into a trial).

It's also possible to manually pass in pending observations into the ModelBridge.gen() call here: https://www.internalfb.com/code/fbsource/[3e4a0e5bf97b59b364ac7e098fd85def9205a5ac]/fbcode/ax/modelbridge/base.py?lines=741, though if you do want to add the previously generated suggestions as trials you should do that instead.

Note that when using the AxClient ask/tell interface, we will automatically track previously generated suggestions under the hood as trials after calling AxClient.get_next_trial(), so you don't have to worry about it there.
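For illustration, here is a minimal sketch of that ask/tell flow with AxClient; the experiment definition and the reported outcomes are hypothetical, and exact imports may vary slightly between Ax versions.

from ax.service.ax_client import AxClient
from ax.service.utils.instantiation import ObjectiveProperties

ax_client = AxClient()
ax_client.create_experiment(
    name="demo_experiment",  # hypothetical single-objective setup
    parameters=[
        {"name": "x1", "type": "range", "bounds": [0.0, 1.0]},
        {"name": "x2", "type": "range", "bounds": [0.0, 1.0]},
    ],
    objectives={"y": ObjectiveProperties(minimize=False)},
)

# Two back-to-back asks: the first suggestion is attached as a trial and
# tracked as pending, so the second call accounts for it automatically.
params_1, idx_1 = ax_client.get_next_trial()
params_2, idx_2 = ax_client.get_next_trial()

# Later, report the observed outcomes (mean, SEM) for each trial.
ax_client.complete_trial(trial_index=idx_1, raw_data={"y": (0.7, 0.05)})
ax_client.complete_trial(trial_index=idx_2, raw_data={"y": (0.9, 0.05)})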


Balandat commented on May 2, 2024

Actually, I need to correct myself: if you use the bare ModelBridge, creating and attaching the trial to the experiment is not enough. You have to manually pass in the pending_observations.

If you use AxClient this is done for you (code pointer), but this is not the case with the barebones Dev API.

Internally, this uses get_pending_observation_features to extract that input, so if you do want to use the Dev API you could call that yourself and manually pass in the pending_observations. Of course, if you call gen(n=n) with n > 1, then that will also properly account for the fact that a batch of candidates is being generated.

I am doing generally what you suggested, but perhaps the issue is that I'm marking them complete immediately?

Given the above, marking the trial as COMPLETED is not the issue here. You can look into the guts of get_pending_observation_features to see that it returns, as pending, the features of trials that are completed but do not have any data attached.
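For reference, a minimal sketch of how the Dev API flow might look when pending observations are passed manually; it assumes an existing Experiment exp and a fitted model bridge gpei (as in the snippet further down), and the exact import path of get_pending_observation_features may vary between Ax versions.

from ax.modelbridge.modelbridge_utils import get_pending_observation_features

for _ in range(3):
    # Features of trials that have been generated but have no data attached yet.
    pending = get_pending_observation_features(experiment=exp)
    # Passing them to gen() lets the acquisition step diversify away from them.
    generator_run = gpei.gen(n=1, pending_observations=pending)
    exp.new_trial(generator_run=generator_run)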


Balandat commented on May 2, 2024

@trevor-haas thanks for the accolades, happy to hear that you're enjoying Ax. And thanks for contributing to making it better by engaging with the community.

As you said, in general you won't help the optimization much by artificially discretizing a continuous parameter, but you may shoot yourself in the foot if the optimal setting happens to fall between two of the pre-specified values (this risk can be mitigated with domain knowledge of how the function behaves, in which case you might simply not get any benefit from discretizing).

Under the hood we will model a discretized floating point parameter in the same way as we model a continuous one (we infer that the values are ordered). The main difference is that the acquisition function optimization will happen on the discrete values. This often turns out to be harder than continuous optimization since we can't use gradient information on a discrete search space, so this is another downside. This is particularly pronounced if there are many steps and many parameters due to the combinatorial explosion of choices.

A valid (and, I would argue, maybe the only valid) reason to "pre-discretize" your search space is if you are indeed restricted in what parameter values you can choose (maybe the flow rate can only be changed in increments of some step size). In this case you'll want to let the optimization know that, but it doesn't make the problem any easier.

That said, if you did want to do that, the code you have above seems to achieve that just fine.
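For reference, a hedged sketch of how a pre-discretized parameter could be declared next to a continuous one; ChoiceParameter and RangeParameter are the Ax parameter classes, the names and values are purely illustrative, and the exact top-level imports may vary between versions.

from ax import ChoiceParameter, ParameterType, RangeParameter

# The machine only accepts flow rates in 0.1 increments: an ordered choice
# tells the optimization about the restriction (it does not make it easier).
flow_rate = ChoiceParameter(
    name="flow_rate",
    parameter_type=ParameterType.FLOAT,
    values=[100.0, 100.1, 100.2, 100.3],  # illustrative step-size grid
    is_ordered=True,
)

# A setting that can truly vary continuously stays a RangeParameter.
temperature = RangeParameter(
    name="temperature",
    parameter_type=ParameterType.FLOAT,
    lower=20.0,
    upper=80.0,
)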


Balandat commented on May 2, 2024

Often, the generator provides three nearly identical recommendations. As @trevor-haas described, it will be negligibly different in a real experiment with noise in the inputs. Any best practices to get the BO to provide more uniqueness between subsequent parallel recommendations, such as the step-size approach mentioned? In my case, I'm just doing single objective optimization.

@cheeseheist, I'd like to understand whether this is in fact a problem - if the outcomes are noisy and the model is relatively confident in what the region with the optimal parameter settings is, then evaluating multiple configurations close together may be the appropriate strategy. Note that the underlying GP surrogate model will be able to use multiple noisy observations in the same area to better infer the noise level (if not provided) and better estimate the latent function.


cheeseheist commented on May 2, 2024

Often, the generator provides three nearly identical recommendations. As @trevor-haas described, it will be negligibly different in a real experiment with noise in the inputs. Any best practices to get the BO to provide more uniqueness between subsequent parallel recommendations, such as the step-size approach mentioned? In my case, I'm just doing single objective optimization.

@cheeseheist, I'd like to understand whether this is in fact a problem - if the outcomes are noisy and the model is relatively confident in what the region with the optimal parameter settings is, then evaluating multiple configurations close together may be the appropriate strategy. Note that the underlying GP surrogate model will be able to use multiple noisy observations in the same area to better infer the noise level (if not provided) and better estimate the latent function.

Thanks for the response @Balandat. I have found a workaround, and this issue may have stemmed from my lack of understanding of how multiple subsequent generator calls work. Previously, I was doing multiple subsequent model.gen(n=1) calls to get multiple suggestions, as opposed to a single model.gen(n=3) call. The reason I was avoiding the second case was that I read in the documentation that batch trials should only be used if they will be evaluated simultaneously, which they aren't really in my case. When I do a single model.gen(n=3) call, the recommendations are quite different and unique (which is what I'm looking for). When I do three model.gen(n=1) calls, they aren't that unique, but are slightly different. Perhaps this isn't surprising? I'm not sure how the model tracks multiple subsequent gen calls.

Anyway, my workaround is to do a single model.gen(n=3) call and then pull the parameters out of the resulting arms and make three new individual trials. So, what the model sees in terms of trials are three separate trials rather than one batch trial, but I can use the batch generation to get more unique recommendations. So, I think I'm good to go (unless you have concerns with this approach), but I would still be curious whether consecutive model.gen(n=1) calls should behave better, or whether that just won't work without providing new data and retraining. I guess I assumed it would either have some memory and do effectively the same thing as model.gen(n=3) if I did it three times, or it would give an identical suggestion each time, but neither of those seems to be the case.
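A rough sketch of that workaround, assuming a Dev API Experiment exp and a fitted model bridge model; whether attaching the arms this way (Trial.add_arm) is the cleanest route may depend on the Ax version, so treat this as an illustration rather than a recipe.

# Generate three candidates jointly so they are diversified against each other.
generator_run = model.gen(n=3)

# Split the batch into three ordinary single-arm trials.
for arm in generator_run.arms:
    trial = exp.new_trial()
    trial.add_arm(arm)
    trial.run()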


trevor-haas commented on May 2, 2024

@Balandat, this makes sense. I am only able to input certain values into the machine, i.e. 100%, 100.1%, 100.2%, etc., so a value of 100.15% wouldn't be useful to me. I think due to the noise I can get away with using a continuous search space and just rounding the values on my own later in the pipeline. Thanks for the response.

I also have questions similar to those of @cheeseheist regarding the sample size and doing batches, but only with the Service API. Please let me know if I should put this in another issue, as I know we are straying away from the original question. I know this was also talked about, I think in 2019, and the conclusion was that the Service API doesn't support batch trials, so I've decided to just loop through ax_client_modified.get_next_trial() as many times as I want and manage the batches on my own. I also came across the following, which makes me think the way I am approaching it is my only option.
[screenshot of documentation excerpt]

My problem calls for generating a new trial to run on the machine, and I'd like to include a couple of arms to make the process more efficient (it's expensive and time consuming to set up the machine for each trial). All the trials are processed by the machine one after the other without human intervention. Then, at the end, all the arms are evaluated at the same time.

I'm wondering if it's okay to do the batch generation the way I have described for this application. From my understanding of the docs, this should be done with a batch trial, as all arms will be evaluated at the same time, rather than with multiple one-armed trials. I'm also not sure if this really even matters that much; could it just be a best practice? Or would it substantially reduce the performance of the model?

Thanks for all your help!


Balandat commented on May 2, 2024

@trevor-haas this approach should work fine. For the "evaluate multiple configurations together to avoid overhead" scenario, it is fine to manually group a set of individual trials together and use the standard AxClient API. Where it really matters to use a "batch trial" is when the trial deployment has to happen concurrently because the underlying data-generation process may change over time (e.g. if you run online A/B tests with multiple treatment groups).
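A minimal sketch of that manual grouping with the standard AxClient; run_on_machine is a hypothetical evaluation function, and ax_client is assumed to be set up as usual.

# Ask for a handful of trials up front; AxClient tracks each one as pending.
batch = [ax_client.get_next_trial() for _ in range(3)]

# Run all configurations on the machine in one session, then report results.
for parameters, trial_index in batch:
    result = run_on_machine(parameters)  # hypothetical; returns the objective value
    ax_client.complete_trial(trial_index=trial_index, raw_data=result)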


cheeseheist commented on May 2, 2024

@cheeseheist Ah, I see: when you are generating points in sequence, you are not properly accounting for these "pending points". Can you share some code showing how exactly you're using the API?

Note that in the Dev API, when you call ModelBridge.gen() it will just return a GeneratorRun with a parameter suggestion, and you are responsible for tracking those suggestions if you do want to make them candidate arms for a trial. The easiest way to do this is to call experiment.new_trial() and pass in the GeneratorRun. Then, in the next call to ModelBridge.gen(), Ax will know to account for these parameters as "pending". The reason this is done in the Dev API is that it gives flexibility to generate different suggestions with different settings (e.g. modified search spaces or optimization configs), from which the user can choose which ones to evaluate (rather than pre-committing to making everything into a trial).

It's also possible to manually pass in pending observations into the ModelBridge.gen() call here: https://www.internalfb.com/code/fbsource/[3e4a0e5bf97b59b364ac7e098fd85def9205a5ac]/fbcode/ax/modelbridge/base.py?lines=741, though if you do want to add the previously generated suggestions as trials you should do that instead.

Note that when using the AxClient ask/tell interface, we will automatically track previously generated suggestions under the hood as trials after calling AxClient.get_next_trial(), so you don't have to worry about it there.

Here is a code snippet. I am doing generally what you suggested, but perhaps the issue is that I'm marking them complete immediately? The reason I do this is that my metric basically prompts a user for the output (I'm currently playing around with this for offline optimization of experiments), and I have to mark the trial complete before exp.fetch_data() works to trigger the metric and prompt the user. If there is a better way that I should be attaching the data to the trials and then accessing the trials and marking them complete, let me know. Though honestly, now I'm playing with my old code where I was generating them sequentially and it seems to be doing just fine, so maybe there was a separate bug I inadvertently fixed, or it was all in my head :).

# Generate, deploy, and immediately complete BATCH_SIZE single-arm trials,
# one at a time; gen() is called here without passing pending_observations.
for j in range(BATCH_SIZE):
    generator_run = gpei.gen(n=1)
    trial = exp.new_trial(generator_run=generator_run)
    trial.run()
    trial.mark_completed()
# Fetching data triggers the metric, which prompts the user for the outcome.
exp.fetch_data()


trevor-haas commented on May 2, 2024

Thank you everyone for the help!

