Comments (5)
@CCranney this is possible using something we call a "HierarchicalSearchSpace" -- we developed this with situations exactly like yours in mind, though we haven't written a tutorial showing off this functionality quite yet. Assuming you're using AxClient, you would set up your optimization as follows:
ax_client.create_experiment(
    name="nas_example",
    parameters=[
        {
            "name": "num_layers",
            "type": "choice",
            "values": [1, 2, 3],
            "is_ordered": True,
            "dependents": {
                1: ["num_neurons_1_1"],
                2: ["num_neurons_2_1", "num_neurons_2_2"],
                3: ["num_neurons_3_1", "num_neurons_3_2", "num_neurons_3_3"],
            },
        },
        {
            "name": "num_neurons_1_1",
            "type": "range",
            "bounds": [1, 8],
        },
        {
            "name": "num_neurons_2_1",
            "type": "range",
            "bounds": [1, 8],
        },
        {
            "name": "num_neurons_2_2",
            "type": "range",
            "bounds": [1, 8],
        },
        {
            "name": "num_neurons_3_1",
            "type": "range",
            "bounds": [1, 8],
        },
        {
            "name": "num_neurons_3_2",
            "type": "range",
            "bounds": [1, 8],
        },
        {
            "name": "num_neurons_3_3",
            "type": "range",
            "bounds": [1, 8],
        },
    ],
    objectives={"loss": ObjectiveProperties(minimize=True)},
)
Notice the extra option "dependents" on our choice parameter, which maps each value to a list of parameters -- this tells Ax to only generate values for those parameters when that value is chosen. Calling ax_client.get_next_trial() will yield results like {'num_layers': 2, 'num_neurons_2_1': 3, 'num_neurons_2_2': 5} and {'num_layers': 1, 'num_neurons_1_1': 7}.
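To make those parameterizations concrete, here is a minimal sketch (a hypothetical helper, not part of Ax) showing how one of the returned dicts could be turned into the list of hidden-layer widths for building the network:

```python
# Hypothetical helper (not part of Ax): turn one trial's parameterization
# into the list of hidden-layer widths for building the network.
def hidden_layer_widths(params: dict) -> list:
    n = params["num_layers"]
    # num_neurons_{n}_{i} follows the naming scheme from the search space above.
    return [params[f"num_neurons_{n}_{i}"] for i in range(1, n + 1)]

print(hidden_layer_widths(
    {"num_layers": 2, "num_neurons_2_1": 3, "num_neurons_2_2": 5}
))  # [3, 5]
print(hidden_layer_widths(
    {"num_layers": 1, "num_neurons_1_1": 7}
))  # [7]
```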
Tree-shaped search spaces like this have been an active area of research for our team, and I'm excited about how we can take advantage of this structure to optimize more efficiently. Currently, by default we actually just flatten the search space under the hood and use our SAAS model (this works shockingly well even with the "dead" parameters!), but as our research develops we will update Ax to always use SOTA methodology, and our model selection heuristics will opt users into the improved methodology.
I hope this was helpful and don't hesitate to reopen this task if you have any follow-up questions!
As a correction, you would not set the neurons in layer 1 to 0, but rather to the length of the previous layer.
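In other words, each layer's input size should equal the previous layer's width, with the first layer taking the data dimension as input. A minimal sketch (hypothetical helper; input_dim is an assumed example value, not something from Ax):

```python
# Hypothetical sketch: per the correction above, each layer's input size is
# the previous layer's width; the first layer's input size is the data
# dimension (input_dim, assumed here).
def layer_dims(input_dim: int, widths: list) -> list:
    dims = []
    prev = input_dim
    for w in widths:
        dims.append((prev, w))  # (in_features, out_features) for this layer
        prev = w
    return dims

print(layer_dims(4, [3, 5]))  # [(4, 3), (3, 5)]
```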
immensely helpful! Testing it now
Thank you for your comments! I'm going to try to implement this using the ChoiceParameter class as used in the tutorial I referenced above, which I see also has a dependents option. I'm pretty new to Ax, so am not familiar with how to use ax_client in code.
Can I ask what the difference is between the ax_client.create_experiment function and the ax.core.Experiment class? It looks like they serve similar functions, but I'm not seeing the distinction. Is there a potential problem with using the ChoiceParameter class instead of what you described that I should be aware of?
@CCranney There is no issue using ChoiceParameter directly -- go ahead and do so if you would prefer.
AxClient and its create_experiment method come from our "Service API" which is an ask-tell interface for using Ax. In this setup we:
- Initialize an AxClient and configure our experiment with ax_client.create_experiment
- Call ax_client.get_next_trial to generate candidate parameterizations
- Evaluate the parameterization however we want outside of Ax (in your case train and eval the NN)
- Call ax_client.complete_trial to save data to the experiment
- Repeat 2-4
In general we recommend most users use Ax through this API rather than dealing with the Experiment and GenerationStrategy directly, because it can be quite a bit simpler; but should someone want or need to use the ax.core abstractions directly, they should feel free to do so.
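The ask-tell flow in steps 1-5 can be sketched end to end. Here a toy random-search stand-in plays the role of AxClient so the example runs without Ax installed; in real code the calls would be ax_client.get_next_trial() and ax_client.complete_trial(...), and the evaluation step would train and evaluate the network:

```python
import random

# Toy stand-in for AxClient, only to illustrate the ask-tell loop shape.
# With Ax you would use AxClient and its get_next_trial/complete_trial.
class ToyClient:
    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.trials = {}

    def get_next_trial(self):
        # Step 2: ask for a candidate parameterization.
        index = len(self.trials)
        params = {"x": self.rng.uniform(-5.0, 5.0)}
        self.trials[index] = {"params": params, "loss": None}
        return params, index

    def complete_trial(self, trial_index, raw_data):
        # Step 4: tell the client the observed result.
        self.trials[trial_index]["loss"] = raw_data["loss"]

client = ToyClient()
for _ in range(10):  # Step 5: repeat steps 2-4.
    params, idx = client.get_next_trial()
    # Step 3: evaluate outside the client (here a toy quadratic loss;
    # in your case, train and evaluate the NN).
    loss = (params["x"] - 1.0) ** 2
    client.complete_trial(idx, {"loss": loss})

best = min(client.trials.values(), key=lambda t: t["loss"])
print(len(client.trials))  # 10
```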