
Comments (5)

mpolson64 commented on April 27, 2024

@CCranney this is possible using something we call a "HierarchicalSearchSpace" -- we developed this with situations exactly like yours in mind, though we haven't written a tutorial showing off this functionality quite yet. Assuming you're using AxClient, you would set up your optimization as follows:

# Imports assumed from the Ax Service API (recent Ax versions)
from ax.service.ax_client import AxClient, ObjectiveProperties

ax_client = AxClient()
ax_client.create_experiment(
    name="nas_example",
    parameters=[
        {
            "name": "num_layers",
            "type": "choice",
            "values": [1, 2, 3],
            "is_ordered": True,
            "dependents": {
                1: ["num_neurons_1_1"],
                2: ["num_neurons_2_1", "num_neurons_2_2"],
                3: ["num_neurons_3_1", "num_neurons_3_2", "num_neurons_3_3"],
            },
        },
        {
            "name": "num_neurons_1_1",
            "type": "range",
            "bounds": [1, 8],
        },
        {
            "name": "num_neurons_2_1",
            "type": "range",
            "bounds": [1, 8],
        },
        {
            "name": "num_neurons_2_2",
            "type": "range",
            "bounds": [1, 8],
        },
        {
            "name": "num_neurons_3_1",
            "type": "range",
            "bounds": [1, 8],
        },
        {
            "name": "num_neurons_3_2",
            "type": "range",
            "bounds": [1, 8],
        },
        {
            "name": "num_neurons_3_3",
            "type": "range",
            "bounds": [1, 8],
        },
    ],
    objectives={"loss": ObjectiveProperties(minimize=True)},
)

Notice how there is an extra option "dependents" on our choice parameter that maps each value to a list of parameters -- this tells Ax to only generate values for those parameters if that value is chosen. Calling ax_client.get_next_trial() will yield results like {'num_layers': 2, 'num_neurons_2_1': 3, 'num_neurons_2_2': 5} and {'num_layers': 1, 'num_neurons_1_1': 7}.
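
As an illustration, here is one way you might unpack such a parameterization into a list of layer widths before building the network -- the unpacking code is just a sketch based on the parameter names above, not something Ax provides:

parameters, trial_index = ax_client.get_next_trial()

# Only the num_neurons_* keys for the chosen branch are present,
# so read num_layers first and then collect that branch's widths.
num_layers = parameters["num_layers"]
layer_widths = [
    parameters[f"num_neurons_{num_layers}_{i}"] for i in range(1, num_layers + 1)
]
# e.g. num_layers == 2 -> [parameters["num_neurons_2_1"], parameters["num_neurons_2_2"]]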

Tree-shaped search spaces like this have been an active area of research for our team, and I'm excited about how we can take advantage of this structure to optimize more efficiently. Currently, by default, we actually just flatten the search space under the hood and use our SAAS model (this works shockingly well even with the "dead" parameters!), but as our research develops we will update Ax to always use SOTA methodology, and our model selection heuristics will opt users into the improved methodology automatically.

I hope this was helpful and don't hesitate to reopen this task if you have any follow-up questions!


CCranney commented on April 27, 2024

As a correction, you would not set the number of neurons in layer 1 to 0, but rather to the length of the previous layer.


Runyu-Zhang commented on April 27, 2024


Immensely helpful! Testing it now.


CCranney commented on April 27, 2024

Thank you for your comments! I'm going to try to implement this using the ChoiceParameter class as used in the tutorial I referenced above, which I see also has a dependents option. I'm pretty new to Ax, so I'm not familiar with how to use ax_client in code.

Can I ask what the difference is between the ax_client.create_experiment function and the ax.core.Experiment class? It looks like they serve similar purposes, but I'm not seeing the distinction. Is there a potential problem with using the ChoiceParameter class instead of what you described that I should be aware of?


mpolson64 commented on April 27, 2024

@CCranney There is no issue using ChoiceParameter directly -- go ahead and do so if you would prefer.
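
For reference, here is a rough sketch of the same search space built with the core classes -- this assumes a recent Ax version, so double-check the argument names against the docs for your release:

from ax.core.parameter import ChoiceParameter, ParameterType, RangeParameter
from ax.core.search_space import HierarchicalSearchSpace

# Root parameter: each value points to the subtree of parameters it activates.
num_layers = ChoiceParameter(
    name="num_layers",
    parameter_type=ParameterType.INT,
    values=[1, 2, 3],
    is_ordered=True,
    dependents={
        1: ["num_neurons_1_1"],
        2: ["num_neurons_2_1", "num_neurons_2_2"],
        3: ["num_neurons_3_1", "num_neurons_3_2", "num_neurons_3_3"],
    },
)

# Leaf parameters: one width per possible layer slot.
neuron_params = [
    RangeParameter(
        name=name,
        parameter_type=ParameterType.INT,
        lower=1,
        upper=8,
    )
    for name in [
        "num_neurons_1_1",
        "num_neurons_2_1", "num_neurons_2_2",
        "num_neurons_3_1", "num_neurons_3_2", "num_neurons_3_3",
    ]
]

search_space = HierarchicalSearchSpace(parameters=[num_layers, *neuron_params])

You would then attach this search space to an ax.core.Experiment and drive the optimization with a GenerationStrategy yourself -- which is exactly the bookkeeping the Service API handles for you.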

AxClient and its create_experiment method come from our "Service API", which is an ask-tell interface for using Ax. In this setup we:

  1. Initialize an AxClient and configure our experiment with ax_client.create_experiment
  2. Call ax_client.get_next_trial to generate candidate parameterizations
  3. Evaluate the parameterization however we want outside of Ax (in your case train and eval the NN)
  4. Call ax_client.complete_trial to save data to the experiment
  5. Repeat 2-4
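
Putting steps 2 through 4 together, the loop might look roughly like this (train_and_eval is a stand-in for your own training/evaluation code, not an Ax function):

for _ in range(20):  # however many trials you want to run
    parameters, trial_index = ax_client.get_next_trial()
    # Your code: build the network from `parameters`, train it, return the loss.
    loss = train_and_eval(parameters)
    ax_client.complete_trial(trial_index=trial_index, raw_data={"loss": loss})

best_parameters, values = ax_client.get_best_parameters()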

In general we recommend most users use Ax through this API rather than dealing with the Experiment and GenerationStrategy directly, because it can be quite a bit simpler, but should someone want or need to use the ax.core abstractions directly, they should feel free to do so.

