
Comments (7)

mseeger commented on June 6, 2024

Hello Andreas, thanks for your interest.

In the context of TrialsOfExperimentResults, a setup maps to an experiment, whose trial learning curves are shown in one of the subplots. In the example, we have setups ASHA and HYPERTUNE-INDEP, which are two different HPO methods. We get two sub-plots, and can compare how they differ.

You can also plot these results for a single experiment (one method, one seed). In the example above, to plot results for ASHA only, just use SETUPS_TO_COMPARE = ("ASHA",).

from syne-tune.

geoalgo commented on June 6, 2024

We just merged a tool to do those plots directly from ExperimentResult. If you can pull from mainline, you should be able to do:

...
tuner.run()

from syne_tune.experiments import load_experiment
exp = load_experiment(tuner.name)
exp.plot_trials_over_time()

If you can't pull from mainline and want to use the latest version from pip, you can use the snippet I gave above.

Hope this helps; feel free to reopen if not :-)


mseeger commented on June 6, 2024

Can you attach the launcher script you were using?


amueller commented on June 6, 2024

Thank you for the quick reply, Matthias! My launcher script is something like

import logging

from syne_tune import Tuner, StoppingCriterion
from syne_tune.backend import LocalBackend
from syne_tune.config_space import randint, loguniform, uniform, lograndint, choice
from syne_tune.optimizer.baselines import ASHA, MOBSTER
root = logging.getLogger()
root.setLevel(logging.DEBUG)

# hyperparameter search space to consider
config_space = {
    'learning-rate': loguniform(1e-5, 1e-1),
    'n-layers': randint(1, 10),
    'hidden-size': lograndint(4, 2048),
    'dropout-rate': uniform(0, 1),
    'weight_decay': loguniform(1e-7, 1e-1),
    'onehot': choice([True, False]),
    'epochs': 10000,
}

tuner = Tuner(
    trial_backend=LocalBackend(entry_point='train_parity.py'),
    scheduler=MOBSTER(
        config_space,
        metric='accuracy',
        resource_attr='epoch',
        max_resource_attr="epochs",
        search_options={'debug_log': False},
        mode='max',
    ),
    results_update_interval=5,
    stop_criterion=StoppingCriterion(max_wallclock_time=30 * 60),
    n_workers=1,  # how many trials are evaluated in parallel
    tuner_name="parity-test"
)
tuner.run()
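
For context, a hypothetical sketch of what the entry point train_parity.py could look like. Syne Tune passes each config_space entry to the script as a command-line argument and expects the script to report the metric after every epoch; the training loop below is a stand-in, and the Reporter fallback is only there so the sketch runs without syne_tune installed:

```python
# Hypothetical sketch of the entry point launched by LocalBackend.
import argparse

try:
    from syne_tune import Reporter  # real reporting callback
except ImportError:
    # Stub so this sketch also runs without syne_tune installed:
    class Reporter:
        def __call__(self, **kwargs):
            print(kwargs)

def parse_args(argv=None):
    parser = argparse.ArgumentParser()
    # One argument per key in the launcher's config_space
    parser.add_argument('--learning-rate', type=float, required=True)
    parser.add_argument('--n-layers', type=int, required=True)
    parser.add_argument('--hidden-size', type=int, required=True)
    parser.add_argument('--dropout-rate', type=float, required=True)
    parser.add_argument('--weight_decay', type=float, required=True)
    # choice([True, False]) arrives on the command line as a string
    parser.add_argument('--onehot', type=lambda s: s == 'True', required=True)
    parser.add_argument('--epochs', type=int, required=True)
    return parser.parse_args(argv)

# Demo invocation with explicit argv, mimicking what the backend would pass:
args = parse_args([
    '--learning-rate', '0.001', '--n-layers', '3', '--hidden-size', '128',
    '--dropout-rate', '0.1', '--weight_decay', '1e-6', '--onehot', 'True',
    '--epochs', '3',
])
report = Reporter()
for epoch in range(1, args.epochs + 1):
    accuracy = 0.5 + 0.1 * epoch  # placeholder for real training/validation
    # 'epoch' and 'accuracy' match resource_attr / metric in the scheduler
    report(epoch=epoch, accuracy=accuracy)
```
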

I think I'm confused about where the names for the setups come from. Are they the names of the scheduler classes?


amueller commented on June 6, 2024

Any suggestions?


geoalgo commented on June 6, 2024

Hi Andreas,

Sorry for the delay, we just realized that Matthias was out of the office; let me try to help you with this.

This code assumes that experiments have been scheduled with python benchmarking/examples/benchmark_hypertune/launch_remote.py --experiment_tag docs-1 --random_seed 2965402734 --num_seeds 15. The names "ASHA" and "HYPERTUNE-INDEP" come from this file, https://syne-tune.readthedocs.io/en/latest/benchmarking/benchmark_hypertune.html, which defines the experiments to run.

From your example, it seems you just want to plot the trials after an experiment. You can use the following code, for instance (I will add a method to ExperimentResults to make this easier):

from syne_tune.experiments import load_experiment
import matplotlib.pyplot as plt
from syne_tune.constants import ST_TUNER_TIME

# Replace with your experiment name here.
# In your case, it will start with "parity-test" and be suffixed with a time-stamp for uniqueness.
expname = "funky-earthworm-ASHA-0-nas201-cifar10-2023-10-10-12-38-03-865"

exp = load_experiment(expname)

fig, ax = plt.subplots(1, 1, figsize=(12, 4))
df = exp.results
metric = exp.metric_names()[0]
for trial_id in sorted(df.trial_id.unique()):
    df_trial = df[df.trial_id == trial_id]
    df_trial.plot(x=ST_TUNER_TIME, y=metric, marker=".", ax=ax, legend=None, alpha=0.5)
df_stop = df[df['st_decision'] == "STOP"]
plt.scatter(df_stop[ST_TUNER_TIME], df_stop[metric], marker="x", color="red")
plt.xlabel("Wallclock time (s)")
plt.ylabel(metric)
plt.title("Trial value over time")

[Screenshot, 2023-10-10: resulting plot of trial metric values over wallclock time]
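
The snippet above plots each trial's learning curve. A related quantity you may also want is the incumbent, i.e. the best metric seen so far over wallclock time (running maximum, since mode='max'). A self-contained sketch with a synthetic stand-in for exp.results (the real time column is the ST_TUNER_TIME constant, 'st_tuner_time'):

```python
import pandas as pd

# Synthetic stand-in for exp.results; column names mirror the snippet above.
df = pd.DataFrame({
    'trial_id': [0, 0, 1, 1, 2],
    'st_tuner_time': [1.0, 2.0, 3.0, 4.0, 5.0],
    'accuracy': [0.60, 0.70, 0.65, 0.80, 0.75],
})

# Sort by wallclock time, then take the running maximum of the metric.
incumbent = df.sort_values('st_tuner_time')['accuracy'].cummax()
print(incumbent.tolist())  # [0.6, 0.7, 0.7, 0.8, 0.8]
```

Plotting incumbent against df['st_tuner_time'] then gives the usual best-so-far curve alongside the per-trial plot.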


amueller commented on June 6, 2024

Perfect, thank you!

