Comments (5)
Can you please show an example of how to specify the visualizer and visualizer_hook?
@dorcus01 This hasn't been implemented yet, so here's a hard-coded way of getting it done. Replace the last 2 lines of run.py with this:
```python
from powerful_benchmarker.factories import TesterFactory
from powerful_benchmarker.api_parsers import BaseAPIParser
from easy_module_attribute_getter import utils as emag_utils
import copy
import logging
import os
import umap
import numpy as np
import matplotlib.pyplot as plt
from cycler import cycler

class TesterFactoryWithUMAP(TesterFactory):
    def _create_general(self, tester_type, plots_folder, **kwargs):
        tester, tester_params = self.getter.get("tester", yaml_dict=tester_type, return_uninitialized=True)
        tester_params = copy.deepcopy(tester_params)
        tester_params["accuracy_calculator"] = self.getter.get("accuracy_calculator", yaml_dict=tester_params["accuracy_calculator"])
        # Hard-code the visualizer and visualizer_hook here
        tester_params["visualizer"] = umap.UMAP()
        tester_params["visualizer_hook"] = self.visualizer_hook(plots_folder)
        tester_params = emag_utils.merge_two_dicts(tester_params, kwargs)
        return tester(**tester_params)

    def visualizer_hook(self, plots_folder):
        def actual_visualizer_hook(visualizer, embeddings, labels, split_name, keyname, epoch):
            logging.info("UMAP plot for the {} split and label set {}".format(split_name, keyname))
            label_set = np.unique(labels)
            num_classes = len(label_set)
            fig = plt.figure(figsize=(20, 15))
            # One color per class
            plt.gca().set_prop_cycle(cycler("color", [plt.cm.nipy_spectral(i) for i in np.linspace(0, 0.9, num_classes)]))
            for i in range(num_classes):
                idx = labels == label_set[i]
                plt.plot(embeddings[idx, 0], embeddings[idx, 1], ".", markersize=1)
            plt.savefig(os.path.join(plots_folder, "{}_epoch{}.png".format(keyname, epoch)))
            plt.close(fig)  # free the figure so memory doesn't accumulate across epochs
        return actual_visualizer_hook

class APIMetricLossOnly(BaseAPIParser):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # Add a "plots" subfolder to each sub-experiment
        self.sub_experiment_dirs["plots"] = os.path.join("%s", "%s", "plots")

    def set_curr_folders(self):
        folders = self.get_sub_experiment_dir_paths()[self.split_manager.curr_split_scheme_name]
        self.model_folder, self.csv_folder, self.tensorboard_folder, self.plots_folder = (
            folders["models"],
            folders["csvs"],
            folders["tensorboard"],
            folders["plots"],
        )

    def default_kwargs_tester(self):
        kwargs_dict = super().default_kwargs_tester()
        kwargs_dict["plots_folder"] = lambda: self.plots_folder
        return kwargs_dict

r = runner(**(args.__dict__))
r.register("factory", TesterFactoryWithUMAP)
r.register("api_parser", APIMetricLossOnly)
r.run()
```
(You can replace UMAP with another visualizer if you want, such as sklearn.manifold.TSNE.)
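For example, a minimal sketch of that swap (assuming the rest of TesterFactoryWithUMAP stays as above; both umap.UMAP and TSNE expose the fit_transform method the tester calls on the visualizer):

```python
from sklearn.manifold import TSNE

# In TesterFactoryWithUMAP._create_general, replace the visualizer line with:
tester_params["visualizer"] = TSNE(n_components=2)
```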
Then on the command line append this flag:
--factories {tester~OVERRIDE~: {TesterFactoryWithUMAP: {}}}
Plots will be saved as .png files inside a "plots" folder.
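For reference, a full invocation might look something like this (the --experiment_name value is a placeholder and your other flags stay as they were; the --factories flag is the only required addition):

```
python run.py --experiment_name test1 \
--factories {tester~OVERRIDE~: {TesterFactoryWithUMAP: {}}}
```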
I don't think that should matter, and the code worked for me on Google Colab. Can you paste in the complete error message and the command-line argument you used?
Thank you. The problem has been resolved.