This repository contains the publishable code for the CVPR 2021 paper TransNAS-Bench-101: Improving Transferability and Generalizability of Cross-Task Neural Architecture Search.
Thanks for your work on the benchmark. Truly needed!
I have some questions/problems:
When generating architectures from the configs and training them, the data is not provided, and several files are missing, such as "tb101/code/experiments/final5k/train_filenames_final5k.json".
On VEGA, it says: Raw images and labels should be downloaded from this [link](); the train/val/test split is located in configs/dataset_split/final5k/.
However, none of these files are provided.
Could you please upload them? This is important for running experiments with performance-estimation mechanisms that require training architectures.
I'm trying to reproduce the results of the benchmark algorithms (Table 4 in Section 5 of the paper). Could you please push the code for the transfer schemes to the repo?
I figured out how to instantiate a macro model by creating a MacroNet from its architecture string (just `MacroNet(arch.arch_str)`).
How can I do this with the Micro nets?
Edit: you're supposed to use MacroNet() for the micro networks as well.
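Under that assumption, here is a minimal sketch of the single-constructor pattern. Note that `MacroNet` below is a stand-in class defined only for illustration (the repository's real class builds a PyTorch module from the string), and both architecture strings are hypothetical examples, not verified formats:

```python
# Illustrative stand-in: the repository's MacroNet would parse the
# architecture string and build the actual network. This stub only shows
# that macro and micro architectures share one entry point.

class MacroNet:
    def __init__(self, arch_str: str):
        # The real class would parse arch_str here and construct layers/cells.
        self.arch_str = arch_str

# Both search spaces go through the same constructor:
macro_net = MacroNet("64-1234-basic")      # hypothetical macro arch string
micro_net = MacroNet("64-41414-1_02_333")  # hypothetical micro arch string

print(macro_net.arch_str)
print(micro_net.arch_str)
```

In other words, the distinction between macro and micro lives entirely in the architecture string, not in the class you call.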
I'm trying to run training with train_a_net.sh, but a missing config.py crashes the code. Could you please push the missing file to the repo?