Framework code with wandb, checkpointing, logging, configs, and experimental protocols. Useful for fine-tuning models or training from scratch, and for testing them on a variety of datasets (transfer learning).
Thank you for this great work!
When I try to run experiments on my own device, I don't know where to get the checkpoints of the pretrained models. For example, in fmow.yaml, the checkpoint_path is specified as '/u/scr/ananya/simclr_weights/mocotp_checkpoint_0200.pth.tar'. Where can I get this .pth.tar file?
Actually, I found a checkpoint at https://github.com/facebookresearch/moco, but it is pretrained on ImageNet. However, to reproduce the LP results on FMoW, it seems that a checkpoint pretrained on unlabeled FMoW is needed. Could you provide that checkpoint?
Thanks!
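For anyone else hitting this: until the authors confirm the right checkpoint, a workaround is presumably to download a MoCo checkpoint yourself and point the config at it. A minimal sketch of the edit to fmow.yaml, assuming the config is consumed as-is (the local path below is hypothetical, not the file the paper used):

```yaml
# fmow.yaml (sketch) — replace the cluster-specific path from the released
# configs with wherever you saved your downloaded checkpoint.
# Note: an ImageNet-pretrained MoCo checkpoint may not reproduce the
# FMoW numbers if the paper used FMoW-pretrained weights.
checkpoint_path: /path/to/your/mocotp_checkpoint_0200.pth.tar
```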
What command is used to reproduce the CIFAR-10 → STL and CIFAR-10 → CIFAR-10.1 results for CIFAR-10 in Table 1 of the paper?
I would appreciate it if you could provide a step-by-step approach.
I was wondering whether the hyperparameter configurations used for the reported experiments are available somewhere. If I understand correctly, the configuration files provided are not the final ones, since "run_adaptation_experiments runs a sweep over these configs and therefore modifies the hyperparameters on configs."
I'm having a bit of trouble replicating the results (especially on DomainNet), and this would really help!