
psco's Introduction

Unsupervised Meta-learning via Few-shot Pseudo-supervised Contrastive Learning

PyTorch implementation for "Unsupervised Meta-learning via Few-shot Pseudo-supervised Contrastive Learning" (accepted as a Spotlight presentation at ICLR 2023).

TL;DR: Constructing online pseudo-tasks via momentum representations and applying contrastive learning progressively improves the pseudo-labeling strategy for meta-learning.
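To make the TL;DR concrete, here is a minimal PyTorch sketch of the idea: a MoCo-style momentum encoder and feature queue provide pseudo support sets (simplified here to top-k similarity matching; the paper additionally uses a Sinkhorn-based assignment and a prediction head, cf. the --prediction flag in the commands below), and a supervised-contrastive-style loss pulls each query toward its pseudo supports. All names, shapes, and hyperparameters are illustrative assumptions, not the repository's actual code.

    # Simplified sketch of pseudo-task construction + contrastive loss.
    # Assumptions: `queue` holds L2-normalized momentum features (Q, D);
    # top-k matching stands in for the paper's full pseudo-labeling.
    import torch
    import torch.nn.functional as F

    @torch.no_grad()
    def momentum_update(encoder, momentum_encoder, m=0.99):
        for p, mp in zip(encoder.parameters(), momentum_encoder.parameters()):
            mp.data.mul_(m).add_(p.data, alpha=1 - m)

    def psco_style_loss(encoder, momentum_encoder, queue, x_q, x_k, k_shot=4, tau=0.2):
        q = F.normalize(encoder(x_q), dim=1)               # query features (B, D)
        with torch.no_grad():
            momentum_update(encoder, momentum_encoder)
            k = F.normalize(momentum_encoder(x_k), dim=1)  # momentum key features (B, D)
        # Pseudo-labeling: pick the k_shot most similar queue entries per query.
        sim = q @ queue.t()                                 # (B, Q)
        support_idx = sim.topk(k_shot, dim=1).indices       # pseudo support set
        # Contrastive objective over [same-instance key | queue] candidates.
        logits = torch.cat([q @ k.t(), sim], dim=1) / tau   # (B, B + Q)
        pos_mask = torch.zeros_like(logits)
        pos_mask[:, :q.size(0)] = torch.eye(q.size(0), device=q.device)
        pos_mask.scatter_(1, support_idx + q.size(0), 1.0)  # mark pseudo supports as positives
        log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
        return -(pos_mask * log_prob).sum(1).div(pos_mask.sum(1)).mean()

In an actual training loop the momentum update and queue refresh would happen once per step; they are folded into the loss call here only for brevity.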

Install

conda create -n unsup_meta python=3.9
conda activate unsup_meta
conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch
conda install ignite -c pytorch
pip install packaging tensorboard scikit-learn
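A quick, optional sanity check that the environment above resolves before training:

    # Optional: confirm the packages installed above import correctly.
    import torch
    import torchvision
    import ignite
    import sklearn

    print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
    print("torchvision:", torchvision.__version__)
    print("ignite:", ignite.__version__)
    print("scikit-learn:", sklearn.__version__)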

Download datasets

Meta-Training PsCo

Omniglot

python train.py --model psco --backbone conv4 --prediction --num-shots 1 \
    --dataset omniglot --datadir [DATADIR] \
    --logdir logs/omniglot/psco

miniImageNet

python train.py --model psco --backbone conv5 --prediction --num-shots 4 \
    --dataset miniimagenet --datadir [DATADIR] \
    --logdir logs/miniimagenet/psco
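Training writes checkpoints under --logdir; the testing commands below load logs/.../last.pth. If you want to peek inside such a checkpoint, a rough sketch follows; the stored keys are an assumption and may differ from what train.py actually saves.

    # Inspect a saved checkpoint (key names are assumptions; adjust to
    # whatever train.py actually stores).
    import torch

    ckpt = torch.load("logs/miniimagenet/psco/last.pth", map_location="cpu")
    if isinstance(ckpt, dict):
        for key, value in ckpt.items():
            shape = getattr(value, "shape", None)
            print(key, tuple(shape) if shape is not None else type(value).__name__)
    else:
        print(type(ckpt))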

Meta-Testing PsCo

Standard few-shot classification (Table 1)

  • For Omniglot
python test.py --model psco --backbone conv4 --prediction --num-shots 1 \
    --ckpt logs/omniglot/psco/last.pth \
    --pretrained-dataset omniglot \
    --dataset omniglot --datadir [DATADIR] \
    --N 5 --K 1 --num-tasks 2000 \
    --eval-fewshot-metric supcon
  • For miniImageNet
python test.py --model psco --backbone conv5 --prediction --num-shots 4 \
    --ckpt logs/miniimagenet/psco/last.pth \
    --pretrained-dataset miniimagenet \
    --dataset miniimagenet --datadir [DATADIR] \
    --N 5 --K 1 --num-tasks 2000 \
    --eval-fewshot-metric supcon
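test.py evaluates over --num-tasks episodes. If you collect per-episode accuracies yourself, the usual few-shot convention is to report the mean with a 95% confidence interval; a generic sketch, not tied to the repository's output format:

    # Aggregate per-episode accuracies into mean ± 95% confidence interval.
    import numpy as np

    def summarize(episode_accs):
        accs = np.asarray(episode_accs, dtype=np.float64)
        mean = accs.mean()
        ci95 = 1.96 * accs.std(ddof=1) / np.sqrt(len(accs))
        return mean, ci95

    # Placeholder accuracies for 2000 episodes (--num-tasks 2000).
    mean, ci = summarize(np.random.uniform(0.4, 0.6, size=2000))
    print(f"{100 * mean:.2f}% ± {100 * ci:.2f}%")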

Cross-domain few-shot classification with miniImageNet pretrained (Table 2)

  • miniImageNet to [DATASET] (a sketch that sweeps all targets follows the dataset list below)
python test.py --model psco --backbone conv5 --prediction --num-shots 4 \
    --ckpt logs/miniimagenet/psco/last.pth \
    --pretrained-dataset miniimagenet \
    --dataset [DATASET] --datadir [DATADIR] \
    --N 5 --K 5 --num-tasks 2000 \
    --eval-fewshot-metric ft-supcon
  • [DATASET] list
    • cub200 (For CUB200)
    • cars (For Cars)
    • places (For Places)
    • plantae (For Plantae)
    • cropdiseases (For CropDiseases)
    • eurosat (For EuroSAT)
    • isic (For ISIC)
    • chestx (For ChestX)
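To sweep every target dataset with the command above, a small wrapper like the following can help. It simply shells out to test.py once per dataset; whether --datadir should point at a per-dataset subdirectory is an assumption, so adjust the paths to your layout.

    # Run the cross-domain evaluation command above for every target dataset.
    # Replace the [DATADIR] placeholder with the actual data location.
    import subprocess

    DATASETS = ["cub200", "cars", "places", "plantae",
                "cropdiseases", "eurosat", "isic", "chestx"]

    for name in DATASETS:
        subprocess.run([
            "python", "test.py", "--model", "psco", "--backbone", "conv5",
            "--prediction", "--num-shots", "4",
            "--ckpt", "logs/miniimagenet/psco/last.pth",
            "--pretrained-dataset", "miniimagenet",
            "--dataset", name, "--datadir", f"[DATADIR]/{name}",  # adjust per dataset
            "--N", "5", "--K", "5", "--num-tasks", "2000",
            "--eval-fewshot-metric", "ft-supcon",
        ], check=True)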

psco's People

Contributors

huiwon-jang

psco's Issues

What is vinyals?

I'm trying to train on the Omniglot dataset right now and I'm getting an error because I can't find the vinyals_*_labels.json file referenced in:

        with open(os.path.join(self.root, f'vinyals_{self.split}_labels.json'), mode='r') as f:
            dir_list = json.load(f)

What is this supposed to be?

No such file or directory: '***/omniglot/vinyals_train_labels.json'

Hi, thanks for your great work!
When I run your code, I hit this error: "No such file or directory: '***/omniglot/vinyals_train_labels.json'", but I can't find that file anywhere. Could you also please share download links for miniImageNet and the other cross-domain datasets?
Thanks a lot!

AttributeError: module 'torch.ao' has no attribute 'nn'

Thank you for the good work.
(unsup_meta) /mnt/workspace/code/ICLR2023/PsCo> CUDA_DEVICES_VISIBLE="4,5" python train.py --model psco --backbone conv4 --prediction --num-shots 1 --dataset omniglot --datadir /mnt/workspace/code/ICLR2023/PsCo/data --logdir logs/omniglot/psco
/mnt/workspace/workgroup/yfchen/anaconda3/envs/unsup_meta/lib/python3.9/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: '/mnt/workspace/workgroup/yfchen/anaconda3/envs/unsup_meta/lib/python3.9/site-packages/torchvision/image.so: undefined symbol: _ZN3c104impl8GPUTrace13gpuTraceStateE' If you don't plan on using image functionality from torchvision.io, you can ignore this warning. Otherwise, there might be something wrong with your environment. Did you have libjpeg or libpng installed before building torchvision from source?
  warn(
Traceback (most recent call last):
  File "/mnt/workspace/code/ICLR2023/PsCo/train.py", line 12, in <module>
    import utils
  File "/mnt/workspace/code/ICLR2023/PsCo/utils.py", line 17, in <module>
    import models
  File "/mnt/workspace/code/ICLR2023/PsCo/models.py", line 4, in <module>
    import torchvision
  File "/mnt/workspace/workgroup/yfchen/anaconda3/envs/unsup_meta/lib/python3.9/site-packages/torchvision/__init__.py", line 6, in <module>
    from torchvision import datasets, io, models, ops, transforms, utils
  File "/mnt/workspace/workgroup/yfchen/anaconda3/envs/unsup_meta/lib/python3.9/site-packages/torchvision/models/__init__.py", line 17, in <module>
    from . import detection, optical_flow, quantization, segmentation, video
  File "/mnt/workspace/workgroup/yfchen/anaconda3/envs/unsup_meta/lib/python3.9/site-packages/torchvision/models/quantization/__init__.py", line 3, in <module>
    from .mobilenet import *
  File "/mnt/workspace/workgroup/yfchen/anaconda3/envs/unsup_meta/lib/python3.9/site-packages/torchvision/models/quantization/mobilenet.py", line 1, in <module>
    from .mobilenetv2 import *  # noqa: F401, F403
  File "/mnt/workspace/workgroup/yfchen/anaconda3/envs/unsup_meta/lib/python3.9/site-packages/torchvision/models/quantization/mobilenetv2.py", line 5, in <module>
    from torch.ao.quantization import DeQuantStub, QuantStub
  File "/mnt/workspace/workgroup/yfchen/anaconda3/envs/unsup_meta/lib/python3.9/site-packages/torch/ao/quantization/__init__.py", line 3, in <module>
    from .fake_quantize import *  # noqa: F403
  File "/mnt/workspace/workgroup/yfchen/anaconda3/envs/unsup_meta/lib/python3.9/site-packages/torch/ao/quantization/fake_quantize.py", line 8, in <module>
    from torch.ao.quantization.observer import (
  File "/mnt/workspace/workgroup/yfchen/anaconda3/envs/unsup_meta/lib/python3.9/site-packages/torch/ao/quantization/observer.py", line 15, in <module>
    from torch.ao.quantization.utils import (
  File "/mnt/workspace/workgroup/yfchen/anaconda3/envs/unsup_meta/lib/python3.9/site-packages/torch/ao/quantization/utils.py", line 655, in <module>
    ) -> torch.ao.nn.quantizable.LSTM:
AttributeError: module 'torch.ao' has no attribute 'nn'
