deepglint / unicom

universal visual model trained on LAION-400M

Home Page: https://arxiv.org/pdf/2304.05884.pdf

Python 89.20% Shell 10.80%
large-sacle-pretrained-model universal-image-retrieval iclr2023 laion400m vision-transformer distributed-training retrieval-anything in-shop

unicom's Introduction

UNICOM

[paper] [gdrive]

For image representation:

  1. ImageNet pretraining is not universal enough to generalize to diverse open-world objects.
  2. Supervised learning is not scalable because manual annotation of large-scale training data is time-consuming, costly, and even infeasible.
  3. Instance discrimination methods (e.g., CLIP) can hardly encode the semantic structure of the training data, because instance-wise contrastive learning always treats two samples as a negative pair, regardless of their semantic similarity.

UNICOM demonstrates superior performance in image retrieval, thanks to its ability to cluster 400 million images into one million pseudo-classes using joint textual and visual features extracted by the CLIP model. Additionally, our use of a margin-based softmax loss (ArcFace) and random partial class/feature selection (PartialFC) enhances the robustness and compactness of the feature embedding. Our method outperforms state-of-the-art unsupervised and supervised image retrieval approaches, making it a powerful tool for researchers and practitioners in the field.
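
As a rough, self-contained sketch of the two ingredients named above (not the repo's implementation; the function name and defaults are made up, and for brevity the negative classes are subsampled after computing the full logit matrix, whereas PartialFC samples the class centers themselves to save memory):

import torch
import torch.nn.functional as F

# Illustrative only: ArcFace-style additive angular margin plus random partial
# negative-class selection.
def margin_softmax_logits(embeddings, class_centers, labels, s=64.0, m=0.5, sample_ratio=0.1):
    emb = F.normalize(embeddings)                 # [B, D] image features
    centers = F.normalize(class_centers)          # [C, D] pseudo-class prototypes
    cos = emb @ centers.t()                       # cosine logits [B, C]

    # additive angular margin on the target class only
    target = cos.gather(1, labels.view(-1, 1)).clamp(-1 + 1e-7, 1 - 1e-7)
    cos = cos.scatter(1, labels.view(-1, 1), torch.cos(torch.acos(target) + m))

    # keep the positive classes plus a random subset of negatives
    num_neg = int(sample_ratio * centers.size(0))
    perm = torch.randperm(centers.size(0), device=cos.device)
    keep = torch.unique(torch.cat([labels, perm[:num_neg]]))
    remapped = (keep.view(1, -1) == labels.view(-1, 1)).float().argmax(dim=1)
    return s * cos[:, keep], remapped

# usage: logits, y = margin_softmax_logits(emb, centers, labels); loss = F.cross_entropy(logits, y)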

The UNICOM model was pre-trained on LAION-400M; in the future, we will release a model trained on LAION-2B.

Usage

First, install PyTorch 1.12 (or later) and torchvision, as well as small additional dependencies, and then install this repo as a Python package. On a CUDA GPU machine, the following will do the trick:

pip install torch torchvision
pip install tqdm timm
pip install git+https://github.com/deepglint/unicom.git

API

The unicom module provides the following methods:

unicom.available_models()

Returns the names of the available unicom models.

unicom.load(name)

Returns the model and the TorchVision transform needed by the model, specified by the model name returned by unicom.available_models(). It will download the model as necessary.
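
A minimal usage sketch of the two methods above ("example.jpg" is a placeholder path; calling model(image) to obtain an embedding follows the retrieval code in this repo):

import torch
import unicom
from PIL import Image

print(unicom.available_models())

model, transform = unicom.load("ViT-B/32")
model.eval()

image = transform(Image.open("example.jpg")).unsqueeze(0)  # placeholder image path
with torch.no_grad():
    embedding = model(image)   # feature embedding used for retrieval
print(embedding.shape)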

Results and Evaluation

Result of Transfer Learning on ImageNet1K

Dataset ViT-B/32@384px ViT-B/16@384px ViT-L/14@518px
ImageNet1K 83.6 85.9 88.3

Result of KNN on ImageNet1K

Dataset ViT-B/32 ViT-B/16 ViT-L/14 ViT-L/14@336px
ImageNet1K 74.5 78.8 81.2 81.6

Result of Supervised Image Retrieval

Dataset ViT-B/32 ViT-B/16 ViT-L/14 ViT-L/14@336px
SOP 87.1 88.8 89.9 91.2
In-Shop 94.8 95.5 96.0 96.7
INaturalist 72.8 82.5 85.4 88.9

Result of Zero-Shot Image Retrieval

Dataset ViT-B/32 ViT-B/16 ViT-L/14 ViT-L/14@336px
CUB 83.7 86.5 88.5 89.2
Cars 95.9 96.8 96.9 97.3
SOP 70.0 70.4 72.7 74.5
In-Shop 72.8 74.6 83.6 86.7
INaturalist 64.6 73.6 77.1 81.0

Eval Image Retrieval

Zero-Shot CUB Dataset with a Single GPU.

torchrun retrieval.py --eval --dataset cub --model_name ViT-B/32

Zero-Shot CUB Dataset with 8 GPUs.

torchrun --nproc_per_node 8 retrieval.py --eval --dataset cub --model_name ViT-B/32

Eval KNN

torchrun --nproc_per_node 8 knn.py --train-dataset /imagenet/train/ --val-dataset /imagenet/val/ --num-workers 4 --model-name ViT-B/32

Visualization of Zero-Shot Retrieval

1. Food-101

[image: zero-shot retrieval examples on Food-101]

2. Describable Textures Dataset

[image: zero-shot retrieval examples on the Describable Textures Dataset]

Citation

@inproceedings{anxiang_2023_unicom,
  title={Unicom: Universal and Compact Representation Learning for Image Retrieval},
  author={An, Xiang and Deng, Jiankang and Yang, Kaicheng and Li, Jiawei and Feng, Ziyong and Guo, Jia and Yang, Jing and Liu, Tongliang},
  booktitle={ICLR},
  year={2023}
}
@inproceedings{deng2019arcface,
  title={Arcface: Additive angular margin loss for deep face recognition},
  author={Deng, Jiankang and Guo, Jia and Xue, Niannan and Zafeiriou, Stefanos},
  booktitle={CVPR},
  pages={4690--4699},
  year={2019}
}
@inproceedings{anxiang_2022_partialfc,
  title={Killing Two Birds With One Stone: Efficient and Robust Training of Face Recognition CNNs by Partial FC},
  author={An, Xiang and Deng, Jiankang and Guo, Jia and Feng, Ziyong and Zhu, XuHan and Yang, Jing and Liu, Tongliang},
  booktitle={CVPR},
  pages={4042--4051},
  year={2022}
}

unicom's People

Contributors

anxiangsir

unicom's Issues

About dataset construction

Hello! When fine-tuning on a new, small dataset, the model may forget what it learned from LAION-400M, which degrades generalization after fine-tuning. I therefore plan to mix some LAION-400M data into fine-tuning, but the class IDs I obtain when clustering LAION-400M into 1M pseudo-classes may differ from the ones you used during training. Would that cause a conflict? Could you please release this dataset information or the original construction procedure? Many thanks!

Bug in CombinedMarginLoss implementation

Hi @anxiangsir,

Thanks for sharing your work.

I have a question about the forward pass in CombinedMarginLoss when running sop_vit_b_16.sh as an example. In this case, self.m1 = 1.0, self.m2 = 0.25, and self.m3 = 0.0. But I think that with torch.no_grad(), the gradients won't be propagated correctly, right?

It also seems that the implementation of CombinedMarginLoss is adapted from the insightface repo, and its previous version (without torch.no_grad()) makes more sense here: deepinsight/insightface@657ae30

Some issues raised for the same query: deepinsight/insightface#2218, deepinsight/insightface#2255, deepinsight/insightface#2309

Why do we need torch.no_grad() here?
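
To make the concern concrete, here is a minimal, generic illustration (not the repo's code) of how torch.no_grad() stops an operation from being recorded in the autograd graph:

import torch

x = torch.tensor([0.5], requires_grad=True)

with torch.no_grad():
    y = x * 2     # computed, but not recorded: y.requires_grad is False
z = x * 2         # recorded normally: z.requires_grad is True

print(y.requires_grad, z.requires_grad)  # False True
z.backward()
print(x.grad)     # tensor([2.]) -- only graphs built outside no_grad() receive gradients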

Training mode

I understand that there is an evaluation mode for image retrieval using: torchrun retrieval.py --eval --dataset cub --model_name ViT-B/32
Is there a mode for training the models, or should the scripts be executed manually?
Thanks in advance

Inter-class Prototypes

Is there a part of the code that creates the prototypes W and the negative prototypes? Thank you in advance

Substantially more parameters than OpenCLIP, SWAG and Timm's ViT models.

Hello,

Many thanks for sharing your interesting work. I noticed that the projection head of your models is substantially bigger than SWAG (Singh et al., CVPR 2022), OpenCLIP models and Timm's implementation of ViT that is used in recall@k surrogate (Patel et al., CVPR 2022). I ran a quick parameter counter for these models following the RS@k implementation, that is, with a layer norm and linear projection. Here are the counts:

ViT-B/32 Timm: 87850496
ViT-B/32 CLIP: 87849728
ViT-B/32 UNICOM: 117118464
ViT-B/16 Timm: 86193920
ViT-B/16 CLIP: 86193152
ViT-B/16 UNICOM: 202363136
ViT-B/16 SWAG: 86193920

It is clear that the UNICOM models have a substantially higher number of parameters than the baselines used for comparison. With this in mind, are the comparisons fair at all?
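
For reference, a sketch of how such a count can be reproduced with the unicom package (this counts only the bare loaded model, so the exact numbers may differ slightly from the list above, which includes the extra layer norm and linear projection):

import unicom

# Rough parameter count for a loaded UNICOM model (sketch).
model, _transform = unicom.load("ViT-B/32")
num_params = sum(p.numel() for p in model.parameters())
print(f"ViT-B/32 UNICOM parameters: {num_params}")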

x = x.cpu() in retrieval.py (around line 636) raises an error with NoneType

I printed x after the for image, label in dataloader: loop completed and got None. It turns out that for image, label in dataloader: never starts iterating. What could be the reason? I checked the dataloader and it prints an output, but the loop body is never entered.

error:
x = x.cpu()
AttributeError: 'NoneType' object has no attribute 'cpu'

code:
# excerpt from retrieval.py with my debug prints added; torch, np, DataLoader,
# model, dataset_this_rank, and kwargs are defined earlier in the file
print('dataset_this_rank', dataset_this_rank)
dataloader = DataLoader(dataset_this_rank, **kwargs)
x = None
y_np = []
idx = 0
print('dataloader',dataloader)
print('before: ', x)

for image, label in dataloader:
    print('entry!')
    image = image.cuda()
    embedding = model(image)
    embedding_size: int = embedding.size(1)
    if x is None:
        print(x)
        size = [len(dataset_this_rank), embedding_size]
        x = torch.zeros(*size, device=image.device)
    print('!',x)
    x[idx:idx + embedding.size(0)] = embedding
    y_np.append(np.array(label))
    idx += embedding.size(0)
print('after: ', x)
x = x.cpu()

Dataset: my own food dataset, using the SOP loading style.
Environment: torch 1.12.1, CUDA 11.4.

Flattened features of ViT

The paper states [section 4.1] that all experiments use the same architecture designs as CLIP, but after checking out the code I noticed that:
1. There is no cls_token embedding in the unicom ViT models.
2. The output features of the ViTs are neither pooled features of the blocks nor the cls_token; they are actually flattened and then passed to MLP layers.
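
For contrast, a generic illustration with dummy tensors (not the unicom code) of the usual ViT readout choices versus flattening all patch tokens:

import torch

B, N, D = 2, 196, 768                  # batch, patch tokens, embedding dim (dummy values)
patch_tokens = torch.randn(B, N, D)

pooled = patch_tokens.mean(dim=1)      # mean-pooled patch tokens -> [B, D]
flattened = patch_tokens.flatten(1)    # all patch tokens flattened -> [B, N*D],
                                       # then passed through MLP layers (what the issue describes)
# with a cls token, one would instead take tokens[:, 0] as the image embedding
print(pooled.shape, flattened.shape)   # torch.Size([2, 768]) torch.Size([2, 150528])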
