algolzw / daclip-uir

[ICLR 2024] Controlling Vision-Language Models for Universal Image Restoration. 5th place in the NTIRE 2024 Restore Any Image Model in the Wild Challenge.

Home Page: https://algolzw.github.io/daclip-uir

License: MIT License

Shell 0.28% Python 99.67% Makefile 0.05%
diffusion-models image-restoration prompt vision-language face-inpainting image-deblurring image-dehazing image-denoising image-deraining image-desnowing

daclip-uir's Introduction

Controlling Vision-Language Models for Universal Image Restoration
Official PyTorch Implementation of DA-CLIP.

Project Page | Paper | Model Card 🤗

Open In Colab Hugging Face Replicate

[Figure: DA-CLIP overview]

Our follow-up work, Photo-Realistic Image Restoration in the Wild with Controlled Vision-Language Models (CVPRW 2024), presents a posterior sampling strategy for better image generation and handles real-world mixed-degradation images, similar to Real-ESRGAN.

Updates

[2024.04.16] Our follow-up paper "Photo-Realistic Image Restoration in the Wild with Controlled Vision-Language Models" is on ArXiv now!
[2024.04.15] Updated a wild-IR model for real-world degradations and the posterior sampling for better image generation. The pretrained weights wild-ir.pth and wild-daclip_ViT-L-14.pt are also provided for wild-ir.
[2024.01.20] 🎉🎉🎉 Our DA-CLIP paper was accepted by ICLR 2024 🎉🎉🎉 We further provide a more robust model in the model card.
[2023.10.25] Added dataset links for training and testing.
[2023.10.13] Added the Replicate demo and api🔥. Thanks to @chenxwh!!! We updated the Hugging Face demo🔥 and online Colab demo🔥. Thanks to @fffiloni and @camenduru !!! We also made a Model Card in Hugging Face 🤗 and provided more examples for testing.
[2023.10.09] The pretrained weights of DA-CLIP and the Universal IR model are released in link1 and link2, respectively. In addition, we provide a Gradio app file in case you want to test your own images.

How to Run the Code?

Dependencies

  • OS: Ubuntu 20.04
  • CUDA: 11.4
  • Python: 3.8

Install

We advise you to first create a virtual environment with:

python3 -m venv .env
source .env/bin/activate
pip install -U pip
pip install -r requirements.txt
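
Before going further, it can help to confirm that PyTorch and the GPU are visible from the new environment. This quick sanity check is only our suggestion, not part of the repo:

# Sanity check (suggestion, not repo code): verify the PyTorch install and GPU.
import torch

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))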

DA-CLIP Usage

Get into the universal-image-restoration directory and run:

import torch
from PIL import Image
import open_clip

checkpoint = 'pretrained/daclip_ViT-B-32.pt'
model, preprocess = open_clip.create_model_from_pretrained('daclip_ViT-B-32', pretrained=checkpoint)
tokenizer = open_clip.get_tokenizer('ViT-B-32')

image = preprocess(Image.open("haze_01.png")).unsqueeze(0)
degradations = ['motion-blurry','hazy','jpeg-compressed','low-light','noisy','raindrop','rainy','shadowed','snowy','uncompleted']
text = tokenizer(degradations)

with torch.no_grad(), torch.cuda.amp.autocast():
    text_features = model.encode_text(text)
    image_features, degra_features = model.encode_image(image, control=True)
    degra_features /= degra_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)

    text_probs = (100.0 * degra_features @ text_features.T).softmax(dim=-1)
    index = torch.argmax(text_probs[0])

print(f"Task: {task_name}: {degradations[index]} - {text_probs[0][index]}")

Dataset Preparation

Prepare the train and test datasets following the Dataset Construction section of our paper, as shown below:

#### for training dataset ####
#### (uncompleted means inpainting) ####
datasets/universal/train
|--motion-blurry
|  |--LQ/*.png
|  |--GT/*.png
|--hazy
|--jpeg-compressed
|--low-light
|--noisy
|--raindrop
|--rainy
|--shadowed
|--snowy
|--uncompleted

#### for testing dataset ####
#### (the same structure as train) ####
datasets/universal/val
...

#### for clean captions ####
datasets/universal/daclip_train.csv
datasets/universal/daclip_val.csv

Then go into the universal-image-restoration/config/daclip-sde directory and modify the dataset paths in the option files options/train.yml and options/test.yml.

You can add more tasks or datasets to both the train and val directories; just add the corresponding degradation word to the distortion list.
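
Before editing the option files, a quick check that every degradation folder contains matching LQ/GT pairs can save a failed run. This small helper is a sketch of ours, assuming the directory layout shown above:

# Sketch (our suggestion, assuming the layout above): verify LQ/GT pairing.
import os

root = "datasets/universal/train"
for deg in sorted(os.listdir(root)):
    lq = set(os.listdir(os.path.join(root, deg, "LQ")))
    gt = set(os.listdir(os.path.join(root, deg, "GT")))
    unpaired = lq ^ gt  # files present in only one of the two folders
    print(f"{deg}: {len(lq)} LQ / {len(gt)} GT, {len(unpaired)} unpaired")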

Dataset Links

Degradation | motion-blurry | hazy | jpeg-compressed* | low-light | noisy* (same as jpeg)
Datasets    | GoPro         | RESIDE-6k | DIV2K+Flickr2K | LOL | DIV2K+Flickr2K

Degradation | raindrop | rainy                  | shadowed | snowy    | uncompleted
Datasets    | RainDrop | Rain100H (train, test) | SRD      | Snow100K | CelebaHQ-256

You only need to extract the train datasets for training; all validation datasets can be downloaded from Google Drive. For the jpeg and noisy datasets, you can generate LQ images using this script.
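
The repo's own script is the reference for this step; the sketch below only illustrates the general idea of synthesizing jpeg-compressed and noisy LQ images from GT images. The quality factor and noise sigma are assumptions, not the paper's exact settings:

# Illustration only (not the repo's script): synthesize LQ images from GT images.
import numpy as np
from PIL import Image

def make_jpeg_lq(gt_path, lq_path, quality=10):   # quality factor is an assumption
    Image.open(gt_path).convert("RGB").save(lq_path, "JPEG", quality=quality)

def make_noisy_lq(gt_path, lq_path, sigma=50):    # noise level is an assumption
    img = np.array(Image.open(gt_path).convert("RGB"), dtype=np.float32)
    noisy = np.clip(img + np.random.normal(0.0, sigma, img.shape), 0, 255)
    Image.fromarray(noisy.astype(np.uint8)).save(lq_path)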

Training

DA-CLIP

See DA-CLIP.md for details.

Universal Image Restoration

The main code for training is in universal-image-restoration/config/daclip-sde and the core network for DA-CLIP is in universal-image-restoration/open_clip/daclip_model.py.

  • Put the pretrained DA-CLIP weights into the pretrained directory and check the daclip path.

  • You can then train the model with the bash commands below:

cd universal-image-restoration/config/daclip-sde

# For single GPU:
python3 train.py -opt=options/train.yml

# For distributed training, change gpu_ids in the option file first
python3 -m torch.distributed.launch --nproc_per_node=2 --master_port=4321 train.py -opt=options/train.yml --launcher pytorch

The models and training logs are saved in log/universal-ir. You can follow the log in real time by running tail -f log/universal-ir/train_universal-ir_***.log -n 100.

The same training steps can be used for image restoration in the wild (wild-ir).

Pretrained Models

Model Name       | Description                                                                                     | GoogleDrive | HuggingFace
DA-CLIP          | Degradation-aware CLIP model                                                                    | download    | download
Universal-IR     | DA-CLIP based universal image restoration model                                                 | download    | download
DA-CLIP-mix      | Degradation-aware CLIP model (adds Gaussian blur + face inpainting and Gaussian blur + rainy)   | download    | download
Universal-IR-mix | DA-CLIP based universal image restoration model (robust training with mixed degradations)       | download    | download
Wild-DA-CLIP     | Degradation-aware CLIP model in the wild (ViT-L-14)                                             | download    | download
Wild-IR          | DA-CLIP based image restoration model in the wild                                               | download    | download

Evaluation

To evaluate our method on image restoration, modify the benchmark path and model path, then run:

cd universal-image-restoration/config/universal-ir
python test.py -opt=options/test.yml

Gradio

Here we provide an app.py file for testing your own images. Before that, you need to download the pretrained weights (DA-CLIP and UIR) and modify the model path in options/test.yml. Then, by simply running python app.py, you can open http://localhost:7860 to test the model. (We also provide several images with different degradations in the images directory.) More examples from our test dataset are available on Google Drive.
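
The provided app.py is the reference implementation; the outline below is only a minimal sketch of the same idea, where restore_image stands in for the repo's actual DA-CLIP + diffusion restoration call and is hypothetical here:

# Minimal Gradio outline (sketch only; use the repo's app.py for real testing).
import gradio as gr
from PIL import Image

def restore_image(image: Image.Image) -> Image.Image:
    # Placeholder: here app.py would run DA-CLIP encoding and the IR model.
    return image

demo = gr.Interface(fn=restore_image,
                    inputs=gr.Image(type="pil"),
                    outputs=gr.Image(type="pil"),
                    title="DA-CLIP Universal Image Restoration")
demo.launch()  # serves on http://localhost:7860 by default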

The same steps can be used for image restoration in the wild (wild-ir).

Results

[Figure: overview of restoration results]

Unified Image Restoration
[Figure: unified image restoration results]

Degradation-Specific Restoration
[Figure: degradation-specific restoration results]

Image Restoration in the Wild
[Figure: image restoration in the wild results]

Notice!!

🙁 In testing we found that the current pretrained model still struggles with some real-world images, which may have distribution shifts relative to our training dataset (captured with different devices, resolutions, or degradations). We regard this as future work and will try to make the model more practical. We also encourage interested users to train their own models with larger datasets and more degradation types.

🙁 We also found that directly resizing input images leads to poor performance on most tasks. We could add a resize step during training, but interpolation always degrades image quality.

🙁 For the inpainting task, our current model only supports face inpainting due to dataset limitations. We provide mask examples, and you can use the generate_masked_face script to generate uncompleted faces.


Acknowledgment: Our DA-CLIP is based on IR-SDE and open_clip. Thanks for their code!

Contact

If you have any questions, please contact: [email protected]

Citations

If our code helps your research or work, please consider citing our paper. The following are BibTeX references:

@article{luo2023controlling,
  title={Controlling Vision-Language Models for Universal Image Restoration},
  author={Luo, Ziwei and Gustafsson, Fredrik K and Zhao, Zheng and Sj{\"o}lund, Jens and Sch{\"o}n, Thomas B},
  journal={arXiv preprint arXiv:2310.01018},
  year={2023}
}

@article{luo2024photo,
  title={Photo-Realistic Image Restoration in the Wild with Controlled Vision-Language Models},
  author={Luo, Ziwei and Gustafsson, Fredrik K and Zhao, Zheng and Sj{\"o}lund, Jens and Sch{\"o}n, Thomas B},
  journal={arXiv preprint arXiv:2404.09732},
  year={2024}
}

--- Thanks for your interest! ---


daclip-uir's Issues

How to use trained parameters for testing?

I am using Ubuntu 22.04.

I want to train this network on my own images.
After training (i.e., running train.py), I get two files, latest_EMA.pth and latest_G.pth. How can I use them for testing? I cannot figure out how to load these two files from test.py. Could you please give a short tutorial on loading the parameters? Thank you!
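
Not an official answer, but one common route is to point the model path in options/test.yml at latest_EMA.pth (or latest_G.pth) and run test.py. As a hedged sketch, assuming the .pth files store plain state_dicts, loading them directly would look like this:

# Hedged sketch: load a trained checkpoint for testing. The path and the
# plain-state_dict assumption are ours, not confirmed by the repo.
import torch

ckpt_path = "log/universal-ir/latest_EMA.pth"   # adjust to where training saved it
state = torch.load(ckpt_path, map_location="cpu")
state = state.get("state_dict", state)          # unwrap if the dict is nested
model.load_state_dict(state)                    # `model` is the network built by test.py
model.eval()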

About training

Thank you for this excellent work. I ran into a problem while reproducing it: during the validation stage of training, GPU memory runs out. Can I avoid the out-of-memory error by modifying some parameters?
[Screenshot attached: 2023-11-25 233658]

Not able to train DA-CLIP on my dataset

This is a very outstanding work!!
When I use 256×256×3 images to train DA-CLIP, the following error occurs:

File "main.py", line 495, in
main(sys.argv[1:])
File "main.py", line 423, in main
train_one_epoch(model, data, loss, epoch, optimizer, scaler, scheduler, dist_model, args, tb_writer=writer)
File "/data_160TB/2022/panxudong/code/daclip-uir-main/da-clip/src/training/train.py", line 106, in train_one_epoch
losses = loss(**model_out, output_dict=True)
File "/data_160TB/2022/panxudong/.conda/envs/py8/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/data_160TB/2022/panxudong/code/daclip-uir-main/da-clip/src/open_clip/loss.py", line 190, in forward
clip_loss = super().forward(image_features, text_features, logit_scale)
File "/data_160TB/2022/panxudong/code/daclip-uir-main/da-clip/src/open_clip/loss.py", line 122, in forward
logits_per_image, logits_per_text = self.get_logits(image_features, text_features, logit_scale)
File "/data_160TB/2022/panxudong/code/daclip-uir-main/da-clip/src/open_clip/loss.py", line 115, in get_logits
logits_per_image = logit_scale * image_features @ text_features.T
RuntimeError: The size of tensor a (4) must match the size of tensor b (512) at non-singleton dimension 1

Hosting models on Hugging Face Hub

Hi there! Thanks for sharing the code of DA-CLIP!

Would you be interested in hosting the models on the Hugging Face Hub? (hf.co/models). The current model weights are in Google Drive, which is hard for users to discover externally. By hosting the models on the Hub, you can document them with model cards, get download stats, and can use programmatic access to download the models directly without the users having to download them manually. That would also simplify the process of training and evaluation. Here is a guide in case you're interested
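
(For reference, the weights were later published on the Hugging Face model card; a hedged sketch of Hub-based loading is below. The repo_id and filename are illustrative placeholders, not confirmed names.)

# Hedged sketch: fetch a checkpoint from the Hub instead of Google Drive.
# repo_id and filename are placeholders, not the actual published names.
import open_clip
from huggingface_hub import hf_hub_download

ckpt = hf_hub_download(repo_id="<user>/daclip-uir", filename="daclip_ViT-B-32.pt")
model, preprocess = open_clip.create_model_from_pretrained("daclip_ViT-B-32", pretrained=ckpt)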

About uncompleted data

The uncompleted dataset has 30,000 images. Which images are used for training and which for testing? Is there a detailed ID list?

Test a single image

Hi, how should I open the URL address?
The output is:
(DA-clip) root@autodl-container-4d6411b93c-2cb1d3b0:~/autodl-tmp/daclip-uir-main/daclip-uir-main/universal-image-restoration/config/daclip-sde# python app.py
export CUDA_VISIBLE_DEVICES=0
OrderedDict([('name', 'Test'), ('mode', 'LQGT'), ('dataroot_GT', 'datasets/universal/deg_type/GT'), ('dataroot_LQ', 'datasets/universal/deg_type/LQ')])
Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().

size mismatch between the provided pretrained model and the current model

raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(

RuntimeError: Error(s) in loading state_dict for ConditionalUNet:
size mismatch for downs.3.3.weight: copying a param with shape torch.Size([512, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
size mismatch for ups.0.0.mlp.1.weight: copying a param with shape torch.Size([1024, 256]) from checkpoint, the shape in current model is torch.Size([512, 256]).
size mismatch for ups.0.0.mlp.1.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for ups.0.0.block1.proj.weight: copying a param with shape torch.Size([512, 768, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 512, 3, 3]).
size mismatch for ups.0.0.block2.proj.weight: copying a param with shape torch.Size([512, 512, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
size mismatch for ups.0.0.res_conv.weight: copying a param with shape torch.Size([512, 768, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 512, 1, 1]).
size mismatch for ups.0.1.mlp.1.weight: copying a param with shape torch.Size([1024, 256]) from checkpoint, the shape in current model is torch.Size([512, 256]).
size mismatch for ups.0.1.mlp.1.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for ups.0.1.block1.proj.weight: copying a param with shape torch.Size([512, 768, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 512, 3, 3]).
size mismatch for ups.0.1.block2.proj.weight: copying a param with shape torch.Size([512, 512, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
size mismatch for ups.0.1.res_conv.weight: copying a param with shape torch.Size([512, 768, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 512, 1, 1]).
size mismatch for ups.0.2.fn.fn.norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for ups.0.2.fn.fn.norm.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for ups.0.2.fn.fn.proj_in.weight: copying a param with shape torch.Size([512, 512, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 256, 1, 1]).
size mismatch for ups.0.2.fn.fn.proj_in.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for ups.0.2.fn.fn.transformer_blocks.0.attn1.to_q.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 256]).
size mismatch for ups.0.2.fn.fn.transformer_blocks.0.attn1.to_k.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 256]).
size mismatch for ups.0.2.fn.fn.transformer_blocks.0.attn1.to_v.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 256]).
size mismatch for ups.0.2.fn.fn.transformer_blocks.0.attn1.to_out.0.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 256]).

How do you integrate DA-CLIP into NAFNet?

Hi, I really appreciate your open-source work; it gives us great insights.

I noticed that page 7 of your paper mentions: "Moreover, we further integrate DA-CLIP into an MSE-based network NAFNet (Chen et al., 2022) as a variant of our method." May I know how you implement this, since there are no cross-attention modules in NAFNet?

Thank you so much, and I wish you a bright future on your research path.

Snowy train set selection

Hello, in order to reproduce the training process, I would like to ask how you selected the 1,872 training images and 601 test images from the 100k snowy pictures. It would be great if you could provide a sample list. Thank you again for sharing your excellent research results.

Best wishes

Degra_loss during training on customized datasets

Hi! First, many thanks for your outstanding and inspiring work!

When I tried to train DA-CLIP on my own datasets, the contrastive loss decreased normally and converged to ~0.15. However, the degradation loss decreased only in the first 2 epochs and then stayed around ~5.7, which is relatively large compared to the contrastive loss.

Did this also happen in your training? I would appreciate it if you could share your loss curves so that I can compare them with mine.

Questions about combining NAFNet with DA-CLIP and about the paper

Hi, I have read the paper and have some questions about how NAFNet and DA-CLIP are combined, and about the paper's content:
1. Following the answers in earlier issues, I found codes/config/deraining/models/modules/DenoisingNAFNet_arch.py, but I don't see where image_context or degra_context is used inside the ConditionalNAFNet class, so I still don't understand how they are combined.
2. Do the degradation-specific restoration experiments in the paper use the prompt embedding module, or only cross-attention?

Thank you very much!

Cannot get decent results on unseen images

Hi,

I am testing unseen images with the pretrained model daclip_ViT-B-32.pt and it seems the model is not working well. There are no or only very slight changes in the images. As an example, here are the original image and the output image, respectively.
[Original and output images attached]

Is there a problem with the pretrained model, or is this normal behavior on unseen images due to the limitations of its training dataset?

Thanks.

csv

(daclip) root@autodl-container-fe1f429743-4d498b12:~# python /root/autodl-fs/daclip-uir-main/scripts/generate_captions.py
Loading caption model blip-large...
Loading CLIP model ViT-L-14/openai...
Traceback (most recent call last):
File "/root/autodl-fs/daclip-uir-main/scripts/generate_captions.py", line 72, in
ci = Interrogator(Config(clip_model_name="ViT-L-14/openai",
File "/root/miniconda3/envs/daclip/lib/python3.10/site-packages/clip_interrogator/clip_interrogator.py", line 72, in init
self.load_clip_model()
File "/root/miniconda3/envs/daclip/lib/python3.10/site-packages/clip_interrogator/clip_interrogator.py", line 106, in load_clip_model
self.clip_model, _, self.clip_preprocess = open_clip.create_model_and_transforms(
File "/root/miniconda3/envs/daclip/lib/python3.10/site-packages/open_clip/factory.py", line 384, in create_model_and_transforms
model = create_model(
File "/root/miniconda3/envs/daclip/lib/python3.10/site-packages/open_clip/factory.py", line 290, in create_model
load_checkpoint(model, checkpoint_path)
File "/root/miniconda3/envs/daclip/lib/python3.10/site-packages/open_clip/factory.py", line 161, in load_checkpoint
incompatible_keys = model.load_state_dict(state_dict, strict=strict)
File "/root/miniconda3/envs/daclip/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2153, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for CLIP:
(I get this error while generating the CSV file.)

Rain100H dataset

Hi, would you please check the link for the Rain100H dataset? The current link appears to be wrong.

About the Snowy dataset

Downloading from the link for the Snowy train dataset, there are 50,000 images under the gt folder. Where can I get the "subset" of 1,872 images used for training?

Some questions about your paper

Hi, sorry for the disturbance again. I have some ambiguous points after reading your paper:

  1. Do you train the CLIP controller and the restoration model separately, or at the same time?
  2. I saw you introduce a learnable prompt at this line, which is smart. However, I notice you incorporate prompt_embedding via t = t + prompt_embedding; my question is why you integrate the degradation type into the time step instead of using cross-attention, like x = attn(x, context=image_context).
  3. NAFNet has no time step; how did you integrate prompt_embedding into NAFNet?

I couldn't find the answers in your paper (or I missed them); sorry for interrupting you. Thank you for your great work.

Cannot Generate Clean Captions With BLIP

AssertionError: /content/drive/MyDrive/datasets/universal/train/uncompleted/GT is not a valid directory
There doesn't seem to be a GT directory in the dataset you provided; what should I do?

How to use the latest robust model

Hi, thank you very much for your research; it is very interesting.
I was wondering how to use the latest model, as the old code doesn't work with it:

checkpoint = 'pretrained/daclip_ViT-B-32_mix.pt'
model, preprocess = open_clip.create_model_from_pretrained('daclip_ViT-B-32', pretrained=checkpoint)

It cannot load the model; it gives an error.
Thank you!

How to use CPU/MPS?

Hey there! I'm on an Apple M2 Mac.
How can I use the MPS/CPU device only?

When I run the example, I get this error:

raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
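
Not an official fix, but as a hedged sketch: the README classification snippet itself only needs a device choice and no CUDA autocast, so something like the following should run on MPS or CPU (other scripts in the repo may still assume CUDA elsewhere):

# Hedged sketch: run the DA-CLIP classification example on MPS or CPU.
import torch

device = "mps" if torch.backends.mps.is_available() else "cpu"
model = model.to(device)
image, text = image.to(device), text.to(device)

with torch.no_grad():  # no torch.cuda.amp.autocast() on non-CUDA devices
    text_features = model.encode_text(text)
    image_features, degra_features = model.encode_image(image, control=True)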

Some questions about the paper

Thank you for your excellent work. I have a question about the content of your paper: what does "gradient flow" (in Fig. 2a) mean, and what does it do? I can't find it explained elsewhere in the paper.

How can I train with my own grayscale dataset? 🥲

My dataset consists of (1, 256, 256) grayscale images. When training DA-CLIP, I get the following error. 🥲🥲🥲

I changed the in_channel of conv1 from 3 to 1 at the path shown below.
[Screenshots attached]

The resulting error is:
RuntimeError: Error(s) in loading state_dict for CLIP: size mismatch for visual.conv1.weight: copying a param with shape torch.Size([768, 3, 32, 32]) from checkpoint, the shape in current model is torch.Size([768, 1, 32, 32]).

How can I solve this?
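
Not an official answer; two common workarounds, sketched under our own assumptions, are to replicate the grayscale channel to RGB so the pretrained conv1 still fits, or to average the checkpoint's conv1 weight over its input-channel dimension before loading a 1-channel model:

# Hedged sketch (not repo code): adapt the grayscale data or the checkpoint.
import torch

# Option 1: keep conv1 at 3 input channels and replicate the single channel.
def gray_to_rgb(x):           # x: (1, H, W) tensor
    return x.repeat(3, 1, 1)  # (3, H, W), matches the pretrained RGB model

# Option 2: average the pretrained conv1 weight over its input channels.
ckpt = torch.load("pretrained/daclip_ViT-B-32.pt", map_location="cpu")
state = ckpt.get("state_dict", ckpt)          # unwrap if nested (assumption)
w = state["visual.conv1.weight"]              # (768, 3, 32, 32) per the error above
state["visual.conv1.weight"] = w.mean(dim=1, keepdim=True)  # -> (768, 1, 32, 32)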

Bug in daclip-uir/universal-image-restoration/config/wild-ir/test.py?

Change:

# clip_model, _preprocess = clip.load("ViT-B/32", device=device)
if opt['path']['daclip'] is not None:
    clip_model, preprocess = open_clip.create_model_from_pretrained('daclip_ViT-B-32', pretrained=opt['path']['daclip'])
else:
    clip_model, _, preprocess = open_clip.create_model_and_transforms('ViT-B-32', pretrained='laion2b_s34b_b79k')
tokenizer = open_clip.get_tokenizer('ViT-B-32')
clip_model = clip_model.to(device)

to:

# clip_model, _preprocess = clip.load("ViT-B/32", device=device)
if opt['path']['daclip'] is not None:
    clip_model, preprocess = open_clip.create_model_from_pretrained('daclip_ViT-L-14', pretrained=opt['path']['daclip'])
else:
    clip_model, _, preprocess = open_clip.create_model_and_transforms('ViT-B-32', pretrained='laion2b_s34b_b79k')
tokenizer = open_clip.get_tokenizer('ViT-L-14')
clip_model = clip_model.to(device)

Cannot install on WSL2

On Windows WSL2, I follow these instructions: https://github.com/Algolzw/daclip-uir#install
The first 3 steps work as expected:

python3 -m venv .env
source .env/bin/activate
pip install -U pip

When I run the 4th step, pip install -r requirements.txt, I get this error:
ERROR: Could not open requirements file: [Errno 2] No such file or directory: 'requirements.txt'

Dependencies / Requirements / Windows

Hi... Firstly, it's fantastic work with impressive results! The Google Colab Gradio demo is very easy to use, thank you!

I'd be very grateful to know, since I'm on Windows, whether you have any plans to slim down the dependencies a little. It's easy enough to install the CUDA toolkit separately, drop triton, and remove a few version requirements, but if you could slim it down a bit I for one would be very grateful!

task_name

Hello! What does task_name refer to in print(f"Task: {task_name}: {degradations[index]} - {text_probs[0][index]}")?

Seems to work well only on the test images

I'm using Google Colab with Gradio to test the project, and it seems to do almost nothing on my custom images (I tried different degradations in my examples); it only adds some noise. It works very well only on the test images provided with the project. Is there possibly a mistake in the app.py script or elsewhere in the pipeline for testing custom images, or is the pretrained model simply that limited?

Not able to test on an image

I get this error when I run the code given in the README for testing on an image:

Traceback (most recent call last):
File "run.py", line 6, in
model, preprocess = open_clip.create_model_from_pretrained('daclip_ViT-B-32', pretrained=checkpoint)
File "/home/vinayak/.local/lib/python3.8/site-packages/open_clip/factory.py", line 437, in create_model_from_pretrained
model = create_model(
File "/home/vinayak/.local/lib/python3.8/site-packages/open_clip/factory.py", line 215, in create_model
raise RuntimeError(f'Model config for {model_name} not found.')
RuntimeError: Model config for daclip_ViT-B-32 not found.

Please let us know what needs to be done.
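
A hedged guess at the cause: the plain PyPI open_clip package does not know the daclip_ViT-B-32 config, while the repo ships its own modified open_clip under universal-image-restoration/ (the DA-CLIP Usage section runs the snippet from inside that directory). One workaround sketch, with the path as a placeholder:

# Hedged sketch: make Python import the repo's bundled open_clip (which
# registers daclip_ViT-B-32) instead of the PyPI package.
import sys
sys.path.insert(0, "/path/to/daclip-uir/universal-image-restoration")  # adjust this path

import open_clip  # now resolves to the repo's copy

checkpoint = "pretrained/daclip_ViT-B-32.pt"
model, preprocess = open_clip.create_model_from_pretrained("daclip_ViT-B-32", pretrained=checkpoint)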
