
Home Page: https://nithin-gk.github.io/projectpages/Multidiff/index.html

License: Apache License 2.0



Unite and Conquer (CVPR 2023)

This repository contains the implementation of the paper:

Unite and Conquer: Plug & Play Multi-Modal Synthesis using Diffusion Models
Nithin Gopalakrishnan Nair, Wele Gedara Chaminda Bandara, Vishal M. Patel

IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023

From VIU Lab, Johns Hopkins University

[Paper] | [Project Page] | [Video]

Keywords: Multimodal Generation, Semantic Face Generation, Multimodal Face generation, Text to image generation, Diffusion based Face Generation, Text to Image Generation, Text to Face Generation

Applications

We propose Unite and Conquer, which lets users combine multiple modalities for face and generic image generation. (a) Face generation: given multi-modal controls, we generate high-quality images consistent with the input conditions. (b) Text- and class-guided generation: users can combine generic scene descriptions with ImageNet classes to generate images of unlikely scenarios.


Contributions:

  • We propose a diffusion-based solution for image generation in the presence of multimodal priors.
  • We remove the need for paired multimodal training data by building on the flexible, compositional nature of diffusion models.
  • Unlike existing methods, our method scales easily and can incorporate off-the-shelf models to add further constraints.
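The plug-and-play combination of off-the-shelf models can be sketched as a multi-condition generalization of classifier-free guidance: each conditional model contributes a weighted guidance direction relative to the unconditional prediction. A minimal NumPy sketch (illustrative only; the weights and the exact combination rule in the paper may differ):

```python
import numpy as np

def combine_noise_predictions(eps_uncond, eps_conds, weights):
    """Compose noise predictions from several conditional diffusion models
    by adding weighted guidance directions to the unconditional prediction.
    Illustrative sketch only; not the paper's exact combination rule."""
    eps = eps_uncond.astype(float).copy()
    for eps_c, w in zip(eps_conds, weights):
        eps += w * (eps_c - eps_uncond)
    return eps

# With a single condition and weight 1.0 this reduces to that model's prediction.
eps_u = np.zeros(4)
eps_c = np.ones(4)
print(combine_noise_predictions(eps_u, [eps_c], [1.0]))  # [1. 1. 1. 1.]
```

Because each modality enters only through its own guidance term, adding another off-the-shelf model means appending one more entry to the lists, with no retraining.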

Environment setup

conda env create -f environment.yml

Pretrained models:

Please download the pretrained models using

python utils/download_models.py

Training On custom datasets

Data Preparation- Training

You can train on any custom dataset by arranging the data in the following format. Note that you may provide any one modality or several.

    ├── data 
    |   ├── images
    |   ├── hair_masks 
    |   └── face_masks
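A quick way to sanity-check the layout before launching training is to verify which modality folders are present. A small helper sketch (the helper name is ours; folder names match the tree above, and any subset is valid):

```python
import os
import tempfile

MODALITY_DIRS = ["images", "hair_masks", "face_masks"]

def present_modalities(root):
    """Return the modality subfolders that exist under root/data."""
    data_dir = os.path.join(root, "data")
    return [d for d in MODALITY_DIRS if os.path.isdir(os.path.join(data_dir, d))]

# Example: a dataset providing only images and face masks.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "data", "images"))
os.makedirs(os.path.join(root, "data", "face_masks"))
print(present_modalities(root))  # ['images', 'face_masks']
```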

The training command for training with semantic masks and text as modalities is:

export PYTHONPATH=$PYTHONPATH:$(pwd)
MODEL_FLAGS="--attention_resolutions 32,16,8 --class_cond True --diffusion_steps 1000 --large_size 256  --small_size 256 --learn_sigma True --noise_schedule linear --num_channels 192 --num_head_channels 64 --num_res_blocks 2 --resblock_updown True --use_fp16 True --use_scale_shift_norm True"
CUDA_VISIBLE_DEVICES="0" NCCL_P2P_DISABLE=1  torchrun --nproc_per_node=1 --master_port=4326 scripts/train.py $MODEL_FLAGS
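To scale to multiple GPUs, the same launcher applies: `torchrun` spawns one process per device, so only the visible-device list and `--nproc_per_node` change (the two-GPU device IDs below are an assumption about your machine):

```shell
# Example: 2-GPU run of the same training command (assumes GPUs 0 and 1 exist)
export PYTHONPATH=$PYTHONPATH:$(pwd)
CUDA_VISIBLE_DEVICES="0,1" NCCL_P2P_DISABLE=1 torchrun --nproc_per_node=2 --master_port=4326 scripts/train.py $MODEL_FLAGS
```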

Testing On custom datasets

Data Preparation- Testing

You can test on any custom dataset by arranging the data in the following format. Note that you may provide any one modality or several.

    ├── data 
    |   ├── face_map 
    |   ├── hair_map  
    |   ├── sketches
    |   └── text.txt

text.txt should follow the format

image_name : Text caption

An example would be:

0.jpg : This person wears eyeglasses
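When assembling `text.txt` programmatically, a natural reading of the format is that the separator is the first `:` on each line, so captions themselves may contain colons. A small parsing sketch (the helper name is ours, not part of the repo):

```python
import tempfile

def parse_captions(path):
    """Parse lines of the form 'image_name : Text caption' into a dict,
    splitting on the first ':' only so captions may contain colons."""
    captions = {}
    with open(path) as f:
        for line in f:
            if ":" not in line:
                continue  # skip blank or malformed lines
            name, caption = line.split(":", 1)
            captions[name.strip()] = caption.strip()
    return captions

# Example with the caption from above.
tmp = tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False)
tmp.write("0.jpg : This person wears eyeglasses\n")
tmp.close()
print(parse_captions(tmp.name))  # {'0.jpg': 'This person wears eyeglasses'}
```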

Testing code

Test on custom data using:

python test_multimodal_face.py --data_path /path/to/data --face_map --hair_map --text

Set only the flags for the modalities you want to condition on.
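For example, conditioning on text alone, or on face and hair maps without text, is just a matter of which flags you pass. A hypothetical convenience helper to compose the command (the script itself only reads the flags it receives):

```python
def build_test_command(data_path, modalities):
    """Assemble the test invocation from the chosen modality flags.
    Hypothetical helper for illustration; not part of the repo."""
    flags = " ".join(f"--{m}" for m in modalities)
    return f"python test_multimodal_face.py --data_path {data_path} {flags}"

print(build_test_command("/path/to/data", ["face_map", "hair_map", "text"]))
# python test_multimodal_face.py --data_path /path/to/data --face_map --hair_map --text
```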

Instructions for Interactive Demo

Demo

python gradio_set.py

Once you run the command above, the models will be downloaded automatically to your directory. When the download finishes, you will get a local demo link that you can use to try the generation models yourself. More details about the internal components of the code will be uploaded shortly.

Citation

If you use our work, please cite:
@inproceedings{nair2023unite,
  title={Unite and Conquer: Plug \& Play Multi-Modal Synthesis Using Diffusion Models},
  author={Nair, Nithin Gopalakrishnan and Bandara, Wele Gedara Chaminda and Patel, Vishal M},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={6070--6079},
  year={2023}
}

This code builds on:

https://github.com/openai/guided-diffusion
https://github.com/openai/glide-text2im
https://github.com/FacePerceiver/facer


uniteandconquer's Issues

The form of 'text.txt'

When I use my custom dataset, is there a required format for 'text.txt'?

Thanks in advance :)

How to create our own face and hair map?

Dear author, thank you very much for your code. The facial segmentation masks come from the CelebA-HQ-Mask dataset, but you only provide part of the data. Could you please provide the code that generates the face and hair maps?

having sketch input only

I want to ask whether a sketch image alone is a valid input. Currently this case raises an error, which I have fixed locally; if you consider it a bug, I would be happy to commit the fix. If sketch-only input is not meant to be supported, apologies for the mistake.

Training Code

Hello,
Are you planning to release the instructions for training code?
Thank you
