megvii-research / hidiffusion


[ECCV 2024] HiDiffusion: Increases the resolution and speed of your diffusion model by only adding a single line of code!

License: Apache License 2.0

Languages: Jupyter Notebook 64.78%, Python 35.22%

hidiffusion's Introduction

💡 HiDiffusion: Unlocking Higher-Resolution Creativity and Efficiency in Pretrained Diffusion Models

Shen Zhang, Zhaowei Chen, Zhenyu Zhao, Yuhao Chen, Yao Tang, Jiajun Liang


(Select HiDiffusion samples for various diffusion models, resolutions, and aspect ratios.)

👉 Why HiDiffusion

  • A training-free method that increases the resolution and speed of pretrained diffusion models.
  • Designed as a plug-and-play implementation. It can be integrated into diffusion pipelines by adding only a single line of code (see the sketch after this list).
  • Supports various tasks, including text-to-image, image-to-image, and inpainting.
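
A minimal sketch of that one-liner (pipe stands for any supported diffusers pipeline built beforehand; full runnable examples follow in the Usage section):

from hidiffusion import apply_hidiffusion

# pipe is a previously constructed, supported diffusers pipeline.
apply_hidiffusion(pipe)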

(Faster, with better image details.)


(2K results of ControlNet and inpainting tasks.)

🔥 Update

  • 2024.7.3 - 💥 Accepted by ECCV 2024!

  • 2024.6.19 - 💥 Integrated into OpenBayes, see the demo. Thanks to the OpenBayes team!

  • 2024.6.16 - 💥 Support PyTorch 2.x.

  • 2024.6.16 - 💥 Fixed the non-square generation issue. HiDiffusion now supports more image sizes and aspect ratios.

  • 2024.5.7 - 💥 Support the image-to-image task, see here.

  • 2024.4.16 - 💥 Release source code.

📢 Supported Models

Note: HiDiffusion also supports downstream diffusion models based on these repositories, such as Ghibli-Diffusion, Playground, etc.

💣 Supported Tasks

  • ✅ Text-to-image
  • ✅ ControlNet, including text-to-image and image-to-image
  • ✅ Inpainting

🔎 Main Requirements

This repository is tested with:

  • Python==3.8
  • torch>=1.13.1
  • diffusers>=0.25.0
  • transformers
  • accelerate
  • xformers
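
One way to install these (a sketch; the exact torch and xformers builds depend on your CUDA setup):

pip3 install "torch>=1.13.1" "diffusers>=0.25.0" transformers accelerate xformers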

🔑 Install HiDiffusion

After installing the packages in the main requirements, install HiDiffusion:

pip3 install hidiffusion

Installing from source

Alternatively, you can install from the GitHub source. Clone the repository and install:

git clone https://github.com/megvii-model/HiDiffusion.git
cd HiDiffusion
python3 setup.py install
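
If you plan to modify the code, an editable pip install should also work (an assumption: the repository uses a standard setuptools layout):

pip3 install -e .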

🚀 Usage

Generating outputs with HiDiffusion on top of 🤗 diffusers is straightforward: you just need to add a single line of code.

Text-to-image generation

Stable Diffusion XL

from hidiffusion import apply_hidiffusion, remove_hidiffusion
from diffusers import StableDiffusionXLPipeline, DDIMScheduler
import torch
pretrain_model = "stabilityai/stable-diffusion-xl-base-1.0"
scheduler = DDIMScheduler.from_pretrained(pretrain_model, subfolder="scheduler")
pipe = StableDiffusionXLPipeline.from_pretrained(pretrain_model, scheduler=scheduler, torch_dtype=torch.float16, variant="fp16").to("cuda")

# # Optional. enable_xformers_memory_efficient_attention can save memory usage and increase inference speed. enable_model_cpu_offload and enable_vae_tiling can save memory usage.
# pipe.enable_xformers_memory_efficient_attention()
# pipe.enable_model_cpu_offload()
# pipe.enable_vae_tiling()

# Apply hidiffusion with a single line of code.
apply_hidiffusion(pipe)

prompt = "Standing tall amidst the ruins, a stone golem awakens, vines and flowers sprouting from the crevices in its body."
negative_prompt = "blurry, ugly, duplicate, poorly drawn face, deformed, mosaic, artifacts, bad limbs"
image = pipe(prompt, guidance_scale=7.5, height=2048, width=2048, eta=1.0, negative_prompt=negative_prompt).images[0]
image.save(f"golem.jpg")
Output:

Set height=4096 and width=4096 to get output at 4096x4096 resolution.
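
For example (a sketch of the same call; at this size the optional memory savers above, such as enable_vae_tiling, are likely necessary):

image = pipe(prompt, guidance_scale=7.5, height=4096, width=4096, eta=1.0, negative_prompt=negative_prompt).images[0]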

Stable Diffusion XL Turbo

from hidiffusion import apply_hidiffusion, remove_hidiffusion
from diffusers import AutoPipelineForText2Image
import torch
pretrain_model = "stabilityai/sdxl-turbo"
pipe = AutoPipelineForText2Image.from_pretrained(pretrain_model, torch_dtype=torch.float16, variant="fp16").to('cuda')

# # Optional. enable_xformers_memory_efficient_attention can save memory usage and increase inference speed. enable_model_cpu_offload and enable_vae_tiling can save memory usage.
# pipe.enable_xformers_memory_efficient_attention()
# pipe.enable_model_cpu_offload()
# pipe.enable_vae_tiling()

# Apply hidiffusion with a single line of code.
apply_hidiffusion(pipe)

prompt = "In the depths of a mystical forest, a robotic owl with night vision lenses for eyes watches over the nocturnal creatures."
image = pipe(prompt, num_inference_steps=4, height=1024, width=1024, guidance_scale=0.0).images[0]
image.save(f"./owl.jpg")
Output:

Stable Diffusion v2-1

from hidiffusion import apply_hidiffusion, remove_hidiffusion
from diffusers import DiffusionPipeline, DDIMScheduler
import torch
pretrain_model = "stabilityai/stable-diffusion-2-1-base"
scheduler = DDIMScheduler.from_pretrained(pretrain_model, subfolder="scheduler")
pipe = DiffusionPipeline.from_pretrained(pretrain_model, scheduler=scheduler, torch_dtype=torch.float16).to("cuda")

# # Optional. enable_xformers_memory_efficient_attention can save memory usage and increase inference speed. enable_model_cpu_offload and enable_vae_tiling can save memory usage.
# pipe.enable_xformers_memory_efficient_attention()
# pipe.enable_model_cpu_offload()
# pipe.enable_vae_tiling()

# Apply hidiffusion with a single line of code.
apply_hidiffusion(pipe)

prompt = "An adorable happy brown border collie sitting on a bed, high detail."
negative_prompt = "ugly, tiling, out of frame, poorly drawn face, extra limbs, disfigured, deformed, body out of frame, blurry, bad anatomy, blurred, artifacts, bad proportions."
image = pipe(prompt, guidance_scale=7.5, height=1024, width=1024, eta=1.0, negative_prompt=negative_prompt).images[0]
image.save(f"collie.jpg")
Output:

Set height=2048 and width=2048 to get output at 2048x2048 resolution.

Stable Diffusion v1-5

from hidiffusion import apply_hidiffusion, remove_hidiffusion
from diffusers import DiffusionPipeline, DDIMScheduler
import torch
pretrain_model = "runwayml/stable-diffusion-v1-5"
scheduler = DDIMScheduler.from_pretrained(pretrain_model, subfolder="scheduler")
pipe = DiffusionPipeline.from_pretrained(pretrain_model, scheduler=scheduler, torch_dtype=torch.float16).to("cuda")

# # Optional. enable_xformers_memory_efficient_attention can save memory usage and increase inference speed. enable_model_cpu_offload and enable_vae_tiling can save memory usage.
# pipe.enable_xformers_memory_efficient_attention()
# pipe.enable_model_cpu_offload()
# pipe.enable_vae_tiling()

# Apply hidiffusion with a single line of code.
apply_hidiffusion(pipe)

prompt = "thick strokes, bright colors, an exotic fox, cute, chibi kawaii. detailed fur, hyperdetailed , big reflective eyes, fairytale, artstation,centered composition, perfect composition, centered, vibrant colors, muted colors, high detailed, 8k."
negative_prompt = "ugly, tiling, poorly drawn face, out of frame, disfigured, deformed, blurry, bad anatomy, blurred."
image = pipe(prompt, guidance_scale=7.5, height=1024, width=1024, eta=1.0, negative_prompt=negative_prompt).images[0]
image.save(f"fox.jpg")
Output:

Set height=2048 and width=2048 to get output at 2048x2048 resolution.

Remove HiDiffusion

If you want to remove HiDiffusion, simply use remove_hidiffusion(pipe).
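
A minimal sketch, reusing any pipeline from the examples above:

from hidiffusion import remove_hidiffusion

# Restore the original pipeline behavior.
remove_hidiffusion(pipe)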

ControlNet

Text-to-image generation

from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel, DDIMScheduler
import numpy as np
import torch
import cv2
from PIL import Image
from hidiffusion import apply_hidiffusion, remove_hidiffusion

# load Yoshua_Bengio.jpg from the assets folder.
path = './assets/Yoshua_Bengio.jpg'
image = Image.open(path)
# get canny image
image = np.array(image)
image = cv2.Canny(image, 100, 200)
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
canny_image = Image.fromarray(image)

# initialize the models and pipeline
controlnet_conditioning_scale = 0.5  # recommended for good generalization
controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16, variant="fp16"
)
scheduler = DDIMScheduler.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", subfolder="scheduler")
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, torch_dtype=torch.float16,
    scheduler=scheduler
)

# Apply hidiffusion with a single line of code.
apply_hidiffusion(pipe)

pipe.enable_model_cpu_offload()
pipe.enable_xformers_memory_efficient_attention()

prompt = "The Joker, high face detail, high detail, muted color, 8k"
negative_prompt = "blurry, ugly, duplicate, poorly drawn, deformed, mosaic."

image = pipe(
    prompt, controlnet_conditioning_scale=controlnet_conditioning_scale, image=canny_image,
    height=2048, width=2048, guidance_scale=7.5, negative_prompt=negative_prompt, eta=1.0
).images[0]

image.save('joker.jpg')
Output:

Image-to-image generation

import torch
import numpy as np
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionXLControlNetImg2ImgPipeline, DDIMScheduler
from hidiffusion import apply_hidiffusion, remove_hidiffusion
import cv2 

controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16, variant="fp16"
).to("cuda")
scheduler = DDIMScheduler.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", subfolder="scheduler")

pipe = StableDiffusionXLControlNetImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    scheduler=scheduler,
    torch_dtype=torch.float16,
).to("cuda")

# Apply hidiffusion with a single line of code.
apply_hidiffusion(pipe)

pipe.enable_model_cpu_offload()
pipe.enable_xformers_memory_efficient_attention()

path = './assets/lara.jpeg'
ori_image = Image.open(path)
# get canny image
image = np.array(ori_image)
image = cv2.Canny(image, 50, 120)
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
canny_image = Image.fromarray(image)

controlnet_conditioning_scale = 0.5  # recommended for good generalization
prompt = "Lara Croft with brown hair, and is wearing a tank top, a brown backpack. The room is dark and has an old-fashioned decor with a patterned floor and a wall featuring a design with arches and a dark area on the right side, muted color, high detail, 8k high definition award winning"
negative_prompt = "underexposed, poorly drawn hands, duplicate hands, overexposed, bad art, beginner, amateur, abstract, disfigured, deformed, close up, weird colors, watermark"

image = pipe(prompt,
    image=ori_image,
    control_image=canny_image,
    height=1536,
    width=2048,
    strength=0.99,
    num_inference_steps=50,
    controlnet_conditioning_scale=controlnet_conditioning_scale,
    guidance_scale=12.5,
    negative_prompt=negative_prompt,
    eta=1.0
).images[0]

image.save("lara.jpg")
Output:

Inpainting

import torch
from diffusers import AutoPipelineForInpainting, DDIMScheduler
from diffusers.utils import load_image
from hidiffusion import apply_hidiffusion, remove_hidiffusion
from PIL import Image

scheduler = DDIMScheduler.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", subfolder="scheduler")
pipeline = AutoPipelineForInpainting.from_pretrained(
    "diffusers/stable-diffusion-xl-1.0-inpainting-0.1", torch_dtype=torch.float16, variant="fp16", 
    scheduler=scheduler
)

# Apply hidiffusion with a single line of code.
apply_hidiffusion(pipeline)

pipeline.enable_model_cpu_offload()
# remove following line if xFormers is not installed
pipeline.enable_xformers_memory_efficient_attention()

# load base and mask image
img_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl-text2img.png"
init_image = load_image(img_url)
# load mask_image.png from the assets folder.
mask_image = Image.open("./assets/mask_image.png")

prompt =  "A steampunk explorer in a leather aviator cap and goggles, with a brass telescope in hand, stands amidst towering ancient trees, their roots entwined with intricate gears and pipes."

negative_prompt = "blurry, ugly, duplicate, poorly drawn, deformed, mosaic"
image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, height=2048, width=2048, strength=0.85, guidance_scale=12.5, negative_prompt=negative_prompt, eta=1.0).images[0]
image.save('steampunk_explorer.jpg')
Output:

Integration into downstream models

HiDiffusion also works with downstream models built on the supported base models, such as Ghibli-Diffusion, Playground, etc.

Ghibli-Diffusion

from diffusers import StableDiffusionPipeline
import torch
from hidiffusion import apply_hidiffusion, remove_hidiffusion

model_id = "nitrosocke/Ghibli-Diffusion"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Apply hidiffusion with a single line of code.
apply_hidiffusion(pipe)

prompt = "ghibli style magical princess with golden hair"
negative_prompt="blurry, ugly, duplicate, poorly drawn face, deformed, mosaic, artifacts, bad limbs"
image = pipe(prompt, height=1024, width=1024, eta=1.0, negative_prompt=negative_prompt).images[0]

image.save("./magical_princess.jpg")
Output:

Playground

from diffusers import DiffusionPipeline
import torch
from hidiffusion import apply_hidiffusion, remove_hidiffusion

pipe = DiffusionPipeline.from_pretrained(
    "playgroundai/playground-v2-1024px-aesthetic",
    torch_dtype=torch.float16,
    use_safetensors=True,
    add_watermarker=False,
    variant="fp16"
)
pipe.to("cuda")
pipe.enable_xformers_memory_efficient_attention()

# Apply hidiffusion with a single line of code.
apply_hidiffusion(pipe)

prompt = "The little girl riding a bike, in a beautiful anime scene by Hayao Miyazaki: a snowy Tokyo city with massive Miyazaki clouds floating in the blue sky, enchanting snowscapes of the city with bright sunlight, Miyazaki's landscape imagery, Japanese art"
negative_prompt="blurry, ugly, duplicate, poorly drawn, deformed, mosaic"
image = pipe(prompt=prompt, guidance_scale=3.0, height=2048, width=2048, negative_prompt=negative_prompt).images[0]
image.save('girl.jpg')

Note: You may vary the guidance scale between 3.0 and 5.0 and design an appropriate negative prompt to get satisfactory results.

Output:

๐Ÿ™ Acknowledgements

This codebase is based on tomesd and diffusers. Thanks!

🎓 Citation

@article{zhang2023hidiffusion,
  title={HiDiffusion: Unlocking Higher-Resolution Creativity and Efficiency in Pretrained Diffusion Models},
  author={Zhang, Shen and Chen, Zhaowei and Zhao, Zhenyu and Chen, Yuhao and Tang, Yao and Liang, Jiajun},
  journal={arXiv preprint arXiv:2311.17528},
  year={2023}
}


hidiffusion's Issues

non-square aspect ratio results in failure

HiDiffusion works perfectly with square images; for example, using an SD 1.5 model at 720x720 or 1280x1280 is fine.
But setting the resolution to 720x1280 or 1280x720 results in an immediate failure:

/home/vlado/.local/lib/python3.11/site-packages/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py:978 in __call__
    # predict the noise residual
    noise_pred = self.unet(
        latent_model_input,
/home/vlado/.local/lib/python3.11/site-packages/torch/nn/modules/module.py:1532 in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
/home/vlado/.local/lib/python3.11/site-packages/torch/nn/modules/module.py:1582 in _call_impl
    result = forward_call(*args, **kwargs)
/home/vlado/.local/lib/python3.11/site-packages/diffusers/models/unets/unet_2d_condition.py:1281 in forward
    sample = upsample_block(
        hidden_states=sample,
/home/vlado/.local/lib/python3.11/site-packages/torch/nn/modules/module.py:1532 in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
/home/vlado/.local/lib/python3.11/site-packages/torch/nn/modules/module.py:1541 in _call_impl
    return forward_call(*args, **kwargs)
/home/vlado/.local/lib/python3.11/site-packages/diffusers/models/unets/unet_2d_blocks.py:2521 in forward
    hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1)
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 320 but got size 160 for tensor number 1 in the list.

non-square aspect ratio results in failure (although I have installed all the suitable versions)

I have installed all the packages, and the versions are the same as you show. But I still get this error.

(hi_diff) (base) [root@node58 HiDiffusion-main]# conda list

# packages in environment at /root/miniconda3/envs/hi_diff:
# Name    Version    Build    Channel
_libgcc_mutex 0.1 main https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
_openmp_mutex 5.1 1_gnu https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
accelerate 0.18.0 pypi_0 pypi
ca-certificates 2024.3.11 h06a4308_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
certifi 2024.2.2 pypi_0 pypi
charset-normalizer 3.3.2 pypi_0 pypi
diffusers 0.27.0 pypi_0 pypi
filelock 3.14.0 pypi_0 pypi
fsspec 2024.3.1 pypi_0 pypi
hidiffusion 0.1.6 pypi_0 pypi
huggingface-hub 0.22.2 pypi_0 pypi
idna 3.7 pypi_0 pypi
importlib-metadata 7.1.0 pypi_0 pypi
ld_impl_linux-64 2.38 h1181459_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libffi 3.4.4 h6a678d5_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libgcc-ng 11.2.0 h1234567_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libgomp 11.2.0 h1234567_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
libstdcxx-ng 11.2.0 h1234567_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
mypy-extensions 1.0.0 pypi_0 pypi
ncurses 6.4 h6a678d5_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
numpy 1.24.4 pypi_0 pypi
nvidia-cublas-cu11 11.10.3.66 pypi_0 pypi
nvidia-cuda-nvrtc-cu11 11.7.99 pypi_0 pypi
nvidia-cuda-runtime-cu11 11.7.99 pypi_0 pypi
nvidia-cudnn-cu11 8.5.0.96 pypi_0 pypi
openssl 3.0.13 h7f8727e_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
packaging 24.0 pypi_0 pypi
pillow 10.3.0 pypi_0 pypi
pip 23.3.1 py38h06a4308_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
psutil 5.9.8 pypi_0 pypi
pyre-extensions 0.0.23 pypi_0 pypi
python 3.8.19 h955ad1f_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
pyyaml 6.0.1 pypi_0 pypi
readline 8.2 h5eee18b_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
regex 2024.4.28 pypi_0 pypi
requests 2.31.0 pypi_0 pypi
safetensors 0.4.3 pypi_0 pypi
setuptools 68.2.2 py38h06a4308_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
sqlite 3.41.2 h5eee18b_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
tk 8.6.12 h1ccaba5_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
tokenizers 0.13.3 pypi_0 pypi
torch 1.13.1 pypi_0 pypi
tqdm 4.66.2 pypi_0 pypi
transformers 4.27.4 pypi_0 pypi
typing-extensions 4.11.0 pypi_0 pypi
typing-inspect 0.9.0 pypi_0 pypi
urllib3 2.2.1 pypi_0 pypi
wheel 0.41.2 py38h06a4308_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
xformers 0.0.16rc425 pypi_0 pypi
xz 5.4.6 h5eee18b_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
zipp 3.18.1 pypi_0 pypi
zlib 1.2.13 h5eee18b_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main

Error:

Traceback (most recent call last):
  File "test.py", line 53, in <module>
    image = pipe(prompt, guidance_scale=7.5, height=h, width=w, eta=1.0, generator=generator).images[0]
  File "/root/miniconda3/envs/hi_diff/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/root/miniconda3/envs/hi_diff/lib/python3.8/site-packages/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py", line 971, in __call__
    noise_pred = self.unet(
  File "/root/miniconda3/envs/hi_diff/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1212, in _call_impl
    result = forward_call(*input, **kwargs)
  File "/root/miniconda3/envs/hi_diff/lib/python3.8/site-packages/diffusers/models/unets/unet_2d_condition.py", line 1284, in forward
    sample = upsample_block(
  File "/root/miniconda3/envs/hi_diff/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/root/miniconda3/envs/hi_diff/lib/python3.8/site-packages/diffusers/models/unets/unet_2d_blocks.py", line 2512, in forward
    hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1)
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 16 but got size 15 for tensor number 1 in the list.

Training

I was just wondering whether you think your two interesting innovations, RAU-Net and MSW-MSA, would also have any benefit during fine-tuning. Do you think the model would train faster or generalize better, or need to be trained only on smaller crops and downsampled images rather than high-resolution images? If so, that could be quite a boost to training!

I skimmed the paper, but it focuses on the improvement during inference.

Problem at high resolutions using PuLID embeddings

I'm using a slightly modified version of the original PuLID. The change is to use a custom SDXL model, in my case Juggernaut-XL-v9. With a square resolution, everything works fine at low resolutions. However, when I tested the 2048x2048 resolution, the face literally blurred out of shape in the final image.
Unfortunately, I can't provide this image right now. But you can easily replicate it by taking my CLI version of PuLID and running it with the command python app.py --save_dir '{OUTPUT_DIR}/PuLID_output' --save_file_name '{filename}' --face_img '{face_img}' --h 2048 --w 2048.
Also, if you want to examine this code, the first thing you should look at is this file: https://github.com/self-destruction/PuLID_cli/blob/main/pulid/pipeline.py. As you can see, I have temporarily commented out the use of hidiffusion there.
Thanks for your work!

Can we add version support for diffusers 0.27.2?

Only the ControlNet pipeline produces the following error message (screenshot attached):
The bottom progress bar is from running with "apply_hidiffusion(pipe)" disabled, i.e. the regular ControlNet pipeline.

I fixed this issue.
If "apply_hidiffusion(pipe)" is placed below pipe.enable_xformers_memory_efficient_attention(), pipe.enable_model_cpu_offload(), and pipe.enable_vae_tiling(), using ControlNet results in tensor errors. If apply_hidiffusion(pipe) is placed above them, the module imports the missing pieces and it works; see the sketch below.
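
A sketch of the working order described above (assuming pipe is the ControlNet pipeline from the examples):

from hidiffusion import apply_hidiffusion

# Apply HiDiffusion first...
apply_hidiffusion(pipe)

# ...then enable the memory/speed optimizations.
pipe.enable_xformers_memory_efficient_attention()
pipe.enable_model_cpu_offload()
pipe.enable_vae_tiling()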

a little speed up

I tested SDXL (2048x2048) on an A100 40G. With HiDiffusion enabled, the end-to-end time is 20 s;
with HiDiffusion disabled, the end-to-end time is 22 s.

I disabled HiDiffusion by commenting out these two lines in hidiffusion.py:

norm_hidden_states = window_partition(norm_hidden_states, widow_size, shift_size, H, W)
attn_output = window_reverse(attn_output, widow_size, H, W, shift_size)

Looking forward to it! When will there be a webui interface?

Today's A1111 is getting more and more bloated, and ComfyUI is too complex (very unfriendly for designers). Forge has made some efforts, but still can't escape A1111's original logic. When I saw that HiDiffusion can make SD 1.5 generate 2048 images that are both fast and sharp, I was thrilled! But the code still deters non-developers. When will there be a webui interface? Really looking forward to it!

hidiffusion.py fixes

You define name_or_path for the case where model.name_or_path is not known, but then you reference model.name_or_path anyway, which breaks it for non-default models.

Line 1831:

'is_inpainting_task': 'inpainting' in model.name_or_path, 'is_playground': 'playground' in model.name_or_path}

Should be:

'is_inpainting_task': 'inpainting' in name_or_path, 'is_playground': 'playground' in name_or_path}

You also have a typo in a variable name, which should be supported_official_model.

It is working with this fix, although I did notice that you do not get the same image when you use the same seed twice. I'm not sure if this is expected behavior or not.
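
For illustration, a hypothetical defensive variant of the reported fix (variable names follow the issue; this is not the maintainers' actual patch):

# Fall back to an empty string when the pipeline exposes no usable name_or_path.
name_or_path = getattr(model, "name_or_path", None) or ""

# model_info is a hypothetical name for the dict built around line 1831.
model_info = {
    'is_inpainting_task': 'inpainting' in name_or_path,
    'is_playground': 'playground' in name_or_path,
}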

Support LoRA?

Dear developer,

Thanks for your nice project.

Does this project support LoRA (such as LCM-LoRA)?
I read the paper and found that the dimension of the UNet blocks is changed,
so I am not sure whether it is compatible with LoRA.
Could you please explain?

Best wishes.

Can't detect HiDiffusion package

I am on Windows, trying to use HiDiffusion. After installing the pip package, my system is unable to resolve the hidiffusion import when trying to follow the instructions in the Readme.

Within my IDE:

Import "hidiffusion" could not be resolved

Example from my console:

F:\Localllm\AlternateInference> pip install hidiffusion --no-cache-dir
WARNING: Skipping C:\Users---\anaconda3\Lib\site-packages\torch-2.2.0.dist-info due to invalid metadata entry 'name'
Collecting hidiffusion
Downloading hidiffusion-0.1.5-py3-none-any.whl.metadata (15 kB)
Downloading hidiffusion-0.1.5-py3-none-any.whl (32 kB)
WARNING: Skipping C:\Users---\anaconda3\Lib\site-packages\torch-2.2.0.dist-info due to invalid metadata entry 'name'
Installing collected packages: hidiffusion
Successfully installed hidiffusion-0.1.5
PS F:\Localllm\AlternateInference> & C:/Users/--/AppData/Local/Programs/Python/Python310/python.exe f:/Localllm/AlternateInference/inference_engine/image_inference.py
Traceback (most recent call last):
  File "f:\Localllm\AlternateInference\inference_engine\image_inference.py", line 1, in <module>
    from hidiffusion import apply_hidiffusion, remove_hidiffusion
ModuleNotFoundError: No module named 'hidiffusion'

I have attempted to use pyenv and conda environments, as well as setting the Python version to 3.8, as opposed to my system default of 3.10. Here is the end of the install from a requirements.txt I put together from the requirements listed in the instructions; I have marked where hidiffusion is among these items. I have attempted to restart the IDE, the terminal, and my machine, and haven't seen a change: my system remains unable to detect hidiffusion, despite it being installed via pip:

Installing collected packages: mpmath, hidiffusion, zipp, urllib3, typing-extensions, sympy, safetensors, regex, pyyaml, psutil, Pillow, packaging, numpy, networkx, MarkupSafe, idna, fsspec, filelock, colorama, charset-normalizer, certifi, tqdm, requests, jinja2, importlib-metadata, torch, huggingface-hub, xformers, tokenizers, diffusers, accelerate, transformers
Successfully installed MarkupSafe-2.1.5 Pillow-10.3.0 accelerate-0.29.3 certifi-2024.2.2 charset-normalizer-3.3.2 colorama-0.4.6 diffusers-0.27.2 filelock-3.13.4 fsspec-2024.3.1 hidiffusion-0.1.5 huggingface-hub-0.22.2 idna-3.7 importlib-metadata-7.1.0 jinja2-3.1.3 mpmath-1.3.0 networkx-3.1 numpy-1.24.4 packaging-24.0 psutil-5.9.8 pyyaml-6.0.1 regex-2024.4.16 requests-2.31.0 safetensors-0.4.3 sympy-1.12 tokenizers-0.19.1 torch-2.2.2 tqdm-4.66.2 transformers-4.40.1 typing-extensions-4.11.0 urllib3-2.2.1 xformers-0.0.26.dev778 zipp-3.18.1

(imageinference) F:\Localllm\AlternateInference\inference_engine>

After working through some updates to see if it was just an environment issue, I attempted to run it again, and this time got:

(imageinference) F:\Localllm\AlternateInference\inference_engine>python image_inference.py
Traceback (most recent call last):
  File "image_inference.py", line 1, in <module>
    from hidiffusion import apply_hidiffusion, remove_hidiffusion
  File "C:\Users---\anaconda3\envs\imageinference\lib\site-packages\hidiffusion\__init__.py", line 1, in <module>
    from .hidiffusion import apply_hidiffusion, remove_hidiffusion
  File "C:\Users---\anaconda3\envs\imageinference\lib\site-packages\hidiffusion\hidiffusion.py", line 9, in <module>
    from diffusers.utils import USE_PEFT_BACKEND, replace_example_docstring
ImportError: cannot import name 'USE_PEFT_BACKEND' from 'diffusers.utils' (C:\Users---\anaconda3\envs\imageinference\lib\site-packages\diffusers\utils\__init__.py)

Am I missing something?

'is_inpainting_task': 'inpainting' in model.name_or_path

Traceback (most recent call last):
  File "/home/zznet/workspace/ai-pipe/prod-ai-pipe-replace-model/runpod_app.py", line 14, in <module>
    from pipe import text2img, getText2imgPipe, set_sampler, compel
  File "/home/zznet/workspace/ai-pipe/prod-ai-pipe-replace-model/pipe.py", line 34, in <module>
    apply_hidiffusion(text2imgPipe)
  File "/home/zznet/.local/lib/python3.10/site-packages/hidiffusion/hidiffusion.py", line 1959, in apply_hidiffusion
    'is_inpainting_task': 'inpainting' in model.name_or_path,
TypeError: argument of type 'NoneType' is not iterable

Do you support ControlNet in SD 1.5?

It is supported in SDXL, but it doesn't seem to be supported in SD 1.5 according to the code.

If it's not supported, I'm also curious if there are any plans to support it in the future.

image to image SD3: NameError: name 'scale_lora_layers' is not defined

When I run the basic demo code for image-to-image:

import torch
import numpy as np
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionXLControlNetImg2ImgPipeline, DDIMScheduler
from hidiffusion import apply_hidiffusion, remove_hidiffusion
import cv2 

controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16, variant="fp16"
).to("cuda")
scheduler = DDIMScheduler.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", subfolder="scheduler")

pipe = StableDiffusionXLControlNetImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    scheduler = scheduler,
    torch_dtype=torch.float16,
).to("cuda")

# Apply hidiffusion with a single line of code.
apply_hidiffusion(pipe)

pipe.enable_model_cpu_offload()
pipe.enable_xformers_memory_efficient_attention()

path = './assets/lara.jpeg'
ori_image = Image.open(path)
# get canny image
image = np.array(ori_image)
image = cv2.Canny(image, 50, 120)
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
canny_image = Image.fromarray(image)

controlnet_conditioning_scale = 0.5  # recommended for good generalization
prompt = "Lara Croft with brown hair, and is wearing a tank top, a brown backpack. The room is dark and has an old-fashioned decor with a patterned floor and a wall featuring a design with arches and a dark area on the right side, muted color, high detail, 8k high definition award winning"
negative_prompt = "underexposed, poorly drawn hands, duplicate hands, overexposed, bad art, beginner, amateur, abstract, disfigured, deformed, close up, weird colors, watermark"

image = pipe(prompt,
    image=ori_image,
    control_image=canny_image,
    height=1536,
    width=2048,
    strength=0.99,
    num_inference_steps=50,
    controlnet_conditioning_scale=controlnet_conditioning_scale,
    guidance_scale=12.5,
    negative_prompt = negative_prompt,
    eta=1.0
).images[0]

image.save("lara.jpg")

I get:

Loading pipeline components...: 100%|████████████████████| 7/7 [05:23<00:00, 46.17s/it]
  0%|                                                                                          | 0/49 [00:02<?, ?it/s]
Traceback (most recent call last):
  File "D:\code\hidiff\image2image.py", line 39, in <module>
    image = pipe(prompt,
            ^^^^^^^^^^^^
  File "f:\.conda\env\c215\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "f:\.conda\env\c215\Lib\site-packages\hidiffusion\hidiffusion.py", line 775, in __call__
    noise_pred = self.unet(
                 ^^^^^^^^^^
  File "f:\.conda\env\c215\Lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "f:\.conda\env\c215\Lib\site-packages\torch\nn\modules\module.py", line 1582, in _call_impl
    result = forward_call(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "f:\.conda\env\c215\Lib\site-packages\accelerate\hooks.py", line 166, in new_forward
    output = module._old_forward(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "f:\.conda\env\c215\Lib\site-packages\hidiffusion\hidiffusion.py", line 1117, in forward
    scale_lora_layers(self, lora_scale)
    ^^^^^^^^^^^^^^^^^
NameError: name 'scale_lora_layers' is not defined

The same error occurs when using ControlNet.

Text-to-image works fine on SD3.

What about the optional code?

In the instructions for HiDiffusion with SDXL, I see this:

# Optional. enable_xformers_memory_efficient_attention can save memory usage and increase inference speed. enable_model_cpu_offload and enable_vae_tiling can save memory usage.

pipe.enable_xformers_memory_efficient_attention()

pipe.enable_model_cpu_offload()

pipe.enable_vae_tiling()

What are the differences?
Is the speed gain correlated with a decrease in quality?
Do we have this option with other models like Playground?

I'm building a colab.

FYI: Full Integration

I like the HiDiffusion project a lot; the results are VERY good!
So I've integrated it into SD.Next (with full credits).

Also added a few more tunables instead of using fixed values (enable/disable RAU-Net and MSW-MSA, set attention steps, tune T1/T2 values).
Attaching an example XYZ grid created with different T1/T2 ratios for SD15 at 1k and 2k and SDXL at 2k:

sdnext-hidiff-sd15-1k-compressed

sdnext-hidiff-sd15-2k-compressed

sdnext-hidiff-sdxl-2k-compressed

USE_PEFT_BACKEND

Nice work!
I want to know whether USE_PEFT_BACKEND is important and indispensable. It appears in the following two places:
if USE_PEFT_BACKEND: scale_lora_layers(self, lora_scale)
and
if USE_PEFT_BACKEND: unscale_lora_layers(self, lora_scale)
My diffusers version is 0.19.3, which does not support PEFT, but it is difficult to update diffusers in my codebase.
Any help would be greatly appreciated!
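
For older diffusers releases that predate these symbols, a hypothetical compatibility shim (an assumption, not an official workaround) could look like:

try:
    from diffusers.utils import USE_PEFT_BACKEND, scale_lora_layers, unscale_lora_layers
except ImportError:
    # Older diffusers (e.g. 0.19.x) lack the PEFT backend: disable that branch.
    USE_PEFT_BACKEND = False

    def scale_lora_layers(model, weight):
        pass  # no-op stand-in

    def unscale_lora_layers(model, weight=None):
        pass  # no-op stand-in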
