Comments (17)
from hidiffusion.
Thanks @ShenZhang-Shin, amazing! I put it together as a Space demo here: https://huggingface.co/spaces/radames/Enhance-This-HiDiffusion-SDXL using the latest SDXL ControlNet: https://huggingface.co/TheMistoAI/MistoLine
[video: QuickTime.Player.mp4]
Nice work. However, I find that the output image is not as good as my code's output, such as the Lara example. There might be some problems to fix.
Very cool! I'm using this new ControlNet model and a custom Sobel operator to generate the Canny edges: https://huggingface.co/TheMistoAI/MistoLine. It would produce different outputs for sure.
Wow, a new ControlNet that controls every line! Thanks for sharing. I will use it with hidiffusion.
Please wait, we will support it soon.
+1
This will be amazing! Just testing with pure ControlNet without the reference image is suboptimal.
Is ControlNet only supported in the XL version? I used version 1.5 and got this:
File "/root/anaconda3/envs/stable_fast/lib/python3.10/site-packages/hidiffusion/hidiffusion.py", line 383, in __call__
    self.check_inputs(
TypeError: StableDiffusionControlNetPipeline.check_inputs() takes from 4 to 13 positional arguments but 17 were given
Yes, we currently only support XL + ControlNet.
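Until SD 1.5 ControlNet is supported, one way to fail fast rather than hitting the `check_inputs()` TypeError above is to verify the pipeline is an SDXL variant before patching. This is a hypothetical guard, not part of the hidiffusion API; the class-name check is a heuristic of ours:

```python
def apply_hidiffusion_checked(pipe, apply_fn):
    """Patch `pipe` only if it looks like an SDXL pipeline.

    `apply_fn` would be hidiffusion's apply_hidiffusion. SD 1.5
    ControlNet pipelines are rejected up front, since patching them
    fails later inside check_inputs() with a TypeError.
    """
    if "XL" not in type(pipe).__name__:
        raise TypeError(
            "hidiffusion ControlNet support requires an SDXL pipeline, "
            f"got {type(pipe).__name__}"
        )
    apply_fn(pipe)
```

For example, `apply_hidiffusion_checked(pipe, apply_hidiffusion)` would raise immediately for a `StableDiffusionControlNetPipeline` but patch a `StableDiffusionXLControlNetPipeline` normally.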
But I used XL and got NameError: name 'scale_lora_layers' is not defined @ShenZhang-Shin
Are you running hidiffusion with diffusers==0.27.0 or diffusers==0.25.0? We have tested these two versions and they work fine.
scale_lora_layers belongs to diffusers; other versions may hit incompatibilities.
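Since internals like `scale_lora_layers` can move between diffusers releases, a quick pre-flight check against the tested versions can surface the mismatch early. A minimal sketch; the helper name is ours, and the tested-version list comes from the comment above:

```python
# diffusers releases that hidiffusion has been tested against
TESTED_DIFFUSERS = {"0.25.0", "0.27.0"}

def check_diffusers_version(version: str) -> bool:
    """Return True if `version` is a tested diffusers release.

    Other releases may be missing or relocating internals such as
    scale_lora_layers, producing a NameError at call time.
    """
    return version in TESTED_DIFFUSERS
```

Usage would be `check_diffusers_version(diffusers.__version__)` right after importing diffusers, warning (or pinning the dependency) when it returns False.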
We now support image-to-image; welcome to try it and generate impressive images!
Here is an example: given a blurry image, hidiffusion can output a clear 2K image.
I use XL-lightning; I wonder if that's the reason.
Please provide your code with XL-lightning; I will reproduce it and check the reason.
> Thanks @ShenZhang-Shin, amazing! I put it together as a Space demo here: https://huggingface.co/spaces/radames/Enhance-This-HiDiffusion-SDXL using the latest SDXL ControlNet: https://huggingface.co/TheMistoAI/MistoLine
Here is the Lara output with the same prompt and guidance scale on my machine. The output is better, especially in the background.
prompt: photography of lara croft 8k high definition award winning
negative prompt: underexposed, poorly drawn hands, duplicate hands, bad limbs, overexposed, bad art, beginner, amateur, abstract, disfigured, deformed
strength: 0.99
controlnet_conditioning_scale: 0.5
guidance_scale: 8.5
> Please provide your code with XL-lightning, I will reproduce it and check the reason
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import (
    ControlNetModel,
    DPMSolverMultistepScheduler,
    StableDiffusionXLControlNetImg2ImgPipeline,
)
from hidiffusion import apply_hidiffusion


class Xl_make():
    def __init__(self):
        self.controlnet_canny = ControlNetModel.from_pretrained(
            "/home/lora_test/model/canny_xl", use_safetensors=True, torch_dtype=torch.float16
        )
        self.pipe = StableDiffusionXLControlNetImg2ImgPipeline.from_pretrained(
            "/home/lora_test/model/dreamshaper_xl_lightning",
            torch_dtype=torch.float16,
            controlnet=self.controlnet_canny,
            use_safetensors=True,
        ).to('cuda')
        self.pipe.scheduler = DPMSolverMultistepScheduler.from_config(
            self.pipe.scheduler.config,
            algorithm_type="sde-dpmsolver++",
        )
        apply_hidiffusion(self.pipe)

    def make(self, image):
        # Upscale small images 2x, then cap the longer side at 832 px.
        width, height = image.size
        if width < 832 or height < 832:
            image = image.resize((width * 2, height * 2))
            width, height = image.size
        if width > 832 or height > 832:
            scale = 832 / max(width, height)
            image = image.resize((int(width * scale), int(height * scale)))
        # Build the Canny edge control image.
        low_threshold = 100
        high_threshold = 200
        canny = cv2.Canny(np.array(image), low_threshold, high_threshold)
        canny = Image.fromarray(canny)
        new_img = self.pipe(
            prompt='oil painting,Masterpiece,best quality,nature,in a meadow,lake,sky',
            negative_prompt='bad quality, worst quality, text, signature, watermark, extra limbs, blurred, low contrast, low resolution',
            image=image,
            control_image=canny,
            guidance_scale=4,
            strength=0.6,
            num_inference_steps=10,
            controlnet_conditioning_scale=1.0,
            generator=torch.manual_seed(42),
            control_guidance_start=0.0,
            control_guidance_end=0.6,
            clip_skip=2,
        ).images[0]
        new_img.save('/home/sdmade_material/new_img/new1.jpg')
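The resize logic in `make` can be factored into a small pure function, which makes the intent easier to check. The 2x upscale and 832 px cap mirror the code above; the helper name is ours:

```python
def fit_resolution(width: int, height: int, target: int = 832):
    """Upscale small images 2x, then shrink so the longer side is at
    most `target`, preserving the aspect ratio.

    Mirrors the preprocessing in Xl_make.make above.
    """
    if width < target or height < target:
        width, height = width * 2, height * 2
    if width > target or height > target:
        scale = target / max(width, height)
        width, height = int(width * scale), int(height * scale)
    return width, height
```

For example, a 1664x832 input is halved to 832x416, while a 400x300 input is doubled to 800x600 and left there, since neither side exceeds the cap.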
> Please provide your code with XL-lightning, I will reproduce it and check the reason
> (code quoted above)
I see, man. I will reproduce it and fix the problem.