
Comments (5)

dogboydog commented on August 17, 2024

I was getting the same result, but I think the README example does not show the correct config to use.
Here is an example that worked for me without noise. Download the 512-base-ema.ckpt from here.

python scripts/img2img.py --prompt "A fantasy landscape, trending on artstation" --init-img ~/Pictures/example.png --ckpt 512-base-ema.ckpt --config configs/stable-diffusion/v2-inference.yaml

Or with the 768 ckpt:
python scripts/img2img.py --prompt "A fantasy landscape, trending on artstation" --init-img ~/Pictures/example.png --ckpt 768-v-ema.ckpt --config configs/stable-diffusion/v2-inference-v.yaml
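The point of the two commands above is that the config file must match the checkpoint: the 512 base model pairs with v2-inference.yaml and the 768 v-model pairs with v2-inference-v.yaml. A minimal, purely illustrative helper (not part of the repo) that encodes this pairing could look like:

```python
# Hypothetical helper: map a Stable Diffusion 2.x checkpoint to the matching
# inference config. A mismatched config/checkpoint pair is a common cause of
# pure-noise output. The mapping below is just the pairing from the commands above.

def config_for_checkpoint(ckpt: str) -> str:
    """Return the inference config that pairs with a given SD 2.x checkpoint."""
    pairs = {
        "512-base-ema.ckpt": "configs/stable-diffusion/v2-inference.yaml",
        "768-v-ema.ckpt": "configs/stable-diffusion/v2-inference-v.yaml",
    }
    try:
        return pairs[ckpt]
    except KeyError:
        raise ValueError(f"No known config for checkpoint {ckpt!r}") from None

print(config_for_checkpoint("512-base-ema.ckpt"))
# → configs/stable-diffusion/v2-inference.yaml
```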

from stablediffusion.

b0bsl3d commented on August 17, 2024

Try changing your sampler from PLMS to DPM and see if you get sane results. It worked for me. You can do that at the command line by adding "--dpm" to your txt2img line.


b0bsl3d commented on August 17, 2024

Sorry, I missed that you said img2img in the title. Yes, it doesn't look like img2img lets you specify another sampler. If you look in the code (scripts/img2img.py) you should see:

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
model = model.to(device)
sampler = DDIMSampler(model)

I guess you could experiment with substituting another sampler invocation there if you are getting the excessive noise from img2img, but I also wonder if it's another problem. I have not seen oddities from img2img, only from txt2img with the PLMS sampler.
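For reference, the substitution suggested above could look like the sketch below. The DPMSolverSampler import path is an assumption based on the Stability-AI/stablediffusion repo layout, so verify it against your checkout before relying on it:

```python
# Hypothetical patch sketch for scripts/img2img.py: swap the hard-coded DDIM
# sampler for DPM-Solver. Import path assumed from the stablediffusion repo
# layout; check it in your checkout.
from ldm.models.diffusion.dpm_solver import DPMSolverSampler

# ...after the model has been moved to the device:
# sampler = DDIMSampler(model)       # original line
sampler = DPMSolverSampler(model)    # substituted sampler
```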


andresberejnoi commented on August 17, 2024

> Try changing your sampler from PLMS to DPM and see if you get sane results. It worked for me. You can do that at the command line by adding "--dpm" to your txt2img line.

Do you know why the PLMS sampler does not work in this case? I ran into the same issue with the txt2img.py script: I get noise unless I use DPM or the default sampler.


fahmidme commented on August 17, 2024

Try adding the config flag --config configs/stable-diffusion/v2-inference.yaml. It made the output much better for me.

