
LuChengTHU / dpm-solver


Official code for "DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps" (NeurIPS 2022 Oral)

License: MIT License

Python 100.00%
diffusion-models machine-learning score-based-generative-models stable-diffusion

dpm-solver's People

Contributors

LuChengTHU, pcuenca, shaoshitong, xiang-cd


dpm-solver's Issues

With all due respect, which part of your model is this added to: the training part or the testing part?

noise_schedule = NoiseScheduleVP(schedule='linear', continuous_beta_0=0.1, continuous_beta_1=20.)
model_fn = GaussianDiffusion()
# model_fn = model_wrapper(
#     diffusion_model,
#     noise_schedule,
#     model_type="noise",  # or "x_start" or "v" or "score"
#     model_kwargs={},
# )
dpm_solver = DPM_Solver(model_fn, noise_schedule, algorithm_type="dpmsolver++")
x_sample = dpm_solver.sample(
    x_T,  # what is this?
    t_end=1e-4,
    steps=20,
    order=3,
    skip_type="time_uniform",
    method="multistep",
)
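A note on x_T: in the repository's examples quoted elsewhere on this page, x_T is the starting point of the reverse process, i.e. pure Gaussian noise with the shape of the samples to generate, for example:

    # x_T: the initial sample for the diffusion ODE, drawn from a standard
    # Gaussian with shape (batch, channels, height, width).
    # batch_size and device here are placeholders from your own setup.
    x_T = torch.randn((batch_size, 3, 256, 256), device=device)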

how to set parameters when sampling

how to set the following parameters in "Example: Classifier Guidance Sampling by DPM-Solver":

model_kwargs = {...}

condition = ...

betas = ....

classifier = ...

classifier_kwargs = {...}

Sometimes, condition is included in model_kwargs...
And how should classifier and classifier_kwargs be set?
And betas? (the definition inside the model?)
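A hedged sketch of how these fields typically fit together, assuming the model_wrapper interface quoted elsewhere on this page: betas is the beta schedule the diffusion model was trained with (used to build the discrete NoiseScheduleVP), condition is the class-label tensor for the batch, classifier is a noise-aware classifier taking (x_t, t), and classifier_kwargs/model_kwargs carry any extra inputs those networks need.

    # Sketch only; `model`, `classifier`, `condition`, and `betas` come from
    # your own training setup, not from DPM-Solver itself.
    noise_schedule = NoiseScheduleVP(schedule='discrete', betas=betas)
    model_fn = model_wrapper(
        model,
        noise_schedule,
        model_type="noise",
        model_kwargs=model_kwargs,            # extra UNet inputs, often {}
        guidance_type="classifier",
        condition=condition,                  # e.g. class labels, shape (batch,)
        guidance_scale=guidance_scale,        # classifier guidance strength
        classifier_fn=classifier,             # classifier over noisy inputs (x_t, t)
        classifier_kwargs=classifier_kwargs,  # extra classifier inputs, often {}
    )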

The results are blurry images with vanilla ddpm

Dear authors,
Thank you for generously sharing your great work!
I used dpm-solver to accelerate vanilla DDPM for image purification. If I set the timesteps of DDPM to 500, then with my pretrained model I can gradually reverse the image to one that's close to the original:
[image]
However, when I use dpm-solver, the results are blurry:
[image]

Settings:
512*512 CelebA-HQ;
betas are from 0.0001 to 01004004 with 500 steps, so x_start=1, x_end=1/500;
self.model: the same one as in your repository, at "dpm-solver/examples/ddpm_and_guided-diffusion/models/diffusion.py"

noise_schedule = NoiseScheduleVP(schedule='discrete', betas=self.betas)
model_fn = model_wrapper(
    self.model,
    noise_schedule,
    model_type="noise",
    model_kwargs={},
    guidance_type="uncond",
)

self.dpm_solver = DPM_Solver(model_fn, noise_schedule, algorithm_type="dpmsolver++")
x_sample = self.dpm_solver.sample(
    x,
    steps=20,
    order=3,
    skip_type="time_uniform",
    method="singlestep",
)

I tried singlestep, multistep, steps ranging over [10, 1000], and orders 1, 2, and 3. Nothing worked.
Thanks for your kind help!

vanilla DDPM with cosine beta schedule obtains results worse than DDIM

Hi,
Thank you for your excellent codes and detailed documentation on how to incorporate DPM-solver in our own project!

I tried to substitute DPM-Solver for DDIM but failed to obtain comparable results.

Training details of my diffusion model:
(1) Dataset: CelebA-HQ 256x256
(2) Vanilla DDPM (L2 loss, predicts noise), T=1000, UNet, trained in raw pixel space (no latent space used).
(3) Beta schedules: Cosine schedule (according to Improved Denoising Diffusion Probabilistic Models)

Code snippet that uses DPM-solver in my project:

model = diffusion_model.model      # nn.Module, takes 256x256x3 images as input and predicts noise
betas = diffusion_model.betas      # cosine schedule

noise_schedule = NoiseScheduleVP(schedule='discrete', betas=betas)

model_kwargs = {}
model_fn = model_wrapper(
      model,
      noise_schedule,
      model_type="noise",  # or "x_start" or "v" or "score"
      model_kwargs=model_kwargs,
      guidance_type="uncond",
)

x_T = torch.randn((4, 3, 224, 224), device = device)
dpm_solver = DPM_Solver(model_fn, noise_schedule, algorithm_type="dpmsolver")
x_sample = dpm_solver.sample(
      x_T,
      steps=25,
      order=3,
      skip_type="time_uniform",
      method="singlestep",
      denoise_to_zero= False,
)
img = unnormalize_to_zero_to_one(x_sample)

Result sampled by DDIM after 500 steps:
[image: ddim_sample]

Result sampled by DPM-solver after 25 steps with cosine schedule (schedule used in training) betas:
[image: dpm_solver_cosine]

Result sampled by DPM-solver after 25 steps with linear schedule betas:
[image: dpm_solver]

I have tried to tune the parameters of DPM-Solver, e.g. multistep instead of singlestep and more iterative steps, but none of them works.
Does this result come from the cosine schedule used when training the diffusion model? Could you please give any suggestions for possible improvements? Thank you for your attention!

Support for classifier-free guidance?

Thanks for your impressive work and huge contribution !!!
From your README.md, it can be seen that classifier guidance is already supported.
It would be great if you could support classifier-free guidance as well, as it is widely used in many famous works.
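As a hedged note: the model_wrapper in dpm_solver_pytorch.py does expose a "classifier-free" guidance type; a minimal sketch assuming that interface, where unconditional_condition is the null/empty conditioning:

    # Sketch only; `model`, `condition`, and `unconditional_condition` come
    # from your own setup (e.g. text embeddings and the empty-prompt embedding).
    model_fn = model_wrapper(
        model,
        noise_schedule,
        model_type="noise",
        guidance_type="classifier-free",
        condition=condition,
        unconditional_condition=unconditional_condition,
        guidance_scale=7.5,  # a typical classifier-free guidance scale
    )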

Multiple Guidance in Sampling?

Thanks for your excellent work!

I am wondering if DPM-Solver supports applying multiple guidance techniques simultaneously, such as both classifier-free guidance and classifier guidance, or using multiple guidance techniques with varying guidance scales. And are there any plans for implementing these features?

Question regarding likelihood evaluation

Hi Cheng,

Congrats on your impactful works.

I'm pretty new to this field and want to know if evaluating the negative log-likelihood $-\log P_\theta(x_0 \mid c)$ really makes sense to you. I want to compute the exact likelihood of an image being sampled from the denoising process parameterized by $\theta$. I was wondering if this is possible with an ODE solver like DPM-solver.

Thanks.
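As background on this question: following "Score-Based Generative Modeling through Stochastic Differential Equations", the probability-flow ODE permits exact likelihoods via the instantaneous change-of-variables formula,

$$\log p_0(x_0) = \log p_T(x_T) + \int_0^T \nabla_x \cdot h_\theta(x_t, t)\, \mathrm{d}t,$$

where $h_\theta$ is the ODE drift and the divergence is usually estimated with the Hutchinson trace estimator. DPM-Solver itself is a sampler, so likelihood evaluation would require this separate computation on top of it.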

Unsatisfactory result.

I have trained a 1000-step diffusion model and get fine results using the 1000-step reverse process.
These are the original images (the two are concatenated):
[image: ori2]
and these are the generated images:
[image: generate2]

However, when using dpm_solver, I get unsatisfactory or even worse results.
Here is the 20-step image:
[image: dpm-20]

100-step:
[image: dpm]

What happened, and what should I do?

Unconditional sampling for latent diffusion

Thanks for the nice work. The current code supports conditional sampling for latent diffusion. Is it also possible to provide a script for unconditional sampling for latent diffusion?

Thanks

an error happens in "x0 = (x - sigma_t * noise) / alpha_t"

Here, the estimated noise has 6 channels while x has 3, so an error occurs in "x0 = (x - sigma_t * noise) / alpha_t":

def data_prediction_fn(self, x, t):
    """
    Return the data prediction model (with corrector).
    """
    noise = self.noise_prediction_fn(x, t)
    alpha_t, sigma_t = self.noise_schedule.marginal_alpha(t), self.noise_schedule.marginal_std(t)
    x0 = (x - sigma_t * noise) / alpha_t
    if self.correcting_x0_fn is not None:
        x0 = self.correcting_x0_fn(x0, t)
    return x0

any tips?
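One hedged reading of this error: a 6-channel output usually means the network predicts both the noise and a learned variance (as in guided-diffusion models trained with learn_sigma=True), while an ODE solver only needs the noise channels. A minimal sketch assuming that layout:

    # Assumes channels [0:3] are the noise prediction and [3:6] parameterize
    # the learned variance, as in guided-diffusion's learn_sigma models.
    def eps_only(x, t, **kwargs):
        out = model(x, t, **kwargs)
        eps, _ = out.split(x.shape[1], dim=1)  # keep only the epsilon channels
        return eps

    model_fn = model_wrapper(eps_only, noise_schedule, model_type="noise")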

Comparison or support of elucidated DDPM

Tero Karras published a significant improvement to DDPM earlier this year (https://arxiv.org/abs/2206.00364), which he called elucidated DDPM. It already gets very impressive results with as few as 32 sampling steps. How do you think DPM-Solver compares to it?

And would it provide any benefit at all on top of it?

How to generate new image?

Hi teacher Lu. I am new to PyTorch. I would like to know how to use dpm-solver with a pre-trained model (DDPM) to generate new images. Can you show me an example?

Minor Error in get_orders_and_timesteps_for_singlestep_solver, TypeError: cumsum() received an invalid combination of arguments

Hi, thanks for releasing your excellent work.
I am using this code with my own diffusion model. When I was trying 'singlestep' + 'time_uniform', the following error occurred:

File "xxx/dpm_solver_pytorch.py", line 495, in get_orders_and_timesteps_for_singlestep_solver
timesteps_outer = self.get_time_steps(skip_type, t_T, t_0, steps, device)[torch.cumsum(torch.tensor([0,] + orders)).to(device)]
TypeError: cumsum() received an invalid combination of arguments - got (Tensor), but expected one of:
 * (Tensor input, int dim, *, torch.dtype dtype, Tensor out)
 * (Tensor input, name dim, *, torch.dtype dtype, Tensor out)

I checked the code; at line #495 of dpm_solver_pytorch.py, it seems torch.cumsum(input, dim) requires an explicit dim=0 argument.
Maybe this is a minor error?

line #495 should be
timesteps_outer = self.get_time_steps(skip_type, t_T, t_0, steps, device)[torch.cumsum(torch.tensor([0,] + orders), dim=0).to(device)]

Sampled images tend to be dimmer than DDIM

Thanks for sharing your work! I tried to apply dpm-solver to the DDIM code on my own dataset. However, compared to the original DDIM, the images sampled from dpm-solver are much dimmer than those sampled from DDIM. Examples follow:
DPM:
[image: 5]
DDIM:
[image: 20]

The sampling procedure of DPM is indeed faster than ddim.

The code is the same as the example:

noise_schedule = NoiseScheduleVP(schedule='linear')
model_fn = model_wrapper(
    model,
    noise_schedule,
    is_cond_classifier=False,
    total_N=1000,
    model_kwargs={},
)
dpm_solver = DPM_Solver(model_fn, noise_schedule)
self.sample_dpm(dpm_solver)

def sample_dpm(self, dpm_solver):
    config = self.config
    total_n_samples = 50000
    img_id = len(glob.glob(f"{self.args.image_folder}/*"))
    n_rounds = (total_n_samples - img_id) // config.sampling.batch_size

    with torch.no_grad():
        for _ in tqdm.tqdm(
                range(n_rounds), desc="Generating image samples for FID evaluation."):
            n = config.sampling.batch_size
            x = torch.randn(
                n,
                config.data.channels,
                config.data.image_size,
                config.data.image_size,
                device=self.device,
            )
            x_sample = dpm_solver.sample(x, steps=self.args.timesteps, eps=1e-4,
                                         adaptive_step_size=False, fast_version=True)
            for i in range(n):
                tvu.save_image(x_sample[i], os.path.join(self.args.image_folder, f"{img_id}.png"))
                img_id += 1

Could you kindly tell me whether there is something wrong?

Score model doesn't use model_wrapper

You stated in the README that model_fn's output should be noise for DPM-Solver. However, the example in example_v2/score_sde_pytorch doesn't use model_wrapper to turn the score into noise. Is it wrong, or did I misunderstand something?
I would appreciate it a lot if you could reply to me.
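For reference, the two parameterizations differ only by a scale: $\epsilon_\theta(x_t, t) = -\sigma_t\, s_\theta(x_t, t)$, which is the conversion model_wrapper performs for model_type="score". An example that already works directly in noise space can therefore skip the wrapper.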

explicitly predict x0 based on xt and trained model

Hi @LuChengTHU ,

Thanks for your great work and detailed documents.

I have difficulty finding the predicted x0 in the dpmsolver. Would it be possible for you to point it out?

https://github.com/LuChengTHU/dpm-solver/blob/main/dpm_solver_pytorch.py

From x_t to x_{t-1}, the mean is actually a weighted sum of x_t and the predicted x_0 (based on x_t).
Although we can obtain a simpler formula for x_{t-1} by plugging x_0 into the mean equation ($\mu$), the predicted x_0 is very important when solving some inverse problems, because we can incorporate reference information by perturbing the predicted x_0. For example: https://ddrm-ml.github.io/

This is the reason that I want to find the code of the predicted x_0. Any guidance would be highly appreciated:)

[image]
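For reference, the predicted $x_0$ is computed in DPM_Solver.data_prediction_fn (quoted in an earlier issue on this page): $\hat{x}_0 = (x_t - \sigma_t\, \hat{\epsilon}_\theta(x_t, t)) / \alpha_t$. The dpmsolver++ algorithm type works in this data-prediction space.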

Minor error in the code's comment?

Thanks for releasing this great work! I think there is a typo here. It should be

alphas_cumprod = cumprod((1 - betas))

instead of

alphas_cumprod = cumprod(betas)
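In PyTorch terms, the intended quantity is (the linear schedule values below are only an example):

    import torch
    betas = torch.linspace(1e-4, 2e-2, 1000)            # example linear beta schedule
    alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)  # product of (1 - beta_i), not of beta_i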

a complete demo?

Nice job! Could you provide a complete demo: load a basic diffusion model and then sample?

VESDE support?

Thanks for your great work! The DPM solver can speed up VPSDE sampling quite a lot.

Notice that VESDE has $f(t)=0$ and a $\sigma_t$ with exponential structure, so it should be easy to get an exponentially weighted integral of $\epsilon_\theta(x_t,t)$:
$\int_{\mu_s}^{\mu_t} \sigma_{\min} e^{\mu}\, \hat{\epsilon}_\theta(x_\mu, \mu)\, \mathrm{d}\mu$
which is equivalent to the second term on the RHS of (3.4) in the paper.

With minor modifications to NoiseScheduleVP (set $\alpha_t = 1$ in all formulas in the paper and compute the inverse of $\lambda$), I expect the solver can be adapted to VESDE.

However, when I try to apply it to VESDE, I can't get proper results (still pure noise).

Have you tried VESDE before and encountered a similar problem?
Or am I getting something wrong here?

PS: With the adapted DPM solver, I get $t \approx 1$ in the first few steps (which is what we expect for VESDE, as mentioned in https://github.com/yang-song/score_sde_pytorch/issues/28), but $x_0$ is still pure noise with very large variance.

only noise after sampling

[attachment: ddpm.txt]
Hi, I am working on generating fonts using a traditional DDPM model (discrete time).

I followed the steps in my attached file but get only noise after all tries (I get the right result from 1000 iterations).
Can you help me find what is wrong in my code?

Many thanks.

A tiny mistake in the code

In "example_v2/stable-diffusion/ldm/models/diffusion/dpm_solver/sampler.py: Line 5", "from .solver import NoiseScheduleVP, model_wrapper, DPM_Solver" should be "from .dpm_solver import NoiseScheduleVP, model_wrapper, DPM_Solver".

Use dpm-solver in OpenAI guided-dffusion and get errors

Hi, I added a dpm sample function to the class GaussianDiffusion in https://github.com/openai/guided-diffusion/blob/main/guided_diffusion/gaussian_diffusion.py
But I keep getting errors about tensors not matching.

RuntimeError: The size of tensor a (3) must match the size of tensor b (6) at non-singleton dimension 1

It is hard for me to understand the theory of DPM currently.
Could anyone identify the problem?

   
def dpm_sample_loop(self,
        model,
        shape,
        noise=None,
        clip_denoised=True,
        denoised_fn=None,
        cond_fn=None,
        model_kwargs=None,
        device=None,
        progress=True):

        final = {}

        if device is None:
            device = next(model.parameters()).device
        assert isinstance(shape, (tuple, list))
        if noise is not None:
            img = noise
        else:
            img = th.randn(*shape, device=device)

        ## 1. Define the noise schedule. dpmsolver++
        noise_schedule = NoiseScheduleVP(schedule='discrete', betas=th.from_numpy(self.betas))

        ## 2. Convert your discrete-time `model` to the continuous-time
        ## noise prediction model. Here is an example for a diffusion model
        ## `model` with the noise prediction type ("noise") .

        model_fn = model_wrapper(
            model,
            noise_schedule,
            model_type="noise",  # or "x_start" or "v" or "score"
            model_kwargs=model_kwargs,
        )

        dpm_solver = DPM_Solver(model_fn, noise_schedule, algorithm_type="dpmsolver++")

        ## You can use steps = 10, 12, 15, 20, 25, 50, 100.
        ## Empirically, we find that steps in [10, 20] can generate quite good samples.
        ## And steps = 20 can almost converge.
        sample = dpm_solver.sample(
            img,
            steps=20,
            order=3,
            skip_type="time_uniform",
            method="multistep",
        )
        sample[:, -1, :, :] = norm(sample[:, -1, :, :])
        final["sample"] = sample
        return final["sample"]
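A hedged guess: this may be the same learned-variance mismatch as in the 6-channel issue earlier on this page, since guided-diffusion models trained with learn_sigma=True output 6 channels ([epsilon, variance]) while the solver expects only epsilon. One possible fix inside dpm_sample_loop:

    # Hypothetical fix: keep only the first 3 (epsilon) channels before wrapping.
    model_fn = model_wrapper(
        lambda x, t, **kw: model(x, t, **kw)[:, :3],
        noise_schedule,
        model_type="noise",
        model_kwargs=model_kwargs,
    )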

README for example_v1

Hi,
thanks a lot for the interesting work. I'm wondering when you plan to update the instructions for example_v1. Is there an estimated timeline?
Thank you!

Possible to support img_callback and alternating prompts?

Hi, thanks for making your code work with Stable Diffusion.

I have a couple of requests if possible.

  1. Could you support the img_callback parameter? It seems to work okay in my limited testing. I'm using the version of your code that was included with Stable Diffusion 2.0, just backported to 1.x (it works fine with no changes).

Supporting the img_callback function allows us to render the current state every N frames. I'm using multistep and found that adding code like this after multistep_dpm_solver_update works:

                x = self.multistep_dpm_solver_update(x, model_prev_list, t_prev_list, vec_t, step_order,
                                                     solver_type=solver_type)
                if img_callback: img_callback(x, step)
  2. Could you support the ability to alternate/cycle through prompts? This one seems a bit trickier and I haven't been able to code it myself yet, but I was able to do it for DDIM.

For example, I may have three separate prompts: A, B, C

On Step 1, A is used
On Step 2, B is used
On Step 3, C is used
On Step 4, A is used
.... and so on, cycling through the list of prompts/conds

This is one of the workarounds people use to get around the 75/77-token limit for both cond and uncond.

This is what my code looks like in the DDIM sampler; as you can see, I essentially do cond[i % len(cond)] to find which cond in the list to use on each step (a hedged sketch of one way to adapt this to DPM-Solver follows after this code).

    for i, step in enumerate(iterator):
        index = total_steps - i - 1
        ts = torch.full((b,), step, device=device, dtype=torch.long)

        if mask is not None:
            assert x0 is not None
            img_orig = self.model.q_sample(x0, ts)  # TODO: deterministic forward pass?
            img = img_orig * mask + (1. - mask) * img

        cond_idx     = i % len(cond)
        neg_cond_idx = i % len(unconditional_conditioning)

        x_cond       = cond[cond_idx]
        x_uc         = unconditional_conditioning[neg_cond_idx]

        outs = self.p_sample_ddim(img, x_cond, ts, index=index, use_original_steps=ddim_use_original_steps,
                                  quantize_denoised=quantize_denoised, temperature=temperature,
                                  noise_dropout=noise_dropout, score_corrector=score_corrector,
                                  corrector_kwargs=corrector_kwargs,
                                  unconditional_guidance_scale=unconditional_guidance_scale,
                                  unconditional_conditioning=x_uc)
        img, pred_x0 = outs
        if callback: callback(i)
        if img_callback: img_callback(pred_x0, i)

        if index % log_every_t == 0 or index == total_steps - 1:
            intermediates['x_inter'].append(img)
            intermediates['pred_x0'].append(pred_x0)

Thanks
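A heavily hedged sketch of one way to approximate prompt cycling with DPM-Solver's interface: select the cond inside the wrapped model function using a call counter. This assumes the multistep solver evaluates the wrapped model exactly once per step, and the `context` kwarg name is the LDM UNet's, not something DPM-Solver defines:

    # Hypothetical: rotate through a list of conds by counting model evaluations.
    state = {"calls": 0}

    def cycling_model(x, t, **kwargs):
        c = conds[state["calls"] % len(conds)]   # conds: your list of prompt embeddings
        state["calls"] += 1
        return model(x, t, context=c, **kwargs)  # `context` is the LDM UNet's cond input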

Worse in high guidance scale

Dear authors,
Thank you for sharing this great work and open source!
When following this work, I found that DPM-Solver++ performs worse than DDIM under a high classifier guidance scale, e.g., scale=8 or 16. Concretely, under scale=8 the performance of DPM-Solver++ approaches DDIM, and under scale=16 the ranking is DDIM > DPM-Solver++ > UniPC. The same issues also apply to UniPC.

Config: DPM-solver++ 2M, 50k samples

This contradicts the reported results in the paper.
Thank you for your explanation and help.

Is this an error in the single-step 1st-order code?

Hi, thanks for your great work.

I tried to use your single-step solver with 1st order in score-sde-pytorch.
But I got an error message like:
File ~/dpm-solver/examples/score_sde_pytorch/notebooks/../dpm_solver.py:1230, in DPM_Solver.sample(self, x, steps, t_start, t_end, order, skip_type, method, lower_order_final, denoise_to_zero, solver_type, atol, rtol, return_intermediate)
   1228 timesteps_outer = self.get_time_steps(skip_type=skip_type, t_T=t_T, t_0=t_0, N=K, device=device)
   1229 for step, order in enumerate(orders):
-> 1230     s, t = timesteps_outer[step], timesteps_outer[step + 1]
   1231     timesteps_inner = self.get_time_steps(skip_type=skip_type, t_T=s.item(), t_0=t.item(), N=order, device=device)
   1232     lambda_inner = self.noise_schedule.marginal_lambda(timesteps_inner)

IndexError: index 2 is out of bounds for dimension 0 with size 2

In your documentation, Section 3.1, you said:

If order == 1:
    K = steps.

However, in your code (dpm-solver/examples/score_sde_pytorch/dpm_solver.py, line 538), K is set to 1 when the order is 1:

537 elif order == 1:
538     K = 1

Is this an error, or is there a problem with my setting?

Derivation of equation (A.3) in the paper

In the DPM-Solver (not the ++) paper, Appendix A.3, equation (A.3) appears without derivation. The DPM-Solver paper cites the SDE paper, "Score-Based Generative Modeling through Stochastic Differential Equations", as the source of that equation, but I find no clue as to how it can be derived from the SDE paper. Can you provide some hints, or am I missing something?

Results of ImageNet256 using DPM++ not reproducible

Problem
In the DPM++ paper, the FID for the parameters [s=8.0 | NFE=25] on the ImageNet256 benchmark is 8.39; yet this repo does not produce an approximately matching number (the result is about FID ≈ 200).

Env
GPU: NVIDIA RTX 2080Ti * 4
Pytorch==1.13.1
cuda-11.7

config/imagenet256_guide.yml

# the rest of the options is the same
sampling:
+  batch_size: 10
+  fid_total_samples: 10000

What I did
I simply ran ddpm_and_guided-diffusion/sample.sh and commented out CIFAR10/ImageNet64 so that only ImageNet256 is executed.

Question
Are there any parameters that should be changed? I checked everything mentioned in the paper and the README of this repo.

how to replace DDIM with dpm-solver in the original stable diffusion? (img2img)

I tried to replace

z_enc = sampler.stochastic_encode(init_latent, torch.tensor([t_enc]*batch_size).to(device))
samples = sampler.decode(z_enc, c, t_enc, unconditional_guidance_scale=opt.scale,
                         unconditional_conditioning=uc)

with

noised_sample = sampler.stochastic_encode(init_latent, opt.strength)
samples, _ = sampler.sample(
    opt.ddim_steps,
    noised_sample.shape[0],
    noised_sample.shape[1:],
    conditioning=c,
    unconditional_guidance_scale=opt.scale,
    unconditional_conditioning=uc,
    method="multistep",
    order=2,
    lower_order_final=False,
    t_start=sampler.ratio_to_time(opt.strength),
    x_T=noised_sample,
)

but the results aren't good (at the same number of steps, no better than DDIM). Is there any better way?

problem with order 2 CIFAR-10 (VP deep continuous-time model )

Hello, thanks for your wonderful work! I ran your code on CIFAR-10 (VP deep continuous-time model) with dpm-solver order 2 recently. I have tried many times but still get the wrong FID for the order-2 result (Table 6). Could you give me some help, such as the hyperparameter settings? Thank you so much!

Only noise after sampling in example notebook

[attachment: example.txt]

Hi, thanks for this significant contribution!

I'm trying to familiarize myself with the API, following the example you've provided in the readme.

However, so far I only get random noise after sampling. Am I doing something wrong?

Best regards

Reproduce the results in Table 3 - Appendix. E

Dear authors,

I tried to reproduce the results in Table 3 of Appendix E (the last row, 2.59 at NFE=51). I downloaded the checkpoint-8-VP-deep-continuous checkpoint and set the parameters as in E.3. However, I cannot get the 2.59 result; mine is much higher, around 4.1. Besides, I also tested the DDPM checkpoint-14; that result is also very high, >= 5.

I checked the code multiple times but still cannot get the results... So, if possible, can you upload a complete script (including the parameter settings) that produces the result in Table 3, Appendix E?

Thank you very much.

Noisy results with "order == 1" (trying to replicate DDIM results)

Hi authors,

Thank you for the nice paper and clear code and documentation!!

I am trying DPM-Solver in my project for sampling acceleration. Previously, I could obtain reasonable results with DDIM (steps=10, 100, ...), but the results I obtain with dpm-solver are pretty bad. Could you give some suggestions on the implementation?

Here are the details of my model:
(1) Training: DDPM (L1 loss, predicts noise), T=1000, UNet with additional condition inputs, trained on audio data.
(2) Beta schedule: sigmoid schedule (according to https://arxiv.org/abs/2212.11972)

Code snippet that uses DPM-solver in my project:

self.betas = sigmoid_beta_schedule(timesteps=1000)
self.noise_schedule = NoiseScheduleVP(schedule='discrete', betas=self.betas)
self.model_fn = model_wrapper(
    self.net,
    self.noise_schedule,
    model_type="noise",  # or "x_start" or "v" or "score"
    model_kwargs={},
)
self.dpm_solver = DPM_Solver(self.model_fn, self.noise_schedule, algorithm_type="dpmsolver")

After the definition:

x_T = torch.randn(input.shape, device="cuda")
pred = self.dpm_solver.sample(
    x_T,
    condition,
    steps=20,
    order=1,
    skip_type="time_uniform",
    method="singlestep",
)
pred = unnormalize_to_zero_to_one(pred)

Thanks a lot!
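A hedged observation about this snippet: DPM_Solver.sample does not take a condition argument; its signature, quoted in the IndexError issue earlier on this page, is sample(self, x, steps, t_start, t_end, order, skip_type, method, ...), so the positional `condition` collides with steps=20. Conditioning normally travels through model_wrapper instead, e.g.:

    # Hypothetical kwarg name: use whatever name your UNet gives its condition input.
    self.model_fn = model_wrapper(
        self.net,
        self.noise_schedule,
        model_type="noise",
        model_kwargs={"condition": condition},
    )
    pred = self.dpm_solver.sample(
        x_T,
        steps=20,
        order=1,
        skip_type="time_uniform",
        method="singlestep",
    )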

About the value range of x_sample

Thanks for your work.

When I tried to generate images (unconditional sampling) with DPM-Solver, I found that torch.max(x_sample) = 224 and torch.min(x_sample) = -190, i.e. not in [-1, 1]. Why is that? The diffusion project I used is https://github.com/lucidrains/denoising-diffusion-pytorch, and the settings with your DPM-Solver are as follows:


Settings

from dpm_solver_pytorch import NoiseScheduleVP, model_wrapper, DPM_Solver

model = ....
model_kwargs = {}
x_T = torch.randn(16, 3, 256, 256)
betas = cosine_beta_schedule(250)

noise_schedule = NoiseScheduleVP(schedule='discrete', betas=betas)

model_fn = model_wrapper(
    model,
    noise_schedule,
    model_type="noise",
    model_kwargs=model_kwargs,
)

dpm_solver = DPM_Solver(model_fn, noise_schedule, algorithm_type="dpmsolver")

x_sample = dpm_solver.sample(
    x_T,
    steps=20,
    order=3,
    skip_type="time_uniform",
    method="singlestep",
)

Thanks again and look forward to your reply!
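A hedged suggestion: data_prediction_fn (quoted in an earlier issue on this page) supports a correcting_x0_fn hook. Assuming the DPM_Solver constructor exposes it, clipping the predicted x0 keeps samples near the training range:

    # Sketch assuming the correcting_x0_fn hook; pixel-space models are usually
    # trained on data scaled to [-1, 1].
    def clip_x0(x0, t):
        return x0.clamp(-1.0, 1.0)

    dpm_solver = DPM_Solver(model_fn, noise_schedule,
                            algorithm_type="dpmsolver",
                            correcting_x0_fn=clip_x0)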

No randomness is added during the sampling process?

Hey fellows, really good work and thanks for sharing the code. However, I've got a question about the sampling process: it seems no randomness is added during sampling, like the z in the following equation.
[image]
I've checked the code, and it seems the add_noise function defined in the DPM_Solver class is never used. Why is that? Could you kindly explain? Thanks!
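As background: DPM-Solver integrates the diffusion ODE, which is deterministic once $x_T$ is drawn, so no per-step noise $z$ is injected; the $z$ term belongs to stochastic (ancestral/SDE) samplers. Concretely, the ODE being solved has the form $\mathrm{d}x_t/\mathrm{d}t = f(t)\, x_t + \frac{g^2(t)}{2\sigma_t}\, \hat{\epsilon}_\theta(x_t, t)$.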
