
Comparing Generation Results Across Stable Diffusion Scheduler Modules

Project scenario:

Swap out the Scheduler module of Stable Diffusion and analyze the resulting images.

diffusers ships with multiple built-in scheduler functions for the diffusion process. A scheduler takes the output of a trained model, the sample that the diffusion process is currently iterating on, and a timestep, and returns a denoised sample. In other diffusion model implementations, schedulers are also referred to as samplers.

Schedulers

Schedulers define the methodology for iteratively adding noise to an image or for updating a sample based on model outputs.

Adding noise in different manners represents the algorithmic process of training a diffusion model by adding noise to images. For inference, the scheduler defines how to update a sample based on the output of a pretrained model.

Schedulers are often defined by a noise schedule and an update rule for solving the differential equation.
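
As a concrete illustration of the update-rule side, here is a minimal sketch of how a diffusers scheduler is driven during inference. The random noise_pred stands in for the UNet's noise prediction, and the scheduler settings mirror the Stable Diffusion defaults shown further below; this is only a sketch, not a full pipeline.

import torch
from diffusers import PNDMScheduler

# Minimal sketch: the scheduler picks which timesteps to visit and turns the
# model's noise prediction into the next (less noisy) sample.
scheduler = PNDMScheduler(
    beta_start=0.00085, beta_end=0.012,
    beta_schedule="scaled_linear", skip_prk_steps=True,
)
scheduler.set_timesteps(num_inference_steps=50)

latents = torch.randn(1, 4, 64, 64)            # placeholder initial noise
for t in scheduler.timesteps:
    # In a real pipeline: noise_pred = unet(latents, t, encoder_hidden_states=...).sample
    noise_pred = torch.randn_like(latents)     # placeholder for the UNet output
    latents = scheduler.step(noise_pred, t, latents).prev_sample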

Loading the Stable Diffusion pipeline

from huggingface_hub import login
from diffusers import DiffusionPipeline
import torch

# first we need to login with our access token
login()

# Now we can download the pipeline
pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
# Move it to GPU
pipeline.to("cuda")
# Access the scheduler
pipeline.scheduler
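
For completeness, printing the pipeline object lists all of its components, which is how the component overview below can be obtained; printing pipeline.scheduler shows only the scheduler's own configuration.

# Printing the pipeline lists all components (the overview shown below);
# printing the scheduler shows only the scheduler configuration.
print(pipeline)
print(pipeline.scheduler)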

Default configuration of the Stable Diffusion pipeline components

StableDiffusionPipeline {
  "_class_name": "StableDiffusionPipeline",
  "_diffusers_version": "0.11.1",
  "feature_extractor": [
    "transformers",
    "CLIPFeatureExtractor"
  ],
  "requires_safety_checker": true,
  "safety_checker": [
    "stable_diffusion",
    "StableDiffusionSafetyChecker"
  ],
  "scheduler": [
    "diffusers",
    "PNDMScheduler"
  ],
  "text_encoder": [
    "transformers",
    "CLIPTextModel"
  ],
  "tokenizer": [
    "transformers",
    "CLIPTokenizer"
  ],
  "unet": [
    "diffusers",
    "UNet2DConditionModel"
  ],
  "vae": [
    "diffusers",
    "AutoencoderKL"
  ]
}

Default configuration of the Stable Diffusion scheduler module

PNDMScheduler {
  "_class_name": "PNDMScheduler",
  "_diffusers_version": "0.11.1",
  "beta_end": 0.012,
  "beta_schedule": "scaled_linear",
  "beta_start": 0.00085,
  "clip_sample": false,
  "num_train_timesteps": 1000,
  "prediction_type": "epsilon",
  "set_alpha_to_one": false,
  "skip_prk_steps": true,
  "steps_offset": 1,
  "trained_betas": null
}
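
For reference, a "scaled_linear" beta_schedule means the betas are spaced linearly in square-root space between beta_start and beta_end. The following is a quick sketch reproducing such a schedule from the config values above; the exact construction inside diffusers may differ in minor details such as dtype.

import torch

# Reproduce a "scaled_linear" beta schedule from the config values above:
# betas are linear in sqrt-space between sqrt(beta_start) and sqrt(beta_end).
beta_start, beta_end, num_train_timesteps = 0.00085, 0.012, 1000
betas = torch.linspace(beta_start**0.5, beta_end**0.5, num_train_timesteps) ** 2
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)   # cumulative signal retention

print(betas[0].item(), betas[-1].item())             # ~0.00085 ... ~0.012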

Experimental approach

We can see that the default scheduler of the Stable Diffusion pipeline is PNDMScheduler. Now I want to compare its performance against other schedulers. First we define a prompt on which all schedulers will be tested. To keep the generated images closely comparable, we create a generator with a fixed random seed, which ensures that each run produces a similar image:

prompt = "A photograph of an astronaut riding a horse on Mars, high resolution, high definition."

generator = torch.Generator(device="cuda").manual_seed(8)
image = pipeline(prompt, generator=generator).images[0]
image
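
Since the comparison below concerns both quality and speed, it can help to also record the wall-clock time and save each result. A small sketch along these lines (the file name is just an illustration):

import time

generator = torch.Generator(device="cuda").manual_seed(8)

start = time.perf_counter()
image = pipeline(prompt, generator=generator).images[0]
elapsed = time.perf_counter() - start

print(f"PNDMScheduler: {elapsed:.1f}s")
image.save("astronaut_pndm.png")   # illustrative file name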

Every scheduler exposes the property SchedulerMixin.compatibles, which lists all compatible schedulers. Let's look at all available schedulers that are compatible with the Stable Diffusion pipeline:

pipeline.scheduler.compatibles

[diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteScheduler,
diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler,
diffusers.schedulers.scheduling_ddim.DDIMScheduler,
diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler,
diffusers.schedulers.scheduling_ddpm.DDPMScheduler,
diffusers.schedulers.scheduling_pndm.PNDMScheduler,
diffusers.schedulers.scheduling_dpmsolver_singlestep.DPMSolverSinglestepScheduler,
diffusers.schedulers.scheduling_heun_discrete.HeunDiscreteScheduler,
diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler]

Configuration of the PNDMScheduler class

pipeline.scheduler.config

FrozenDict([('num_train_timesteps', 1000),
            ('beta_start', 0.00085),
            ('beta_end', 0.012),
            ('beta_schedule', 'scaled_linear'),
            ('trained_betas', None),
            ('skip_prk_steps', True),
            ('set_alpha_to_one', False),
            ('prediction_type', 'epsilon'),
            ('steps_offset', 1),
            ('_class_name', 'PNDMScheduler'),
            ('_diffusers_version', '0.11.1'),
            ('clip_sample', False)])

With this configuration, we can instantiate a scheduler of any other class that is compatible with the pipeline. Here we change the scheduler to DDIMScheduler and re-run the pipeline to compare the generation quality:

from diffusers import DDIMScheduler

pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config)

generator = torch.Generator(device="cuda").manual_seed(8)
image = pipeline(prompt, generator=generator).images[0]
image

Next, switch the sampler to LMSDiscreteScheduler:

from diffusers import LMSDiscreteScheduler

pipeline.scheduler = LMSDiscreteScheduler.from_config(pipeline.scheduler.config)

generator = torch.Generator(device="cuda").manual_seed(8)
image = pipeline(prompt, generator=generator).images[0]
image

DPMSolverMultistepScheduler offers arguably the best trade-off between generation speed and quality, and can run with as few as 20 steps:

from diffusers import DPMSolverMultistepScheduler

pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config)

generator = torch.Generator(device="cuda").manual_seed(8)
image = pipeline(prompt, generator=generator, num_inference_steps=20).images[0]
image
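
To make the comparison more systematic, one could loop over several compatible scheduler classes and regenerate with the same seed. A sketch along these lines (the step counts are illustrative):

from diffusers import (DDIMScheduler, LMSDiscreteScheduler,
                       DPMSolverMultistepScheduler, PNDMScheduler)

results = {}
for cls, steps in [(PNDMScheduler, 50), (DDIMScheduler, 50),
                   (LMSDiscreteScheduler, 50), (DPMSolverMultistepScheduler, 20)]:
    # Swap in the scheduler, keeping the rest of the pipeline unchanged.
    pipeline.scheduler = cls.from_config(pipeline.scheduler.config)
    generator = torch.Generator(device="cuda").manual_seed(8)
    results[cls.__name__] = pipeline(
        prompt, generator=generator, num_inference_steps=steps
    ).images[0]

# results now maps scheduler names to images for a side-by-side comparison.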

Results of text-to-image generation with the different schedulers:

Most of the images look very similar and are of comparable quality, but DPMSolverMultistepScheduler requires the shortest generation time, producing a result in only 20 inference steps. Ultimately, the choice of scheduler also depends on the specific use case.
