Table of Contents
01 Usage
02 How Stable Diffusion works
    The autoencoder (VAE)
    The U-Net
    The Text-encoder
    Why latent diffusion is fast and efficient
    The Stable Diffusion inference process
03 Writing your own inference pipeline

Reference link:
https://huggingface.co/blog/stable_diffusion#how-does-stable-diffusion-work
In this post, we want to show how to use Stable Diffusion with the 🧨 Diffusers library, explain how the model works, and finally dive a bit deeper into how Diffusers lets you customize the image generation pipeline.
If you are completely new to diffusion models, we recommend reading the following blog posts first:
The Annotated Diffusion Model
Getting started with 🧨 Diffusers

01 Usage
First, you should install diffusers==0.10.2:
pip install diffusers==0.10.2 transformers scipy ftfy accelerate
The Stable Diffusion model can be run in inference with just a few lines of code using the StableDiffusionPipeline. A simple from_pretrained call sets up everything the pipeline needs to generate images from text.
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
pipe.to("cuda")
If you are limited by GPU memory and have less than 10GB of GPU RAM available, make sure to load the StableDiffusionPipeline in float16 precision instead of the default float32 precision used above.
You can do so by loading the weights from the fp16 branch and telling diffusers to expect the weights to be in float16 precision:
import torch
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", revision="fp16", torch_dtype=torch.float16)
To run the pipeline, simply define the prompt and call pipe:
prompt = "a photograph of an astronaut riding a horse"
image = pipe(prompt).images[0]
# you can save the image with
# image.save(f"astronaut_rides_horse.png")
If you want deterministic output, you can seed a random generator and pass it to the pipeline:
import torch
generator = torch.Generator("cuda").manual_seed(1024)
image = pipe(prompt, guidance_scale=7.5, generator=generator).images[0]
You can change the number of denoising steps with the num_inference_steps argument; more steps generally give better results at the cost of slower inference.
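As a minimal sketch (not part of the original post), reusing the pipe, prompt, and generator defined above:
image = pipe(prompt, num_inference_steps=15, generator=generator).images[0]
# fewer steps run faster but usually give lower-quality results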
Next, let's see how to generate several images for the same prompt at once. First, we'll create an image_grid function to help us visualize them nicely in a grid.
from PIL import Image
def image_grid(imgs, rows, cols):
    assert len(imgs) == rows*cols
    w, h = imgs[0].size
    grid = Image.new('RGB', size=(cols*w, rows*h))
    grid_w, grid_h = grid.size
    for i, img in enumerate(imgs):
        grid.paste(img, box=(i%cols*w, i//cols*h))
    return grid
We can generate multiple images for the same prompt simply by using a list that repeats the same prompt several times. We'll send the list to the pipeline instead of the string we used before.
num_images = 3
prompt = ["a photograph of an astronaut riding a horse"] * num_images
images = pipe(prompt).images
grid = image_grid(images, rows=1, cols=3)
# you can save the grid with
# grid.save(f"astronaut_rides_horse.png")
02 How Stable Diffusion works
Latent diffusion reduces memory and compute complexity by applying the diffusion process in a lower-dimensional latent space instead of the actual pixel space. This is the key difference between standard diffusion and latent diffusion models: in latent diffusion, the model is trained to generate latent (compressed) representations of images.
A latent diffusion model has three main components:
An autoencoder (VAE).
A U-Net.
A text-encoder, e.g. CLIP's Text Encoder.

The autoencoder (VAE)
The VAE model has two parts, an encoder and a decoder. The encoder converts an image into a low-dimensional latent representation, which serves as the input to the U-Net model. The decoder, conversely, transforms the latent representation back into an image.
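As an illustration, here is a minimal sketch (not part of the original post) of the VAE round-trip, reusing the CompVis/stable-diffusion-v1-4 checkpoint and the 0.18215 latent scaling factor that appears later in the custom pipeline; the random tensor stands in for a normalized RGB image:
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="vae")

image = torch.randn(1, 3, 512, 512)  # stand-in for an RGB image normalized to [-1, 1]
with torch.no_grad():
    latents = vae.encode(image).latent_dist.sample()  # (1, 4, 64, 64): 8x smaller in each spatial dim
    latents = latents * 0.18215                        # scaling factor used by Stable Diffusion
    reconstruction = vae.decode(latents / 0.18215).sample  # back to (1, 3, 512, 512)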
The U-Net
The U-Net has an encoder part and a decoder part, both composed of ResNet blocks. The encoder compresses an image representation into a lower-resolution representation, and the decoder decodes the lower-resolution representation back into the original, higher-resolution representation that is presumably less noisy. More specifically, the U-Net output predicts the noise residual, which can be used to compute the predicted denoised image representation.
To prevent the U-Net from losing important information while downsampling, shortcut connections are usually added between the downsampling ResNets of the encoder and the upsampling ResNets of the decoder. In addition, the Stable Diffusion U-Net can condition its output on text embeddings via cross-attention layers. These cross-attention layers are added to both the encoder and decoder parts of the U-Net, usually between the ResNet blocks.
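The following is a minimal sketch (not part of the original post) of a single conditioned U-Net forward pass; the random tensors are placeholders for a noisy latent and for CLIP text embeddings (77 tokens, 768-dimensional for this checkpoint):
import torch
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="unet")

latents = torch.randn(1, 4, 64, 64)        # placeholder noisy latent image representation
text_embeddings = torch.randn(1, 77, 768)  # placeholder CLIP text embeddings
timestep = torch.tensor(999)               # an arbitrary diffusion timestep
with torch.no_grad():
    noise_residual = unet(latents, timestep, encoder_hidden_states=text_embeddings).sample
print(noise_residual.shape)  # torch.Size([1, 4, 64, 64]) -- same shape as the latents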
The Text-encoder
The text-encoder is responsible for transforming the input prompt, e.g. "An astronaut riding a horse", into an embedding space that the U-Net can understand. It is usually a simple transformer-based encoder that maps a sequence of input tokens to a sequence of latent text embeddings.
Stable Diffusion does not train the text-encoder during training; it simply uses CLIP's already-trained text encoder, CLIPTextModel.
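A minimal sketch (not part of the original post) of how a prompt becomes text embeddings, using the same tokenizer and text encoder that the custom pipeline below loads:
from transformers import CLIPTokenizer, CLIPTextModel

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

tokens = tokenizer(["An astronaut riding a horse"], padding="max_length", max_length=tokenizer.model_max_length, truncation=True, return_tensors="pt")
text_embeddings = text_encoder(tokens.input_ids)[0]
print(text_embeddings.shape)  # torch.Size([1, 77, 768])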
Why latent diffusion is fast and efficient
Latent diffusion operates in a low-dimensional latent space, which greatly reduces memory and compute requirements. For example, a (3, 512, 512) RGB image is compressed into a (4, 64, 64) latent, a factor of 8 in each spatial dimension.

The Stable Diffusion inference process
Taking a latent seed and a text prompt as input, the U-Net iteratively denoises the latent image representation. The U-Net's output, called the noise residual, is used by a scheduler algorithm to compute a denoised latent image representation. For Stable Diffusion, we recommend one of:
PNDM scheduler (used by default)
DDIM scheduler
K-LMS scheduler

03 Writing your own inference pipeline
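Before assembling every component by hand, note that the schedulers listed above can also be swapped directly into the high-level pipeline. A minimal sketch (not part of the original post, assuming diffusers==0.10.2), using the same K-LMS hyperparameters as step 4 below:
from diffusers import StableDiffusionPipeline, LMSDiscreteScheduler

lms = LMSDiscreteScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000)
pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", scheduler=lms)
pipe = pipe.to("cuda")
The rest of this section instead loads each component individually and writes out the denoising loop explicitly.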
import torch
from transformers import CLIPTextModel, CLIPTokenizer
from diffusers import AutoencoderKL, UNet2DConditionModel, PNDMScheduler
# 1. Load the autoencoder model which will be used to decode the latents into image space.
vae = AutoencoderKL.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="vae")
# 2. Load the tokenizer and text encoder to tokenize and encode the text.
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")
# 3. The UNet model for generating the latents.
unet = UNet2DConditionModel.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="unet")
# 4. load the K-LMS scheduler with some fitting parameters
from diffusers import LMSDiscreteScheduler
scheduler = LMSDiscreteScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000)
# 5. move the models to GPU
torch_device = "cuda"
vae.to(torch_device)
text_encoder.to(torch_device)
unet.to(torch_device)
# 6. set parameters
prompt = ["a photograph of an astronaut riding a horse"]
height = 512 # default height of Stable Diffusion
width = 512 # default width of Stable Diffusion
num_inference_steps = 100 # Number of denoising steps
guidance_scale = 7.5 # Scale for classifier-free guidance
generator = torch.manual_seed(0) # Seed generator to create the initial latent noise
batch_size = len(prompt)
# 7. get the text_embeddings for the passed prompt
text_input = tokenizer(prompt, padding="max_length", max_length=tokenizer.model_max_length, truncation=True, return_tensors="pt")
text_embeddings = text_encoder(text_input.input_ids.to(torch_device))[0]
# 8. get the unconditional text embeddings for classifier-free guidance
# They need to have the same shape as the conditional text_embeddings (batch_size and seq_length)
max_length = text_input.input_ids.shape[-1]
uncond_input = tokenizer(
    [""] * batch_size, padding="max_length", max_length=max_length, return_tensors="pt"
)
uncond_embeddings = text_encoder(uncond_input.input_ids.to(torch_device))[0]
# 9. concatenate both text_embeddings and uncond_embeddings into a single batch to avoid doing two forward passes
text_embeddings = torch.cat([uncond_embeddings, text_embeddings])
# 10. generate the initial random noise
latents = torch.randn(
    (batch_size, unet.in_channels, height // 8, width // 8),
    generator=generator,
)
latents = latents.to(torch_device)
# 11. initialize the scheduler with our chosen num_inference_steps.
scheduler.set_timesteps(num_inference_steps)
# 12. The K-LMS scheduler needs to multiply the latents by its sigma values. Let's do this here:
latents = latents * scheduler.init_noise_sigma
# 13. write the denoising loop
from tqdm.auto import tqdm

for t in tqdm(scheduler.timesteps):
    # expand the latents since we are doing classifier-free guidance, to avoid doing two forward passes.
    latent_model_input = torch.cat([latents] * 2)
    latent_model_input = scheduler.scale_model_input(latent_model_input, timestep=t)
    # predict the noise residual
    with torch.no_grad():
        noise_pred = unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
    # perform guidance
    noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
    noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
    # compute the previous noisy sample x_t -> x_t-1
    latents = scheduler.step(noise_pred, t, latents).prev_sample
# 14. use the vae to decode the generated latents back into the image
latents = 1 / 0.18215 * latents
with torch.no_grad():
    image = vae.decode(latents).sample
# 15. convert the image to PIL so we can display or save it
image = (image / 2 + 0.5).clamp(0, 1)
image = image.detach().cpu().permute(0, 2, 3, 1).numpy()
images = (image * 255).round().astype("uint8")
pil_images = [Image.fromarray(image) for image in images]
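To view or save the result, index into the list of PIL images (a minimal usage sketch; the filename is just an example):
pil_images[0].save("astronaut_rides_horse.png")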
Citation:
@article{patil2022stable,
  author  = {Patil, Suraj and Cuenca, Pedro and Lambert, Nathan and von Platen, Patrick},
  title   = {Stable Diffusion with 🧨 Diffusers},
  journal = {Hugging Face Blog},
  year    = {2022},
  note    = {https://huggingface.co/blog/stable_diffusion},
}