
Stable Diffusion WebUI: calling the WD 1.4 tagger from Python to fetch and process tag weights

The Stable Diffusion install used here is 秋叶's 绘世 launcher, version 2.2.4.
WebUI API docs: http://127.0.0.1:7860/docs

Sample output (the Python code is at the end of the article):

1girl: 0.9883618950843811, 98%
solo: 0.9468605518341064, 94%
horns: 0.9203381538391113, 92%
braid: 0.7536494731903076, 75%
brown_hair: 0.7361204624176025, 73%
sensitive: 0.7181869745254517, 71%
looking_at_viewer: 0.6558270454406738, 65%
long_hair: 0.6555134654045105, 65%
portrait: 0.5619801878929138, 56%
hair_ornament: 0.5276427268981934, 52%
lips: 0.5271897912025452, 52%
realistic: 0.47530364990234375, 47%
brown_eyes: 0.44382530450820923, 44%
fur_trim: 0.44058263301849365, 44%
red_hair: 0.4004508852958679, 40%
upper_body: 0.39194822311401367, 39%
mole: 0.35748565196990967, 35%
general: 0.2813188433647156, 28%
questionable: 0.004140794277191162, 0%
explicit: 0.0005668997764587402, 0%

Use the /tagger/v1/interrogate endpoint. First call the corresponding GET endpoint to list the available models (there are a dozen or so), then POST the JSON payload; remember to encode the image as Base64. This article is only a test run, so read the API docs carefully and adjust model and threshold to your needs.
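Stripped of the HTTP call, the core of the request is just two steps: Base64-encode the raw file bytes and wrap them in a JSON body. A minimal sketch using only the standard library (the byte string below is a placeholder standing in for a real image file):

```python
import base64

# In the real script these bytes come from open(image_path, 'rb').read();
# a short placeholder keeps the sketch self-contained.
image_data = b'\x89PNG\r\n\x1a\nfake-image-bytes'

# The API expects the raw file bytes as a Base64 string.
base64_image = base64.b64encode(image_data).decode('utf-8')

# JSON body for POST /tagger/v1/interrogate; model and threshold are
# whatever you picked after reading the docs.
data = {
    "image": base64_image,
    "model": "wd14-convnext",
    "threshold": 0.35,
}

# Base64 is lossless, so the server recovers the exact file bytes.
assert base64.b64decode(data["image"]) == image_data
```

Since Base64 encoding is reversible, the round-trip check at the end is a quick way to confirm the payload carries the file intact.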


import requests
import base64
from PIL import Image


url = 'http://127.0.0.1:7860/tagger/v1/interrogate'
image_path = 'D:/code/image/6.jpg'
model = 'wd14-convnext'
threshold = 0.35

# Preview the image to confirm the right file is being uploaded
image = Image.open(image_path)
image.show()

# Convert the image to a Base64 string
with open(image_path, 'rb') as file:
    image_data = file.read()
    base64_image = base64.b64encode(image_data).decode('utf-8')

# Build the JSON request body
data = {
    "image": base64_image,
    "model": model,
    "threshold": threshold
}

# Send the POST request
response = requests.post(url, json=data)

# Check the response status code
if response.status_code == 200:
    json_data = response.json()
    # Process the returned JSON: sort tags by weight, descending
    caption_dict = json_data['caption']
    sorted_items = sorted(caption_dict.items(), key=lambda x: x[1], reverse=True)
    output = '\n'.join([f'{k}: {v}, {int(v * 100)}%' for k, v in sorted_items])
    print(output)
else:
    print('Error:', response.status_code)
    print('Response body:', response.text)
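The caption dict returned by the tagger mixes the four rating scores (general, sensitive, questionable, explicit) in with the actual tags, as the sample output above shows. A small sketch of one way to process the weights further: drop the ratings, keep tags above the threshold, and join the rest into a comma-separated prompt string (the sample dict reuses a few values from the output above; the helper name is made up for illustration):

```python
# Rating categories the WD 1.4 tagger reports alongside the real tags.
RATINGS = {'general', 'sensitive', 'questionable', 'explicit'}

def tags_to_prompt(caption, threshold=0.35):
    """Drop rating entries, keep tags at or above the threshold, sort by weight."""
    tags = {k: v for k, v in caption.items()
            if k not in RATINGS and v >= threshold}
    ordered = sorted(tags.items(), key=lambda kv: kv[1], reverse=True)
    return ', '.join(k for k, _ in ordered)

# A few entries from the sample output above.
caption = {
    '1girl': 0.9884, 'solo': 0.9469, 'sensitive': 0.7182,
    'portrait': 0.5620, 'red_hair': 0.4005, 'general': 0.2813,
}
print(tags_to_prompt(caption))
# → 1girl, solo, portrait, red_hair
```

Raising the threshold trims the prompt further; for example `tags_to_prompt(caption, threshold=0.6)` keeps only `1girl, solo`.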

Updated 2023-11-11