Stable Diffusion Local Deployment Tutorial

A detailed guide to setting up a Stable Diffusion environment on your own machine so you can use AI image generation for free.

MatrixTools · August 14, 2024

Stable Diffusion Local Deployment: The Complete Guide to Building Your Own AI Art Studio from Scratch

Introduction

In the world of AI image generation, Stable Diffusion is without question one of the most popular open-source tools. Compared with online platforms, a local deployment has several advantages: no privacy concerns, the freedom to use any model you like, and no usage limits. This guide walks you through building a fully featured local Stable Diffusion environment from scratch.

System Requirements

Minimum configuration

  • GPU: NVIDIA GTX 1060 6GB or better (AMD GPU support is limited)
  • RAM: 16 GB (32 GB recommended)
  • Storage: at least 50 GB of free space
  • OS: Windows 10/11, macOS, or Linux

Recommended configuration

  • GPU: NVIDIA RTX 3060 12GB or better
  • RAM: 32 GB
  • Storage: 100 GB+ of SSD space
  • CPU: Intel i5-8400 / AMD Ryzen 5 2600 or better

Installation Methods Compared

Method         | Difficulty | Best for       | Pros and cons
AUTOMATIC1111  | ⭐⭐       | Beginners      | Easy to use, active community, rich plugin ecosystem
ComfyUI        | ⭐⭐⭐     | Advanced users | Node-based workflow, excellent performance
From source    | ⭐⭐⭐⭐   | Developers     | Fully customizable, requires a technical background
InvokeAI       | ⭐⭐       | Designers      | Friendly interface, professional feature set

Method 1: Installing the AUTOMATIC1111 WebUI

Windows installation steps

1. Install the dependencies

First, install Python 3.10.6 (important: the version must match exactly):

# Download Python 3.10.6 from the official site
https://www.python.org/downloads/release/python-3106/

# During installation, make sure to check "Add Python to PATH"

Install Git:

# Download Git for Windows
https://git-scm.com/download/win

2. Download AUTOMATIC1111

# Clone the repository
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
cd stable-diffusion-webui

# Run the installation script
webui-user.bat

3. Configure the launch parameters

Edit the webui-user.bat file:

@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--xformers --opt-sdp-attention --api

call webui.bat

Common launch parameters (a combined example follows the list):

  • --xformers: enables the xformers optimization (a significant performance boost)
  • --opt-sdp-attention: uses PyTorch's scaled-dot-product attention optimization
  • --api: enables the HTTP API endpoints
  • --share: creates a public Gradio link
  • --listen: allows access from other machines on the network
  • --port 7860: sets the port
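
These flags can be combined freely. For example, a typical setup for exposing the WebUI to other machines on your LAN with the API enabled might look like this (adjust the port as needed):

set COMMANDLINE_ARGS=--xformers --listen --port 7860 --api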

Linux/macOS installation steps

# Install dependencies (Debian/Ubuntu; on macOS use Homebrew to install git and python instead)
sudo apt update && sudo apt install wget git python3 python3-venv

# Clone the repository
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
cd stable-diffusion-webui

# Run the installation script
./webui.sh

Method 2: Installing ComfyUI

Why choose ComfyUI?

ComfyUI uses a node-based workflow and offers the following advantages:

  • Higher memory efficiency
  • Support for complex workflow compositions
  • Live preview of intermediate results
  • Better batch-processing support

Installation steps

# Clone ComfyUI
git clone https://github.com/comfyanonymous/ComfyUI.git
cd ComfyUI

# Install dependencies
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
pip install -r requirements.txt

# Start ComfyUI
python main.py
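
By default ComfyUI serves its interface on http://127.0.0.1:8188. A few commonly used launch flags are shown below (run python main.py --help for the full, version-specific list):

# Listen on all interfaces, pick a port, and reduce VRAM usage on smaller cards
python main.py --listen 0.0.0.0 --port 8188 --lowvram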

Installing the ComfyUI Manager plugin

cd ComfyUI/custom_nodes
git clone https://github.com/ltdrdata/ComfyUI-Manager.git

Downloading and Managing Models

Base models (checkpoints)

Recommended mainstream models

  1. Photorealistic models

    • Realistic Vision V5.1
    • ChilloutMix
    • Photon
  2. Anime-style models

    • Anything V5
    • CounterfeitXL
    • AnimePastelDream
  3. Artistic-style models

    • DreamShaper
    • Deliberate
    • MajicMix
How to download

Method 1: Hugging Face

# Using huggingface-hub
pip install huggingface_hub
python -c "
from huggingface_hub import hf_hub_download
hf_hub_download(repo_id='runwayml/stable-diffusion-v1-5', filename='v1-5-pruned-emaonly.safetensors', local_dir='./models/Stable-diffusion/')
"

Method 2: Civitai

# Example direct-download link
wget https://civitai.com/api/download/models/XXX -O models/Stable-diffusion/model_name.safetensors

Model storage paths

stable-diffusion-webui/
├── embeddings/               # textual inversion embeddings
├── models/
│   ├── Stable-diffusion/     # main checkpoints
│   ├── Lora/                 # LoRA models
│   ├── VAE/                  # VAE models
│   └── ControlNet/           # ControlNet models
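
As a quick sanity check after downloading, here is a minimal sketch (assuming the default folder layout above) that lists the model files the WebUI will pick up:

from pathlib import Path

# List model files under the standard WebUI folders
root = Path("stable-diffusion-webui")
for sub in ["models/Stable-diffusion", "models/Lora", "models/VAE", "embeddings"]:
    folder = root / sub
    files = sorted(folder.glob("*")) if folder.exists() else []
    names = [f.name for f in files if f.suffix in {".safetensors", ".ckpt", ".pt"}]
    print(f"{sub}: {len(names)} file(s)")
    for name in names:
        print(f"  - {name}")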

VAE configuration

The VAE (Variational Autoencoder) encodes and decodes images and has a large impact on the final output quality:

# Download a recommended VAE
wget https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/main/vae-ft-mse-840000-ema-pruned.safetensors -O models/VAE/vae-ft-mse-840000-ema-pruned.safetensors

Advanced Configuration and Optimization

GPU VRAM optimization

Low-VRAM cards (4-6 GB)

Edit the launch parameters:

set COMMANDLINE_ARGS=--medvram --xformers --opt-split-attention

Very low-VRAM cards (2-4 GB)

set COMMANDLINE_ARGS=--lowvram --xformers --opt-split-attention-v1
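
If you are unsure how much VRAM your card has, the short sketch below (a rough heuristic, assuming PyTorch is installed, which the WebUI requires anyway) suggests a matching flag set:

import torch

# Suggest COMMANDLINE_ARGS based on detected VRAM (rough thresholds, adjust to taste)
if torch.cuda.is_available():
    vram_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
    if vram_gb < 4:
        flags = "--lowvram --xformers --opt-split-attention-v1"
    elif vram_gb < 8:
        flags = "--medvram --xformers --opt-split-attention"
    else:
        flags = "--xformers --opt-sdp-attention"
    print(f"Detected {vram_gb:.1f} GB VRAM, suggested flags: {flags}")
else:
    print("No CUDA GPU detected; consider --use-cpu all")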

Performance monitoring script

Create a monitor.py script:

# requires: pip install psutil gputil
import psutil
import GPUtil
import time

def monitor_system():
    while True:
        # CPU usage
        cpu_percent = psutil.cpu_percent(interval=1)

        # Memory usage
        memory = psutil.virtual_memory()
        memory_percent = memory.percent

        # GPU usage
        gpus = GPUtil.getGPUs()
        if gpus:
            gpu = gpus[0]
            gpu_load = gpu.load * 100
            gpu_memory = gpu.memoryUtil * 100
            gpu_temp = gpu.temperature

            print(f"CPU: {cpu_percent}% | RAM: {memory_percent}% | GPU: {gpu_load}% | VRAM: {gpu_memory}% | GPU温度: {gpu_temp}°C")

        time.sleep(2)

if __name__ == "__main__":
    monitor_system()

Automatic startup script

Create auto_start.bat:

@echo off
echo Starting the Stable Diffusion WebUI...

cd /d "C:\path\to\stable-diffusion-webui"

REM Check GPU availability
nvidia-smi >nul 2>&1
if %errorlevel% neq 0 (
    echo Warning: no NVIDIA GPU detected, falling back to CPU mode
    set COMMANDLINE_ARGS=--use-cpu all
) else (
    echo NVIDIA GPU detected, using GPU acceleration
    set COMMANDLINE_ARGS=--xformers --opt-sdp-attention
)

REM Launch the WebUI
call webui.bat

pause

Plugin Ecosystem

Recommended must-have plugins

1. ControlNet

Purpose: precise control over image composition and structure

# Installation
git clone https://github.com/Mikubill/sd-webui-controlnet.git extensions/sd-webui-controlnet

Download the ControlNet models:

# Download the ControlNet models
import urllib.request
import os

models = [
    "control_v11p_sd15_canny.pth",
    "control_v11p_sd15_depth.pth",
    "control_v11p_sd15_openpose.pth"
]

for model in models:
    url = f"https://huggingface.co/lllyasviel/ControlNet-v1-1/resolve/main/{model}"
    urllib.request.urlretrieve(url, f"extensions/sd-webui-controlnet/models/{model}")
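
Once the extension and models are installed, ControlNet can also be driven through the WebUI API. The sketch below follows the payload shape documented by the sd-webui-controlnet extension (field names can change between versions, and the reference image path is a placeholder):

import base64
import requests

# Encode the reference image that ControlNet should follow
with open("reference.png", "rb") as f:
    control_image = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "prompt": "a person dancing, studio lighting",
    "steps": 20,
    "alwayson_scripts": {
        "controlnet": {
            "args": [{
                "input_image": control_image,
                "module": "canny",                   # preprocessor
                "model": "control_v11p_sd15_canny",  # must match an installed model name
                "weight": 1.0,
            }]
        }
    },
}

response = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
print(response.status_code)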

2. Deforum

Purpose: generate animation videos

git clone https://github.com/deforum-art/deforum-for-automatic1111-webui.git extensions/deforum

3. Additional Networks

Purpose: support for additional network types (LoRA, LyCORIS, etc.)

git clone https://github.com/kohya-ss/sd-webui-additional-networks.git extensions/sd-webui-additional-networks

Custom plugin development

Creating a simple WebUI plugin:

# extensions/my_plugin/scripts/my_plugin.py
import gradio as gr
from modules import script_callbacks

def on_ui_tabs():
    with gr.Blocks(analytics_enabled=False) as ui_component:
        with gr.Row():
            input_text = gr.Textbox(label="Input text")
            output_text = gr.Textbox(label="Output text")

        def process_text(text):
            return f"处理后的文本: {text}"

        input_text.change(fn=process_text, inputs=input_text, outputs=output_text)

    return [(ui_component, "My Plugin", "my_plugin")]

script_callbacks.on_ui_tabs(on_ui_tabs)

Workflow Optimization

Automated batch-processing script

Create the batch generation script batch_generate.py:

import requests
import json
import time
import os

class SDAutomation:
    def __init__(self, base_url="http://127.0.0.1:7860"):
        self.base_url = base_url

    def generate_image(self, prompt, negative_prompt="", steps=20, cfg_scale=7):
        payload = {
            "prompt": prompt,
            "negative_prompt": negative_prompt,
            "steps": steps,
            "cfg_scale": cfg_scale,
            "width": 512,
            "height": 512,
            "sampler_name": "DPM++ 2M Karras"
        }

        response = requests.post(f"{self.base_url}/sdapi/v1/txt2img", json=payload)
        return response.json()

    def batch_generate(self, prompts_file, output_dir="outputs"):
        os.makedirs(output_dir, exist_ok=True)

        with open(prompts_file, 'r', encoding='utf-8') as f:
            prompts = f.readlines()

        for i, prompt in enumerate(prompts):
            prompt = prompt.strip()
            if not prompt:
                continue

            print(f"正在生成第 {i+1}/{len(prompts)} 张图片: {prompt[:50]}...")

            result = self.generate_image(prompt)

            # Save the image
            import base64
            image_data = base64.b64decode(result['images'][0])

            filename = f"{output_dir}/image_{i+1:03d}.png"
            with open(filename, 'wb') as f:
                f.write(image_data)

            print(f"已保存: {filename}")
            time.sleep(1)  # 避免请求过于频繁

# Usage example
if __name__ == "__main__":
    sd = SDAutomation()

    # Create a prompts.txt file with one prompt per line
    prompts = [
        "a beautiful landscape painting",
        "a cute cat sitting on a chair",
        "a futuristic city at sunset"
    ]

    with open("prompts.txt", "w", encoding="utf-8") as f:
        for prompt in prompts:
            f.write(prompt + "\n")

    sd.batch_generate("prompts.txt")

Quality-control workflow

import cv2
import numpy as np

class ImageQualityControl:
    def __init__(self):
        self.quality_threshold = 0.7

    def calculate_image_quality(self, image_path):
        """Compute a rough quality score for an image."""
        img = cv2.imread(image_path)

        # 1. Sharpness (variance of the Laplacian)
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()

        # 2. Brightness
        brightness = np.mean(gray)

        # 3. Contrast
        contrast = gray.std()

        # Combined score: sharper, higher-contrast images with mid-range brightness score higher (rough heuristic)
        quality_score = min(1.0, (sharpness / 1000 + contrast / 100 +
                                  (1 - abs(brightness - 128) / 128)) / 3)

        return {
            "quality_score": quality_score,
            "sharpness": sharpness,
            "brightness": brightness,
            "contrast": contrast
        }

    def auto_enhance(self, image_path):
        """自动增强图像质量"""
        img = cv2.imread(image_path)

        # Contrast-limited adaptive histogram equalization (CLAHE) on the L channel
        lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
        lab[:,:,0] = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8,8)).apply(lab[:,:,0])
        enhanced = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

        # Save the enhanced image
        output_path = image_path.replace('.png', '_enhanced.png')
        cv2.imwrite(output_path, enhanced)

        return output_path

Common Problems and Solutions

Out-of-memory errors

Problem: CUDA out of memory. Solutions:

# Option 1: lower the generation resolution
# (e.g. 448x448 instead of 512x512 in the WebUI settings)

# Option 2: reduce the batch size
# (set the batch size back to 1 in the generation settings)

# Option 3: offload more of the model to system RAM
set COMMANDLINE_ARGS=--medvram --xformers
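
If you script generations through the API, you can also fall back to a smaller resolution automatically when a request fails. Here is a minimal sketch (assuming a failed generation shows up as a non-200 response or a request exception; adjust to how your setup reports errors):

import requests

def generate_with_fallback(prompt, base_url="http://127.0.0.1:7860",
                           sizes=((768, 768), (640, 640), (512, 512), (448, 448))):
    """Try progressively smaller resolutions until one request succeeds."""
    for width, height in sizes:
        payload = {"prompt": prompt, "steps": 20, "width": width, "height": height}
        try:
            r = requests.post(f"{base_url}/sdapi/v1/txt2img", json=payload, timeout=600)
            if r.status_code == 200:
                print(f"Succeeded at {width}x{height}")
                return r.json()
            print(f"{width}x{height} failed with HTTP {r.status_code}, trying a smaller size")
        except requests.RequestException as e:
            print(f"{width}x{height} failed ({e}), trying a smaller size")
    return None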

Slow generation

Optimization strategies

  1. TensorRT acceleration
# TensorRT support for the WebUI comes from NVIDIA's Stable-Diffusion-WebUI-TensorRT
# extension: install it from the Extensions tab, then build a TensorRT engine for
# your checkpoint from the extension's tab before generating.
  2. Half precision and quantization
# A minimal fp16 sketch using the diffusers library (pip install diffusers):
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
  3. Batch-size tuning
# If VRAM allows, raise the batch size in the generation settings
# to produce several images per pass instead of one.

Model compatibility problems

Checking the model format

import torch
from safetensors.torch import load_file

def check_model_compatibility(model_path):
    try:
        if model_path.endswith('.safetensors'):
            model = load_file(model_path)
        else:
            model = torch.load(model_path, map_location='cpu')

        print("模型加载成功")
        print(f"模型大小: {len(model)} 个键")

        # 检查关键组件
        required_keys = ['model.diffusion_model', 'first_stage_model', 'cond_stage_model']
        for key in required_keys:
            if any(k.startswith(key) for k in model.keys()):
                print(f"✓ 找到 {key}")
            else:
                print(f"✗ 缺少 {key}")

    except Exception as e:
        print(f"模型加载失败: {e}")

# 使用示例
check_model_compatibility("models/Stable-diffusion/your_model.safetensors")

Advanced Applications

Training a custom LoRA model

Train a LoRA using the Kohya sd-scripts:

# Clone the Kohya scripts
git clone https://github.com/kohya-ss/sd-scripts.git
cd sd-scripts

# Install dependencies
pip install -r requirements.txt

# Prepare the training data
mkdir -p train_data/my_lora/images
# Put your training images in the images folder

# Create the configuration file
cat > config.toml << EOF
[model]
pretrained_model_name_or_path = "models/Stable-diffusion/your_base_model.safetensors"
vae = "models/VAE/vae-ft-mse-840000-ema-pruned.safetensors"

[dataset]
train_data_dir = "train_data"
resolution = 512
batch_size = 1

[training]
max_train_epochs = 10
learning_rate = 1e-4
lr_scheduler = "cosine"
optimizer_type = "AdamW8bit"

[output]
output_dir = "outputs"
output_name = "my_lora"
save_model_as = "safetensors"
EOF

# Start training (argument names and config keys vary between sd-scripts versions; check the repository's docs)
python train_network.py --config_file config.toml
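
One detail worth knowing: when train_network.py is pointed at a plain folder rather than a dataset config, the Kohya scripts commonly expect image subfolders named <repeats>_<name>, where the number controls how often those images are repeated per epoch. A hedged example layout (folder names here are placeholders):

# 10 repeats per epoch for the "my_lora" image set
mkdir -p train_data/10_my_lora
cp /path/to/your/images/*.png train_data/10_my_lora/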

API development

Building a REST API wrapper service:

from flask import Flask, request, jsonify
import requests

app = Flask(__name__)
SD_URL = "http://127.0.0.1:7860"

@app.route('/generate', methods=['POST'])
def generate_image():
    # Forward the request to the local WebUI txt2img API
    data = request.json

    payload = {
        "prompt": data.get('prompt', ''),
        "negative_prompt": data.get('negative_prompt', ''),
        "steps": data.get('steps', 20),
        "cfg_scale": data.get('cfg_scale', 7),
        "width": data.get('width', 512),
        "height": data.get('height', 512),
        "sampler_name": data.get('sampler', 'DPM++ 2M Karras')
    }

    try:
        response = requests.post(f"{SD_URL}/sdapi/v1/txt2img", json=payload)
        result = response.json()

        return jsonify({
            "success": True,
            "image": result['images'][0],
            "info": result.get('info', {})
        })
    except Exception as e:
        return jsonify({
            "success": False,
            "error": str(e)
        }), 500

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
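
Once the wrapper is running, a quick way to test it from the command line (the prompt is just an example):

curl -X POST http://127.0.0.1:5000/generate \
  -H "Content-Type: application/json" \
  -d '{"prompt": "a lighthouse at dusk, oil painting", "steps": 20}'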

Deployment and Maintenance

Containerized deployment with Docker

Create a Dockerfile:

FROM nvidia/cuda:11.8.0-devel-ubuntu20.04

# Install system dependencies
RUN apt-get update && apt-get install -y \
    python3 python3-pip git wget \
    && rm -rf /var/lib/apt/lists/*

# Install the Python dependencies
COPY requirements.txt .
RUN pip3 install -r requirements.txt

# Clone the WebUI
RUN git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git /app
WORKDIR /app

# Download a base model
RUN mkdir -p models/Stable-diffusion models/VAE
RUN wget -O models/Stable-diffusion/v1-5-pruned-emaonly.safetensors \
    https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.safetensors

# Startup script
COPY start.sh .
RUN chmod +x start.sh

EXPOSE 7860

CMD ["./start.sh"]

Create a docker-compose.yml:

version: '3.8'

services:
  stable-diffusion:
    build: .
    ports:
      - "7860:7860"
    volumes:
      - ./models:/app/models
      - ./outputs:/app/outputs
    environment:
      - COMMANDLINE_ARGS=--listen --xformers
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
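
With both files in place, a typical build-and-run sequence looks like this (the GPU reservation above requires the NVIDIA Container Toolkit on the host):

# Build the image and start the service in the background
docker compose up -d --build

# Follow the logs
docker compose logs -f stable-diffusion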

Automatic backup script

#!/bin/bash
# backup.sh - automatically back up the important files

BACKUP_DIR="/backup/stable-diffusion"
DATE=$(date +%Y%m%d_%H%M%S)
SD_DIR="/path/to/stable-diffusion-webui"

# Create the backup directory
mkdir -p "$BACKUP_DIR/$DATE"

# Back up the model files
echo "Backing up model files..."
rsync -av --progress "$SD_DIR/models/" "$BACKUP_DIR/$DATE/models/"

# Back up the configuration files
echo "Backing up configuration files..."
cp "$SD_DIR/config.json" "$BACKUP_DIR/$DATE/"
cp "$SD_DIR/ui-config.json" "$BACKUP_DIR/$DATE/"

# Back up the extensions
echo "Backing up extensions..."
rsync -av --progress "$SD_DIR/extensions/" "$BACKUP_DIR/$DATE/extensions/" \
  --exclude="*.git" --exclude="__pycache__"

# Compress the backup
echo "Compressing the backup..."
cd "$BACKUP_DIR"
tar -czf "sd_backup_$DATE.tar.gz" "$DATE"
rm -rf "$DATE"

# Remove old backups (keep the last 7 days)
find "$BACKUP_DIR" -name "sd_backup_*.tar.gz" -mtime +7 -delete

echo "Backup finished: sd_backup_$DATE.tar.gz"

Performance monitoring and alerting

Create the monitoring script monitor_sd.py:

import psutil
import GPUtil
import smtplib
from email.mime.text import MIMEText
import time
import json
from datetime import datetime

class SDMonitor:
    def __init__(self, config_file="monitor_config.json"):
        with open(config_file, 'r') as f:
            self.config = json.load(f)

    def check_system_health(self):
        health_status = {
            "timestamp": datetime.now().isoformat(),
            "cpu_usage": psutil.cpu_percent(interval=1),
            "memory_usage": psutil.virtual_memory().percent,
            "disk_usage": psutil.disk_usage('/').percent,
            "gpu_info": []
        }

        # GPU stats
        gpus = GPUtil.getGPUs()
        for gpu in gpus:
            health_status["gpu_info"].append({
                "id": gpu.id,
                "name": gpu.name,
                "load": gpu.load * 100,
                "memory_usage": gpu.memoryUtil * 100,
                "temperature": gpu.temperature
            })

        return health_status

    def send_alert(self, message):
        """发送告警邮件"""
        if not self.config.get("email_alerts", {}).get("enabled", False):
            return

        smtp_config = self.config["email_alerts"]

        msg = MIMEText(message)
        msg['Subject'] = 'Stable Diffusion system alert'
        msg['From'] = smtp_config['from_email']
        msg['To'] = smtp_config['to_email']

        try:
            server = smtplib.SMTP(smtp_config['smtp_server'], smtp_config['smtp_port'])
            server.starttls()
            server.login(smtp_config['username'], smtp_config['password'])
            server.send_message(msg)
            server.quit()
        except Exception as e:
            print(f"发送邮件失败: {e}")

    def run_monitoring(self):
        while True:
            status = self.check_system_health()

            # Check the alert thresholds
            alerts = []

            if status["cpu_usage"] > self.config["thresholds"]["cpu_usage"]:
                alerts.append(f"CPU使用率过高: {status['cpu_usage']:.1f}%")

            if status["memory_usage"] > self.config["thresholds"]["memory_usage"]:
                alerts.append(f"内存使用率过高: {status['memory_usage']:.1f}%")

            for gpu in status["gpu_info"]:
                if gpu["temperature"] > self.config["thresholds"]["gpu_temperature"]:
                    alerts.append(f"GPU温度过高: {gpu['temperature']}°C")

                if gpu["memory_usage"] > self.config["thresholds"]["gpu_memory"]:
                    alerts.append(f"GPU显存使用率过高: {gpu['memory_usage']:.1f}%")

            # Send any alerts
            if alerts:
                alert_message = "\n".join(alerts)
                self.send_alert(alert_message)
                print(f"ALERT: {alert_message}")

            # Heartbeat log entry
            print(f"[{status['timestamp']}] System check complete")

            time.sleep(self.config["check_interval"])

# Example configuration file: monitor_config.json
config_example = {
    "check_interval": 60,
    "thresholds": {
        "cpu_usage": 80,
        "memory_usage": 85,
        "gpu_temperature": 80,
        "gpu_memory": 90
    },
    "email_alerts": {
        "enabled": True,
        "smtp_server": "smtp.gmail.com",
        "smtp_port": 587,
        "username": "[email protected]",
        "password": "your_password",
        "from_email": "[email protected]",
        "to_email": "[email protected]"
    }
}

if __name__ == "__main__":
    # Write the example config file
    with open("monitor_config.json", "w") as f:
        json.dump(config_example, f, indent=2)

    monitor = SDMonitor()
    monitor.run_monitoring()

Summary

This guide has walked through deploying Stable Diffusion locally, from basic installation to advanced optimization, and from the plugin ecosystem to automation tooling. By following these steps and best practices, you can build a stable, efficient environment for AI image generation.

Keep in mind that the Stable Diffusion ecosystem evolves quickly. Staying curious, experimenting, and regularly updating your knowledge and tools is the best way to get the most out of it.

Next steps

  1. Join the community: take part in Stable Diffusion communities on Reddit, Discord, and elsewhere
  2. Keep learning: follow new models and techniques as they appear
  3. Experiment: try training your own models and building new workflows
  4. Share: give your findings and creations back to the community

Start your AI art journey!
