Llama 3 Super Course: Assignments

1. Web demo deployment

Environment setup

conda create -n llama3 python=3.10
conda activate llama3
conda install pytorch==2.1.2 torchvision==0.16.2 torchaudio==2.1.2 pytorch-cuda=12.1 -c pytorch -c nvidia
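
Before continuing, it is worth verifying that PyTorch can see the GPU. A minimal sanity check (not part of the tutorial; run it inside the activated llama3 environment):

# check_env.py -- quick sanity check for the llama3 environment
import torch

print(torch.__version__)              # expect 2.1.2
print(torch.cuda.is_available())      # expect True
print(torch.cuda.get_device_name(0))  # the visible GPU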

Model download

git clone https://code.openxlab.org.cn/MrCat/Llama-3-8B-Instruct.git Meta-Llama-3-8B-Instruct

or, on InternStudio, symlink the pre-downloaded copy instead:

ln -s /root/share/new_models/meta-llama/Meta-Llama-3-8B-Instruct ~/model/Meta-Llama-3-8B-Instruct
 
cd ~
git clone https://github.com/SmartFlowAI/Llama3-Tutorial
cd ~
git clone -b v0.1.18 https://github.com/InternLM/XTuner  # download XTuner
cd XTuner
pip install -e .

Run

streamlit run ~/Llama3-Tutorial/tools/internstudio_web_demo.py \
  ~/model/Meta-Llama-3-8B-Instruct

The result is as follows:
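
For reference, the demo script is essentially a Streamlit UI around a standard transformers chat loop. A minimal sketch of that core loop, assuming the model path above (an illustration, not the tutorial script itself):

# chat_sketch.py -- minimal Llama 3 chat loop with transformers (illustrative)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "/root/model/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float16, device_map="cuda")

messages = [{"role": "user", "content": "Hello, who are you?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(
    input_ids, max_new_tokens=256,
    eos_token_id=tokenizer.convert_tokens_to_ids("<|eot_id|>"))  # Llama 3 turn terminator
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))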

2. Self-cognition fine-tuning

cd ~/Llama3-Tutorial

# Start training, accelerated with deepspeed; takes about 24 minutes on an A100 with 40 GB of VRAM

xtuner train configs/assistant/llama3_8b_instruct_qlora_assistant.py --work-dir /root/llama3_pth
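
The config above trains a QLoRA adapter on a small self-cognition dataset that teaches the model who it is. A sketch of how such a dataset can be generated, assuming XTuner's single-turn conversation JSON format (the field names, file path, and NAME placeholder here are illustrative; follow the tutorial's own data script):

# make_data.py -- hypothetical generator for a self-cognition dataset
import json

pairs = [
    {"input": "Please introduce yourself", "output": "I am NAME's assistant"},  # NAME is a placeholder
    {"input": "Who are you?", "output": "I am NAME's assistant"},
]
# repeat the handful of pairs many times so they dominate the fine-tune
data = [{"conversation": [p]} for p in pairs] * 1000
with open("data/self_cognition.json", "w", encoding="utf-8") as f:
    json.dump(data, f, ensure_ascii=False, indent=2)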

# Convert the adapter from PTH to HF format

xtuner convert pth_to_hf /root/llama3_pth/llama3_8b_instruct_qlora_assistant.py \
  /root/llama3_pth/iter_500.pth \
  /root/llama3_hf_adapter

# Merge the adapter into the base model

export MKL_SERVICE_FORCE_INTEL=1

xtuner convert merge /root/model/Meta-Llama-3-8B-Instruct \
  /root/llama3_hf_adapter \
  /root/llama3_hf_merged
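
For reference, `xtuner convert merge` is conceptually the same as merging a LoRA adapter with peft. A minimal equivalent sketch (assuming the adapter produced above is in HF/peft format):

# merge_sketch.py -- peft equivalent of `xtuner convert merge` (illustrative)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "/root/model/Meta-Llama-3-8B-Instruct", torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, "/root/llama3_hf_adapter")
merged = model.merge_and_unload()  # bake the LoRA deltas into the base weights
merged.save_pretrained("/root/llama3_hf_merged")
AutoTokenizer.from_pretrained(
    "/root/model/Meta-Llama-3-8B-Instruct").save_pretrained("/root/llama3_hf_merged")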

With the merge complete, launch the web demo:

streamlit run ~/Llama3-Tutorial/tools/internstudio_web_demo.py \
  /root/llama3_hf_merged

The result is as follows:

3. Efficient Llama 3 deployment with LMDeploy

Create the environment

conda create -n lmdeploy python=3.10
conda activate lmdeploy
conda install pytorch==2.1.2 torchvision==0.16.2 torchaudio==2.1.2 pytorch-cuda=12.1 -c pytorch -c nvidia

Install LMDeploy

pip install -U lmdeploy[all]

Run the model in the terminal:

lmdeploy chat /root/model/Meta-Llama-3-8B-Instruct/

Reducing GPU memory usage

1) Adjust the --cache-max-entry-count parameter:

lmdeploy chat /root/model/Meta-Llama-3-8B-Instruct/ --cache-max-entry-count 0.5

Setting --cache-max-entry-count to 0.01 effectively disables the KV cache's use of GPU memory:

lmdeploy chat /root/model/Meta-Llama-3-8B-Instruct/ --cache-max-entry-count 0.01
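
--cache-max-entry-count controls, roughly, the fraction of free GPU memory the engine reserves for the KV cache after loading the weights (the default is 0.8). To see why the cache matters, a rough back-of-the-envelope estimate for Llama-3-8B, which uses grouped-query attention (32 layers, 8 KV heads, head dim 128) in fp16:

# kv_cache_estimate.py -- rough KV cache size for Llama-3-8B
layers, kv_heads, head_dim, bytes_fp16 = 32, 8, 128, 2
per_token = 2 * layers * kv_heads * head_dim * bytes_fp16  # K and V
print(per_token)                 # 131072 bytes = 128 KiB per cached token
print(per_token * 8192 / 2**30)  # 1.0 GiB for an 8K-token context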

2) W4A16 quantization

lmdeploy lite auto_awq \
   /root/model/Meta-Llama-3-8B-Instruct \
  --calib-dataset 'ptb' \
  --calib-samples 128 \
  --calib-seqlen 1024 \
  --w-bits 4 \
  --w-group-size 128 \
  --work-dir /root/model/Meta-Llama-3-8B-Instruct_4bit

Run:

lmdeploy chat /root/model/Meta-Llama-3-8B-Instruct_4bit --model-format awq --cache-max-entry-count 0.01

GPU memory usage drops noticeably.
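
The quantized model can also be driven from Python through LMDeploy's pipeline API; a minimal sketch using the same engine options as the command above:

# pipeline_awq.py -- load the W4A16 model via LMDeploy's Python API
from lmdeploy import pipeline, TurbomindEngineConfig

engine = TurbomindEngineConfig(model_format="awq", cache_max_entry_count=0.01)
pipe = pipeline("/root/model/Meta-Llama-3-8B-Instruct_4bit", backend_config=engine)
print(pipe(["Briefly introduce yourself."]))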

3) Serving with LMDeploy

Start the API server:

lmdeploy serve api_server \
    /root/model/Meta-Llama-3-8B-Instruct \
    --model-format hf \
    --quant-policy 0 \
    --server-name 0.0.0.0 \
    --server-port 23333 \
    --tp 1

Run the command-line client:

lmdeploy serve api_client http://localhost:23333

You can now chat with the model directly in the command-line window.
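
Because api_server exposes an OpenAI-compatible HTTP interface, the server can also be queried programmatically; a minimal sketch with the openai client (the served model name is read from /v1/models):

# api_client.py -- query the lmdeploy api_server via its OpenAI-compatible API
from openai import OpenAI

client = OpenAI(base_url="http://localhost:23333/v1", api_key="none")
model_name = client.models.list().data[0].id  # the single model being served
resp = client.chat.completions.create(
    model=model_name,
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)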

Using Gradio as the front end, launch a web client that connects to the API server:

lmdeploy serve gradio http://localhost:23333 \
    --server-name 0.0.0.0 \
    --server-port 6006

Updated 2024-07-06