Fine-Tuning a Large Language Model with LLaMA-Factory

Step 1: Clone the official repository

git clone https://github.com/hiyouga/LlamaFactory.git
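
All of the following steps assume you are working inside the cloned repository, so change into it first (the directory name follows the casing in the clone URL above):

cd LlamaFactory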

Step 2: Edit the uv configuration in the project's pyproject.toml and add the following

[tool.uv.sources]
torch = [
  { index = "pytorch-cu124", marker = "sys_platform == 'linux' or sys_platform == 'win32'" },
]
torchvision = [
  { index = "pytorch-cu124", marker = "sys_platform == 'linux' or sys_platform == 'win32'" },
]

[[tool.uv.index]]
name = "pytorch-cu124"
url = "https://download.pytorch.org/whl/cu124"
explicit = true

Then run uv sync to install the dependencies.
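
As an optional sanity check, you can confirm that the CUDA build of PyTorch was actually pulled from the extra index (a sketch, assuming uv sync completed and created the project environment):

# expect a +cu124 version string and True on a machine with a visible NVIDIA GPU
uv run python -c "import torch; print(torch.__version__, torch.cuda.is_available())"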

Step 3: Launch the web UI

uv run llamafactory-cli webui
Then open http://localhost:7860/ in your browser.
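
If port 7860 is already in use, the web UI is a Gradio app, so you can move it to another port with Gradio's standard GRADIO_SERVER_PORT environment variable (7861 here is just an example):

GRADIO_SERVER_PORT=7861 uv run llamafactory-cli webui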

Step 4: Download the dataset

Download the NekoQA-10K dataset and place it in the data/ folder 📂 under the project directory.
Then open data/dataset_info.json and add the following entry:

"neko10k": {
"file_name": "NekoQA-10K.json",
"columns": {
"prompt": "instruction",
"response": "output"
}
}
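
Before training, it is worth verifying that the file really sits in data/ and that its records carry the instruction and output fields mapped above. A minimal sketch, assuming NekoQA-10K.json is an Alpaca-style JSON array:

ls data/NekoQA-10K.json
uv run python -c "import json; d = json.load(open('data/NekoQA-10K.json')); print(len(d), sorted(d[0]))"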

Step 5: Run the fine-tuning

In the web UI, select your base model and the neko10k dataset registered above, then click Start to begin fine-tuning.
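
If you would rather skip the web UI, the same kind of run can be launched from the command line with the train subcommand. A minimal sketch, assuming the LoRA example config shipped with the repository is still at this path and that you first edit its dataset field to point at neko10k:

# edit the dataset entry in the YAML to neko10k before running
uv run llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml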