1. Get Qwen1.5.
2. Download the model weights from:
https://huggingface.co/Qwen
3. Install the required libraries (transformers, etc.); note that Qwen1.5 needs a recent transformers release (>= 4.37.0), so mind the versions.
4. Write your own run script, for example the one below (remember to use the full path to the local model directory):
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

# use the full path to the local model directory here
model = AutoModelForCausalLM.from_pretrained(
    "Qwen1.5-7B-Chat",
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Qwen1.5-7B-Chat")
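Loading the model and tokenizer is only half of the run script: the chat variant expects prompts rendered with the tokenizer's chat template (`tokenizer.apply_chat_template`), which for Qwen1.5 follows the ChatML layout. As a rough, hand-rolled sketch of what that template produces (an illustration only, the real template lives with the tokenizer and may differ in details), with the actual generation calls shown as comments:

```python
def render_chatml(messages):
    """Sketch of ChatML rendering: each message becomes an
    <|im_start|>role\\ncontent<|im_end|> block, and the prompt
    ends with an open assistant turn for generation."""
    prompt = ""
    for m in messages:
        prompt += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    return prompt + "<|im_start|>assistant\n"

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Give me a short introduction to large language models."},
]
prompt = render_chatml(messages)

# In the real script you would let the tokenizer do this instead:
# text = tokenizer.apply_chat_template(
#     messages, tokenize=False, add_generation_prompt=True)
# model_inputs = tokenizer([text], return_tensors="pt").to(device)
# generated_ids = model.generate(model_inputs.input_ids, max_new_tokens=512)
# response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
```

Using `apply_chat_template` rather than hand-building the string keeps the script correct if the model's template ever changes.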