
Saving a LLaMA Model in INT8 Format

The snippet below loads a LLaMA checkpoint with 8-bit (INT8) quantization via bitsandbytes and saves the quantized weights to disk.

from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Quantize the weights to 8-bit (INT8) at load time via bitsandbytes.
config = BitsAndBytesConfig(
    load_in_8bit=True,
)

path = "/home/llm/model/path/"
# bitsandbytes 8-bit loading requires a CUDA device, so use device_map="auto"
# (which places the model on GPU) rather than "cpu".
model = AutoModelForCausalLM.from_pretrained(path, device_map="auto", quantization_config=config)

# Save the quantized weights; the quantization settings are written to config.json.
model.save_pretrained("model_save_folder-8bit")
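
To reuse the checkpoint later, the saved folder can be loaded back directly. A minimal sketch, assuming the quantization config stored in the saved config.json is picked up automatically:

from transformers import AutoModelForCausalLM

# Reload the INT8 checkpoint saved above; no BitsAndBytesConfig is needed
# because the quantization settings are stored with the saved model.
model = AutoModelForCausalLM.from_pretrained("model_save_folder-8bit", device_map="auto")
print(model.get_memory_footprint())  # rough check of the INT8 memory usage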