Building llama.cpp

Building for CPU

The CPU build is straightforward and works on any system with a modern C++ compiler. Here's how to do it:

cmake -B build
cmake --build build --config Release
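
Once the build finishes, the binaries end up in build/bin with the default CMake layout. As a quick sanity check, a run might look like the sketch below; the model path is just a placeholder for any local GGUF file, and the CLI binary is named llama-cli in recent llama.cpp releases:

# Run a short prompt with the CPU build (model path is a placeholder)
./build/bin/llama-cli -m ./models/model.gguf -p "Hello, how are you?" -n 64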

Building with CUDA

If you have an NVIDIA GPU, you can build llama.cpp with CUDA support for significantly faster inference:

cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release -j 32

-DGGML_CUDA=ON enables CUDA support

-j 32 runs the build with 32 parallel jobs to speed up compilation
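
Even with a CUDA build, model layers stay on the CPU unless you offload them at run time. A typical invocation might look like the sketch below; the model path and layer count are placeholders, and the binary name assumes recent llama.cpp releases:

# Offload up to 99 layers to the GPU with -ngl; adjust for your VRAM
./build/bin/llama-cli -m ./models/model.gguf -p "Hello, how are you?" -ngl 99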

Reference

Build llama.cpp locally
