llama.cpp Installation – Ubuntu 24.04 (Latest: b7885 – Jan 30, 2026)

  1. Visit the releases page:
    https://github.com/ggml-org/llama.cpp/releases/latest
  2. Download the best match for your CPU:
    • llama-b7885-bin-ubuntu-x64.tar.gz — basic / most compatible
    • llama-b7885-bin-ubuntu-vulkan-x64.tar.gz — if you want Vulkan GPU support
  3. Extract & test:
mkdir -p ~/llama.cpp && cd ~/llama.cpp

wget https://github.com/ggml-org/llama.cpp/releases/download/b7885/llama-b7885-bin-ubuntu-x64.tar.gz
tar -xzf llama-b7885-bin-ubuntu-x64.tar.gz

./llama-cli --version
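
If you want the tools available from any directory, you can add the extraction directory to your PATH (this assumes the binaries landed directly in ~/llama.cpp as extracted above; adjust the path if the tarball layout differs):

echo 'export PATH="$HOME/llama.cpp:$PATH"' >> ~/.bashrc
source ~/.bashrc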

Building from source (optional; the prebuilt binaries above are enough for most users):

sudo apt update
sudo apt install -y git build-essential cmake ninja-build

git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp

# The Makefile build is no longer supported in recent releases; use CMake:
cmake -B build -DCMAKE_BUILD_TYPE=Release
cmake --build build --config Release -j$(nproc)
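
The build places its binaries under build/bin; a quick sanity check (path assumes the build directory used above):

./build/bin/llama-cli --version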

For NVIDIA GPU acceleration (the -DGGML_CUDA=ON build below), install the CUDA toolkit first (adjust the version as needed):

# NVIDIA driver if needed
sudo ubuntu-drivers autoinstall

# CUDA repo setup
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2404/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt update
sudo apt install -y cuda-toolkit-12-6

nvcc --version
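
If nvcc isn't found after installation, the toolkit was most likely installed under /usr/local/cuda; putting its bin directory on your PATH for the current shell usually resolves it:

export PATH=/usr/local/cuda/bin:$PATH
nvcc --version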

Then build llama.cpp:

cd llama.cpp
cmake -B build -DGGML_CUDA=ON -DCMAKE_CUDA_ARCHITECTURES=all-major
cmake --build build --config Release -j$(nproc)

Pass -ngl (--n-gpu-layers) to offload model layers to the GPU: a large value such as -ngl 99 offloads all layers, while smaller values like -ngl 35 keep some layers on the CPU when the model doesn't fit in VRAM.
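
A quick way to confirm the GPU is actually in use (the model path below is a placeholder; run nvidia-smi in a second terminal while the prompt is processed):

./build/bin/llama-cli -m models/your-model.gguf -ngl 99 -n 32 -p "Hello"

# in another terminal:
watch -n 1 nvidia-smi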

Download a GGUF model (example filenames from Hugging Face; a download sketch follows the list):

  • Meta-Llama-3.1-8B-Instruct-Q5_K_M.gguf
  • Qwen2.5-14B-Instruct-Q6_K.gguf
  • Gemma-2-27B-it-Q4_K_M.gguf
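
One way to fetch a quantized model is the Hugging Face CLI; the repository name below is an illustrative example, so check the model page for the exact repo and filename:

pip install -U "huggingface_hub[cli]"
mkdir -p models
huggingface-cli download bartowski/Meta-Llama-3.1-8B-Instruct-GGUF \
  Meta-Llama-3.1-8B-Instruct-Q5_K_M.gguf --local-dir models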

Interactive example:

./llama-cli -m models/Meta-Llama-3.1-8B-Instruct-Q5_K_M.gguf \
  --color --temp 0.7 --repeat-penalty 1.1 -c 8192 -n -1 -ngl 35 \
  -p "You are a helpful AI assistant."

API server mode:

./llama-server -m models/....gguf --host 0.0.0.0 --port 8080 -ngl 40 -c 32768
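
Once it's running, llama-server exposes an OpenAI-compatible HTTP API; a quick check from another shell (port matches the command above):

curl http://localhost:8080/health

curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello!"}]}'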