Changelog

v0.30.0-rc15

📦 ollama

Summary

This pre-release changes the underlying architecture from GGML to direct llama.cpp support, enabling GGUF compatibility and leveraging MLX for accelerated inference on Apple Silicon. Users are encouraged to test performance and stability.

⚠️ Breaking Changes

  • The underlying architecture has changed from relying on GGML to directly supporting llama.cpp. Custom integrations or tooling that depended on the previous internal structure may be affected.

Migration Steps

  1. When installing or updating, pin the release by setting the version tag in the installation scripts (e.g., OLLAMA_VERSION=0.30.0-rc15).
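As a sketch, assuming the standard Linux install script is used (the macOS app and Windows installer are versioned separately, and the exact invocation for your platform may differ), pinning the pre-release could look like:

```shell
# Pin the pre-release by setting OLLAMA_VERSION before running the
# official install script (Linux).
curl -fsSL https://ollama.com/install.sh | OLLAMA_VERSION=0.30.0-rc15 sh

# Confirm the pinned version was installed.
ollama --version
```

Pinning the tag explicitly avoids being silently moved to a later release candidate while testing this pre-release.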

✨ New Features

  • Direct support for llama.cpp architecture.
  • Compatibility with the GGUF file format.
  • Use of MLX for accelerated model inference on Apple Silicon.
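One way to exercise the new GGUF compatibility is to import a local GGUF file through a Modelfile. A minimal sketch, assuming `ollama` is installed and a GGUF file exists at the (hypothetical) path below:

```shell
# Write a Modelfile whose FROM line points at a local GGUF file
# (./my-model.gguf is a hypothetical path — substitute your own).
cat > Modelfile <<'EOF'
FROM ./my-model.gguf
EOF

# Build a named model from the Modelfile, then run it on the new
# llama.cpp-backed architecture.
ollama create my-model -f Modelfile
ollama run my-model "Hello"
```

On Apple Silicon, inference for such models is where the MLX acceleration described above would apply.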