Migrating to llama.cpp b9019
Version b9019 introduces one breaking change. This guide explains how to update your code.
Released: 5/4/2026
Breaking Changes: 1
Migration Steps: 2
Affected Symbols: 7
⚠️ Check Your Code
If you use any of these symbols, you need to read this guide:
`load_hparams`, `load_tensors`, `build_graph`, `llm_arch_model_i`, `create_tensor_qkv`, `llama_model_base`, `LLAMA_LOAD_LOCALS`
Breaking Changes
● Issue #1
The functions `load_hparams` and `load_tensors` have been moved from a general location to be defined per-model, which may break downstream code relying on their previous location.
Migration Steps
1. Update code that calls `load_hparams` and `load_tensors` to reflect their new per-model definition location.
2. Remove any migration scripts or ifdef blocks tied to the previous model-loading logic; the migration script itself has been removed.
Release Summary
This release focuses heavily on internal refactoring, moving model loading utilities (`load_hparams`, `load_tensors`) to be model-specific, and introducing build graph capabilities. Numerous minor fixes and cleanups were also performed.
Need More Details?
View the full release notes and all changes for llama.cpp b9019.