b8412
📦 llama-cpp
🐛 1 fix · 🔧 1 symbol
Summary
This release addresses a minor compiler warning in the x86 CPU backend and provides updated binary distributions for macOS, Linux, Windows, and openEuler, with support for acceleration backends such as ROCm, Vulkan, and CUDA.
🐛 Bug Fixes
- Fixed an unused `changemask` variable warning in the ggml-cpu x86 repack operations (#20692).