b8688
📦 llama-cpp
🐛 1 fix · 🔧 1 symbol
Summary
This release primarily fixes an incorrect compute capability constant for CDNA2 (gfx90a/MI210) in the ggml-cuda backend. It also ships updated pre-compiled binaries for multiple platforms.
🐛 Bug Fixes
- Fixed CDNA2 compute capability constant (GGML_CUDA_CC_CDNA2) for gfx90a (MI210) by setting it to 0x90a instead of 0x910.