b8223
📦 llama-cpp
✨ 1 feature · 🐛 2 fixes · 🔧 1 symbol
Summary
This release adds a memory check for fusion operations in ggml-cuda, along with a fix for NaN handling and a typo correction.
✨ New Features
- Added memory check for fusion operations in ggml-cuda.
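The check above can be sketched as a simple guard: before fusing kernels, verify that the fused variant's extra workspace fits in the device memory currently free, and otherwise fall back to running the ops separately. This is a hypothetical illustration, not the actual ggml-cuda code; `should_fuse` and the byte counts are invented names.

```cpp
#include <cstddef>

// Hypothetical sketch: fuse two ops only when the fused kernel's
// workspace demand can be satisfied by the free device memory.
// Otherwise the caller would fall back to the unfused path.
bool should_fuse(size_t fused_workspace_bytes, size_t free_device_bytes) {
    return fused_workspace_bytes <= free_device_bytes;
}
```

On CUDA devices the free-memory figure would typically come from `cudaMemGetInfo`.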
🐛 Bug Fixes
- Replaced NaNs with -FLT_MAX in relevant calculations.
- Fixed a typo.
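The NaN fix above follows a common pattern: replacing NaN with `-FLT_MAX` so that max/softmax-style reductions treat the value as effectively negative infinity instead of propagating NaN through the result. A minimal sketch, with a hypothetical `sanitize_nans` helper (not the actual llama.cpp call site):

```cpp
#include <cfloat>   // FLT_MAX
#include <cmath>    // std::isnan, NAN
#include <vector>

// Hypothetical sketch: replace NaN entries with -FLT_MAX so that
// downstream max-style reductions skip them rather than returning NaN.
static void sanitize_nans(std::vector<float> & vals) {
    for (float & v : vals) {
        if (std::isnan(v)) {
            v = -FLT_MAX; // acts like -infinity for a max reduction
        }
    }
}
```

Since any comparison with NaN is false, a single NaN would otherwise poison an entire max or softmax computation, which is why clamping to `-FLT_MAX` is a safe sentinel.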