b7646
📦 llama-cpp
✨ 1 feature · 🔧 1 symbol
Summary
This release refactors CUDA graph usage within the ggml-cuda backend. Code that previously read the `enabled` field directly must now call `is_enabled()`.
Migration Steps
- If your code checks CUDA graph state, replace direct reads of the `enabled` field with calls to `is_enabled()`.
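A minimal sketch of the migration. The actual ggml-cuda struct is internal and not shown in these notes, so the names here (`cuda_graph`, `set_enabled`, the private `enabled_` field) are assumptions for illustration only; the pattern is the field-to-accessor change the release describes.

```cpp
#include <cassert>

// Hypothetical struct illustrating the change from a public field
// to an accessor method. Names are assumed, not from ggml-cuda.
struct cuda_graph {
    // Old call sites read a public `enabled` field directly:
    //     if (graph.enabled) { ... }   // no longer available
    // New call sites must query the accessor instead:
    bool is_enabled() const { return enabled_; }
    void set_enabled(bool v) { enabled_ = v; }
private:
    bool enabled_ = false;  // state is now private to the struct
};
```

Moving the flag behind an accessor lets the backend change how "enabled" is computed later (for example, deriving it from device capabilities) without touching call sites again.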
✨ New Features
- Refactored CUDA graph usage in ggml-cuda.
🔧 Affected Symbols
ggml-cuda