b7601
📦 llama-cpp
🐛 1 fix 🔧 1 symbol
Summary
This release (b7601) focuses on cleaning up the CUDA initialization process by removing unnecessary log prints.
🐛 Bug Fixes
- Removed unnecessary print statements during ggml_cuda_init to reduce console noise.
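The change itself silences the prints at their source inside ggml_cuda_init. Applications that want to filter any remaining informational output can also install their own log callback. A minimal sketch, assuming the llama.cpp C API (`llama_log_set`, `ggml_log_level`), which is not part of this release's diff:

```c
// Sketch: suppress informational log output from llama.cpp / ggml backends,
// forwarding only warnings and errors. Assumes the public llama.cpp C API.
#include <stdio.h>
#include "llama.h"

static void quiet_log_callback(enum ggml_log_level level, const char * text, void * user_data) {
    (void) user_data;
    // Drop everything below warning level to keep the console quiet.
    if (level == GGML_LOG_LEVEL_WARN || level == GGML_LOG_LEVEL_ERROR) {
        fputs(text, stderr);
    }
}

int main(void) {
    // Install the filter before backend initialization so init-time messages
    // (including any from CUDA setup) go through the callback.
    llama_log_set(quiet_log_callback, NULL);

    llama_backend_init();   // backends (CUDA, Metal, ...) initialize here
    // ... load a model and run inference ...
    llama_backend_free();
    return 0;
}
```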
🔧 Affected Symbols
- ggml_cuda_init