b8363
📦 llama-cpp
🐛 1 fix 🔧 1 symbol
Summary
This release includes an optimization in ggml that avoids creating an unnecessary CUDA context during device initialization. As usual, pre-compiled binaries are provided for a wide range of operating systems and hardware configurations.
🐛 Bug Fixes
- Avoided creating a CUDA context during device initialization in ggml.