
b8936

📦 llama-cpp
✨ 1 feature · 🔧 1 symbol

Summary

This release focuses on performance improvements, specifically optimizing the AVX2 Q6_K quantization path within the ggml-cpu backend. It also provides updated pre-compiled binaries for numerous operating systems and hardware configurations.

✨ New Features

  • Optimized AVX2 Q6_K quantization for ggml-cpu backend.

Affected Symbols