Change 8

b7612

📦 llama-cpp
✨ 1 feature · 🔧 1 symbol

Summary

This release focuses on internal compute-graph optimization, reducing topology branching. It also ships a wide range of pre-built binaries for multiple operating systems and hardware accelerators, including CUDA, Vulkan, and SYCL.

✨ New Features

  • Reduced branching in graph topology construction, improving performance and/or memory efficiency (illustrated in the sketch below).

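The release notes do not include the actual patch, so the following is only a minimal, self-contained sketch of what "reducing topology branching" can mean in general: deciding a graph's shape once up front instead of re-evaluating configuration flags while appending every node. All types and names here (`Node`, `Graph`, `build_branchy`, `build_uniform`) are hypothetical and are not the ggml/llama.cpp API.

```cpp
// Hypothetical illustration only -- not the llama.cpp implementation.
#include <functional>
#include <string>
#include <vector>

struct Node {
    std::string op;            // operation name, e.g. "mul_mat", "add"
    std::vector<int> inputs;   // indices of input nodes
};

struct Graph {
    std::vector<Node> nodes;
    int add(std::string op, std::vector<int> in) {
        nodes.push_back({std::move(op), std::move(in)});
        return static_cast<int>(nodes.size()) - 1;
    }
};

// Branchy version: the topology is re-decided inside the per-layer loop.
Graph build_branchy(int n_layers, bool use_bias) {
    Graph g;
    int cur = g.add("input", {});
    for (int i = 0; i < n_layers; ++i) {
        cur = g.add("mul_mat", {cur});
        if (use_bias) {                // branch evaluated on every iteration
            cur = g.add("add", {cur});
        }
    }
    return g;
}

// Less branchy version: pick the per-layer builder once, then run a uniform loop.
Graph build_uniform(int n_layers, bool use_bias) {
    Graph g;
    int cur = g.add("input", {});
    std::function<int(int)> layer = use_bias
        ? std::function<int(int)>([&g](int x) { return g.add("add", {g.add("mul_mat", {x})}); })
        : std::function<int(int)>([&g](int x) { return g.add("mul_mat", {x}); });
    for (int i = 0; i < n_layers; ++i) {
        cur = layer(cur);              // same topology on every iteration
    }
    return g;
}

int main() {
    Graph g = build_uniform(4, /*use_bias=*/true);
    // input + (mul_mat + add) per layer = 1 + 4 * 2 = 9 nodes
    return g.nodes.size() == 9 ? 0 : 1;
}
```

Whether the actual change hoists branches, merges equivalent paths, or prunes redundant nodes is not stated in the release; only the affected `graph` symbol is listed.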
🔧 Affected Symbols

graph