b7687
📦 llama-cpp
Summary
This release updates the WebGPU backend's get_memory functionality and provides a comprehensive set of pre-built binaries for macOS, Linux, Windows, and openEuler across different architectures and hardware acceleration backends.