b8855
📦 llama-cpp
🐛 1 fix · 🔧 2 symbols
Summary
This release primarily addresses a crash in GLM-DSA models when the vocab_only flag is set during tokenization. It also includes minor code simplification in response to review comments.
🐛 Bug Fixes
- Fixed a crash in `print_info` for GLM-DSA models when `vocab_only` is set during `llama-tokenize` operations.