
b7620

📦 llama-cpp
🐛 1 fix · 🔧 2 symbols

Summary

This release (b7620) focuses on a fix for token padding reservation relative to the number of sequences in the context.

🐛 Bug Fixes

  • Reserve token padding in proportion to n_seqs in context management (#18536)

🔧 Affected Symbols

  • llama_context
  • n_seqs