Changelog

b8719

📦 llama-cpp
🐛 1 fix · 🔧 2 symbols

Summary

This release fixes a memory leak in the optimization-context teardown routine (ggml_opt_free): the per-batch context copy (ctx_copy) was never released, and is now freed along with the rest of the optimization state.

🐛 Bug Fixes

  • Fixed a memory leak in ggml_opt_free where ctx_copy was not freed, leaking approximately 900 KB per training session in typical GNN training workloads.

Affected Symbols