Changelog

b8180

📦 llama-cpp
✨ 3 features · 🐛 3 fixes · 🔧 2 symbols

Summary

This release introduces model metadata loading from Hugging Face for use in tests, optimizes model downloads with incremental chunking, and fixes issues with metadata caching and compilation conditions.
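For background, the metadata in question lives in GGUF model files, which begin with a small fixed-size header (magic bytes, format version, tensor count, and metadata key/value count). A minimal sketch of parsing that header, in Python for illustration (llama.cpp's own loader is C++, and the function names here are hypothetical):

```python
import struct

# A GGUF file starts with a fixed 24-byte header:
#   4-byte magic "GGUF", uint32 version, uint64 tensor count, uint64 KV count
GGUF_MAGIC = b"GGUF"
HEADER_FMT = "<4sIQQ"  # little-endian

def parse_gguf_header(data: bytes) -> dict:
    """Parse the fixed-size GGUF header from the first bytes of a file."""
    if len(data) < struct.calcsize(HEADER_FMT):
        raise ValueError("buffer too small for a GGUF header")
    magic, version, n_tensors, n_kv = struct.unpack_from(HEADER_FMT, data, 0)
    if magic != GGUF_MAGIC:
        raise ValueError("not a GGUF file")
    return {"version": version, "tensor_count": n_tensors, "kv_count": n_kv}
```

Because the header sits at a fixed offset, only the first few kilobytes of a remote file need to be fetched to read a model's metadata, which is what makes metadata loading without a full download practical.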

✨ New Features

  • Added model metadata loading from Hugging Face for use in tests.
  • Implemented incremental chunking for model downloads instead of full redownloads.
  • Added support for loading metadata from each individual split file of a split model, while skipping mmproj loading.
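Incremental chunked downloading generally relies on HTTP Range requests: only the missing byte ranges of a partially downloaded file are requested, instead of redownloading the whole model. A minimal sketch of the range planning, assuming a hypothetical chunk size (this is the general technique, not llama.cpp's actual implementation):

```python
CHUNK_SIZE = 4 * 1024 * 1024  # hypothetical chunk size; the real value may differ

def next_range(existing_bytes: int, total_bytes: int,
               chunk_size: int = CHUNK_SIZE):
    """Return the HTTP Range header value for the next chunk, or None if done."""
    if existing_bytes >= total_bytes:
        return None  # download already complete
    end = min(existing_bytes + chunk_size, total_bytes) - 1
    return f"bytes={existing_bytes}-{end}"

def resume_plan(existing_bytes: int, total_bytes: int,
                chunk_size: int = CHUNK_SIZE):
    """List every Range request needed to finish a partial download."""
    ranges = []
    pos = existing_bytes
    while (r := next_range(pos, total_bytes, chunk_size)) is not None:
        ranges.append(r)
        pos = min(pos + chunk_size, total_bytes)
    return ranges
```

A real downloader would send each computed `Range` header with an HTTP GET and append the response body to the partial file, so an interrupted download resumes from its last byte rather than from zero.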

🐛 Bug Fixes

  • Fixed a caching issue in model metadata loading.
  • Added a warning when incremental model metadata loading fails.
  • Fixed a compilation guard so the code is compiled only when cpp-httplib is built with SSL support.

Affected Symbols