Change8

v0.13.1

📦 ollama
✨ 6 features · 🐛 4 fixes · 🔧 5 symbols

Summary

This release introduces support for the Ministral-3 and Mistral-Large-3 models, adds tool calling and thinking support for cogito-v2.1, improves error reporting, and includes several fixes for CUDA GPU detection and VRAM discovery.

✨ New Features

  • Added support for the Ministral-3 model family for edge deployment.
  • Added support for Mistral-Large-3, a multimodal mixture-of-experts model.
  • Enabled Ollama's engine by default for nomic-embed-text.
  • Added tool calling support for cogito-v2.1 (see the sketch after this list).
  • Added thinking and tool parsing for cogito-v2.1.
  • Improved error rendering so users see clear messages instead of raw 'Unmarshal:' errors.
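
The new tool-calling path for cogito-v2.1 is exercised through the regular chat API. Below is a minimal, hedged sketch against Ollama's /api/chat endpoint: it assumes a local server on the default port 11434, uses the model name as it appears in these notes (the actual registry tag may differ), and defines a hypothetical get_weather tool purely for illustration.

```python
# Minimal sketch: tool calling via Ollama's /api/chat endpoint.
# Assumptions: local server on port 11434; model tag "cogito-v2.1"
# matches the name used in this release; get_weather is a made-up tool.
import json
import requests

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool for illustration
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}]

response = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "cogito-v2.1",
        "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
        "tools": tools,
        "stream": False,
    },
    timeout=120,
)
response.raise_for_status()
message = response.json()["message"]

# If the model decided to call a tool, the parsed calls appear here.
for call in message.get("tool_calls", []):
    fn = call["function"]
    print(fn["name"], json.dumps(fn["arguments"]))
```

If message.tool_calls is absent, the model answered directly in message.content; the thinking and tool parsing added in this release is what turns the model's raw output into the structured tool_calls field shown above.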

🐛 Bug Fixes

  • Fixed CUDA VRAM discovery issues.
  • Fixed broken documentation link within the Ollama app.
  • Fixed an issue where models were prematurely evicted on CPU-only systems.
  • Fixed a CUDA GPU detection failure on older GPU models.

🔧 Affected Symbols

nomic-embed-text · cogito-v2.1 · CUDA · Ministral-3 · Mistral-Large-3