v0.14.0-rc3
Summary
This release introduces experimental support for image generation models via MLX and adds Anthropic API compatibility through the `/v1/messages` endpoint. It also lets a `Modelfile` declare the Ollama version a model requires via the new `REQUIRES` command, and improves VRAM measurement accuracy.
Migration Steps
- If your models depend on a specific Ollama version, declare it with the new `REQUIRES` command in your `Modelfile`.
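A minimal `Modelfile` sketch showing the new command. The exact syntax follows the release notes' description of `REQUIRES`; the base model and version number here are illustrative, not prescribed:

```
FROM llama3.2
# Pin the Ollama version this model requires (version is an example).
REQUIRES 0.14.0
```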
✨ New Features
- Introduced experimental support for image generation models powered by MLX.
- `ollama run --experimental` now launches a new interactive CLI featuring an agent loop and a `bash` tool.
- Added support for the Anthropic API's `/v1/messages` endpoint.
- A new `REQUIRES` command is available in `Modelfile` to specify the required Ollama version for a model.
- The Ollama app now applies syntax highlighting to Swift source code.
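To illustrate the Anthropic-compatible endpoint, the sketch below builds a request body in the Anthropic Messages API shape. The host/port (`localhost:11434`), model name, and field values are assumptions for illustration; only the `/v1/messages` path comes from the release notes.

```python
import json

# Anthropic Messages-style request body (model name and prompt are
# illustrative; adjust to a model you have pulled locally).
payload = {
    "model": "llama3.2",
    "max_tokens": 256,
    "messages": [
        {"role": "user", "content": "Why is the sky blue?"},
    ],
}
body = json.dumps(payload)

# To send it against a running Ollama instance (assumed default port):
#   curl http://localhost:11434/v1/messages \
#     -H "content-type: application/json" -d "$BODY"
print(body)
```

The body mirrors what Anthropic SDK clients emit, so pointing such a client's base URL at the local Ollama server should exercise the same code path.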
🐛 Bug Fixes
- Fixed an issue where older models could cause an integer underflow during memory estimation on systems with low VRAM.
- Improved the accuracy of VRAM measurements for AMD iGPUs.
- Embedding requests that produce `NaN` or `-Inf` values now return an error instead of being accepted silently.