v0.15.5-rc4
📦 ollama
✨ 5 features
Summary
This release introduces two new models, GLM-OCR and Qwen3-Coder-Next, adds sub-agent support to 'ollama launch', and defaults context lengths based on available VRAM.
Migration Steps
- The 'ollama signin' command now opens a browser window for authentication.
✨ New Features
- Added support for GLM-OCR, a multimodal OCR model.
- Added Qwen3-Coder-Next, a coding-focused language model.
- Introduced sub-agent support for 'ollama launch' for planning and research tasks.
- Enabled GLM-4.7-Flash support on the experimental MLX engine.
- Ollama now chooses a default context length based on available VRAM: 4k for less than 24 GiB, 32k for 24-48 GiB, and 262k for 48 GiB or more.
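The VRAM tiering rule above can be sketched as a small function. This is an illustrative sketch only, not Ollama's actual implementation; the function name and GiB input are assumptions, and 262k is taken to mean 262,144 tokens as stated in the notes.

```python
def default_context_length(vram_gib: float) -> int:
    """Hypothetical sketch of the VRAM-based context-length tiers."""
    if vram_gib < 24:
        return 4 * 1024      # 4k tokens for < 24 GiB
    elif vram_gib < 48:
        return 32 * 1024     # 32k tokens for 24-48 GiB
    else:
        return 262_144       # 262k tokens for >= 48 GiB
```

Note that the tier boundaries are inclusive on the upper side per the release notes: a 24 GiB GPU gets 32k, and a 48 GiB GPU gets 262k.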