Changelog

v0.12.5

Breaking Changes
📦 ollama
⚠️ 2 breaking · ✨ 2 features · 🐛 2 fixes · 🔧 5 symbols

Summary

This release introduces structured output support for thinking models and improves app startup behavior, while removing support for older macOS versions and specific AMD GPU architectures.

⚠️ Breaking Changes

  • macOS 12 Monterey and macOS 13 Ventura are no longer supported. Users must upgrade their OS to continue using the latest Ollama versions.
  • AMD gfx900 and gfx906 GPUs (MI50, MI60, etc.) are no longer supported via ROCm. Users with these GPUs will need to wait for future Vulkan support or stay on an older version of Ollama.

Migration Steps

  1. If you use AMD gfx900/gfx906 GPUs, note that ROCm support has been removed; monitor future releases for Vulkan support.

✨ New Features

  • Thinking models now support structured outputs when using the /api/chat API (a request sketch follows this list).
  • The Ollama app now waits until the Ollama service is fully running before allowing a conversation to start (a client-side readiness check is also sketched below).
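
The structured-output feature pairs the /api/chat endpoint's format field (a JSON schema) with the think flag. The snippet below is a minimal sketch, not an official example: it assumes a local Ollama server on the default port, a locally pulled thinking-capable model such as deepseek-r1, and the Python requests package; the schema and prompt are made up for illustration.

```python
import json

import requests  # third-party HTTP client: pip install requests

# Example JSON schema for the structured answer (hypothetical; shape it to your own use case).
schema = {
    "type": "object",
    "properties": {
        "city": {"type": "string"},
        "population": {"type": "integer"},
    },
    "required": ["city", "population"],
}

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "deepseek-r1",  # any locally pulled thinking-capable model
        "messages": [{"role": "user", "content": "What is the largest city in Japan?"}],
        "think": True,     # keep the model's thinking enabled
        "format": schema,  # constrain the final answer to the schema above
        "stream": False,
    },
    timeout=120,
)
resp.raise_for_status()
message = resp.json()["message"]
print(message.get("thinking", ""))     # reasoning trace, if the model returns one
print(json.loads(message["content"]))  # structured answer parsed into a dict
```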

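The startup change applies to the bundled app, but the same readiness check is easy to reproduce in client code. This is a minimal sketch under stated assumptions, not the app's actual implementation: it assumes the default local port, uses the hypothetical helper name wait_for_ollama, and relies only on the server's root endpoint answering once the service is up.

```python
import time

import requests  # third-party HTTP client: pip install requests


def wait_for_ollama(base_url: str = "http://localhost:11434", timeout_s: float = 30.0) -> None:
    """Poll the Ollama server's root endpoint until it responds, or raise after timeout_s."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            if requests.get(base_url, timeout=2).ok:
                return  # server answered; safe to start a conversation
        except requests.ConnectionError:
            pass  # server process not accepting connections yet
        time.sleep(0.5)
    raise TimeoutError("Ollama service did not become ready in time")


wait_for_ollama()
```
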
🐛 Bug Fixes

  • Fixed an issue where setting "think": false triggered an error instead of being silently ignored.
  • Fixed output issues specifically affecting deepseek-r1 models.

🔧 Affected Symbols

  • /api/chat
  • deepseek-r1
  • ROCm
  • gfx900
  • gfx906