LocalAI v3.2.0
⚠ 2 breaking · ✨ 8 features · 🐛 1 fix · 🔧 5 symbols
Summary
LocalAI 3.2.0 introduces a major architectural shift by separating inference backends from the core binary, resulting in a leaner application and enabling independent backend management. This release also adds automatic hardware detection for backend installation and expands model support significantly.
⚠️ Breaking Changes
- The core is now separated from all inference backends (llama.cpp, whisper.cpp, piper, stablediffusion-ggml, etc.). These are no longer bundled in the main binary.
- Models installed prior to v3.2.0 may not have a specific backend assigned; after upgrading, you may need to install the required backend manually before they will work.
Migration Steps
- If you have models that were installed before 3.2.0, install the backend they require via the WebUI or the CLI command `local-ai backends install <backend_name>` (see the sketch after this list).
- For advanced use cases, or to override auto-detection of hardware capabilities, set the `LOCALAI_FORCE_META_BACKEND_CAPABILITY` environment variable (options: `default`, `nvidia`, `amd`, `intel`).
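A minimal shell sketch of both migration steps. The backend name `llama-cpp` and the `local-ai run` invocation are illustrative assumptions; substitute whatever your models and deployment actually use:

```bash
# Install the backend an existing model needs. The backend name below is
# an assumption; pick the matching entry from the Backend Gallery.
local-ai backends install llama-cpp

# Start LocalAI while forcing a hardware capability instead of relying on
# auto-detection (valid values per the release notes: default, nvidia,
# amd, intel).
LOCALAI_FORCE_META_BACKEND_CAPABILITY=nvidia local-ai run
```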
✨ New Features
- Introduction of Modular Backends: Backends now live outside the main binary in a Backend Gallery, allowing independent updates.
- Drastically smaller LocalAI binary and container images due to backend separation.
- Smart Backend Installation: LocalAI automatically detects hardware (CPU, NVIDIA, AMD, Intel) and downloads the necessary backend upon model installation.
- Simplified Build Process due to the new modular architecture.
- Intel GPU Support for Whisper transcription via SYCL acceleration.
- Enhanced Realtime Audio support with speech-started and speech-stopped events.
- OpenAI-compatible support for the `input_audio` field in the `/v1/chat/completions` API for multimodal audio inputs (see the example after this list).
- Massive Model Expansion: Over 50 new models added, including Qwen3, Gemma, Mistral, and Nemotron.
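As an illustration of the new `input_audio` support, here is a sketch assuming the standard OpenAI-compatible request shape; the server address, model name, and audio file are placeholders:

```bash
# Base64-encode a short WAV clip (GNU coreutils syntax) and send it as a
# multimodal chat message. Model name and host are hypothetical.
AUDIO_B64=$(base64 -w0 question.wav)

curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen3-omni",
    "messages": [{
      "role": "user",
      "content": [
        {"type": "text", "text": "What is being asked in this recording?"},
        {"type": "input_audio",
         "input_audio": {"data": "'"$AUDIO_B64"'", "format": "wav"}}
      ]
    }]
  }'
```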
🐛 Bug Fixes
- Fixed an issue where the download status for backend images in the gallery was not correctly displayed.
🔧 Affected Symbols
- `llama.cpp`
- `whisper.cpp`
- `piper`
- `stablediffusion-ggml`
- `/v1/chat/completions`