
v3.1.0

📦 localai
⚠️ 4 breaking · ✨ 3 features · 🐛 1 fix

Summary

LocalAI 3.1 introduces support for Gemma 3n models and streamlines the container image structure by removing bundled sources, significantly reducing image sizes. This release also adds meta-packages for easier backend installation and marks LocalAGI's return to the LocalAI ecosystem.

⚠️ Breaking Changes

  • Container images no longer bundle sources, significantly reducing image sizes. If rebuilding locally, follow the documentation to build from scratch.
  • Default model path in container images changed from /build/models to /models/.
  • Default backend path in container images changed from /build/backends to /backends/.
  • Container image tag naming for development builds has been standardized: `gpu-nvidia-cuda11` (was `cublas-cuda11`), `gpu-nvidia-cuda12` (was `cublas-cuda12`), `gpu-intel-f16` (was `sycl-f16`), and `gpu-intel-f32` (was `sycl-f32`).

Migration Steps

  1. If you rely on bundled sources in container images, you must now rebuild locally following the documentation.
  2. Update paths referencing model directories in container images from `/build/models` to `/models/`.
  3. Update paths referencing backend directories in container images from `/build/backends` to `/backends/`.
  4. Update any scripts or configurations referencing old development container image tags (e.g., `cublas-cuda11`) to the new standardized names (e.g., `gpu-nvidia-cuda11`); a before/after sketch follows this list.
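
As a minimal sketch of steps 2 and 4 combined, a typical `docker run` invocation would change roughly as follows. The `localai/localai` image name and the `master-` tag prefix are assumptions for a development build; adjust them to whatever image you actually pull:

```bash
# Before (pre-3.1): models mounted at the old in-container path, old dev tag
docker run -p 8080:8080 \
  -v $PWD/models:/build/models \
  localai/localai:master-cublas-cuda12

# After (3.1.0): new /models/ path and standardized gpu-nvidia tag
docker run -p 8080:8080 \
  -v $PWD/models:/models \
  localai/localai:master-gpu-nvidia-cuda12
```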

✨ New Features

  • Added support for Gemma 3n models: `gemma-3n-e2b-it` and `gemma-3n-e4b-it` (text generation only); see the usage sketch after this list.
  • Introduced meta-packages in the backend gallery, which automatically install the most suitable backend for the detected GPU; see the second sketch after this list.
  • LocalAGI has rejoined LocalAI, completing the LocalAI ecosystem stack for private AI operations.
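
As a quick check of the new Gemma 3n models, a chat completion against LocalAI's OpenAI-compatible API might look like the sketch below (default port 8080 and a localhost install are assumed; the model must already be installed):

```bash
# Minimal chat completion against a local LocalAI instance
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gemma-3n-e2b-it",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```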
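
If you manage backends from the CLI, installing a meta-package could look roughly like this. The `local-ai backends install` subcommand and the `llama-cpp` meta-package name are assumptions here; check the backend gallery documentation for the exact names exposed in your install:

```bash
# Hypothetical: the meta-package resolves to the best backend variant
# (e.g., CUDA vs. Intel) for the GPU detected on this machine.
local-ai backends install llama-cpp
```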

🐛 Bug Fixes

  • Fixed an issue where dangling directories were not deleted if backend installation failed.