v5.6.0rc3
📦 invokeai
✨ 5 features · 🐛 8 fixes · 🔧 1 symbol
Summary
This release focuses on memory management: a new Low-VRAM mode lets large models run on memory-constrained hardware, and Workflow batch processing gains new data types and grouping features.
Migration Steps
- To enable Low-VRAM mode (recommended for systems with limited VRAM, and beneficial on most), add the following line to your `invokeai.yaml` file: `enable_partial_loading: true`.
- Windows users should also follow the guide to [disable the Nvidia sysmem fallback](https://invoke-ai.github.io/InvokeAI/features/low-vram/#disabling-nvidia-sysmem-fallback-windows-only).
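The first step above is a one-line config change. A minimal `invokeai.yaml` fragment (any other settings you already have stay as they are):

```yaml
# invokeai.yaml
# Enables Low-VRAM mode (partial model loading).
enable_partial_loading: true
```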
✨ New Features
- Introduced Low-VRAM mode, including partial model loading, dynamic cache sizes, working-memory adjustments, and keeping RAM copies of weights, to support large models on low-VRAM GPUs.
- Added support for float, integer, and string batch data types in Workflows.
- Added batch data generators for floats and integers: Arithmetic Sequence, Linear Distribution, Uniform Random Distribution, and Parse String.
- Added support for grouped (zipped) batches in Workflows, allowing related batch collections to be processed as parallel pairs instead of as a Cartesian product.
- Added Noise and Blur filters to the Canvas.
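The batch generators and grouping above are Workflow UI features, but the underlying ideas are easy to sketch. The following is a minimal Python illustration, not InvokeAI's actual implementation; the exact semantics (inclusive endpoints, pairing behavior) are assumptions inferred from the feature names:

```python
import random
from itertools import product

def arithmetic_sequence(start: float, step: float, count: int) -> list[float]:
    """Arithmetic Sequence: start, start+step, start+2*step, ..."""
    return [start + i * step for i in range(count)]

def linear_distribution(lo: float, hi: float, count: int) -> list[float]:
    """Linear Distribution: `count` evenly spaced values from lo to hi."""
    if count == 1:
        return [lo]
    return [lo + i * (hi - lo) / (count - 1) for i in range(count)]

def uniform_random(lo: float, hi: float, count: int) -> list[float]:
    """Uniform Random Distribution: `count` samples drawn from [lo, hi]."""
    return [random.uniform(lo, hi) for _ in range(count)]

# Grouped (zipped) batches vs. the default Cartesian product,
# using hypothetical CFG-scale and step values:
cfg_scales = arithmetic_sequence(5.0, 0.5, 3)   # [5.0, 5.5, 6.0]
steps      = [20, 30, 40]

cartesian = list(product(cfg_scales, steps))    # every combination: 9 runs
zipped    = list(zip(cfg_scales, steps))        # paired in parallel: 3 runs
```

Zipping is useful when the collections are related (e.g. each scale was tuned for a specific step count), so running all cross-combinations would waste generations.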
🐛 Bug Fixes
- Fixed image quality degradation when inpainting an image repeatedly.
- Fixed issue where transparent Canvas filter previews blended incorrectly with the unfiltered parent layer.
- Fixed issue where excessively long board names could cause performance issues.
- Fixed error when using DPM++ schedulers with certain models.
- Fixed (hopefully) the application scrolling off screen when run via the launcher.
- Fixed link to Scale setting's support docs.
- Fixed launcher issue requiring re-installation on every start for some users.
- Fixed launcher issue causing systems with AMD GPUs to use CPU instead of GPU.
🔧 Affected Symbols
`enable_partial_loading`