v6.12.0rc1
Summary
InvokeAI v6.12.0rc1 introduces experimental multi-user support, extends model compatibility to Z-Image and FLUX.2, and adds significant new features to the Canvas and the Model Manager. This release also adds remote control via a new REST endpoint for setting generation parameters.
Migration Steps
- If using the gallery, go to image board settings and select "Use Paged Gallery View" to enable page-by-page navigation instead of infinite scrolling.
- Users working from a source checkout can run `scripts/remove_orphaned_models.py` to clean up unused models from the command line.
- Users working from a source checkout can run `scripts/gallery_maintenance.py` to clean up dangling and orphaned gallery images.
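The two maintenance scripts above can be driven from a small wrapper. This is a hedged sketch: the script names come from these notes, but their flags and output are undocumented here, so the wrapper simply invokes each script if it exists in the checkout (the `run_maintenance` helper is hypothetical, not part of InvokeAI).

```python
# Hedged sketch: run the maintenance scripts named in the release notes
# from the repository root. Their command-line flags are not documented
# here, so each is invoked with no arguments.
import subprocess
from pathlib import Path

SCRIPTS = [
    "scripts/remove_orphaned_models.py",
    "scripts/gallery_maintenance.py",
]

def run_maintenance(repo_root: str) -> list:
    """Run each maintenance script that exists; return one status line per script."""
    results = []
    for rel in SCRIPTS:
        path = Path(repo_root) / rel
        if not path.is_file():
            # Script absent (e.g. a pip install, not a source checkout): skip it.
            results.append(f"skipped (not found): {rel}")
            continue
        proc = subprocess.run(["python", str(path)], capture_output=True, text=True)
        results.append(f"exit {proc.returncode}: {rel}")
    return results

if __name__ == "__main__":
    for line in run_maintenance("."):
        print(line)
```

Run it from the repository root; installs without the `scripts/` directory are reported as skipped rather than failing.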
✨ New Features
- Experimental multi-user mode introduced, allowing separate user accounts with distinct image boards, images, and preferences.
- Enhanced support for Z-Image Base models, suitable for fine-tuning and LoRA training.
- Support added for various FLUX.2 Klein LoRA formats.
- Paged gallery browsing is available as an option, replacing infinite scrolling.
- Arrow key navigation is now functional in the gallery viewer and thumbnail selection.
- New Text tool added to Canvas for inserting and manipulating text layers.
- Linear and radial gradient tools added to Canvas, utilizing foreground/background colors and transparency.
- Invert button added for Regional Guidance Layers and Inpaint Masks to swap painted/unpainted regions.
- New 'Sync Models' button in Model Manager to detect and remove orphaned models from the database/disk.
- New 'Missing Files' filter option in Model Manager to identify and remove models referenced in the database but missing files.
- Model selection menus no longer display missing or broken models.
- New REST endpoint allows programmatic setting of Invoke's generation parameters (model, size, seed, steps, LoRAs, etc.).
- Option added in Model Manager details panel to force large text encoder models to run on CPU to preserve GPU VRAM.
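The new REST endpoint for generation parameters can be called from any HTTP client. The sketch below is an assumption-heavy illustration: the release notes name the parameters (model, size, seed, steps, LoRAs) but not the endpoint path or JSON field names, so `/api/v1/params` and the payload keys here are hypothetical.

```python
# Hedged sketch: build and send a generation-parameters payload.
# The endpoint path and field names are assumptions; only the parameter
# list (model, size, seed, steps, LoRAs) comes from the release notes.
import json
import urllib.request

def build_params_payload(model, width, height, seed, steps, loras=()):
    """Serialize generation parameters as a JSON request body."""
    return json.dumps({
        "model": model,
        "width": width,
        "height": height,
        "seed": seed,
        "steps": steps,
        "loras": list(loras),
    }).encode("utf-8")

def set_generation_params(base_url, **params):
    """POST the parameters to a hypothetical /api/v1/params endpoint."""
    req = urllib.request.Request(
        base_url.rstrip("/") + "/api/v1/params",  # assumed path
        data=build_params_payload(**params),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

Consult the API documentation shipped with your install for the real route and schema before wiring this into automation.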
🐛 Bug Fixes
- Workflow collection order is now preserved when passed to an iterator, ensuring predictable execution.
- Race condition in the Model Cache that prematurely removed the FLUX.2 Klein encoder from memory has been addressed via optimized cache locking.
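The cache fix above follows a general pattern: guard eviction with a lock and a per-entry pin count, so a model that a generation thread is actively using (such as the FLUX.2 Klein encoder) cannot be dropped out from under it. The class below is a minimal illustration of that technique, not InvokeAI's actual Model Cache code.

```python
# Hedged sketch of lock-guarded, pin-counted eviction. Illustrative only;
# InvokeAI's real Model Cache has a different API and eviction policy.
import threading

class PinnedCache:
    def __init__(self):
        self._lock = threading.Lock()
        self._entries = {}   # key -> cached object
        self._pins = {}      # key -> number of active users

    def put(self, key, value):
        with self._lock:
            self._entries[key] = value
            self._pins.setdefault(key, 0)

    def acquire(self, key):
        """Pin an entry so eviction skips it; returns the cached value."""
        with self._lock:
            self._pins[key] += 1
            return self._entries[key]

    def release(self, key):
        """Unpin an entry once the caller is done with it."""
        with self._lock:
            self._pins[key] -= 1

    def evict_unpinned(self):
        """Remove only entries no thread is currently using; return their keys."""
        with self._lock:
            victims = [k for k, n in self._pins.items() if n == 0]
            for k in victims:
                del self._entries[k]
                del self._pins[k]
            return victims
```

Because both the pin counters and the eviction scan run under the same lock, an eviction pass can never observe a model between "acquired" and "counted", which is the window where the race occurred.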