
v5.7.2rc2

📦 invokeai
✨ 3 features · 🐛 4 fixes · 🔧 1 symbol

Summary

This release introduces memory management improvements via an optional CUDA memory allocator setting and makes the enqueue operation non-blocking for better responsiveness. It also includes fixes for UI rendering, workflow downloads, and VAE memory estimation.

Migration Steps

  1. To utilize the CUDA memory allocator for potential VRAM reduction, add the `pytorch_cuda_alloc_conf` setting to your `invokeai.yaml` file (e.g., `pytorch_cuda_alloc_conf: "backend:cudaMallocAsync"`).
  2. For installation and updates, we recommend using the new Invoke Launcher; see the Quick Start guide.
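Step 1 amounts to a one-line addition to your config file. A minimal `invokeai.yaml` fragment (your other existing settings are omitted here):

```yaml
# invokeai.yaml — opt into PyTorch's CUDA allocator configuration.
# The value mirrors the example in the migration steps above.
pytorch_cuda_alloc_conf: "backend:cudaMallocAsync"
```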

✨ New Features

  • Added support for uploading WEBP images, which are converted to PNGs internally.
  • Introduced the `pytorch_cuda_alloc_conf` setting in `invokeai.yaml` to allow opting into CUDA's memory allocator for potentially reduced peak VRAM usage and improved performance.
  • Enqueue operation is now non-blocking, improving application responsiveness after clicking Invoke, especially for large batches.
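The non-blocking enqueue follows a familiar pattern: the click handler hands the expensive batch preparation to a worker and returns immediately. A minimal sketch of that pattern, where `prepare_batch` and `on_invoke_clicked` are illustrative stand-ins, not InvokeAI's actual API:

```python
import time
from concurrent.futures import Future, ThreadPoolExecutor

# Single worker preserves enqueue order while keeping the UI thread free.
executor = ThreadPoolExecutor(max_workers=1)

def prepare_batch(size: int) -> int:
    # Stand-in for expensive batch expansion/validation.
    time.sleep(0.01)
    return size

def on_invoke_clicked(size: int) -> Future:
    # Returns a Future immediately; the caller is never blocked.
    return executor.submit(prepare_batch, size)

future = on_invoke_clicked(100)
print(future.result())  # → 100
```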

🐛 Bug Fixes

  • Fixed rendering issues for "single or collection" field types in the Workflow Editor, resolving display problems for widgets like IP Adapter images and ControlNet control weights.
  • Corrected the download button in the Workflow Library list to download the intended workflow instead of the currently active one.
  • Reduced VAE VRAM usage estimates to mitigate slowdowns and Out-Of-Memory errors during the VAE decode step.
  • Fixed recursive cursor errors by migrating DB access from global mutex/long-lived cursors to WAL mode with short-lived cursors.
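The WAL fix above boils down to a common SQLite pattern: enable write-ahead logging once per connection, then open and close a cursor inside each operation instead of sharing one long-lived cursor behind a global mutex. A minimal sketch of that pattern (not InvokeAI's actual DB layer):

```python
import os
import sqlite3
import tempfile

# WAL mode lets readers proceed concurrently with a writer, removing the
# need for a global mutex around a shared cursor.
db_path = os.path.join(tempfile.mkdtemp(), "app.db")
conn = sqlite3.connect(db_path)
conn.execute("PRAGMA journal_mode=WAL;")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY)")
conn.execute("INSERT INTO items DEFAULT VALUES")
conn.commit()

def fetch_count(conn: sqlite3.Connection) -> int:
    # Short-lived cursor: opened, used, and closed within one call, so a
    # re-entrant call cannot trip over a shared cursor's internal state.
    cur = conn.cursor()
    try:
        cur.execute("SELECT COUNT(*) FROM items")
        return cur.fetchone()[0]
    finally:
        cur.close()

print(fetch_count(conn))  # → 1
```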

🔧 Affected Symbols

invokeai.yaml