v1.1.0
📦 huggingface-hub
✨ 8 features · 🐛 2 fixes · 🔧 9 symbols
Summary
This release focuses on optimizing the file download experience through multi-threading and cleaner CLI output, while significantly expanding CLI capabilities with new commands for managing Inference Endpoints and verifying cache integrity. Support for WaveSpeedAI as an inference provider and for image segmentation on fal is also introduced.
Migration Steps
- If you rely on verbose per-file logging in snapshot_download or the hf download CLI, note that per-file logs are now hidden by default. The output is cleaner, but if per-file detail was critical to your workflow, you may need to adjust your tooling or expectations.
- If you use the CLI, you can now install it via the new minimal 'hf' PyPI package: `pip install hf` (or use tools like uvx).
✨ New Features
- snapshot_download is now always multi-threaded for significant performance gains.
- Output for snapshot_download and hf download CLI is less verbose, with per-file logs hidden by default and progress bars consolidated.
- WaveSpeedAI is added as an official Inference Provider supporting text-to-image, image-to-image, text-to-video, and image-to-video tasks.
- Support for the image-segmentation task added for the fal Inference Provider, enabling use of RMBG v2.0.
- A new, minimal PyPI package named 'hf' is published for installing the 'hf' CLI tool.
- New 'hf endpoints' command group added to the CLI for deploying and managing Inference Endpoints.
- New 'hf cache verify' command added to check cached files against Hub checksums.
- 'hf cache ls' command enhanced with --sort (by accessed, modified, name, or size) and --limit options for better cache management.
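The always-on multi-threading in snapshot_download fetches many files concurrently instead of one at a time, which is what drives the performance gain for repos with many files. A minimal stdlib-only sketch of the idea (the `fetch_file` helper and worker count are illustrative, not the library's actual internals):

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_file(filename: str) -> str:
    # Placeholder for an HTTP download; real code would stream bytes to disk.
    return f"downloaded {filename}"

def snapshot_fetch(filenames: list[str], max_workers: int = 8) -> list[str]:
    # Download files concurrently; threads suit I/O-bound work like HTTP
    # transfers, and pool.map preserves the input order of results.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(fetch_file, filenames))

results = snapshot_fetch(["config.json", "model.safetensors"])
```

Because downloads are I/O-bound, threads overlap network waits even under the GIL, so total time approaches that of the slowest file rather than the sum of all files.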
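Conceptually, `hf cache verify` boils down to hashing each cached file and comparing the digest against the checksum the Hub reports. A hedged stdlib sketch of that comparison (the function names and the source of the expected digest are illustrative, not the command's actual implementation):

```python
import hashlib
from pathlib import Path

def file_sha256(path: Path) -> str:
    # Hash in chunks so large model files are never loaded fully into memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path: Path, expected_sha256: str) -> bool:
    # A mismatch means the cached file is corrupt or was modified locally.
    return file_sha256(path) == expected_sha256
```

Chunked hashing keeps memory flat regardless of file size, which matters when a cache holds multi-gigabyte model weights.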
🐛 Bug Fixes
- Fixed a bug in HfFileSystem where the instance cache broke when using multiprocessing with the 'fork' start method.
- Patched the CLI installer script to fix a bug for zsh users, ensuring correct operation across common shells.
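The HfFileSystem fix concerns instance caching across fork(): a child process that inherits the parent's cached instances (and their open connections or locks) can misbehave. One common pattern for making such a cache fork-safe, shown here as a hypothetical stdlib sketch rather than the library's actual fix, is to key the cache by process id so a forked child builds fresh instances:

```python
import os

class CachedFS:
    # Cache keyed by (pid, config): within one process, equal configs share an
    # instance; a forked child sees a different pid and never reuses the
    # parent's instances, whose sockets/locks are not fork-safe.
    _cache: dict = {}

    def __new__(cls, endpoint: str):
        key = (os.getpid(), endpoint)
        inst = cls._cache.get(key)
        if inst is None:
            inst = super().__new__(cls)
            cls._cache[key] = inst
        return inst
```

Within a single process this preserves the memoization benefit; after a fork, the pid changes and stale inherited state is simply never looked up.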