2.5.1
📦 pytorch-lightning — View on GitHub →
✨ 4 features · 🐛 10 fixes · 🔧 8 symbols
Summary
This release introduces enhancements for logging integrations like MLflow and CometML, allows customization of LightningCLI argument parsing, and fixes several bugs related to logging latency, checkpoint resumption, and logger behavior. Legacy support for `lightning run model` has been removed in favor of `fabric run`.
Migration Steps
- If you were using `lightning run model`, switch to using `fabric run` instead.
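As a sketch, the migration amounts to swapping the CLI entry point (the script path and flag below are hypothetical placeholders, not from the release notes):

```shell
# Before (removed in this release):
lightning run model ./train.py --devices 2

# After:
fabric run ./train.py --devices 2
```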
✨ New Features
- LightningCLI can now use a customized argument parser class.
- Added a new `checkpoint_path_prefix` parameter to the MLflow logger to control the artifact path for model checkpoints.
- CometML logger was updated to support the recent Comet SDK.
- Added logging support for a list of dicts in Lightning Fabric without collapsing to a single key.
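As a usage sketch for the new MLflow logger parameter named above (not runnable without a Lightning + MLflow installation; the experiment name and prefix value are hypothetical):

```python
from lightning.pytorch import Trainer
from lightning.pytorch.loggers import MLFlowLogger

# checkpoint_path_prefix (new in 2.5.1) controls the artifact path
# under which model checkpoints are stored in MLflow.
mlf_logger = MLFlowLogger(
    experiment_name="my-experiment",           # hypothetical name
    checkpoint_path_prefix="prod/checkpoints",  # hypothetical prefix
)
trainer = Trainer(logger=mlf_logger)
```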
🐛 Bug Fixes
- Fixed CSVLogger logging hyperparameters at every write, which was increasing latency.
- Fixed OverflowError when resuming from checkpoint with an iterable dataset.
- Fixed swapped `_R_co` and `_P` type variables to prevent a type error.
- Ensured `WandbLogger.experiment` is called first in `_call_setup_hook` to correctly sync tensorboard logs to wandb.
- Fixed the TBPTT (truncated backpropagation through time) example.
- Fixed test compatibility as AdamW became a subclass of Adam.
- Fixed file extension of model checkpoints uploaded by NeptuneLogger.
- Reset the trainer's `should_stop` flag when `fit` is called.
- Fixed `WandbLogger` so it uploads models from all `ModelCheckpoint` callbacks instead of only one.
- Fixed an error when logging to a deleted MLflow experiment.
🔧 Affected Symbols
- `LightningCLI`
- `MLFlowLogger`
- CometML logger
- `CSVLogger`
- `WandbLogger`
- `NeptuneLogger`
- `Trainer.should_stop`
- `ModelCheckpoint`