PyTorch

Tensors and Dynamic neural networks in Python with strong GPU acceleration

Latest: v2.9.1 · 6 releases · 6 breaking changes · View on GitHub →

Release History

v2.9.1 · Breaking · 12 fixes · 3 features
Nov 12, 2025

This maintenance release addresses critical regressions in PyTorch 2.9.0: memory issues in 3D convolutions, Inductor compilation bugs affecting Gemma/vLLM, and various distributed and numeric-stability problems.

v2.9.0 · Breaking · 1 fix · 7 features
Oct 15, 2025

PyTorch 2.9.0 raises the minimum supported Python version to 3.10, defaults the ONNX exporter to the Dynamo-based pipeline, and adds support for symmetric memory and FlexAttention on new hardware.

v2.8.0 · Breaking · 3 fixes · 10 features
Aug 6, 2025

PyTorch 2.8.0 introduces high-performance quantized LLM inference on Intel CPUs, SYCL support for C++ extensions, and stricter validation for autograd and torch.compile. It includes significant breaking changes to CUDA architecture support and internal configuration renames.

v2.7.1 · Breaking · 16 fixes · 3 features
Jun 4, 2025

This maintenance release fixes regressions and silent-correctness issues across torch.compile, Distributed, and FlexAttention, while also reducing wheel sizes and improving platform-specific compatibility for macOS, Windows, and XPU.

v2.7.0 · Breaking · 1 fix · 9 features
Apr 23, 2025

PyTorch 2.7.0 introduces Blackwell support and FlexAttention optimizations while enforcing stricter C++ API visibility and Python limited API compliance. It marks a significant shift in ONNX and Export workflows by deprecating legacy capture methods in favor of the unified torch.export API.

v2.6.0 · Breaking · 10 features
Jan 29, 2025

PyTorch 2.6 introduces Python 3.13 support for torch.compile, FP16 support for x86 CPUs, and new AOTInductor packaging APIs. It also includes a significant security change, making torch.load default to weights_only=True, and deprecates the official Anaconda channel.
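The weights_only=True default makes torch.load refuse to unpickle arbitrary Python objects, resolving only an allowlist of safe tensor and container types. The underlying idea can be sketched with the stdlib pickle module alone; RestrictedUnpickler, safe_loads, and the allowlist below are illustrative names, not part of the torch API:

```python
import io
import os
import pickle
from collections import OrderedDict

class RestrictedUnpickler(pickle.Unpickler):
    # Only resolve globals from an explicit allowlist; anything else
    # (e.g. os.system smuggled into a checkpoint) is rejected.
    ALLOWED = {("collections", "OrderedDict")}

    def find_class(self, module, name):
        if (module, name) in self.ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")

def safe_loads(data: bytes):
    """Deserialize `data`, permitting only allowlisted globals."""
    return RestrictedUnpickler(io.BytesIO(data)).load()

# A plain state-dict-like payload loads fine...
payload = pickle.dumps(OrderedDict(weight=[1.0, 2.0]))
print(safe_loads(payload))

# ...but a payload referencing an arbitrary callable is rejected.
try:
    safe_loads(pickle.dumps(os.system))
except pickle.UnpicklingError as exc:
    print("rejected:", exc)
```

The real implementation additionally knows how to reconstruct tensors from their storage; the security-relevant part, however, is exactly this find_class gate. Code that must load full checkpoint objects can still opt out with torch.load(..., weights_only=False) for trusted files.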