
PyTorch


Tensors and Dynamic neural networks in Python with strong GPU acceleration

Latest: v2.10.0 · 7 releases · 7 breaking changes · 13 common errors

Release History

v2.10.0 · Breaking · 14 features
Jan 21, 2026

PyTorch 2.10 introduces Python 3.14 support for torch.compile, new features such as combo-kernel fusion and LocalTensor for distributed debugging, and removes several deprecated or legacy functionalities across the ONNX, DataLoader, and nn modules.

v2.9.1 · Breaking · 12 fixes · 3 features
Nov 12, 2025

This maintenance release addresses critical regressions in PyTorch 2.9.0, specifically fixing memory issues in 3D convolutions, Inductor compilation bugs for Gemma/vLLM, and various distributed and numeric stability fixes.

v2.9.0 · Breaking · 1 fix · 7 features
Oct 15, 2025

PyTorch 2.9.0 introduces Python 3.10 as the minimum requirement, defaults the ONNX exporter to the Dynamo-based pipeline, and adds support for symmetric memory and FlexAttention on new hardware.

v2.8.0 · Breaking · 3 fixes · 10 features
Aug 6, 2025

PyTorch 2.8.0 introduces high-performance quantized LLM inference on Intel CPUs, SYCL support for CPP extensions, and stricter validation for autograd and torch.compile. It includes significant breaking changes regarding CUDA architecture support and internal configuration renames.

v2.7.1 · Breaking · 16 fixes · 3 features
Jun 4, 2025

This maintenance release focuses on fixing regressions and silent correctness issues across torch.compile, Distributed, and FlexAttention, while also improving wheel sizes and platform-specific compatibility for macOS, Windows, and XPU.

v2.7.0 · Breaking · 1 fix · 9 features
Apr 23, 2025

PyTorch 2.7.0 introduces Blackwell support and FlexAttention optimizations while enforcing stricter C++ API visibility and Python limited API compliance. It marks a significant shift in ONNX and Export workflows by deprecating legacy capture methods in favor of the unified torch.export API.

v2.6.0 · Breaking · 10 features
Jan 29, 2025

PyTorch 2.6 introduces Python 3.13 support for torch.compile, FP16 support for X86 CPUs, and new AOTInductor packaging APIs. It includes a significant security change making torch.load use weights_only=True by default and deprecates the official Anaconda channel.
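The weights_only change is the one most likely to affect existing code: checkpoints containing only tensors and other allow-listed types keep loading as before, while pickled arbitrary objects now require an explicit opt-in. A minimal sketch of the new default behavior:

```python
import os
import tempfile

import torch

# Save a plain tensor checkpoint to a temporary file.
path = os.path.join(tempfile.mkdtemp(), "ckpt.pt")
torch.save({"weights": torch.ones(3)}, path)

# Since 2.6, torch.load defaults to weights_only=True, which restricts
# unpickling to tensors and other allow-listed types. A dict of tensors
# loads fine; checkpoints with arbitrary pickled objects would now need
# an explicit weights_only=False (only for sources you trust).
state = torch.load(path)
print(state["weights"].sum().item())  # 3.0
```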

Common Errors

ProcessRaisedException · 3 reports

ProcessRaisedException in PyTorch often arises from issues within multiprocessing contexts, specifically related to CUDA device handling or argument mismatches during distributed operations or within TorchInductor. Ensure CUDA devices are correctly initialized and visible to all processes, and verify that all function/class calls within multiprocessing conform to the expected argument count and types as defined by PyTorch or TorchInductor APIs, paying special attention to distributed configurations.

NotImplementedError · 3 reports

The "NotImplementedError" in PyTorch usually arises when a function or operation is called but the underlying implementation for a specific data type, device, or combination of features is missing. To fix this, either implement the missing functionality for the affected type/device, or, if that's not feasible, raise a `NotImplementedError` earlier with a more informative message to guide users away from unsupported usage. Consider dispatching to a suitable implementation based on the input types and devices to avoid the error.
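The dispatch-and-fail-early pattern described above can be sketched in plain Python (the `reduce_sum` function and `_KERNELS` table are hypothetical illustrations, not PyTorch APIs):

```python
# Dispatch on (dtype, device) and raise NotImplementedError early with an
# informative message, instead of letting a bare error surface deep inside.
_KERNELS = {
    ("float32", "cpu"): lambda xs: sum(xs),   # supported path
    ("float64", "cpu"): lambda xs: sum(xs),   # supported path
}

def reduce_sum(xs, dtype="float32", device="cpu"):
    kernel = _KERNELS.get((dtype, device))
    if kernel is None:
        raise NotImplementedError(
            f"reduce_sum is not implemented for dtype={dtype!r} on "
            f"device={device!r}; supported combinations: {sorted(_KERNELS)}"
        )
    return kernel(xs)

print(reduce_sum([1.0, 2.0, 3.0]))  # 6.0
```

The same idea scales to real operators: a lookup keyed on input properties selects an implementation, and the error message names exactly which combinations are supported.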

TorchRuntimeError · 2 reports

TorchRuntimeError in compiled PyTorch code often arises from operations that produce strides or shapes incompatible with the expected memory layout, especially when using `torch.compile`. Inspect custom operators, view/reshape operations, and ensure data contiguity using `.contiguous()` before such operations, paying attention to how `torch.compile` may optimize memory layouts differently than eager mode. If fake tensors are involved, check for discrepancies in shape/stride handling between meta and actual device execution by adding guards around problematic operations for different compilation modes.
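The stride-compatibility problem is easy to reproduce in eager mode, where it surfaces as a plain RuntimeError; a minimal sketch of the `.contiguous()` fix:

```python
import torch

# A transpose changes strides without moving data, leaving the
# tensor non-contiguous in memory.
x = torch.arange(6).reshape(2, 3).t()
assert not x.is_contiguous()

# view() requires strides compatible with the requested shape,
# so it raises RuntimeError on the transposed tensor.
try:
    x.view(6)
except RuntimeError:
    pass

# Copying to a contiguous layout first makes the view legal.
flat = x.contiguous().view(6)
print(flat.tolist())  # [0, 3, 1, 4, 2, 5]
```

Under `torch.compile` the same mismatch is reported as a TorchRuntimeError, but the remedy is the same: establish a known layout before stride-sensitive operations.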

NoValidChoicesError · 2 reports

The "NoValidChoicesError" in PyTorch Inductor usually indicates that no viable backend implementations (e.g., GEMM, convolution) are found that satisfy all constraints for a given operation, often due to unsupported data types, shapes, or hardware features. To fix this, either rewrite the operation using supported data types/shapes/layouts, or investigate and potentially enable/implement a missing backend implementation in Inductor that fulfills the requirements (often requires understanding Inductor's code generation).

InternalTorchDynamoError · 2 reports

InternalTorchDynamoError often arises from unsupported Python language features or overly complex control flow within your PyTorch model that Dynamo's tracing mechanism struggles to handle. To fix it, simplify your model's code by breaking down large functions, removing unsupported operations like in-place updates within loops, and making control flow more explicit using standard PyTorch operations instead of complex Python logic, or disable dynamo for the given section with `torch._dynamo.disable`. Consider filing a bug report with a minimal repro if the error persists, as it could indicate an actual Dynamo issue.

RefResolutionError · 2 reports

The "RefResolutionError" in PyTorch usually arises because a model configuration or checkpoint references a module attribute that no longer exists or has been renamed, often after a library update. Resolve this by inspecting the loaded model's state_dict and the model definition in your code to identify the missing or renamed attribute and update the model definition or checkpoint loading logic accordingly. Specifically, ensure the attribute name is consistent between the loaded state and the current model architecture; a common fix is to rename the attribute in the model definition or provide a mapping during checkpoint loading.
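The key-remapping fix can be sketched as follows (the stale "fc." prefix is a hypothetical example of a renamed attribute; a state_dict is just an ordered mapping of names to tensors, so plain dict comprehensions work):

```python
import torch

model = torch.nn.Linear(4, 2)

# Simulate an old checkpoint whose parameter names carry a stale
# "fc." prefix from a previous model definition.
old_state = {"fc." + k: v for k, v in model.state_dict().items()}

# Remap the stale names onto the current architecture before loading.
new_state = {k.removeprefix("fc."): v for k, v in old_state.items()}
model.load_state_dict(new_state)
print(sorted(new_state))  # ['bias', 'weight']
```

Inspecting `sorted(checkpoint_state)` against `sorted(model.state_dict())` side by side is usually the fastest way to spot which names diverged.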
