XGBoost

Data & ML

Scalable, Portable and Distributed Gradient Boosting (GBDT, GBRT or GBM) Library, for Python, R, Java, Scala, C++ and more. Runs on single machine, Hadoop, Spark, Dask, Flink and DataFlow

Latest: v3.2.0 · 14 releases · 3 breaking changes · 11 common errors · View on GitHub

Release History

v3.2.0
Feb 10, 2026

This release note points to the official documentation page for changes in version 3.2.0, with additional artifacts forthcoming.

v3.1.3
Jan 9, 2026
v3.1.2 · 2 fixes · 1 feature
Nov 20, 2025

Release 3.1.2 adds automatic inference of the enable_categorical flag during model load and includes bug fixes for Python callback ordering and NCCL 2.28 loading.

v3.1.1 · 3 fixes
Oct 21, 2025

XGBoost 3.1.1 improves error handling by emitting correct errors for GPU inputs on CPU-only builds, provides clearer messages for the removed binary model format, and fixes SHAP group ID handling when the intercept is a vector.

v3.1.0 · 1 feature
Oct 17, 2025

XGBoost 3.1.0 adds experimental R GPU binary packages with CUDA support and provides source tarball downloads.

v3.1.0rc1
Sep 26, 2025
v3.0.5 · Breaking · 1 feature
Sep 5, 2025

XGBoost 3.0.5 removes the `__restrict__` qualifier and switches compilation to CUDA Toolkit 13, providing updated GPU support.

v3.0.4 · Breaking · 1 feature
Aug 11, 2025

XGBoost 3.0.4 removes the `__restrict__` qualifier and makes CUDA lineinfo optional, with experimental CUDA-enabled R binaries now available.

v3.0.3 · 6 fixes · 4 features
Jul 30, 2025

XGBoost 3.0.3 introduces new APIs, adds RAPIDS 25.06 and Win‑ARM64 wheel support, and includes several bug fixes for GPU evaluation and metrics.

v3.0.2 · 1 fix · 1 feature
May 25, 2025

The update introduces Dask 2025.4.0 scheduler info compatibility and fixes VM fallback logic on WSL2, along with experimental CUDA-enabled R binary packages.

v3.0.1 · Breaking · 2 fixes · 6 features
May 13, 2025

This release adds GPU driver detection, deep-tree external-memory optimizations, new manylinux_2_28_x86_64 CPU wheels, Dask compatibility workarounds, and changes model output to use denormal floating-point values, along with several bug fixes.

v3.0.0
Mar 15, 2025

XGBoost 3.0.0 introduces experimental GPU-enabled R binary packages and provides source tarball downloads, with SHA-256 hashes for verification.

v3.0.0rc1
Feb 26, 2025

No specific changelog items were listed; refer to the linked issue for details.

v2.1.4 · 3 features
Feb 6, 2025

Patch 2.1.4 adds scikit-learn 1.6 compatibility, CUDA 12.8 wheel support with Blackwell, and updates for RMM 25.02 logger changes.

Common Errors

ThrowOnCudaError · 1 report

The "ThrowOnCudaError" in XGBoost usually arises from insufficient GPU memory or incompatible CUDA driver/toolkit versions. Reduce the `max_bin` parameter to alleviate GPU memory pressure (note that `gpu_id` only selects which device to use and does not affect memory consumption; in XGBoost 2.0+ it has been superseded by the `device` parameter), and ensure your CUDA toolkit and driver versions match the XGBoost build requirements, or consider building from source with a compatible CUDA version.
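A minimal sketch of memory-conscious GPU parameters. The parameter names (`tree_method`, `device`, `max_bin`) are real XGBoost parameters; the specific values here are illustrative assumptions, not recommendations.

```python
# Sketch: a parameter dict that trades histogram resolution for a
# smaller GPU memory footprint. Values are illustrative assumptions.

def low_memory_gpu_params(max_bin=64):
    """Lowering max_bin (XGBoost's default is 256) shrinks the
    quantile histograms that the GPU builder keeps in device memory."""
    return {
        "tree_method": "hist",
        "device": "cuda",    # XGBoost 2.0+ device selection; older builds use gpu_id
        "max_bin": max_bin,  # fewer histogram bins -> less GPU memory
    }

params = low_memory_gpu_params()
```

These parameters would then be passed to `xgboost.train` as usual; if the error persists at `max_bin=64`, the dataset itself likely exceeds device memory.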

RayTaskError · 1 report

RayTaskError in XGBoost often arises from GPU memory exhaustion during training, especially with large datasets. To fix this, reduce the `max_depth` parameter in your XGBoost model to decrease GPU memory usage or implement early stopping to prevent over-allocation; consider downsampling your training data if feasible to reduce the overall memory footprint. Also make sure GPU acceleration is enabled: in XGBoost 2.0+ set `device` to `"cuda"` with `tree_method="hist"` (the older `tree_method="gpu_hist"` value is deprecated).
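The mitigations above can be sketched as follows. The downsampling helper and all values are illustrative assumptions; `max_depth`, `device`, and `tree_method` are real XGBoost parameters.

```python
import random

# Sketch: shrink the training footprint before handing data to a
# GPU worker. The downsample() helper and the fraction/seed values
# are illustrative assumptions.

def downsample(rows, labels, fraction=0.5, seed=0):
    """Keep a random fraction of the training rows."""
    rng = random.Random(seed)
    idx = [i for i in range(len(rows)) if rng.random() < fraction]
    return [rows[i] for i in idx], [labels[i] for i in idx]

# Shallow trees allocate far less GPU memory than deep ones.
params = {
    "tree_method": "hist",
    "device": "cuda",  # current API; tree_method="gpu_hist" is deprecated
    "max_depth": 4,    # shallower than XGBoost's default of 6
}

rows = [[float(i)] for i in range(1000)]
labels = [i % 2 for i in range(1000)]
small_rows, small_labels = downsample(rows, labels)
```

Early stopping would additionally be enabled by passing `early_stopping_rounds` together with an evaluation set when training.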

NoClassDefFoundError · 1 report

The NoClassDefFoundError in XGBoost, especially related to JNI, often indicates that the XGBoost native library (e.g., xgboost4j.dll, libxgboost4j.so) is missing from the Java classpath or is not accessible by the JVM. Ensure that the XGBoost native library corresponding to your system architecture is correctly placed in a directory included in your java.library.path or is bundled within your application's resources, and is loaded before XGBoost classes are initialized. Specifically verify XGBoostJNI is accessible to the JVM at runtime when the Booster object is loaded.

CellExecutionError · 1 report

CellExecutionError in XGBoost often arises from a mismatch between the XGBoost build and the available hardware, specifically when trying to use GPU acceleration without a GPU-enabled XGBoost build. Ensure you've installed an XGBoost build with GPU support (the official `xgboost` wheels on PyPI ship with CUDA support on supported platforms; there is no separate GPU-specific package name), and verify that your CUDA drivers are compatible with the selected XGBoost version if you intend to leverage GPU acceleration. Furthermore, double-check that CUDA is correctly installed and accessible to your environment.
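One way to check for a GPU-enabled build is `xgboost.build_info()` (available since XGBoost 1.6), which returns a dict of compile-time flags including `"USE_CUDA"`. The helper below only inspects such a dict, so this sketch runs even without XGBoost installed; the example dicts stand in for real `build_info()` output.

```python
# Sketch: deciding whether an XGBoost build was compiled with CUDA.
# gpu_build() inspects a build-info dict like the one returned by
# xgboost.build_info(); the dicts below are illustrative stand-ins.

def gpu_build(build_info: dict) -> bool:
    """True if the build reports CUDA support."""
    return bool(build_info.get("USE_CUDA", False))

cpu_only = {"USE_CUDA": False}
cuda_enabled = {"USE_CUDA": True}
```

In a real notebook you would call `gpu_build(xgboost.build_info())` before setting `device="cuda"`, and fall back to CPU training when it returns `False`.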

HeapDumpOnOutOfMemoryError · 1 report

This error in XGBoost usually means the Java Virtual Machine (JVM) ran out of memory while training or predicting, often because the dataset or model is too large. Increase the JVM heap size with the `-Xmx` flag when launching the Java process (e.g., `-Xmx4g` for 4 GB; XGBoost itself has no heap-size parameter), or reduce the size of your data. Consider using distributed XGBoost on a cluster to handle larger datasets efficiently.

UnsupportedClassVersionError · 1 report

The "UnsupportedClassVersionError" in XGBoost usually indicates that the Java runtime environment (JRE) being used is older than the one used to compile XGBoost's Java components. To fix it, upgrade your JRE to a version at least as new as the bytecode target of the XGBoost4J jar you are using, and ensure your JAVA_HOME environment variable points to the new JRE installation. Verify that your system PATH includes the updated Java installation's bin directory so the correct Java version is actually being used.
