XGBoost
Scalable, Portable and Distributed Gradient Boosting (GBDT, GBRT or GBM) Library, for Python, R, Java, Scala, C++ and more. Runs on single machine, Hadoop, Spark, Dask, Flink and DataFlow.
Release History
v3.2.0: This release note points to the official documentation page for changes in version 3.2.0, with additional artifacts forthcoming.
v3.1.3
v3.1.2 (2 fixes, 1 feature): Release 3.1.2 adds automatic inference of the enable_categorical flag during model load and includes bug fixes for Python callback ordering and NCCL 2.28 loading.
v3.1.1 (3 fixes): XGBoost 3.1.1 improves error handling by emitting correct errors for GPU inputs on CPU-only builds, provides clearer messages for the removed binary model format, and fixes SHAP group ID handling when the intercept is a vector.
v3.1.0 (1 feature): The XGBoost 3.1.0 release adds experimental R GPU binary packages with CUDA support and provides source tarball downloads.
v3.1.0rc1
v3.0.5 (Breaking, 1 feature): XGBoost 3.0.5 removes the __restrict__ macro and switches compilation to CUDA Toolkit 13, providing updated GPU support.
v3.0.4 (Breaking, 1 feature): XGBoost 3.0.4 removes the `__restrict__` qualifier and makes CUDA lineinfo optional, with experimental CUDA-enabled R binaries now available.
v3.0.3 (6 fixes, 4 features): XGBoost 3.0.3 introduces new APIs, adds Rapids 25.06 and Win‑ARM64 wheel support, and includes several bug fixes for GPU evaluation and metrics.
v3.0.2 (1 fix, 1 feature): The update introduces Dask 2025.4.0 scheduler info compatibility and fixes VM fallback logic on WSL2, along with experimental CUDA-enabled R binary packages.
v3.0.1 (Breaking, 2 fixes, 6 features): This release adds GPU driver detection, deep-tree external-memory optimizations, new manylinux_2_28_x86_64 CPU wheels, Dask compatibility workarounds, and changes model output to use denormal floating-point values, along with several bug fixes.
v3.0.0: XGBoost 3.0.0 introduces experimental GPU-enabled R binary packages and provides source tarball downloads, with SHA-256 hashes for verification.
v3.0.0rc1: No specific changelog items were listed; refer to the linked issue for details.
v2.1.4 (3 features): Patch 2.1.4 adds scikit-learn 1.6 compatibility, CUDA 12.8 wheel support with Blackwell, and updates for RMM 25.02 logger changes.
Common Errors
ThrowOnCudaError (1 report): The "ThrowOnCudaError" in xgboost usually arises from insufficient GPU memory or incompatible CUDA driver/toolkit versions. Reduce the `max_bin` parameter (and tree depth) in your xgboost configuration to lower GPU memory pressure, or point the `device` parameter at a GPU with more free memory; also ensure your CUDA toolkit and driver versions match the xgboost build requirements, or build from source against a compatible CUDA version.
RayTaskError (1 report): RayTaskError in xgboost often arises from GPU memory exhaustion during training, especially with large datasets. To fix this, reduce the `max_depth` parameter in your XGBoost model to decrease GPU memory usage or implement early stopping to prevent over-allocation; consider downsampling your training data if feasible to reduce the overall memory footprint. Also make sure GPU acceleration is actually enabled: on XGBoost 2.0 and later, set `device="cuda"` with `tree_method="hist"` (the older `gpu_hist` value is deprecated).
NoClassDefFoundError (1 report): The NoClassDefFoundError in XGBoost, especially related to JNI, often indicates that the XGBoost native library (e.g., xgboost4j.dll, libxgboost4j.so) is missing from the Java classpath or is not accessible by the JVM. Ensure that the XGBoost native library corresponding to your system architecture is correctly placed in a directory included in your java.library.path or is bundled within your application's resources, and is loaded before XGBoost classes are initialized. Specifically, verify XGBoostJNI is accessible to the JVM at runtime when the Booster object is loaded.
CellExecutionError (1 report): CellExecutionError in xgboost often arises from a mismatch between the xgboost build and the available hardware, specifically when trying to use GPU acceleration without a GPU-enabled xgboost build. Ensure you've installed an XGBoost build with GPU support (the official `xgboost` wheels on PyPI for Linux bundle CUDA support), and verify that your CUDA drivers are compatible with the selected XGBoost version if you intend to leverage GPU acceleration. Furthermore, double-check that CUDA is correctly installed and accessible to your environment.
HeapDumpOnOutOfMemoryError (1 report): This error in XGBoost usually means the Java Virtual Machine (JVM) ran out of memory while training or predicting, often because the dataset or model is too large. Increase the JVM heap size by passing an `-Xmx` flag when the JVM is launched (e.g., `-Xmx4g` for 4 GB, via your application's launch command or whatever options mechanism your launcher provides) to allocate more memory to the Java process, or reduce the size of your data. Consider using distributed XGBoost on a cluster to handle larger datasets efficiently.
UnsupportedClassVersionError (1 report): The "UnsupportedClassVersionError" in xgboost usually indicates that the Java runtime environment (JRE) being used is older than the one used to compile xgboost's Java components. To fix it, upgrade your JRE to at least the class-file version the xgboost4j JAR targets (a current LTS release such as Java 17 is a safe choice), and ensure your JAVA_HOME environment variable points to the new JRE installation. Verify that your system PATH includes the updated Java installation's bin directory so the correct Java version is actually picked up.
Related Data & ML Packages
TensorFlow: An Open Source Machine Learning Framework for Everyone
🤗 Transformers: the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal models, for both inference and training.
PyTorch: Tensors and Dynamic neural networks in Python with strong GPU acceleration
scikit-learn: machine learning in Python
pandas: Flexible and powerful data analysis / manipulation library for Python, providing labeled data structures similar to R data.frame objects, statistical functions, and much more
Streamlit — A faster way to build and share data apps.
Subscribe to Updates
Get notified when new versions are released