Describe the issue
[Issue]
When importing onnxruntime built for Python 3.13t (free-threaded Python), the following RuntimeWarning is displayed. The warning is unnecessary, as ONNX Runtime is intended to be thread-safe and capable of running without the GIL.
root@dc97bf6c52fc:/opt# python3.13t -c "import onnxruntime"
<frozen importlib._bootstrap>:488: RuntimeWarning: The global interpreter lock (GIL) has been enabled to load module 'onnxruntime.capi.onnxruntime_pybind11_state', which has not declared that it can run safely without the GIL. To override this behavior and keep the GIL disabled (at your own risk), run with PYTHON_GIL=0 or -Xgil=0.
[Root Cause]
The onnxruntime_pybind11_state.so module is compiled using pybind11. Although pybind11 (v2.13+) supports injecting the necessary Py_MOD_GIL_NOT_USED flag, the code responsible for this is conditionally compiled:
if (gil_not_used_option(options...)) {
#if defined(Py_mod_gil) && defined(Py_GIL_DISABLED)
    mod_def_slots[next_slot++] = {Py_mod_gil, Py_MOD_GIL_NOT_USED}; // This line is skipped
#endif
}
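For context, pybind11 2.13+ exposes this opt-in on the module side via py::mod_gil_not_used(): the guarded code above only runs when a module passes that option and Py_GIL_DISABLED is defined at compile time. Below is a minimal sketch of such a module (a hypothetical example with placeholder names example_ft and add, not onnxruntime's actual binding code):
// Hypothetical pybind11 2.13+ extension that declares it can run without the GIL.
// When compiled with Py_GIL_DISABLED defined (i.e. against free-threaded CPython),
// pybind11 emits the {Py_mod_gil, Py_MOD_GIL_NOT_USED} slot shown above, so importing
// the module under python3.13t no longer re-enables the GIL.
#include <pybind11/pybind11.h>

namespace py = pybind11;

int add(int a, int b) { return a + b; }

PYBIND11_MODULE(example_ft, m, py::mod_gil_not_used()) {
    m.def("add", &add, "Add two integers");
}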
[Proposed Solution]
Pass the definition to the C++ compiler:
CXXFLAGS="-DPy_GIL_DISABLED=1" ./build.sh ...
This ensures that the conditional compilation block for Py_MOD_GIL_NOT_USED is activated, resolving the warning and allowing true free-threaded operation.
To reproduce
root@dc97bf6c52fc:/opt# python3.13t -c "import onnxruntime"
<frozen importlib._bootstrap>:488: RuntimeWarning: The global interpreter lock (GIL) has been enabled to load module 'onnxruntime.capi.onnxruntime_pybind11_state', which has not declared that it can run safely without the GIL. To override this behavior and keep the GIL disabled (at your own risk), run with PYTHON_GIL=0 or -Xgil=0.
Urgency
Not urgent
Platform
Linux
OS Version
Ubuntu 22.04.4 LTS
ONNX Runtime Installation
Released Package
ONNX Runtime Version or Commit ID
1.22
ONNX Runtime API
Python
Architecture
X64
Execution Provider
TensorRT
Execution Provider Library Version
CUDA 12.6, TensorRT 10.9