Switching torch extra between pypi & torch backend #16368

@tpgillam

Description

I'm using uv 0.9.4. Consider the following pyproject.toml in an otherwise empty directory:

[project]
name = "moo"
version = "0.1.0"
requires-python = ">=3.13"
dependencies = ["numpy"]


[project.optional-dependencies]
cpu = [
  "torch>=2.8.0",
]
cu128 = [
  "torch>=2.8.0",
]
pypi = [
  "torch>=2.8.0",
]


[tool.uv]
conflicts = [
  [
    { extra = "cpu" },
    { extra = "cu128" },
    { extra = "pypi" },
  ],
]

[tool.uv.sources]
torch = [
  { index = "pytorch-cpu", extra = "cpu" },
  { index = "pytorch-cu128", extra = "cu128" },
  { index = "pytorch-pypi", extra = "pypi" },
]

[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true

[[tool.uv.index]]
name = "pytorch-cu128"
url = "https://download.pytorch.org/whl/cu128"
explicit = true

[[tool.uv.index]]
name = "pytorch-pypi"
url = "https://pypi.org/simple/"
explicit = true

This is essentially the documented example, with the pytorch-pypi index added as a third option.

I've now run the following commands on a machine with a GPU:

$ uv run --extra cpu python -c "import torch; print(torch.cuda.is_available())"
Using CPython 3.13.7
Creating virtual environment at: .venv
Installed 11 packages in 5.26s
False
$ uv run --extra cu128 python -c "import torch; print(torch.cuda.is_available())"
True
$ uv run --extra pypi python -c "import torch; print(torch.cuda.is_available())"
True
$ uv run --extra cpu python -c "import torch; print(torch.cuda.is_available())"
Uninstalled 1 package in 2.45s
Installed 1 package in 3.90s
False
$ uv run --extra pypi python -c "import torch; print(torch.cuda.is_available())"
False

Inspecting the verbose logs shows that, when --extra pypi is specified, no packages are added or removed. The syncs make this more explicit:

$ rm -rf .venv && uv sync --extra pypi
Using CPython 3.13.7
Creating virtual environment at: .venv
Resolved 31 packages in 0.65ms
Installed 27 packages in 4.70s
 + filelock==3.20.0
 + fsspec==2025.9.0
 + jinja2==3.1.6
 + markupsafe==3.0.3
 + mpmath==1.3.0
 + networkx==3.5
 + numpy==2.3.4
 + nvidia-cublas-cu12==12.8.4.1
 + nvidia-cuda-cupti-cu12==12.8.90
 + nvidia-cuda-nvrtc-cu12==12.8.93
 + nvidia-cuda-runtime-cu12==12.8.90
 + nvidia-cudnn-cu12==9.10.2.21
 + nvidia-cufft-cu12==11.3.3.83
 + nvidia-cufile-cu12==1.13.1.3
 + nvidia-curand-cu12==10.3.9.90
 + nvidia-cusolver-cu12==11.7.3.90
 + nvidia-cusparse-cu12==12.5.8.93
 + nvidia-cusparselt-cu12==0.7.1
 + nvidia-nccl-cu12==2.27.5
 + nvidia-nvjitlink-cu12==12.8.93
 + nvidia-nvshmem-cu12==3.3.20
 + nvidia-nvtx-cu12==12.8.90
 + setuptools==80.9.0
 + sympy==1.14.0
 + torch==2.9.0
 + triton==3.5.0
 + typing-extensions==4.15.0
$ uv sync --extra pypi
Resolved 31 packages in 2ms
Audited 27 packages in 13ms
$ uv sync --extra cpu
Resolved 31 packages in 1ms
Uninstalled 17 packages in 3.05s
Installed 1 package in 4.31s
 - nvidia-cublas-cu12==12.8.4.1
 - nvidia-cuda-cupti-cu12==12.8.90
 - nvidia-cuda-nvrtc-cu12==12.8.93
 - nvidia-cuda-runtime-cu12==12.8.90
 - nvidia-cudnn-cu12==9.10.2.21
 - nvidia-cufft-cu12==11.3.3.83
 - nvidia-cufile-cu12==1.13.1.3
 - nvidia-curand-cu12==10.3.9.90
 - nvidia-cusolver-cu12==11.7.3.90
 - nvidia-cusparse-cu12==12.5.8.93
 - nvidia-cusparselt-cu12==0.7.1
 - nvidia-nccl-cu12==2.27.5
 - nvidia-nvjitlink-cu12==12.8.93
 - nvidia-nvshmem-cu12==3.3.20
 - nvidia-nvtx-cu12==12.8.90
 - torch==2.9.0
 + torch==2.9.0+cpu
 - triton==3.5.0
$ uv sync --extra pypi
Resolved 31 packages in 2ms
Installed 16 packages in 185ms
 + nvidia-cublas-cu12==12.8.4.1
 + nvidia-cuda-cupti-cu12==12.8.90
 + nvidia-cuda-nvrtc-cu12==12.8.93
 + nvidia-cuda-runtime-cu12==12.8.90
 + nvidia-cudnn-cu12==9.10.2.21
 + nvidia-cufft-cu12==11.3.3.83
 + nvidia-cufile-cu12==1.13.1.3
 + nvidia-curand-cu12==10.3.9.90
 + nvidia-cusolver-cu12==11.7.3.90
 + nvidia-cusparse-cu12==12.5.8.93
 + nvidia-cusparselt-cu12==0.7.1
 + nvidia-nccl-cu12==2.27.5
 + nvidia-nvjitlink-cu12==12.8.93
 + nvidia-nvshmem-cu12==3.3.20
 + nvidia-nvtx-cu12==12.8.90
 + triton==3.5.0
$ uv sync --extra cu128
Resolved 31 packages in 2ms
Uninstalled 1 package in 2.75s
Installed 1 package in 4.33s
 - torch==2.9.0+cpu
 + torch==2.9.0+cu128
$ uv sync --extra pypi
Resolved 31 packages in 1ms
Audited 27 packages in 4ms

I presume this is because the PyTorch indices name their wheels with local version labels, e.g. 2.9.0+cpu and 2.9.0+cu128, whereas the PyPI wheel is just 2.9.0, so an environment holding either backend-specific wheel still appears to satisfy the pypi extra's pin. This behaviour did trip us up a bit, so possibly there's some room for improvement here, even if only in the docs.
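If that presumption is right, the underlying mechanism would be PEP 440's treatment of local version labels: a specifier without a local label (such as torch==2.9.0) matches any candidate whose public version is 2.9.0, regardless of its +cpu or +cu128 suffix. A minimal sketch of that rule, using only the standard library (the helper name here is my own, not part of uv):

```python
def public_version(version: str) -> str:
    """Drop the PEP 440 local version label ('+cpu', '+cu128', ...)."""
    return version.split("+", 1)[0]

# Each index builds a different wheel, but all three share one public version.
wheels = ["2.9.0", "2.9.0+cpu", "2.9.0+cu128"]
print(sorted({public_version(w) for w in wheels}))  # → ['2.9.0']

# A pin like torch==2.9.0 carries no local label, so per PEP 440 it matches
# any of these wheels — which would explain why a sync for the pypi extra
# sees an installed 2.9.0+cpu as already satisfying the requirement.
```

This would be consistent with the audit output above: once any 2.9.0 variant is installed, the pypi extra's resolution has nothing left to change.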
