
Conversation


@yiqingy0 yiqingy0 commented Dec 24, 2025

Summary by CodeRabbit

  • Chores
    • Added diagnostic checks for GB200 hardware configuration to the build pipeline infrastructure.


Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR follows the TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update the tava architecture diagram if there is a significant design change in the PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with the Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from the specified pipeline, or from the last pipeline if no pipeline-id is given. If the Git commit ID has changed, this option is always ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Only [pytorch, cpp, tensorrt, triton] are supported. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since lack of care and validation can break the top of tree.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since lack of care and validation can break the top of tree.

@yiqingy0 (Collaborator Author)

/bot run --stage-list "GB200-8_GPUs-2_Nodes-PyTorch-1" --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #29695 [ run ] triggered by Bot. Commit: d37cf49


@lancelly lancelly left a comment


LGTM, Thanks!

@tensorrt-cicd (Collaborator)

PR_Github #29695 [ run ] completed with state SUCCESS. Commit: d37cf49
/LLM/main/L0_MergeRequest_PR pipeline #22811 (Partly Tested) completed with status: 'SUCCESS'

Signed-off-by: Yiqing Yan <[email protected]>
@yiqingy0 (Collaborator Author)

/bot run --stage-list "GB200-8_GPUs-2_Nodes-PyTorch-1" --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #29726 [ run ] triggered by Bot. Commit: d97d3cb

@yiqingy0 yiqingy0 marked this pull request as ready for review December 24, 2025 07:50
@yiqingy0 yiqingy0 requested review from a team as code owners December 24, 2025 07:50

coderabbitai bot commented Dec 24, 2025

📝 Walkthrough

Walkthrough

A diagnostic block is added to the slurm_run.sh script: when the stage name contains "GB200", it attempts to retrieve the Coherent setting from the NVIDIA driver parameters, with graceful error handling if the lookup fails.

Changes

  • Diagnostic block for GB200 stage — jenkins/scripts/slurm_run.sh: Added a stage-specific diagnostic check: when stageName contains "GB200", the script prints a message and greps "Coherent" from /proc/driver/nvidia/params, with a fallback error message. The block is inserted before the llmapiLaunchScript setup.
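
The diagnostic described above can be sketched as follows. The function wrapper and the params-file argument are illustrative additions for testability; per the review diff, the actual script inlines this logic using $stageName and /proc/driver/nvidia/params directly:

```shell
#!/usr/bin/env bash
# Sketch of the GB200 diagnostic block (hypothetical function wrapper).
check_gb200_coherent() {
    local stage_name="$1"
    # Second argument is illustrative; the real script hardcodes the path.
    local params_file="${2:-/proc/driver/nvidia/params}"
    # Only GB200 stages need the coherent-mapping diagnostic.
    if [[ "$stage_name" == *GB200* ]]; then
        echo "Checking Coherent GPU mapping (for GB200)..."
        # Print any "Coherent" driver parameter lines; fall back to a
        # notice if the file is unreadable or the pattern is absent.
        grep Coherent "$params_file" || echo "Unable to grep Coherent from $params_file"
    fi
}
```

Because the grep is followed by `|| echo ...`, the block never fails the script: it is purely informational, matching the reviewer's reading that this logs diagnostics rather than validating the mapping.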

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~3 minutes

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Description check ⚠️ Warning: The PR description is incomplete. It only contains the template with empty sections (Description, Test Coverage, and unchecked PR Checklist items), with no actual details about what the change does or why. Resolution: Fill in the Description section explaining what the GB200 coherent GPU mapping check does and why it is needed, add a Test Coverage section documenting relevant tests, and complete the PR Checklist items.
✅ Passed checks (2 passed)
  • Title check ✅ Passed: The title follows the template format with [None][infra] and clearly describes the main change: adding a check for GB200 coherent GPU mapping in the SLURM run script.
  • Docstring Coverage ✅ Passed: No functions found in the changed files to evaluate docstring coverage; skipping the docstring coverage check.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
jenkins/scripts/slurm_run.sh (1)

64-67: Consider moving this check inside the SLURM_LOCALID == 0 block.

This diagnostic currently runs on every local process, causing duplicate executions per node. Similar diagnostics like nvidia-smi (line 48) are already gated by SLURM_LOCALID == 0. Moving this block inside that conditional would eliminate redundant output and align with the existing pattern.

🔎 Proposed refactor

Move the GB200 check inside the existing SLURM_LOCALID == 0 block:

 if [ $SLURM_LOCALID -eq 0 ]; then
     wget -nv $llmTarfile
     tar -zxf $tarName
     which python3
     python3 --version
     apt-get install -y libffi-dev
     nvidia-smi && nvidia-smi -q && nvidia-smi topo -m
+    if [[ "$stageName" == *GB200* ]]; then
+        echo "Checking Coherent GPU mapping (for GB200)..."
+        grep Coherent /proc/driver/nvidia/params || echo "Unable to grep Coherent from /proc/driver/nvidia/params"
+    fi
     if [[ $pytestCommand == *--run-ray* ]]; then
         pip3 install --retries 10 ray[default]
     fi
     cd $llmSrcNode && pip3 install --retries 10 -r requirements-dev.txt
     cd $resourcePathNode &&  pip3 install --retries 10 --force-reinstall --no-deps TensorRT-LLM/tensorrt_llm-*.whl
     gpuUuids=$(nvidia-smi -q | grep "GPU UUID" | awk '{print $4}' | tr '\n' ',' || true)
     hostNodeName="${HOST_NODE_NAME:-$(hostname -f || hostname)}"
     echo "HOST_NODE_NAME = $hostNodeName ; GPU_UUIDS = $gpuUuids ; STAGE_NAME = $stageName"
     touch install_lock.lock
 else
     while [ ! -f install_lock.lock ]; do
         sleep 5
     done
 fi
 
-if [[ "$stageName" == *GB200* ]]; then
-    echo "Checking Coherent GPU mapping (for GB200)..."
-    grep Coherent /proc/driver/nvidia/params || echo "Unable to grep Coherent from /proc/driver/nvidia/params"
-fi
-
 llmapiLaunchScript="$llmSrcNode/tensorrt_llm/llmapi/trtllm-llmapi-launch"
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8c1cfc8 and d97d3cb.

📒 Files selected for processing (1)
  • jenkins/scripts/slurm_run.sh
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-08-26T09:49:04.956Z
Learnt from: pengbowang-nv
Repo: NVIDIA/TensorRT-LLM PR: 7192
File: tests/integration/test_lists/test-db/l0_dgx_b200.yml:56-72
Timestamp: 2025-08-26T09:49:04.956Z
Learning: In TensorRT-LLM test configuration files, the test scheduling system handles wildcard matching with special rules that prevent duplicate test execution even when the same tests appear in multiple yaml files with overlapping GPU wildcards (e.g., "*b200*" and "*gb200*").

Applied to files:

  • jenkins/scripts/slurm_run.sh
🔇 Additional comments (1)
jenkins/scripts/slurm_run.sh (1)

64-67: Clarify whether this check should validate the output or just log diagnostics.

The PR title suggests this is meant to "check" GB200 coherent GPU mapping, but the implementation only prints output without validation. Should this:

  1. Just log for diagnostics (current behavior) – if so, the implementation is fine
  2. Validate the mapping – if so, add logic to verify expected values and fail/warn on mismatch

If validation is needed, the fallback message also doesn't distinguish between different failure modes (missing file, permission denied, driver not loaded), making it harder to diagnose issues.

Could you clarify the intended behavior? If validation is required, what values should be checked in the Coherent parameter output?

@tensorrt-cicd (Collaborator)

PR_Github #29726 [ run ] completed with state SUCCESS. Commit: d97d3cb
/LLM/main/L0_MergeRequest_PR pipeline #22839 (Partly Tested) completed with status: 'SUCCESS'

@yiqingy0 (Collaborator Author)

/bot skip --comment "The stage is tested in /LLM/main/L0_MergeRequest_PR pipeline #22839 (Partly Tested)"

@tensorrt-cicd (Collaborator)

PR_Github #29784 [ skip ] triggered by Bot. Commit: d97d3cb

@tensorrt-cicd (Collaborator)

PR_Github #29784 [ skip ] completed with state SUCCESS. Commit: d97d3cb
Skipping testing for commit d97d3cb

@yiqingy0 yiqingy0 merged commit 69152c4 into NVIDIA:main Dec 24, 2025
5 of 7 checks passed
@yiqingy0 yiqingy0 deleted the check_coherent branch December 24, 2025 09:12
