
Conversation

@SimengLiu-nv (Collaborator) commented Oct 16, 2025

… info with CI failure. Hard to reproduce the error locally

Summary by CodeRabbit

  • Tests

    • Expanded test coverage to include tensor parallelism size 4 configuration.
  • Chores

    • Enhanced multi-GPU testing with improved NCCL debug logging for diagnostics.

Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with the Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is given. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. This ensures that all builds and tests run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Only [pytorch, cpp, tensorrt, triton] are supported. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
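
For example, the invocation used repeatedly later in this conversation launches the pre-merge pipeline with fail-fast disabled and two extra DeepSeek multi-GPU stages:

/bot run --disable-fail-fast --extra-stage "DGX_H100-4_GPUs-PyTorch-DeepSeek-1, DGX_H100-4_GPUs-PyTorch-DeepSeek-2"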

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since lack of care and validation can break the top of tree.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since lack of care and validation can break the top of tree.

@SimengLiu-nv requested a review from a team as a code owner, October 16, 2025 18:59
@coderabbitai bot (Contributor) commented Oct 16, 2025

📝 Walkthrough

Added NCCL debug logging to Jenkins multi-GPU test environment. Removed a conditional test skip for tensor parallel size 4 in DeepSeek multi-GPU tests, allowing those configurations to execute.
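
For local reproduction attempts, the same diagnostics can be enabled outside Jenkins by exporting the NCCL environment variable before launching the test. A minimal sketch follows; it is not the Groovy change itself, and the pytest invocation and the optional NCCL_DEBUG_SUBSYS filter are illustrative assumptions rather than part of this PR:

# Minimal sketch: run the touched multi-GPU test locally with NCCL debug logging enabled.
import os
import subprocess

env = os.environ.copy()
env["NCCL_DEBUG"] = "INFO"             # same setting the PR adds to the Jenkins environment
env["NCCL_DEBUG_SUBSYS"] = "INIT,NET"  # optional filter (assumption, not set by this PR)

# Hypothetical local invocation of the test file modified in this PR.
subprocess.run(
    ["pytest", "-v", "tests/unittest/_torch/multi_gpu_modeling/test_deepseek.py"],
    env=env,
    check=False,  # keep going so the NCCL log can be inspected even if the test fails
)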

Changes

  • Jenkins Configuration (jenkins/L0_Test.groovy): Added NCCL_DEBUG=INFO to the extraInternalEnv string for multi-GPU test runs to enable NCCL debug output.
  • Test Skip Removal (tests/unittest/_torch/multi_gpu_modeling/test_deepseek.py): Removed the conditional skip for tp_size == 4 that referenced an external nvbugs link; tests with tensor parallel size 4 now execute (an illustrative sketch of this kind of skip appears after this list).
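
The removed guard is not reproduced verbatim here. As an illustration only, a conditional tp_size == 4 skip in a parametrized pytest test typically takes the following shape (the test name, parameter list, and bug reference are placeholders):

import pytest

# Illustrative sketch of the kind of guard this PR deletes; with it gone, TP=4 executes.
@pytest.mark.parametrize("tp_size", [1, 2, 4])
def test_deepseek_tp_sizes(tp_size):
    if tp_size == 4:
        pytest.skip("tracked in an external nvbugs ticket")  # the removed pattern
    assert tp_size in (1, 2, 4)  # placeholder standing in for the real multi-GPU model check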

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Description Check: ⚠️ Warning
    Explanation: The pull request description is largely incomplete and does not meet the repository's template requirements. While the template structure is present, the critical sections are unfilled or insufficiently completed. The PR title is missing entirely; only the template comments and format instructions are shown. The "Description" section explaining the issue and solution contains only placeholder text with no actual content. The "Test Coverage" section is completely empty with no tests listed. Only a truncated fragment ("… info with CI failure. Hard to reproduce the error locally") appears at the beginning, which is incomplete and provides minimal context. Although the PR Checklist checkbox is marked as complete, this does not substitute for the missing substantive content in the required sections.
    Resolution: The author should provide a complete PR description by filling in all required sections: (1) a properly formatted PR title following the [ticket/ID][type] Summary convention (e.g., [https://nvbugs/5515753][ci] Add NCCL_DEBUG=INFO flag to collect debug information for CI failures), (2) a substantive Description section explaining why NCCL_DEBUG=INFO is being added and why the error is difficult to reproduce locally, (3) a Test Coverage section identifying which tests validate these changes (e.g., the test_deepseek.py multi-GPU test that was modified), and (4) confirmation that all relevant PR Checklist items have been reviewed. The truncated opening text should be incorporated into the Description section with complete details.
✅ Passed checks (2 passed)
  • Title Check: ✅ Passed. The pull request title follows the required format correctly with the NVBugs ticket ID, the [ci] type tag, and a clear, specific summary of the change. The title accurately describes the main objective of the PR: adding the NCCL_DEBUG=INFO environment flag to enable better debugging output for multi-GPU CI test runs. The title is directly related to the changeset modifications (adding NCCL_DEBUG=INFO to jenkins/L0_Test.groovy and removing a skip condition from the test file) and provides sufficient clarity for reviewers to understand the primary purpose of the changes.
  • Docstring Coverage: ✅ Passed. No functions found in the changes; docstring coverage check skipped.


@SimengLiu-nv (Collaborator, Author)

/bot run --disable-fail-fast --extra-stage "DGX_H100-4_GPUs-PyTorch-DeepSeek-1, DGX_H100-4_GPUs-PyTorch-DeepSeek-2"

@tensorrt-cicd (Collaborator)

PR_Github #21610 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #21610 [ run ] completed with state FAILURE

@SimengLiu-nv (Collaborator, Author)

/bot run --disable-fail-fast --extra-stage "DGX_H100-4_GPUs-PyTorch-DeepSeek-1, DGX_H100-4_GPUs-PyTorch-DeepSeek-2"

@tensorrt-cicd (Collaborator)

PR_Github #21626 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #21626 [ run ] completed with state SUCCESS
/LLM/release-1.1/L0_MergeRequest_PR pipeline #169 completed with status: 'FAILURE'

@SimengLiu-nv (Collaborator, Author)

/bot run --disable-fail-fast --stage-list "DGX_H100-4_GPUs-PyTorch-DeepSeek-1, DGX_H100-4_GPUs-PyTorch-DeepSeek-2"

@SimengLiu-nv (Collaborator, Author)

PR_Github #21626 [ run ] completed with state SUCCESS /LLM/release-1.1/L0_MergeRequest_PR pipeline #169 completed with status: 'FAILURE'

The initial CI failed with a timeout on unittest/B200_PCIe-PyTorch-2/unittest/_torch/thop/parallel/test_moe.py::TestMoeFp4::test_autotune[RoutingDSv3-384-1024-1024] in the single-GPU test stage. The multi-GPU tests were skipped, so they are being rerun in the new CI pipeline.

@tensorrt-cicd (Collaborator)

PR_Github #21716 [ run ] triggered by Bot. Commit: 74d9ff2

@tensorrt-cicd (Collaborator)

PR_Github #21716 [ run ] completed with state SUCCESS. Commit: 74d9ff2
/LLM/release-1.1/L0_MergeRequest_PR pipeline #180 (Partly Tested) completed with status: 'FAILURE'

@SimengLiu-nv (Collaborator, Author)

PR_Github #21716 [ run ] completed with state SUCCESS. Commit: 74d9ff2 /LLM/release-1.1/L0_MergeRequest_PR pipeline #180 (Partly Tested) completed with status: 'FAILURE'

The pod failed (Reason: Terminated, Message: Pod was terminated in response to imminent node shutdown), so the pipeline needs to be rerun.

@SimengLiu-nv (Collaborator, Author)

/bot run --disable-fail-fast --extra-stage "DGX_H100-4_GPUs-PyTorch-DeepSeek-1, DGX_H100-4_GPUs-PyTorch-DeepSeek-2"

@tensorrt-cicd (Collaborator)

PR_Github #21729 [ run ] triggered by Bot. Commit: 146c86a

@tensorrt-cicd (Collaborator)

PR_Github #21729 [ run ] completed with state SUCCESS. Commit: 146c86a
/LLM/release-1.1/L0_MergeRequest_PR pipeline #181 completed with status: 'FAILURE'

@SimengLiu-nv (Collaborator, Author)

PR_Github #21729 [ run ] completed with state SUCCESS. Commit: 146c86a /LLM/release-1.1/L0_MergeRequest_PR pipeline #181 completed with status: 'FAILURE'

More single-node test failures with TestMoeFp4; added more waives and restarted the CI.

@SimengLiu-nv (Collaborator, Author)

/bot run --disable-fail-fast --extra-stage "DGX_H100-4_GPUs-PyTorch-DeepSeek-1, DGX_H100-4_GPUs-PyTorch-DeepSeek-2"

@tensorrt-cicd (Collaborator)

PR_Github #21755 [ run ] triggered by Bot. Commit: a81d670

@tensorrt-cicd (Collaborator)

PR_Github #21755 [ run ] completed with state SUCCESS. Commit: a81d670
/LLM/release-1.1/L0_MergeRequest_PR pipeline #184 completed with status: 'FAILURE'

@SimengLiu-nv (Collaborator, Author) commented Oct 19, 2025

PR_Github #21755 [ run ] completed with state SUCCESS. Commit: a81d670 /LLM/release-1.1/L0_MergeRequest_PR pipeline #184 completed with status: 'FAILURE'

The GB300 node request failed; the x86 single-node and multi-node tests all passed.
The target test passed: DGX_H100-4_GPUs-PyTorch-DeepSeek-2/test_unittests.py::test_unittests_v2[unittest/_torch/multi_gpu_modeling/test_deepseek.py::test_deepseek_streaming[tp1-bf16-trtllm-deepseekv3_lite]] PASSED

@SimengLiu-nv (Collaborator, Author)

/bot run --disable-fail-fast --extra-stage "DGX_H100-4_GPUs-PyTorch-DeepSeek-1, DGX_H100-4_GPUs-PyTorch-DeepSeek-2"

@tensorrt-cicd (Collaborator)

PR_Github #21819 [ run ] triggered by Bot. Commit: 9b230f8

@tensorrt-cicd (Collaborator)

PR_Github #21819 [ run ] completed with state SUCCESS. Commit: 9b230f8
/LLM/release-1.1/L0_MergeRequest_PR pipeline #192 completed with status: 'FAILURE'

… info with CI failure. Hard to reproduce the error locally

Signed-off-by: Simeng Liu <simengl@nvidia.com>
…tune[RoutingDSv3-384-1024-1024] failed with timeout.

Signed-off-by: Simeng Liu <simengl@nvidia.com>
…no_autotune entirely for known timeout issues.

Signed-off-by: Simeng Liu <simengl@nvidia.com>
Signed-off-by: Simeng Liu <simengl@nvidia.com>
@SimengLiu-nv (Collaborator, Author)

PR_Github #21819 [ run ] completed with state SUCCESS. Commit: 9b230f8 /LLM/release-1.1/L0_MergeRequest_PR pipeline #192 completed with status: 'FAILURE'

Got a new failure in the multi-GPU tests that is unrelated to this PR. Added a new commit to waive it and filed an nvbug.

@SimengLiu-nv (Collaborator, Author)

@shaharmor98 May I get a CI skip on this PR?

@SimengLiu-nv (Collaborator, Author)

/bot run --disable-fail-fast --extra-stage "DGX_H100-4_GPUs-PyTorch-DeepSeek-1, DGX_H100-4_GPUs-PyTorch-DeepSeek-2"

@tensorrt-cicd (Collaborator)

PR_Github #21922 [ run ] triggered by Bot. Commit: ab16bed

@SimengLiu-nv (Collaborator, Author)

The new CI pipeline failed to schedule a Slurm job on the DGX_B200-4_GPU nodes.

@tensorrt-cicd (Collaborator)

PR_Github #21922 [ run ] completed with state SUCCESS. Commit: ab16bed
/LLM/release-1.1/L0_MergeRequest_PR pipeline #202 completed with status: 'FAILURE'

@SimengLiu-nv (Collaborator, Author)

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #22072 [ run ] triggered by Bot. Commit: ab16bed

@tensorrt-cicd (Collaborator)

PR_Github #22072 [ run ] completed with state SUCCESS. Commit: ab16bed
/LLM/release-1.1/L0_MergeRequest_PR pipeline #217 completed with status: 'SUCCESS'

@SimengLiu-nv (Collaborator, Author)

@NVIDIA/trt-llm-release-branch-approval Hi team, can I get reviews for this PR? It's needed for the merge. Thanks!

@SimengLiu-nv enabled auto-merge (squash), October 21, 2025 23:50
@SimengLiu-nv merged commit 1375b9f into NVIDIA:release/1.1, Oct 22, 2025
5 checks passed
mikeiovine pushed a commit to mikeiovine/TensorRT-LLM that referenced this pull request Nov 4, 2025
… info with CI failure. (NVIDIA#8440)

Signed-off-by: Simeng Liu <simengl@nvidia.com>
mikeiovine pushed further commits referencing this pull request to mikeiovine/TensorRT-LLM between Nov 4 and Nov 19, 2025.
mikeiovine pushed a commit that referenced this pull request Nov 20, 2025
… info with CI failure. (#8440)

Signed-off-by: Simeng Liu <simengl@nvidia.com>
Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com>
Signed-off-by: Mike Iovine <miovine@nvidia.com>