Conversation

@jinyangyuan-nvidia
Collaborator

@jinyangyuan-nvidia jinyangyuan-nvidia commented Oct 20, 2025

Summary by CodeRabbit

  • Refactor
    • Updated mixture-of-experts kernel parameter handling to improve token accounting and resource management
    • Refined FP8 block-scale operation parameter signatures for consistency across execution paths
    • Optimized internal kernel interfaces for better performance tracking

Description

The performance of FP8 blockwise grouped GEMM can be much worse than expected when using attention DP with the low-latency mode of DeepEP. The issue is caused by incorrectly selected tactics: the problem shape is estimated from the wrong row count because the tokens have already been expanded before being fed to fused_moe. This PR fixes the issue by passing the expected number of tokens per expert to tactic selection.
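As a rough illustration of the mismatch (hypothetical sizes, not taken from this PR): with the low-latency DeepEP path the rows fed to fused_moe are already expanded to num_tokens * top_k, so sizing tactics by the expanded row count overestimates the per-expert GEMM M dimension that the tactic selector should see.

// Illustrative only (hypothetical sizes): why the expanded row count misleads tactic selection.
#include <cstdint>
#include <cstdio>

int main()
{
    int64_t const num_valid_rows = 16;   // tokens actually present on this rank (hypothetical)
    int64_t const experts_per_token = 8; // top-k routing (hypothetical)
    int64_t const num_experts = 128;     // total experts (hypothetical)

    // Rows after token expansion, i.e. what tactic selection previously saw.
    int64_t const expanded_rows = num_valid_rows * experts_per_token; // 128

    // Expected tokens per expert (ceil division), i.e. what tactic selection should use.
    int64_t const expected_tokens_per_expert
        = (num_valid_rows * experts_per_token + num_experts - 1) / num_experts; // 1

    std::printf("expanded M = %lld, expected per-expert M = %lld\n",
        static_cast<long long>(expanded_rows), static_cast<long long>(expected_tokens_per_expert));
    return 0;
}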

Test Coverage

The effectiveness of this PR has been verified using Qwen3-235B-A22B-FP8 on 8xH20 GPUs. The average ISL is 4216 (ranging from 3133 to 5116), the OSL is 1024, the concurrency is 16, and the EP size is 8. The average TPOT results are shown below.

ADP | Original | Modified
off | 17.45 ms | 17.28 ms
on  | 30.58 ms | 20.41 ms

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
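For example (illustrative only, using the sample stage name from the options above), a run restricted to a single test stage with fail-fast disabled could be triggered with:

/bot run --stage-list "A10-PyTorch-1" --disable-fail-fast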

kill

kill

Kill all running builds associated with pull request.

skip

skip --comment COMMENT

Skip testing for latest commit on pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

@jinyangyuan-nvidia jinyangyuan-nvidia self-assigned this Oct 20, 2025
@jinyangyuan-nvidia jinyangyuan-nvidia requested review from a team as code owners October 20, 2025 09:48
…hen using attention DP

Signed-off-by: Jinyang Yuan <154768711+jinyangyuan-nvidia@users.noreply.github.com>
@coderabbitai
Contributor

coderabbitai bot commented Oct 20, 2025

📝 Walkthrough

Walkthrough

The changes refactor the Mixture of Experts (MoE) kernel interfaces and implementations to introduce new token-count parameters (num_valid_rows, expected_tokens_per_expert, expected_m) and reorder dimensional arguments across multiple kernel call sites, headers, implementations, and test code for improved per-expert token accounting.

Changes

Cohort / File(s) Summary
FP8 Block-Scale GEMM Core
cpp/tensorrt_llm/kernels/cutlass_kernels/fp8_blockscale_gemm/fp8_blockscale_gemm.h, cpp/tensorrt_llm/kernels/cutlass_kernels/fp8_blockscale_gemm/fp8_blockscale_gemm.cu
Added expected_m parameter to moeGemm function signature after num_problems. Introduced wrapper overload that retains the original parameter list and forwards to the updated implementation using member expected_m_ for backward compatibility. Updated internal calls to pass explicit expected_m value. (A rough sketch of this pattern follows the table below.)
MoE Kernel Interface Refactoring
cpp/tensorrt_llm/kernels/cutlass_kernels/include/moe_kernels.h, cpp/tensorrt_llm/kernels/internal_cutlass_kernels/include/moe_kernels.h
Refactored runMoe, gemm1, gemm2, BlockScaleFC1, and BlockScaleFC2 signatures to introduce num_valid_rows and expected_tokens_per_expert parameters, replacing and reordering previous dimensional arguments. Changes propagate across interface declarations and concrete implementations for improved token-expert accounting.
MoE Kernel Implementation
cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu
Computed expected_tokens_per_expert as (num_valid_rows * experts_per_token + full_num_experts - 1) / full_num_experts and threaded through BlockScaleFC1, BlockScaleFC2, and moeGemm paths replacing expanded_num_rows usage. Updated profiling paths with additional token count arguments in GEMM1 and GEMM2 branches.
MoE Backend Calls
cpp/tensorrt_llm/plugins/mixtureOfExperts/mixtureOfExpertsPlugin.cpp, cpp/micro_benchmarks/mixtureOfExpertsBackendBenchmarkFixture.h
Updated runMoe invocations to include duplicated num_tokens argument, shifting subsequent parameters; changed call signature from ..., num_tokens, mHiddenSize, ... to ..., num_tokens, num_tokens, mHiddenSize, ....
MoE Top-Level Operations
cpp/tensorrt_llm/thop/moeOp.cpp, cpp/tensorrt_llm/thop/fp8BlockScalingGemm.cpp
Added optional num_valid_tokens parameter to runMoe entry points; computed expected_m as (m_total + num_problems - 1) / num_problems and passed to moeGemm in FP8 block-scaling path.
MoE Unit Tests
cpp/tests/unit_tests/kernels/mixtureOfExpertsTest.cu
Updated test invocations of runMoe to duplicate mTotalTokens argument in both OSS and non-OSS branches, aligning with updated kernel interface.
PyTorch Bindings
tensorrt_llm/_torch/custom_ops/torch_custom_ops.py, tensorrt_llm/_torch/modules/fused_moe/ops/moe_op_cutlass.py
Propagated tuner_num_tokens argument through fused MoE invocation paths from higher-level operations to lower-level runner calls.
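The backward-compatible wrapper described for moeGemm can be pictured roughly as follows. This is an illustrative shape only: the class name is hypothetical and the parameter lists are abbreviated, so they do not match the upstream header exactly.

// Illustrative sketch of the backward-compatible moeGemm overload pattern described above.
#include <cstddef>
#include <cstdint>

class Fp8BlockScaleGemmRunnerSketch // hypothetical name, for illustration only
{
public:
    // Updated signature: expected_m follows num_problems and feeds tactic selection.
    void moeGemm(void* out, void const* mat_a, void const* mat_b, int64_t const* token_offset,
        size_t num_problems, size_t expected_m, size_t shape_n, size_t shape_k /* , stream, scales, ... */);

    // Wrapper retaining the original parameter list; forwards the stored expected_m_.
    void moeGemm(void* out, void const* mat_a, void const* mat_b, int64_t const* token_offset,
        size_t num_problems, size_t shape_n, size_t shape_k /* , stream, scales, ... */)
    {
        moeGemm(out, mat_a, mat_b, token_offset, num_problems, expected_m_, shape_n, shape_k);
    }

private:
    size_t expected_m_ = 0; // expected per-problem M, set elsewhere
};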

Sequence Diagram(s)

sequenceDiagram
    participant Caller
    participant MoEOp as MoE Op Layer
    participant MoERunner as MoE Runner
    participant Kernel as MoE Kernel

    Caller->>MoEOp: runMoe(..., num_tokens, num_valid_rows)
    activate MoEOp
    MoEOp->>MoEOp: Compute expected_tokens_per_expert<br/>(num_valid_rows * experts_per_token...) / num_experts
    MoEOp->>MoERunner: runMoe(..., num_valid_rows, hidden_size, <br/>expected_tokens_per_expert, ...)
    activate MoERunner
    MoERunner->>MoERunner: Call BlockScaleFC1(..., <br/>expected_tokens_per_expert)
    MoERunner->>Kernel: Launch FC1 kernel
    MoERunner->>MoERunner: Call BlockScaleFC2(..., <br/>expected_tokens_per_expert)
    MoERunner->>Kernel: Launch FC2 kernel
    deactivate MoERunner
    deactivate MoEOp
    Kernel-->>Caller: Results
sequenceDiagram
    participant Caller
    participant FP8Block as FP8 Block Scaling
    participant GemmRunner as GEMM Runner
    participant Kernel as GEMM Kernel

    Caller->>FP8Block: fp8_block_scaling_moe_gemm_hopper<br/>(..., m_total, num_problems, n, k)
    activate FP8Block
    FP8Block->>FP8Block: Compute expected_m<br/>= (m_total + num_problems - 1) / num_problems
    FP8Block->>GemmRunner: moeGemm(..., num_problems, <br/>expected_m, n, k, ...)
    activate GemmRunner
    GemmRunner->>Kernel: Launch GEMM kernel
    deactivate GemmRunner
    deactivate FP8Block
    Kernel-->>Caller: Results

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

The changes involve heterogeneous modifications across multiple interconnected components (headers, implementations, call sites, tests, and Python bindings). Key review considerations include: (1) verifying all call sites across eight C++ files and two Python files are consistently updated with new parameters in correct order; (2) validating the computation of derived parameters (expected_tokens_per_expert, expected_m); (3) ensuring parameter duplication (num_tokens appearing twice, mTotalTokens appearing twice) is intentional; (4) confirming backward-compatibility through wrapper overloads; and (5) tracing the impact of signature changes through multiple layers of abstraction.

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage — ⚠️ Warning: Docstring coverage is 16.67%, which is below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (2 passed)
  • Title Check — ✅ Passed: The PR title "[None][fix] Fix the performance issue of FP8 blockwise grouped GEMM when using attention DP" is directly related to the main changes in this pull request. According to the PR objectives and description, the changeset addresses a performance regression in FP8 blockwise grouped GEMM when using attention data parallelism. The root cause was incorrect tactic selection due to wrong problem-shape estimation, and the fix involves passing the expected number of tokens per expert to the tactic selector. The title accurately identifies both the component (FP8 blockwise grouped GEMM) and the context (attention DP), clearly communicating the primary change from the developer's perspective.
  • Description Check — ✅ Passed: The pull request description follows the required template structure and includes all major sections. The Description section clearly explains both the problem (incorrect problem-shape estimation for tactic selection due to pre-expanded tokens) and the solution (passing the expected number of tokens per expert). The Test Coverage section provides concrete performance metrics from validation on Qwen3-235B-A22B-FP8 across 8×H20 GPUs, demonstrating measurable improvement (30.58 ms → 20.41 ms when ADP is enabled). The PR Checklist is completed with a checked checkbox. The PR title format "[None][fix] Fix the performance issue of FP8 blockwise grouped GEMM when using attention DP" correctly follows the specified template format.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (4)
tensorrt_llm/_torch/modules/fused_moe/ops/moe_op_cutlass.py (1)

195-221: Guard fused-finalize contract: ensure token_final_scales is non-null when use_fused_finalize=True

The CUTLASS epilogue in FINALIZE mode requires non-null router scales. If token_final_scales is None, the kernel will deref a null pointer. Add an early check here before invoking run_moe.

@@
-        # Run the actual MoE computation
+        # Run the actual MoE computation
+        if use_fused_finalize and token_final_scales is None:
+            raise ValueError("token_final_scales must be provided when use_fused_finalize=True")
         output = run_moe(
@@
-            unpadded_hidden_size,
+            unpadded_hidden_size,
             tuner_num_tokens,
         )

Based on learnings

tensorrt_llm/_torch/custom_ops/torch_custom_ops.py (1)

704-714: NameError: act_fp8 is undefined in register_fake of w4a8_mxfp4_fp8_gemm

The fake kernel returns act_fp8.new_empty but the parameter is act_fp4. This breaks fake registration.

-@w4a8_mxfp4_fp8_gemm.register_fake
-def _(
-    act_fp4: torch.Tensor,
+@w4a8_mxfp4_fp8_gemm.register_fake
+def _(
+    act_fp8: torch.Tensor,
     weight: torch.Tensor,
     act_sf: torch.Tensor,
     weight_scale: torch.Tensor,
     alpha: torch.Tensor,
     output_dtype: torch.dtype,
     to_userbuffers: bool = False,
 ) -> torch.Tensor:
-    return act_fp8.new_empty((act_fp8.size(0), weight.size(0)),
-                             dtype=output_dtype)
+    return act_fp8.new_empty((act_fp8.size(0), weight.size(0)), dtype=output_dtype)
cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu (1)

4684-4698: Profiler passes wrong args to gemm1/gemm2 (expanded vs expected).

expanded_num_tokens is being passed as expected_tokens_per_expert, and expanded_num_rows is set to original_num_tokens. This breaks tactic profiling and can misconfigure kernels.

Apply this fix:

@@ void GemmProfilerBackend::runProfiler(int original_num_tokens, Config const& tactic, char* workspace_ptr_char,
-    mInterface->gemm1(input,                                                      //
+    // Compute per-expert expectation (ceil)
+    int64_t expected_tokens_per_expert =
+        (static_cast<int64_t>(original_num_tokens) * mK + mNumExperts - 1) / mNumExperts;
+    mInterface->gemm1(input,                                                      //
         output,                                                                   //
         intermediate,                                                             //
         expert_first_token_offset,                                                //
         tma_ws_input_template,                                                    //
         weights_sel,                                                              //
         bias,                                                                     //
         expert_first_token_offset + num_experts_per_node,                         //
         mQuantParams.wo.fc1_weight_scales,                                        //
         mQuantParams.fp8.dequant_fc1,                                             //
         mQuantParams.fp8_mxfp4.fc2.act_global_scale ? mQuantParams.fp8_mxfp4.fc2.act_global_scale
                                                     : mQuantParams.fp8.quant_fc2, //
         fp4_act_scale_flat,                                                       //
         fp4_act_scale_flat,                                                       //
         mQuantParams,                                                             //
-        original_num_tokens,                                                      //
-        original_num_tokens,                                                      //
-        expanded_num_tokens,                                                      //
+        /*num_rows=*/original_num_tokens,                                         //
+        /*expanded_num_rows=*/expanded_num_tokens,                                //
+        /*expected_tokens_per_expert=*/expected_tokens_per_expert,                //
         mExpertHiddenSize,                                                        //
         mExpertInterSize,                                                         //
         num_experts_per_node,                                                     //
@@
-        mInterface->gemm2(input,                            //
+        // Compute per-expert expectation (ceil)
+        int64_t expected_tokens_per_expert =
+            (static_cast<int64_t>(original_num_tokens) * mK + mNumExperts - 1) / mNumExperts;
+        mInterface->gemm2(input,                            //
             intermediate,                                   //
             output,                                         //
             expert_first_token_offset,                      //
             tma_ws_input_template,                          //
             weights_sel,                                    //
             bias,                                           //
             mQuantParams.wo.fc2_weight_scales,              //
             mQuantParams.fp8.dequant_fc2,                   //
             fp4_act_scale_flat,                             //
             mQuantParams,                                   //
             token_topk_unpermuted_scales,                   //
             token_topk_permuted_scales,                     //
             unpermuted_row_to_permuted_row,                 //
             permuted_row_to_unpermuted_row,                 //
             token_selected_experts,                         //
             expert_first_token_offset + mNumExpertsPerNode, //
-            original_num_tokens,                            //
-            original_num_tokens,                            //
-            expanded_num_tokens,                            //
+            /*num_rows=*/original_num_tokens,               //
+            /*expanded_num_rows=*/expanded_num_tokens,      //
+            /*expected_tokens_per_expert=*/expected_tokens_per_expert, //
             mExpertHiddenSize,                              //
             mExpertUnpaddedHiddenSize,                      //
             mExpertInterSize,                               //
             num_experts_per_node,                           //
             mK,                                             //

Also add inline C comments for these long arg lists at other call sites to prevent future swaps.

Also applies to: 4720-4739

cpp/tensorrt_llm/kernels/internal_cutlass_kernels/include/moe_kernels.h (1)

436-447: Add Doxygen documentation for the new parameter.

The expected_tokens_per_expert parameter is central to the performance fix but lacks documentation. This parameter controls tactic selection for FP8 blockwise grouped GEMM.

Consider documenting its purpose:

//! \param expected_tokens_per_expert Expected number of tokens per expert for tactic selection (before expansion).
🧹 Nitpick comments (10)
tensorrt_llm/_torch/modules/fused_moe/ops/moe_op_cutlass.py (1)

160-166: Remove redundant self-assignment

use_all_to_all = use_all_to_all is a no-op and may confuse readers. Drop it.

-        use_all_to_all = use_all_to_all
cpp/tensorrt_llm/kernels/cutlass_kernels/fp8_blockscale_gemm/fp8_blockscale_gemm.cu (2)

129-134: Remove no-op loop; prefer size_t for index

This loop repeatedly assigns the same base pointers and does nothing per-problem. Also mixes int with size_t. Replace with direct assignment.

-    else
-    {
-        for (int i = 0; i < num_problems; i++)
-        {
-            fp8_mat_b = reinterpret_cast<__nv_fp8_e4m3*>(const_cast<void*>(mat_b));
-            per_block_scales = const_cast<float*>(scales_b);
-        }
-    }
+    else
+    {
+        fp8_mat_b = reinterpret_cast<__nv_fp8_e4m3*>(const_cast<void*>(mat_b));
+        per_block_scales = const_cast<float*>(scales_b);
+    }

121-126: Advance workspace pointer after assigning per_block_scales (symmetry/readability)

In gemm() you advance ws_ptr after assigning per_block_scales; here you don't. It currently works (no later use), but advancing keeps accounting consistent.

     if constexpr (internal_quantize_b)
     {
         fp8_mat_b = reinterpret_cast<__nv_fp8_e4m3*>(ws_ptr);
-        ws_ptr += num_problems * shape_n * shape_k * sizeof(__nv_fp8_e4m3);
+        ws_ptr += num_problems * shape_n * shape_k * sizeof(__nv_fp8_e4m3);
         per_block_scales = reinterpret_cast<float*>(ws_ptr);
+        ws_ptr += num_problems * div_up(shape_k, 128) * div_up(shape_n, 128) * sizeof(float);
     }
cpp/tensorrt_llm/kernels/cutlass_kernels/fp8_blockscale_gemm/fp8_blockscale_gemm.h (1)

1-25: Add include guards as per repo guidelines

Guidelines require TRTLLM_* include guards for headers. Keep #pragma once if you prefer, but add guards.

 #pragma once
+#ifndef TRTLLM_FP8_BLOCKSCALE_GEMM_H
+#define TRTLLM_FP8_BLOCKSCALE_GEMM_H
@@
 } // namespace tensorrt_llm::kernels::fp8_blockscale_gemm
+
+#endif // TRTLLM_FP8_BLOCKSCALE_GEMM_H

As per coding guidelines

Also applies to: 139-139

tensorrt_llm/_torch/custom_ops/torch_custom_ops.py (1)

1-10: Add NVIDIA Apache-2.0 header

This file lacks the required header with current year.

+# Copyright (c) 2025, NVIDIA CORPORATION. All rights reserved.
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#     http://www.apache.org/licenses/LICENSE-2.0
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.

As per coding guidelines

cpp/tensorrt_llm/thop/fp8BlockScalingGemm.cpp (2)

229-236: expected_m heuristic works; consider max per-problem from token_offset for robustness

Ceil(m_total/num_problems) is a solid improvement over expanded M, but can under-estimate when routing is imbalanced. If token_offset encodes cumulative per-problem row counts (length num_problems+1), compute expected_m as max(problem_m_offsets[i+1]-problem_m_offsets[i]) and fall back to the ceil average if shape is unexpected.

-    auto const expected_m = (m_total + num_problems - 1) / num_problems;
+    size_t expected_m = (m_total + num_problems - 1) / num_problems;
+    if (token_offset.numel() == static_cast<long>(num_problems + 1))
+    {
+        auto offs = token_offset.contiguous();
+        auto* p = offs.data_ptr<int64_t>();
+        int64_t max_m = 0;
+        for (size_t i = 0; i < num_problems; ++i)
+        {
+            max_m = std::max<int64_t>(max_m, p[i + 1] - p[i]);
+        }
+        expected_m = static_cast<size_t>(max_m);
+    }
@@
-    gemm_runner->moeGemm(out.data_ptr(), mat1.data_ptr(), mat2.data_ptr(),
-        static_cast<int64_t*>(token_offset.data_ptr()), num_problems, expected_m, n, k, stream, mat1ScalePtr,
-        mat2ScalePtr);
+    gemm_runner->moeGemm(out.data_ptr(), mat1.data_ptr(), mat2.data_ptr(),
+        static_cast<int64_t*>(token_offset.data_ptr()), num_problems, expected_m, n, k, stream, mat1ScalePtr,
+        mat2ScalePtr);

Also applies to: 251-253


168-170: Avoid magic numbers for alignment checks

Replace 16/128 with named constants for clarity.

-    TORCH_CHECK(k % 16 == 0, "K must be a multiple of 16, (K=", k, ")");
-    TORCH_CHECK(n % 16 == 0, "N must be a multiple of 16, (N=", n, ")");
+    constexpr int kALIGN_KN = 16;
+    TORCH_CHECK(k % kALIGN_KN == 0, "K must be a multiple of ", kALIGN_KN, " (K=", k, ")");
+    TORCH_CHECK(n % kALIGN_KN == 0, "N must be a multiple of ", kALIGN_KN, " (N=", n, ")");

As per coding guidelines

Also applies to: 188-190

cpp/tensorrt_llm/kernels/cutlass_kernels/include/moe_kernels.h (1)

849-852: BlockScaleFC1/FC2: expected_tokens_per_expert passed into moeGemm.

Looks correct, aligns with per-expert M budgeting.

Add brief inline C comments for non-obvious params to reduce future ordering mistakes, e.g.:

-    expected_tokens_per_expert, shape_n, shape_k,
+    /*expected_tokens_per_expert=*/expected_tokens_per_expert,
+    /*n=*/shape_n, /*k=*/shape_k,

Also applies to: 859-862

cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu (2)

3607-3609: Guard expected_tokens_per_expert against 0.

If num_valid_rows is 0, tactic selection could see M=0. Clamp to at least 1 (or early-return upstream).

-    auto expected_tokens_per_expert = (num_valid_rows * experts_per_token + full_num_experts - 1) / full_num_experts;
+    auto expected_tokens_per_expert = (num_valid_rows * experts_per_token + full_num_experts - 1) / full_num_experts;
+    if (expected_tokens_per_expert < 1) {
+        expected_tokens_per_expert = 1;
+    }

3735-3738: Add inline argument names to long gemm1/gemm2 calls.

Reduces risk of misordered params (we already caught one in the profiler).

-        Self::gemm1(..., quant_params, num_rows, expanded_num_rows, expected_tokens_per_expert, hidden_size,
-            inter_size, num_experts_per_node, fc1_activation_type, alpha_scale_ptr_array_fc1_, !use_lora, stream,
+        Self::gemm1(..., quant_params,
+            /*num_rows=*/num_rows,
+            /*expanded_num_rows=*/expanded_num_rows,
+            /*expected_tokens_per_expert=*/expected_tokens_per_expert,
+            /*hidden_size=*/hidden_size, /*inter_size=*/inter_size,
+            /*num_experts_per_node=*/num_experts_per_node, fc1_activation_type,
+            alpha_scale_ptr_array_fc1_, !use_lora, stream,
             *gemm1_config_, false, nullptr, nullptr);

Also applies to: 3753-3757

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 97ce0ec and 3c89b1a.

📒 Files selected for processing (12)
  • cpp/micro_benchmarks/mixtureOfExpertsBackendBenchmarkFixture.h (2 hunks)
  • cpp/tensorrt_llm/kernels/cutlass_kernels/fp8_blockscale_gemm/fp8_blockscale_gemm.cu (3 hunks)
  • cpp/tensorrt_llm/kernels/cutlass_kernels/fp8_blockscale_gemm/fp8_blockscale_gemm.h (2 hunks)
  • cpp/tensorrt_llm/kernels/cutlass_kernels/include/moe_kernels.h (10 hunks)
  • cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu (14 hunks)
  • cpp/tensorrt_llm/kernels/internal_cutlass_kernels/include/moe_kernels.h (10 hunks)
  • cpp/tensorrt_llm/plugins/mixtureOfExperts/mixtureOfExpertsPlugin.cpp (2 hunks)
  • cpp/tensorrt_llm/thop/fp8BlockScalingGemm.cpp (2 hunks)
  • cpp/tensorrt_llm/thop/moeOp.cpp (6 hunks)
  • cpp/tests/unit_tests/kernels/mixtureOfExpertsTest.cu (1 hunks)
  • tensorrt_llm/_torch/custom_ops/torch_custom_ops.py (1 hunks)
  • tensorrt_llm/_torch/modules/fused_moe/ops/moe_op_cutlass.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (8)
**/*.{h,hpp,hh,hxx,cpp,cxx,cc,cu,cuh}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.{h,hpp,hh,hxx,cpp,cxx,cc,cu,cuh}: Namespace closing braces must include a trailing comment with the namespace name (e.g., '} // namespace foo').
Prefer const or constexpr variables over #define for constants.
Declare variables that are not modified after initialization as const.
Avoid magic literals in code; except for 0, nullptr, true, false. Use named constants for comparisons and logic.
Use Allman brace style for formatting.
Place the semicolon of an empty for/while loop on a new line.
Bodies of switch/while/do-while/for must be compound statements (brace-delimited), and if/else must always be followed by brace-delimited statements.
Type names (e.g., classes) must be CamelCase starting with an uppercase letter (e.g., FooBar).
Local variables, methods, and namespaces use lowerCamelCase (e.g., localFooBar).
Non-magic-number global variables that are non-static and not in an anonymous namespace must be lowerCamelCase prefixed with 'g' (e.g., gDontUseGlobalFoos).
Non-magic-number globals that are static or in an anonymous namespace use lowerCamelCase prefixed with 's' (e.g., sMutableStaticGlobal).
Locally visible static variables use lowerCamelCase with 's' prefix (e.g., static std::once_flag sFlag).
Private/protected member variables use 'm' prefix with CamelCase (e.g., mNbFooValues). Public members may omit, but 'm' is encouraged for clarity.
Constants (enums, global constants, static constants, and function-scope magic/literal constants) use uppercase SNAKE_CASE with 'k' prefix (e.g., kDIGIT_NUM).
Function-scope constants that are not magic numbers or literals are named like non-constant variables (e.g., bool const pass = a && b).
If macros are necessary, name them in UPPER_SNAKE_CASE (e.g., FOO_VERSION) and prefer constants over #define.
Use LLVM clang-format; wrap lines at a maximum of 120 columns; use '// clang-format off/on' sparingly with justification.
Use smart pointers for heap allocations; prefer unique_ptr for sole ownership, shared_ptr for shared...

Files:

  • cpp/tensorrt_llm/plugins/mixtureOfExperts/mixtureOfExpertsPlugin.cpp
  • cpp/tests/unit_tests/kernels/mixtureOfExpertsTest.cu
  • cpp/tensorrt_llm/thop/moeOp.cpp
  • cpp/micro_benchmarks/mixtureOfExpertsBackendBenchmarkFixture.h
  • cpp/tensorrt_llm/kernels/cutlass_kernels/fp8_blockscale_gemm/fp8_blockscale_gemm.h
  • cpp/tensorrt_llm/kernels/cutlass_kernels/include/moe_kernels.h
  • cpp/tensorrt_llm/thop/fp8BlockScalingGemm.cpp
  • cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu
  • cpp/tensorrt_llm/kernels/cutlass_kernels/fp8_blockscale_gemm/fp8_blockscale_gemm.cu
  • cpp/tensorrt_llm/kernels/internal_cutlass_kernels/include/moe_kernels.h
**/*.{cpp,cxx,cc,cu,h,hpp,hh,hxx,cuh}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

C++ filenames should be lowerCamelCase (first letter lowercase) and must be case-insensitive unique within a compilation target.

Files:

  • cpp/tensorrt_llm/plugins/mixtureOfExperts/mixtureOfExpertsPlugin.cpp
  • cpp/tests/unit_tests/kernels/mixtureOfExpertsTest.cu
  • cpp/tensorrt_llm/thop/moeOp.cpp
  • cpp/micro_benchmarks/mixtureOfExpertsBackendBenchmarkFixture.h
  • cpp/tensorrt_llm/kernels/cutlass_kernels/fp8_blockscale_gemm/fp8_blockscale_gemm.h
  • cpp/tensorrt_llm/kernels/cutlass_kernels/include/moe_kernels.h
  • cpp/tensorrt_llm/thop/fp8BlockScalingGemm.cpp
  • cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu
  • cpp/tensorrt_llm/kernels/cutlass_kernels/fp8_blockscale_gemm/fp8_blockscale_gemm.cu
  • cpp/tensorrt_llm/kernels/internal_cutlass_kernels/include/moe_kernels.h
**/*.{h,hpp,hh,hxx,cpp,cxx,cc,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Use only spaces, no tabs; indent with 4 spaces.

Files:

  • cpp/tensorrt_llm/plugins/mixtureOfExperts/mixtureOfExpertsPlugin.cpp
  • cpp/tests/unit_tests/kernels/mixtureOfExpertsTest.cu
  • tensorrt_llm/_torch/modules/fused_moe/ops/moe_op_cutlass.py
  • tensorrt_llm/_torch/custom_ops/torch_custom_ops.py
  • cpp/tensorrt_llm/thop/moeOp.cpp
  • cpp/micro_benchmarks/mixtureOfExpertsBackendBenchmarkFixture.h
  • cpp/tensorrt_llm/kernels/cutlass_kernels/fp8_blockscale_gemm/fp8_blockscale_gemm.h
  • cpp/tensorrt_llm/kernels/cutlass_kernels/include/moe_kernels.h
  • cpp/tensorrt_llm/thop/fp8BlockScalingGemm.cpp
  • cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu
  • cpp/tensorrt_llm/kernels/cutlass_kernels/fp8_blockscale_gemm/fp8_blockscale_gemm.cu
  • cpp/tensorrt_llm/kernels/internal_cutlass_kernels/include/moe_kernels.h
**/*.{h,hpp,hh,hxx,cpp,cxx,cc}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.{h,hpp,hh,hxx,cpp,cxx,cc}: Prefer anonymous namespaces over 'static' for internal linkage of functions.
All templates (class/function/member/static) must be instantiated at least once; non-POD classes should have private data members.

Files:

  • cpp/tensorrt_llm/plugins/mixtureOfExperts/mixtureOfExpertsPlugin.cpp
  • cpp/tensorrt_llm/thop/moeOp.cpp
  • cpp/micro_benchmarks/mixtureOfExpertsBackendBenchmarkFixture.h
  • cpp/tensorrt_llm/kernels/cutlass_kernels/fp8_blockscale_gemm/fp8_blockscale_gemm.h
  • cpp/tensorrt_llm/kernels/cutlass_kernels/include/moe_kernels.h
  • cpp/tensorrt_llm/thop/fp8BlockScalingGemm.cpp
  • cpp/tensorrt_llm/kernels/internal_cutlass_kernels/include/moe_kernels.h
**/*.{cpp,cxx,cc,h,hpp,hh,hxx,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Prepend the NVIDIA Apache-2.0 copyright header with current year to the top of all source files (e.g., .cpp, .h, .cu, .py).

Files:

  • cpp/tensorrt_llm/plugins/mixtureOfExperts/mixtureOfExpertsPlugin.cpp
  • cpp/tests/unit_tests/kernels/mixtureOfExpertsTest.cu
  • tensorrt_llm/_torch/modules/fused_moe/ops/moe_op_cutlass.py
  • tensorrt_llm/_torch/custom_ops/torch_custom_ops.py
  • cpp/tensorrt_llm/thop/moeOp.cpp
  • cpp/micro_benchmarks/mixtureOfExpertsBackendBenchmarkFixture.h
  • cpp/tensorrt_llm/kernels/cutlass_kernels/fp8_blockscale_gemm/fp8_blockscale_gemm.h
  • cpp/tensorrt_llm/kernels/cutlass_kernels/include/moe_kernels.h
  • cpp/tensorrt_llm/thop/fp8BlockScalingGemm.cpp
  • cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu
  • cpp/tensorrt_llm/kernels/cutlass_kernels/fp8_blockscale_gemm/fp8_blockscale_gemm.cu
  • cpp/tensorrt_llm/kernels/internal_cutlass_kernels/include/moe_kernels.h
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+.
Indent Python code with 4 spaces; do not use tabs.
Maintain module namespace when importing; prefer 'from package.subpackage import foo' then 'foo.SomeClass()' instead of importing the class directly.
Python filenames should be snake_case (e.g., some_file.py).
Python classes use PascalCase names.
Functions and methods use snake_case names.
Local variables use snake_case; prefix 'k' for variables that start with a number (e.g., k_99th_percentile).
Global variables use upper SNAKE_CASE prefixed with 'G' (e.g., G_MY_GLOBAL).
Constants use upper SNAKE_CASE (e.g., MY_CONSTANT).
Avoid shadowing variables from an outer scope.
Initialize all externally visible members of a class in the constructor.
Prefer docstrings for interfaces that may be used outside a file; comments for in-function or file-local interfaces.
Use Google-style docstrings for classes and functions (Sphinx-parsable).
Document attributes and variables inline so they render under the class/function docstring.
Avoid reflection when a simpler, explicit approach suffices (e.g., avoid dict(**locals()) patterns).
In try/except, catch the most specific exceptions possible.
For duck-typing try/except, keep the try body minimal and use else for the main logic.

Files:

  • tensorrt_llm/_torch/modules/fused_moe/ops/moe_op_cutlass.py
  • tensorrt_llm/_torch/custom_ops/torch_custom_ops.py
**/*.{h,hpp,hh,hxx}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Document new class interfaces and function prototypes with Doxygen; use //! for single-line and //!< for members.

Files:

  • cpp/micro_benchmarks/mixtureOfExpertsBackendBenchmarkFixture.h
  • cpp/tensorrt_llm/kernels/cutlass_kernels/fp8_blockscale_gemm/fp8_blockscale_gemm.h
  • cpp/tensorrt_llm/kernels/cutlass_kernels/include/moe_kernels.h
  • cpp/tensorrt_llm/kernels/internal_cutlass_kernels/include/moe_kernels.h
**/*.{h,hpp,hh,hxx,cuh}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Use include guards named 'TRTLLM_<FILE_NAME_IN_CAPS_WITH_UNDERSCORES>_H' (no leading or trailing underscore; directory names excluded).

Files:

  • cpp/micro_benchmarks/mixtureOfExpertsBackendBenchmarkFixture.h
  • cpp/tensorrt_llm/kernels/cutlass_kernels/fp8_blockscale_gemm/fp8_blockscale_gemm.h
  • cpp/tensorrt_llm/kernels/cutlass_kernels/include/moe_kernels.h
  • cpp/tensorrt_llm/kernels/internal_cutlass_kernels/include/moe_kernels.h
🧠 Learnings (3)
📚 Learning: 2025-08-20T07:43:36.447Z
Learnt from: ChristinaZ
PR: NVIDIA/TensorRT-LLM#7068
File: cpp/tensorrt_llm/kernels/moeTopKFuncs.cuh:169-172
Timestamp: 2025-08-20T07:43:36.447Z
Learning: In TensorRT-LLM MOE kernels, when processing up to 128 experts across 32 threads, each thread handles at most 4 experts (N < 5 constraint), where N represents candidates per thread rather than total system capacity.

Applied to files:

  • cpp/tests/unit_tests/kernels/mixtureOfExpertsTest.cu
📚 Learning: 2025-08-09T20:57:04.084Z
Learnt from: sklevtsov-nvidia
PR: NVIDIA/TensorRT-LLM#3294
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_gemm_tma_warp_specialized_input.cu:118-127
Timestamp: 2025-08-09T20:57:04.084Z
Learning: In the CUTLASS MoE finalize fusion implementation (cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_gemm_tma_warp_specialized_input.cu), when setting `fused_finalize_epilogue.stride_final_output` with shape `(hidden_size, num_output_tokens, 1)`, the `num_rows_in_final_output` should be set to `num_output_tokens` (not `hidden_size`) because of a swap+transpose operation that maps rows of the output tensor to `hidden_size` and columns to `num_output_tokens`.

Applied to files:

  • cpp/tensorrt_llm/thop/moeOp.cpp
  • cpp/tensorrt_llm/kernels/cutlass_kernels/include/moe_kernels.h
  • cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu
  • cpp/tensorrt_llm/kernels/internal_cutlass_kernels/include/moe_kernels.h
📚 Learning: 2025-08-08T22:03:40.707Z
Learnt from: sklevtsov-nvidia
PR: NVIDIA/TensorRT-LLM#3294
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu:1198-1209
Timestamp: 2025-08-08T22:03:40.707Z
Learning: In the CUTLASS MoE kernels (cpp/tensorrt_llm/cutlass_extensions), when `layout_info.fusion` is set to `TmaWarpSpecializedGroupedGemmInput::EpilogueFusion::FINALIZE`, the `router_scales` parameter must be non-null by design. The fused finalize kernel epilogue does not perform nullptr checks and requires valid router scales to function correctly. This is an implicit contract that callers must satisfy when enabling the FINALIZE fusion mode.

Applied to files:

  • cpp/tensorrt_llm/kernels/cutlass_kernels/fp8_blockscale_gemm/fp8_blockscale_gemm.cu
🧬 Code graph analysis (7)
cpp/tensorrt_llm/plugins/mixtureOfExperts/mixtureOfExpertsPlugin.cpp (4)
cpp/tests/unit_tests/kernels/mixtureOfExpertsTest.cu (2)
  • num_tokens (426-468)
  • num_tokens (426-427)
cpp/tensorrt_llm/kernels/cutlass_kernels/include/moe_kernels.h (1)
  • mExpertHiddenSize (978-978)
cpp/tensorrt_llm/kernels/internal_cutlass_kernels/include/moe_kernels.h (1)
  • mExpertHiddenSize (929-929)
cpp/tensorrt_llm/plugins/mixtureOfExperts/mixtureOfExpertsPlugin.h (1)
  • mExpertHiddenSize (176-176)
cpp/tests/unit_tests/kernels/mixtureOfExpertsTest.cu (1)
cpp/micro_benchmarks/mixtureOfExpertsBackendBenchmarkFixture.h (8)
  • mTotalTokens (540-540)
  • mHiddenSize (378-378)
  • mInterSize (539-539)
  • mNumExperts (379-379)
  • mK (381-381)
  • mWorkspace (515-515)
  • mFinalOutput (537-537)
  • mInputTensor (376-376)
cpp/tensorrt_llm/thop/moeOp.cpp (1)
cpp/tests/unit_tests/kernels/mixtureOfExpertsTest.cu (7)
  • hidden_size (470-481)
  • hidden_size (470-470)
  • parallelism_config (1003-1110)
  • parallelism_config (1003-1003)
  • parallelism_config (1207-1295)
  • parallelism_config (1207-1207)
  • stream (851-863)
cpp/tensorrt_llm/kernels/cutlass_kernels/fp8_blockscale_gemm/fp8_blockscale_gemm.h (1)
cpp/tensorrt_llm/kernels/cutlass_kernels/fp8_blockscale_gemm/fp8_blockscale_gemm.cu (4)
  • moeGemm (90-165)
  • moeGemm (90-92)
  • moeGemm (168-174)
  • moeGemm (168-170)
cpp/tensorrt_llm/kernels/cutlass_kernels/include/moe_kernels.h (3)
cpp/tensorrt_llm/thop/moeOp.cpp (8)
  • num_rows (783-820)
  • num_rows (783-785)
  • input (252-454)
  • input (252-262)
  • input (456-616)
  • input (456-466)
  • input (626-715)
  • input (626-631)
cpp/tensorrt_llm/kernels/internal_cutlass_kernels/include/moe_kernels.h (1)
  • gemm1 (619-640)
cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_gemm_template_dispatch_tma_ws.h (1)
  • input (506-506)
cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu (2)
cpp/tests/unit_tests/kernels/mixtureOfExpertsTest.cu (19)
  • hidden_size (470-481)
  • hidden_size (470-470)
  • stream (851-863)
  • k (1552-1553)
  • k (1557-1568)
  • k (1557-1558)
  • k (1572-1580)
  • k (1572-1573)
  • k (1582-1594)
  • k (1582-1583)
  • k (1596-1597)
  • parallelism_config (1003-1110)
  • parallelism_config (1003-1003)
  • parallelism_config (1207-1295)
  • parallelism_config (1207-1207)
  • input (1399-1432)
  • input (1399-1399)
  • input (1434-1437)
  • input (1434-1434)
cpp/tensorrt_llm/thop/moeOp.cpp (8)
  • input (252-454)
  • input (252-262)
  • input (456-616)
  • input (456-466)
  • input (626-715)
  • input (626-631)
  • num_rows (783-820)
  • num_rows (783-785)
cpp/tensorrt_llm/kernels/internal_cutlass_kernels/include/moe_kernels.h (2)
cpp/tensorrt_llm/thop/moeOp.cpp (8)
  • num_rows (783-820)
  • num_rows (783-785)
  • input (252-454)
  • input (252-262)
  • input (456-616)
  • input (456-466)
  • input (626-715)
  • input (626-631)
cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu (5)
  • T (1965-1969)
  • T (1981-1985)
  • T (1996-2002)
  • BlockScaleFC2 (2891-2916)
  • BlockScaleFC2 (2891-2900)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: Check PR Checklist Resolution
  • GitHub Check: Pre-commit Check
🔇 Additional comments (11)
tensorrt_llm/_torch/modules/fused_moe/ops/moe_op_cutlass.py (1)

218-221: tuner_num_tokens forwarding looks correct

Forwarding tuner_num_tokens to the fused runner aligns with the backend API and fixes tactic sizing with ADP. LGTM.

cpp/tensorrt_llm/kernels/cutlass_kernels/fp8_blockscale_gemm/fp8_blockscale_gemm.cu (2)

90-93: API extension is coherent

Adding expected_m between num_problems and shape_n matches the caller and improves tactic sizing. Signature/order look consistent with the wrapper below. LGTM.


167-174: Wrapper overload preserves source compatibility

The overload that forwards expected_m_ keeps existing call sites working. Good safeguard.

cpp/tensorrt_llm/kernels/cutlass_kernels/fp8_blockscale_gemm/fp8_blockscale_gemm.h (1)

43-47: New moeGemm(expected_m) virtual and override look consistent

Interface and implementation declarations are aligned with the .cu definition. LGTM.

Also applies to: 103-106

tensorrt_llm/_torch/custom_ops/torch_custom_ops.py (1)

243-244: tuner_num_tokens is correctly threaded to run_moe

This aligns with the fused runner change and fixes tactic estimation with ADP. LGTM.

cpp/tensorrt_llm/kernels/cutlass_kernels/include/moe_kernels.h (3)

477-482: gemm1: added expected_tokens_per_expert is consistent across interface and override.

No issues spotted.

Also applies to: 642-647


491-498: gemm2: added expected_tokens_per_expert is consistent across interface and override.

No issues spotted.

Also applies to: 658-664


462-468: All call sites correctly updated—no stale signatures detected.

Verification confirms the new num_valid_rows parameter has been properly propagated across all three call sites:

  • moeOp.cpp (lines 422, 437, 584, 599): Correctly passes conditional num_valid_tokens fallback to num_rows
  • mixtureOfExpertsPlugin.cpp (lines 962, 975): Correctly passes num_tokens for both num_rows and num_valid_rows
  • mixtureOfExpertsBackendBenchmarkFixture.h (lines 992, 1005): Correctly passes mTotalTokens for both parameters

Parameter ordering, including the new swizzled_input_sf (3rd param) and unpadded_hidden_size (after hidden_size), is consistent across all implementations.

cpp/tensorrt_llm/kernels/internal_cutlass_kernels/include/moe_kernels.h (3)

449-462: LGTM! Consistent with gemm1 interface changes.

The parameter additions and reordering are consistent with the gemm1 interface changes. The expected_tokens_per_expert parameter is positioned consistently for tactic selection purposes.

However, please ensure documentation is added for the new parameter (same as gemm1).


573-666: LGTM! Implementation signatures correctly match interface changes.

The implementation method signatures and parameter forwarding are consistent with the interface changes:

  • New parameters (num_valid_rows, expected_tokens_per_expert) are correctly propagated through all layers
  • Parameter ordering is consistent across all overloads
  • The override methods correctly forward new parameters to static implementations

792-807: LGTM! BlockScale method signatures correctly extended.

The BlockScaleFC1 and BlockScaleFC2 signatures correctly add the expected_tokens_per_expert parameter. Based on the relevant code snippets, this parameter is forwarded to gemm_runner.moeGemm() for proper tactic selection in the FP8 blockwise scaling path, which directly addresses the performance regression described in the PR objectives.

@jinyangyuan-nvidia
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #21898 [ run ] triggered by Bot. Commit: 55bc09c

@tensorrt-cicd
Collaborator

PR_Github #21898 [ run ] completed with state SUCCESS. Commit: 55bc09c
/LLM/main/L0_MergeRequest_PR pipeline #16507 completed with status: 'SUCCESS'
Pipeline passed with automatic retried tests. Check the rerun report for details.

Collaborator

@hyukn hyukn left a comment


The PR should be fine because both perf and CI look good. Just posted some questions. Thanks~

@jinyangyuan-nvidia jinyangyuan-nvidia enabled auto-merge (squash) October 27, 2025 02:17
@jinyangyuan-nvidia jinyangyuan-nvidia merged commit 0a0f93d into NVIDIA:main Oct 27, 2025
7 checks passed
@jinyangyuan-nvidia jinyangyuan-nvidia deleted the dev/fix_adp_fp8_moe branch October 27, 2025 02:18
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request Nov 1, 2025
…hen using attention DP (NVIDIA#8501)

Signed-off-by: Jinyang Yuan <154768711+jinyangyuan-nvidia@users.noreply.github.com>
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request Nov 3, 2025
…hen using attention DP (NVIDIA#8501)

Signed-off-by: Jinyang Yuan <154768711+jinyangyuan-nvidia@users.noreply.github.com>