[None][fix] Switch llm api quickstart example location per workflow. #7182
Conversation
📝 Walkthrough

Two example quickstart scripts were updated: the internal engine example now imports LLM from tensorrt_llm._tensorrt_engine and supplies a BuildConfig at construction time, while the public example imports LLM from the top-level tensorrt_llm package and passes SamplingParams at generation time.
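The contrast, as the review comments below confirm, comes down to the import lines in each file:

```python
# examples/llm-api/_tensorrt_engine/quickstart_example.py (internal TRT-engine workflow)
from tensorrt_llm import BuildConfig
from tensorrt_llm._tensorrt_engine import LLM
```

```python
# examples/llm-api/quickstart_example.py (public workflow)
from tensorrt_llm import LLM, SamplingParams
```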
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    actor U as User
    participant Q_int as Internal quickstart
    participant Q_pub as Public quickstart
    participant L_int as LLM (internal)
    participant L_pub as LLM (public)
    participant E as Engine/Runtime
    rect rgb(235,245,255)
        U->>Q_int: run internal example
        Q_int->>L_int: new LLM(model, build_config)
        Q_int->>L_int: generate(prompts)
        L_int->>E: execute with build-time config
        E-->>L_int: responses
        L_int-->>Q_int: results
    end
    rect rgb(245,255,235)
        U->>Q_pub: run public example
        Q_pub->>L_pub: new LLM(model)
        Q_pub->>L_pub: generate(prompts, sampling_params)
        L_pub->>E: execute with run-time sampling params
        E-->>L_pub: responses
        L_pub-->>Q_pub: results
    end
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
/bot run

PR_Github #16264 [ run ] triggered by Bot
Actionable comments posted: 2
🧹 Nitpick comments (1)
examples/llm-api/_tensorrt_engine/quickstart_example.py (1)
2-2: Remove the inline “NOTE the change” comment. The directory location already communicates the intent; keeping code comments neutral and concise improves readability.
Apply:
```diff
-from tensorrt_llm._tensorrt_engine import LLM  # NOTE the change
+from tensorrt_llm._tensorrt_engine import LLM
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (2)
- examples/llm-api/_tensorrt_engine/quickstart_example.py (1 hunks)
- examples/llm-api/quickstart_example.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
**/*.py: Python code must target Python 3.8+
Python indentation: 4 spaces, no tabs
Maintain module namespace in imports (from package.subpackage import foo; then use foo.SomeClass())
Python file names use snake_case
Python class names use PascalCase
Python functions/methods and local variables use snake_case; variables starting with a number get k_ prefix (e.g., k_99th_percentile)
Global variables use G_ prefixed UPPER_SNAKE_CASE (e.g., G_MY_GLOBAL)
Constants use UPPER_SNAKE_CASE in Python
Avoid shadowing variables from outer scopes in Python
Initialize all externally visible members of a Python class in __init__
Prefer docstrings for interfaces used outside a file; comments for local code
Use Google-style docstrings for classes and functions (Sphinx-parsable)
Document attributes/variables inline with short docstrings
Avoid reflection when simple alternatives exist (e.g., prefer explicit parameters over dict(**locals()))
In try/except, catch the narrowest exceptions possible
For duck-typing with try/except, keep try body minimal and put logic in else
Files:
- examples/llm-api/_tensorrt_engine/quickstart_example.py
- examples/llm-api/quickstart_example.py
**/*.{cpp,cxx,cc,cu,h,hpp,hxx,hh,cuh,py}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
Prepend NVIDIA copyright header (current year) to all source files
Files:
- examples/llm-api/_tensorrt_engine/quickstart_example.py
- examples/llm-api/quickstart_example.py
🧠 Learnings (2)
📚 Learning: 2025-08-01T15:14:45.673Z
Learnt from: yibinl-nvidia
PR: NVIDIA/TensorRT-LLM#6506
File: examples/models/core/mixtral/requirements.txt:3-3
Timestamp: 2025-08-01T15:14:45.673Z
Learning: In TensorRT-LLM, examples directory can have different dependency versions than the root requirements.txt file. Version conflicts between root and examples dependencies are acceptable because examples are designed to be standalone and self-contained.
Applied to files:
examples/llm-api/_tensorrt_engine/quickstart_example.py
📚 Learning: 2025-08-06T13:58:07.506Z
Learnt from: galagam
PR: NVIDIA/TensorRT-LLM#6487
File: tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py:1-12
Timestamp: 2025-08-06T13:58:07.506Z
Learning: In TensorRT-LLM, test files (files under tests/ directories) do not require NVIDIA copyright headers, unlike production source code files. Test files typically start directly with imports, docstrings, or code.
Applied to files:
examples/llm-api/_tensorrt_engine/quickstart_example.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Pre-commit Check
🔇 Additional comments (7)
examples/llm-api/_tensorrt_engine/quickstart_example.py (4)
1-2: Import split matches workflows (public BuildConfig + internal LLM). This separation aligns with the PR goal to keep the TRT-flow example under _tensorrt_engine. Looks good.
13-14: Using build_config during LLM construction is appropriate for engine-centric examples. The instantiation path is consistent with internal TRT flow expectations.
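A minimal sketch of the engine-centric construction this comment describes; the BuildConfig values mirror the defaults cited later in this review, while the model id is an illustrative placeholder:

```python
from tensorrt_llm import BuildConfig
from tensorrt_llm._tensorrt_engine import LLM

# Build-time limits are fixed when the engine is built, not per generate() call.
build_config = BuildConfig(max_batch_size=2048, max_num_tokens=8192)

# Placeholder model id; any supported checkpoint path or HF id would work here.
llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0", build_config=build_config)
```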
26-29: Example code is correct for TensorRT-LLM 1.0’s Python API. Per the official API reference, LLM.generate() returns RequestOutput objects (or a list thereof), each of which exposes:
- a .prompt field for the input prompt (nvidia.github.io)
- an .outputs list of CompletionOutput items, where each CompletionOutput has a .text string field containing the generated text (nvidia.github.io)

The snippet:

```python
for output in llm.generate(prompts, sampling_params):
    print(
        f"Prompt: {output.prompt!r}, Generated text: {output.outputs[0].text!r}"
    )
```

correctly reflects these types. Please disregard the earlier concern about missing .prompt or .outputs[0].text.

Likely an incorrect or invalid review comment.
7-9: BuildConfig fields are correct; no changes needed. The BuildConfig dataclass in tensorrt_llm/builder.py defines both max_batch_size: int (default 2048) and max_num_tokens: int (default 8192) at lines 485–487, matching the example’s usage.

examples/llm-api/quickstart_example.py (3)
1-1: Switch to the top-level public API import looks correct. Using from tensorrt_llm import LLM, SamplingParams matches the public workflow.

8-8: LLM construction without build-time config aligns with the public quickstart. Keeping build-time details out of the public example is the right call.
20-23: Verified generate signature matches example usage. The call to llm.generate(prompts, sampling_params) correctly aligns with the current LLM.generate signature in llmapi/llm.py, where the first two parameters are:
- inputs: Union[PromptInputs, Sequence[PromptInputs]]
- sampling_params: Optional[Union[SamplingParams, List[SamplingParams]]] = None

No further changes are needed at this time.
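Putting the verified pieces together, the public quickstart flow looks roughly like this sketch; the prompt, model id, and the max_tokens field are illustrative assumptions, not taken from this PR:

```python
from tensorrt_llm import LLM, SamplingParams

prompts = ["Hello, my name is"]  # illustrative prompt
sampling_params = SamplingParams(max_tokens=32)  # max_tokens assumed for this sketch

# Public workflow: no build-time config passed at construction.
llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")  # placeholder model id

# RequestOutput exposes .prompt and .outputs[0].text, per the comment above.
for output in llm.generate(prompts, sampling_params):
    print(f"Prompt: {output.prompt!r}, Generated text: {output.outputs[0].text!r}")
```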
PR_Github #16264 [ run ] completed with state
Superjomn left a comment:
LGTM
Signed-off-by: nv-guomingz <137257613+nv-guomingz@users.noreply.github.com>
ad3b773 to bbf390a
/bot reuse-pipeline

PR_Github #16341 [ reuse-pipeline ] triggered by Bot

PR_Github #16341 [ reuse-pipeline ] completed with state
…7182) Signed-off-by: nv-guomingz <137257613+nv-guomingz@users.noreply.github.com>
…VIDIA#7182) Signed-off-by: nv-guomingz <137257613+nv-guomingz@users.noreply.github.com> Signed-off-by: Wangshanshan <30051912+dominicshanshan@users.noreply.github.com>
We need to put the quickstart_example.py that uses the TRT flow under the _tensorrt_engine folder; the resulting layout is sketched below.
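Resulting layout, using the paths from the review’s file list:

```
examples/llm-api/
├── quickstart_example.py          # public workflow (from tensorrt_llm import LLM, SamplingParams)
└── _tensorrt_engine/
    └── quickstart_example.py      # TRT flow (from tensorrt_llm._tensorrt_engine import LLM)
```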
Summary by CodeRabbit
- New Features
- Refactor
- Documentation