[TRTLLM-5838][fix] fix max batch size and max tokens in kv cache estimations for Nemotron-H #5371
Merged: Naveassaf merged 29 commits into NVIDIA:main from tomeras91:fix-trtllm-bench-for-nemotron-h on Jul 9, 2025.
Commits (29):
- 1752239 WIP: consider num_attention_layers for kv cache estimation and add ma… (tomeras91)
- 7829ec9 Merge branch 'main' into fix-trtllm-bench-for-nemotron-h (tomeras91)
- 4403183 organize code and logging for max batch size calculation for trtllm-b… (tomeras91)
- 6ff4602 consider only attention layers when estimating number of tokens in Kv… (tomeras91)
- e6615a8 propagate kv_cache_gpu_mem_fraction to calc_engine_setting for trtllm… (tomeras91)
- 42d65f3 release mamba cache memory when shutting down MambaCacheManager (and … (tomeras91)
- 17d22e5 small refactor - MambaCacheManager method names to match BaseResource… (tomeras91)
- 7dfeab8 refactor - is_nemotron_hybrid works on dicts as well (tomeras91)
- ee85bac remove log (tomeras91)
- d0d0b7e Add comment explaining squaring of kv_cache_gpu_mem_fraction + save r… (tomeras91)
- 63bea92 remove debug print (tomeras91)
- c8c71df fix - use config.get() only if config is a dict (tomeras91)
- 3e6a30e Merge branch 'main' into fix-trtllm-bench-for-nemotron-h (tomeras91)
- 83e0673 optimistic tune max batch size only if not mamba attention hybrid model (tomeras91)
- 4b2ba21 Merge branch 'main' into fix-trtllm-bench-for-nemotron-h (tomeras91)
- e6e65fc Merge branch 'main' into fix-trtllm-bench-for-nemotron-h (tomeras91)
- 8cf5ee7 Merge branch 'fix-trtllm-bench-for-nemotron-h' of github.com:tomeras9… (tomeras91)
- aa5d87c fix: Mamba cache size estimation for FP8 - always use NO_QUANT for ma… (tomeras91)
- ac481b2 Merge branch 'main' into fix-trtllm-bench-for-nemotron-h (tomeras91)
- 7904672 introduce NemotronHybridConfig that inherits from ModelConfig (tomeras91)
- 04cba88 Move logic to compute extra model class to ModelConfig class (tomeras91)
- 337e7aa refactor max batch size estimation and make it more general (less mam… (tomeras91)
- 4b0182b remove redundant MambaConfig (tomeras91)
- ea4e816 simplify computation of total kv cache memory (tomeras91)
- 1975d38 remove whitespace (tomeras91)
- 1670ad9 compute cache memory fraction in ModelConfig + enable_optimistic_tuni… (tomeras91)
- 3e40792 reduce formatting diff (tomeras91)
- 0a0d2c8 Merge branch 'main' into fix-trtllm-bench-for-nemotron-h (tomeras91)
- 47b9eb8 Add get_num_attention_layers() function in _torch/model_config.py::Mo… (tomeras91)
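The central idea running through these commits (6ff4602, ea4e816, 47b9eb8) is that in a hybrid model like Nemotron-H, only the attention layers allocate KV cache, while Mamba layers keep their own fixed-size state; so the max-tokens estimate must divide the memory budget by the attention-layer count, not the total layer count. A minimal sketch of that calculation, with illustrative names (`block_pattern`, `bytes_per_token_per_layer`, and the byte figures are assumptions, not TensorRT-LLM's actual API):

```python
# Illustrative sketch: estimate how many KV-cache tokens fit in a memory
# budget for a hybrid attention/Mamba model. Only attention layers ("A")
# hold per-token KV cache; Mamba layers ("M") do not.

def get_num_attention_layers(block_pattern: list[str]) -> int:
    """Count attention blocks in a hybrid layer pattern ('A' = attention, 'M' = Mamba)."""
    return sum(1 for block in block_pattern if block == "A")


def estimate_max_kv_tokens(free_mem_bytes: int,
                           mem_fraction: float,
                           block_pattern: list[str],
                           bytes_per_token_per_layer: int) -> int:
    """Tokens that fit when only attention layers consume KV-cache memory."""
    num_attn = get_num_attention_layers(block_pattern)
    budget = int(free_mem_bytes * mem_fraction)
    return budget // (num_attn * bytes_per_token_per_layer)


# Counting all layers instead would undercount the tokens that fit by the
# ratio total_layers / attention_layers (3x for this example pattern).
pattern = ["A", "M", "M", "A", "M", "M"]  # 2 attention, 4 Mamba blocks
tokens = estimate_max_kv_tokens(8 * 1024**3, 0.9, pattern, 256 * 1024)
```

The same shape of reasoning explains why the original estimation undercounted for Nemotron-H: dividing by all 6 layers in this pattern rather than the 2 attention layers would shrink the token estimate threefold.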
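Commits 7dfeab8 and c8c71df make `is_nemotron_hybrid` accept both a raw dict and a parsed config object, calling `config.get()` only when the config actually is a dict. A hedged sketch of that type-tolerant pattern (the field name `hybrid_override_pattern` is an assumption for illustration, not necessarily the key the repo checks):

```python
# Illustrative sketch: a config predicate that tolerates both dicts and
# config objects. dict.get() only exists on dicts, so the original bug
# (calling .get() unconditionally) raised AttributeError on config objects.

def is_nemotron_hybrid(config) -> bool:
    """True if the config describes a hybrid Mamba/attention model."""
    if isinstance(config, dict):
        # Raw dict (e.g. parsed JSON): use .get() with a None default.
        return config.get("hybrid_override_pattern") is not None
    # Config object: fall back to attribute access with a default.
    return getattr(config, "hybrid_override_pattern", None) is not None
```

This keeps a single call site working whether the config arrives as parsed JSON or as an already-constructed config class, which is the situation the two commits describe.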