kanshan / TensorRT-LLMs
Mirror of https://github.com/NVIDIA/TensorRT-LLM.git, synced 2026-01-22 03:35:00 +08:00
TensorRT-LLMs / tensorrt_llm / bench @ f4736aec8e
Latest commit: 85b4ae26b7 by Jiagan Cheng (2025-08-26 08:10:35 -04:00)
[https://nvbugs/5451342][fix] Use runtime max_batch_size when cuda_graph_config.max_batch_size is not provided in trtllm-bench (#7031)
Signed-off-by: Jiagan Cheng <jiaganc@nvidia.com>
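The commit above describes a fallback: when the user does not set cuda_graph_config.max_batch_size, trtllm-bench should use the runtime max_batch_size instead. A minimal sketch of that fallback logic, using hypothetical names (`CudaGraphConfig`, `resolve_cuda_graph_max_batch_size` are illustrative, not the actual trtllm-bench code):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CudaGraphConfig:
    # None means the user did not provide a value
    max_batch_size: Optional[int] = None

def resolve_cuda_graph_max_batch_size(
    cuda_graph_config: Optional[CudaGraphConfig],
    runtime_max_batch_size: int,
) -> int:
    """Prefer the explicit cuda_graph_config value; otherwise fall back
    to the runtime max_batch_size (the behavior the fix describes)."""
    if cuda_graph_config is not None and cuda_graph_config.max_batch_size is not None:
        return cuda_graph_config.max_batch_size
    return runtime_max_batch_size

# Not provided -> runtime value is used
print(resolve_cuda_graph_max_batch_size(CudaGraphConfig(), 64))                 # 64
# Explicitly provided -> user value wins
print(resolve_cuda_graph_max_batch_size(CudaGraphConfig(max_batch_size=16), 64))  # 16
```

The point of the fix is that the CUDA-graph batch-size cap should never silently diverge from the engine's actual runtime limit when the user leaves it unset.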
benchmark      [https://nvbugs/5453667] [fix] reverting a breaking change: make trtllm-bench enable_chunked_context defaults backend-dependent (#6956)      2025-08-16 00:29:02 -04:00
build          [None][fix] Remove expand configuration from mamba2 mixer (#6521)      2025-08-05 04:18:25 -04:00
dataclasses    [https://nvbugs/5451342][fix] Use runtime max_batch_size when cuda_graph_config.max_batch_size is not provided in trtllm-bench (#7031)      2025-08-26 08:10:35 -04:00
utils          Enable trtllm-bench to run LoRA and add basic e2e perf testing capability for LoRA in PyT flow (#5130)      2025-06-15 18:54:04 +03:00
__init__.py    Update TensorRT-LLM      2024-08-20 18:55:15 +08:00