TensorRT-LLM/tensorrt_llm/bench/benchmark/utils
Latest commit 0dcf47f1c2 by Frank Di Natale, 2025-05-09 23:20:52 +08:00

[TRTLLM-4717][perf] Set CUDA graph max batch size and padding in throughput benchmark. (#3875)

* Set CUDA graph max batch size.
* Set padding.

Signed-off-by: Frank Di Natale <3429989+FrankD412@users.noreply.github.com>
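
The commit subject only names the settings it touches; as a minimal sketch (not the actual trtllm-bench code), the snippet below illustrates the idea: cap CUDA graph capture at the batch size the benchmark will actually drive, and enable padding so smaller batches can still replay a captured graph. The function name apply_cuda_graph_settings and the kwarg keys cuda_graph_max_batch_size / cuda_graph_padding_enabled are illustrative assumptions, not the confirmed API.

    # Hypothetical sketch of the behavior described in the commit message.
    # The kwarg names below are assumptions chosen for illustration only.
    def apply_cuda_graph_settings(llm_kwargs: dict, max_batch_size: int) -> dict:
        """Return a copy of llm_kwargs with CUDA graph capture aligned to the benchmark."""
        updated = dict(llm_kwargs)
        # Never capture graphs for batch sizes the benchmark will never submit.
        updated["cuda_graph_max_batch_size"] = max_batch_size
        # Pad smaller batches up to a captured size so they can still replay a graph.
        updated["cuda_graph_padding_enabled"] = True
        return updated

    if __name__ == "__main__":
        print(apply_cuda_graph_settings({"max_batch_size": 512}, max_batch_size=512))

Capping capture at the benchmark's own maximum avoids recording graphs that would never be replayed, while padding trades a small amount of wasted compute on undersized batches for the latency win of graph replay.
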
File              Last commit                                                                                       Last updated
__init__.py       Update TensorRT-LLM (#2460)                                                                       2024-11-19 18:30:34 +08:00
asynchronous.py   feat: adding multimodal (only image for now) support in trtllm-bench (#3490)                      2025-04-18 07:06:16 +08:00
general.py        [TRTLLM-4717][perf] Set CUDA graph max batch size and padding in throughput benchmark. (#3875)   2025-05-09 23:20:52 +08:00
processes.py      perf: Readd iteration logging for trtllm-bench. (#3039)                                           2025-04-01 08:13:09 +08:00