TensorRT-LLM/tensorrt_llm/bench/benchmark
Frank 0dcf47f1c2
[TRTLLM-4717][perf] Set CUDA graph max batch size and padding in throughput benchmark. (#3875)
* Set cuda graph max batch size.
* Set padding.

Signed-off-by: Frank Di Natale <3429989+FrankD412@users.noreply.github.com>
2025-05-09 23:20:52 +08:00
utils           [TRTLLM-4717][perf] Set CUDA graph max batch size and padding in throughput benchmark. (#3875)  2025-05-09 23:20:52 +08:00
__init__.py     Update TensorRT-LLM (#2389)                                                                     2024-10-29 22:24:38 +08:00
low_latency.py  chore: Cleanup deprecated APIs from LLM-API (part 1/2) (#3732)                                   2025-05-07 13:20:25 +08:00
throughput.py   chore: Cleanup deprecated APIs from LLM-API (part 1/2) (#3732)                                   2025-05-07 13:20:25 +08:00