TensorRT-LLMs/tensorrt_llm/bench
Latest commit: 4e4d18826f by nv-guomingz, 2025-07-15 15:50:03 +09:00
chore: [Breaking Change] Rename cuda_graph_config padding_enabled fie… (#6003)
Signed-off-by: nv-guomingz <137257613+nv-guomingz@users.noreply.github.com>
| Name | Last commit | Last updated |
| --- | --- | --- |
| benchmark | chore: [Breaking Change] Rename cuda_graph_config padding_enabled fie… (#6003) | 2025-07-15 15:50:03 +09:00 |
| build | [TRTLLM-5838][fix] fix max batch size and max tokens in kv cache estimations for Nemotron-H (#5371) | 2025-07-09 11:30:15 +03:00 |
| dataclasses | [enhance] Add the ability to write a request timeline. (#5258) | 2025-07-10 17:15:30 -07:00 |
| utils | Enable trtllm-bench to run LoRA and add basic e2e perf testing capability for LoRA in PyT flow (#5130) | 2025-06-15 18:54:04 +03:00 |
| __init__.py | Update TensorRT-LLM | 2024-08-20 18:55:15 +08:00 |