TensorRT-LLM/tensorrt_llm/bench
Latest commit 6a3a921284 by kris1025: [TRTLLM-6685][feat] Add speculative metrics for trt llm bench (#6476), 2025-08-04 15:22:57 -07:00 (Signed-off-by: linquanh <linquanh@nvidia.com>)
benchmark/      [fix] Fixes to parameter usage and low latency configuration. (#6343)  2025-07-29 01:36:13 -04:00
build/          [TRTLLM-5838][fix] fix max batch size and max tokens in kv cache estimations for Nemotron-H (#5371)  2025-07-09 11:30:15 +03:00
dataclasses/    [TRTLLM-6685][feat] Add speculative metrics for trt llm bench (#6476)  2025-08-04 15:22:57 -07:00
utils/          Enable trtllm-bench to run LoRA and add basic e2e perf testing capability for LoRA in PyT flow (#5130)  2025-06-15 18:54:04 +03:00
__init__.py     Update TensorRT-LLM  2024-08-20 18:55:15 +08:00