TensorRT-LLM/tensorrt_llm/bench

Latest commit: 161490f039, 2025-07-18 03:44:44 +08:00
[fix] Fixes KV Cache overrides in trtllm-bench (#6103)
Signed-off-by: Frank Di Natale <3429989+FrankD412@users.noreply.github.com>
benchmark/    [TRTLLM-5530][BREAKING CHANGE] refactor: unify KvCacheConfig in LLM class for pytorch backend (#5752)    2025-07-16 16:42:59 +08:00
build/        [TRTLLM-5838][fix] fix max batch size and max tokens in kv cache estimations for Nemotron-H (#5371)        2025-07-09 11:30:15 +03:00
dataclasses/  [fix] Fixes KV Cache overrides in trtllm-bench (#6103)                                                     2025-07-18 03:44:44 +08:00
utils/        Enable trtllm-bench to run LoRA and add basic e2e perf testing capability for LoRA in PyT flow (#5130)     2025-06-15 18:54:04 +03:00
__init__.py   Update TensorRT-LLM                                                                                        2024-08-20 18:55:15 +08:00