TensorRT-LLMs/tensorrt_llm/bench/dataclasses
Latest commit: 55fed1873c by h-guo18, 2025-10-17 15:55:57 -04:00
[None][chore] AutoDeploy: cleanup old inference optimizer configs (#8039)
Signed-off-by: h-guo18 <67671475+h-guo18@users.noreply.github.com>
Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>
Co-authored-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>
File              Last commit                                                                                             Date
__init__.py       Update TensorRT-LLM (#2502)                                                                             2024-11-26 16:51:34 +08:00
configuration.py  [None][chore] AutoDeploy: cleanup old inference optimizer configs (#8039)                               2025-10-17 15:55:57 -04:00
engine.py         Update TensorRT-LLM (#2502)                                                                             2024-11-26 16:51:34 +08:00
enums.py          Update TensorRT-LLM (#2502)                                                                             2024-11-26 16:51:34 +08:00
general.py        Enable trtllm-bench to run LoRA and add basic e2e perf testing capability for LoRA in PyT flow (#5130)  2025-06-15 18:54:04 +03:00
reporting.py      [None][chore] extract weights loading related logic to model loader (#7579)                             2025-09-25 10:19:22 -07:00
statistics.py     [TRTLLM-6685][feat] Add speculative metrics for trt llm bench (#6476)                                   2025-08-04 15:22:57 -07:00