
# TensorRT-LLM Benchmarks

## Overview

There are currently two workflows to benchmark TensorRT-LLM:

- `trtllm-bench`
  - `trtllm-bench` is a Python-based benchmarking utility native to TensorRT-LLM for reproducing and testing TensorRT-LLM performance (see the example invocation after this list).
  - NOTE: This benchmarking suite is a work in progress and may change significantly.
- C++ benchmarks
  - The recommended workflow; it uses the TensorRT-LLM C++ API and can take advantage of the latest features of TensorRT-LLM.
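
As an illustrative sketch of the `trtllm-bench` workflow, the commands below prepare a synthetic dataset and run a throughput benchmark. The model name, dataset shape, and output path are placeholders, and exact flags can differ between releases, so consult `trtllm-bench --help` for the options available in your installation.

```bash
# Generate a synthetic dataset of fixed-length requests (128 input / 128 output tokens).
# prepare_dataset.py lives under benchmarks/cpp in the TensorRT-LLM repository.
python benchmarks/cpp/prepare_dataset.py \
    --tokenizer meta-llama/Llama-3.1-8B \
    --stdout token-norm-dist \
    --num-requests 1000 \
    --input-mean 128 --input-stdev 0 \
    --output-mean 128 --output-stdev 0 > synthetic_128_128.txt

# Run a throughput benchmark against the generated dataset.
trtllm-bench --model meta-llama/Llama-3.1-8B \
    throughput --dataset synthetic_128_128.txt
```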