Mirror of https://github.com/NVIDIA/TensorRT-LLM.git, synced 2026-01-26 13:43:38 +08:00
* **Model:** Llama-3.1-Nemotron-Nano-8B-v1
* **Precision:** float16
* **Environment:**
  * GPUs: 1x H100 PCIe
  * Driver: 570.86.15

| Test String | Request Throughput (req/sec) | Total Token Throughput (tokens/sec) | Average Request Latency (ms) |
|---|---|---|---|
| `llama_v3.1_nemotron_nano_8b-bench-pytorch-float16-input_output_len:128,128` | 81.86 | 20956.44 | 5895.24 |
| `llama_v3.1_nemotron_nano_8b-bench-pytorch-float16-input_output_len:2000,2000` | 1.45 | 5783.92 | 211541.08 |
| `llama_v3.1_nemotron_nano_8b-bench-float16-maxbs:128-input_output_len:128,128` | 52.75 | 13505.00 | 5705.50 |
| `llama_v3.1_nemotron_nano_8b-bench-float16-maxbs:128-input_output_len:2000,2000` | 1.41 | 5630.76 | 217139.59 |
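The test strings above encode the benchmark configuration as dash-separated fields (model, backend, dtype, `maxbs`, `input_output_len`). As a minimal sketch, the snippet below parses that convention and sanity-checks one row: total token throughput should roughly equal request throughput times the tokens per request (input + output). The field labels (`backend`, `dtype`) are my own, not an official schema.

```python
# Hedged sketch: parse the dash-separated benchmark test-string convention
# used in the results above. Field names here are assumed labels.

def parse_test_string(s: str) -> dict:
    name, _, rest = s.partition("-bench-")
    cfg = {"model": name}
    for tok in rest.split("-"):
        if ":" in tok:
            # key:value segments, e.g. maxbs:128 or input_output_len:128,128
            key, _, val = tok.partition(":")
            cfg[key] = val
        elif tok == "pytorch":
            cfg["backend"] = tok
        else:
            cfg["dtype"] = tok
    return cfg

cfg = parse_test_string(
    "llama_v3.1_nemotron_nano_8b-bench-pytorch-float16-input_output_len:128,128")
isl, osl = map(int, cfg["input_output_len"].split(","))

# Cross-check the first row: 81.86 req/sec * (128 + 128) tokens/req
# is within rounding of the reported 20956.44 tokens/sec.
approx_tokens_per_sec = 81.86 * (isl + osl)
```

The same arithmetic explains the long-sequence rows: 1.45 req/sec x 4000 tokens/req is about 5800 tokens/sec, matching the reported 5783.92.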
Signed-off-by: Venky Ganesh <gvenkatarama@nvidia.com>