Doc: fix link in llama4 Maverick example (#5864)
Signed-off-by: jiahanc <173873397+jiahanc@users.noreply.github.com>
commit c24eb67054
parent e1fb1de4d9
@@ -9,7 +9,7 @@ This document shows how to run Llama4-Maverick on B200 with PyTorch workflow and
- [B200 Max-throughput](#b200-max-throughput)
- [B200 Min-latency](#b200-min-latency)
- [Advanced Configuration](#advanced-configuration)
- [Exploring ISL/OSL combinations](#exploring-islosl-combinations)
- [Configuration tuning](#configuration-tuning)
- [Troubleshooting](#troubleshooting)
- [Out of memory issues](#out-of-memory-issues)
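The guide touched by this diff describes running Llama4 Maverick on B200 with the PyTorch workflow. As a rough illustration only (not part of this commit or the linked guide), a minimal launch with the TensorRT-LLM LLM API might look like the sketch below; the checkpoint id, parallelism, and prompt are illustrative assumptions.

```python
# Minimal sketch, assuming the TensorRT-LLM PyTorch LLM API is available.
# Model id and tensor_parallel_size are assumed values, not taken from the guide.
from tensorrt_llm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-4-Maverick-17B-128E-Instruct",  # assumed HF checkpoint
    tensor_parallel_size=8,                                 # assumed multi-GPU B200 node
)

prompts = ["Summarize the B200 max-throughput configuration in one sentence."]
outputs = llm.generate(prompts, SamplingParams(max_tokens=64))
for out in outputs:
    print(out.outputs[0].text)
```

For throughput- or latency-oriented tuning (the Max-throughput and Min-latency sections in the table of contents above), the same entry point is typically combined with the guide's recommended configuration files rather than these defaults.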