mirror of https://github.com/NVIDIA/TensorRT-LLM.git (synced 2026-01-14 06:27:45 +08:00)
[None][doc] Add the missing content for model support section and fix valid links for long_sequence.md (#8869)
Signed-off-by: nv-guomingz <137257613+nv-guomingz@users.noreply.github.com>
parent 271a981f1f
commit 65b793c77e
@@ -26,7 +26,7 @@ Note that if chunked context is enabled, please set the `max_num_tokens` to be a
 
 <div align="center">
 <figure>
-<img src="https://github.com/NVIDIA/TensorRT-LLM/raw/main/docs/source/blogs/media/feat_long_seq_chunked_attention.png" alt="feat_long_seq_chunked_attention" width="240" height="auto">
+<img src="https://github.com/NVIDIA/TensorRT-LLM/raw/main/docs/source/media/feat_long_seq_chunked_attention.png" alt="feat_long_seq_chunked_attention" width="240" height="auto">
 </figure>
 </div>
 <p align="center"><sub><em>Figure 1. Illustration of chunked attention</em></sub></p>
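Since this hunk concerns the chunked-context feature, a minimal sketch of enabling it through the Python LLM API may help as a companion; it is not part of this commit, and the argument names `enable_chunked_prefill` and `max_num_tokens`, as well as the checkpoint id, are assumptions based on recent LLM-API releases.

```python
# Minimal sketch (assumed API surface): enabling chunked context with the
# TensorRT LLM Python LLM API. Verify argument names against your release.
from tensorrt_llm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",  # illustrative checkpoint
    enable_chunked_prefill=True,  # process long context phases chunk by chunk
    max_num_tokens=8192,          # per-iteration token budget (see note above)
)

outputs = llm.generate(
    ["Summarize the following long document: ..."],
    SamplingParams(max_tokens=128),
)
print(outputs[0].outputs[0].text)
```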
@@ -43,7 +43,7 @@ Note that chunked attention can only be applied to context requests.
 
 <div align="center">
 <figure>
-<img src="https://github.com/NVIDIA/TensorRT-LLM/raw/main/docs/source/blogs/media/feat_long_seq_chunked_attention.png" alt="feat_long_seq_sliding_win_attn" width="240" height="auto">
+<img src="https://github.com/NVIDIA/TensorRT-LLM/raw/main/docs/source/media/feat_long_seq_chunked_attention.png" alt="feat_long_seq_sliding_win_attn" width="240" height="auto">
 </figure>
 </div>
 <p align="center"><sub><em>Figure 2. Illustration of sliding window attention</em></sub></p>
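For the sliding window attention figure above, a hedged sketch of capping the attention window through the KV-cache configuration follows; the `max_attention_window` field and the model id are assumptions for illustration, not something this commit establishes.

```python
# Minimal sketch (assumed API surface): limiting the per-layer attention window
# so the runtime keeps only a sliding window of KV-cache entries.
from tensorrt_llm import LLM
from tensorrt_llm.llmapi import KvCacheConfig

kv_cache_config = KvCacheConfig(
    max_attention_window=[4096],  # one value is reused for every layer
)

llm = LLM(
    model="google/gemma-3-1b-it",  # illustrative model with sliding-window layers
    kv_cache_config=kv_cache_config,
)
```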
@@ -25,6 +25,11 @@ TensorRT LLM delivers breakthrough performance on the latest NVIDIA GPUs:
 
+TensorRT LLM supports the latest and most popular LLM architectures:
+
+- **Language Models**: GPT-OSS, Deepseek-R1/V3, Llama 3/4, Qwen2/3, Gemma 3, Phi 4...
+- **Multi-modal Models**: LLaVA-NeXT, Qwen2-VL, VILA, Llama 3.2 Vision...
+
 TensorRT LLM strives to support the most popular models on **Day 0**.
 
 ### FP4 Support
 [NVIDIA B200 GPUs](https://www.nvidia.com/en-us/data-center/dgx-b200/), when used with TensorRT LLM, enable seamless loading of model weights in the new [FP4 format](https://developer.nvidia.com/blog/introducing-nvfp4-for-efficient-and-accurate-low-precision-inference/#what_is_nvfp4), allowing you to automatically leverage optimized FP4 kernels for efficient and accurate low-precision inference.
 
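To make the FP4 paragraph concrete, here is a minimal sketch of serving a pre-quantized NVFP4 checkpoint via the LLM API; the Hugging Face model id is an assumption (any published FP4 checkpoint would do), and kernel selection happens automatically per the text above.

```python
# Minimal sketch: loading an NVFP4-quantized checkpoint on Blackwell GPUs.
# The model id below is assumed for illustration; optimized FP4 kernels are
# picked up automatically from the checkpoint's quantization config.
from tensorrt_llm import LLM, SamplingParams

llm = LLM(model="nvidia/DeepSeek-R1-FP4")  # assumed FP4-quantized checkpoint

for out in llm.generate(["Explain NVFP4 in one sentence."],
                        SamplingParams(max_tokens=64)):
    print(out.outputs[0].text)
```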