# Recommended LLM API Configuration Settings

This directory contains recommended LLM API performance settings for popular models. They can be used out of the box with `trtllm-serve` via the `--extra_llm_api_options` CLI flag (see the example below), or adjusted to fit your specific use case.

The following configuration files are available:

- `deepseek-r1-deepgemm.yaml`
- `deepseek-r1-latency.yaml`
- `deepseek-r1-throughput.yaml`
- `gpt-oss-120b-latency.yaml`
- `gpt-oss-120b-throughput.yaml`
- `llama-3.3-70b.yaml`
- `llama-4-scout.yaml`
- `qwen3-disagg-prefill.yaml`
- `qwen3-next.yaml`
- `qwen3.yaml`
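As a minimal sketch of how these files are consumed, the invocation below serves DeepSeek-R1 with the throughput-oriented config from this directory; the model ID `deepseek-ai/DeepSeek-R1` and the relative config path are illustrative assumptions, not prescribed by this README:

```bash
# Illustrative only: the model ID and config path are assumptions.
# --extra_llm_api_options points trtllm-serve at a YAML file of LLM API settings.
trtllm-serve deepseek-ai/DeepSeek-R1 \
  --extra_llm_api_options ./deepseek-r1-throughput.yaml
```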
For model-specific deployment guides, please refer to the official documentation.