[None][docs] Add --config preference over --extra_llm_api_options in CODING_GUIDELINES.md (#10426)

Signed-off-by: Venky Ganesh <23023424+venkywonka@users.noreply.github.com>
Venky 2026-01-06 08:35:47 +05:30 committed by GitHub
parent 46f035befe
commit aa1fe931de
3 changed files with 13 additions and 5 deletions


@@ -487,6 +487,14 @@ else:
f.read()
```
## Documentation Guidelines
#### CLI Options in Documentation
1. When documenting CLI commands for `trtllm-serve`, `trtllm-bench`, `trtllm-eval`, or similar tools, prefer using `--config` over `--extra_llm_api_options` for specifying configuration files.
- `--config` is the preferred, shorter alias for `--extra_llm_api_options`; a sketch of a typical configuration file follows this list.
- Example: `trtllm-serve --model <model_path> --config config.yaml` (preferred)
- Avoid: `trtllm-serve --model <model_path> --extra_llm_api_options config.yaml`
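
For illustration only, the file passed to `--config` is a YAML file of LLM API options. The keys below are an assumed sketch of common options; treat the names and values as placeholders rather than a definitive reference.

```yaml
# config.yaml -- illustrative LLM API options; key names and values are
# assumptions for this sketch, not prescribed by the guideline above.
enable_chunked_prefill: false
kv_cache_config:
  free_gpu_memory_fraction: 0.9
```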
## NVIDIA Copyright
1. All TensorRT-LLM Open Source Software code should contain an NVIDIA copyright header that includes the year of its latest meaningful modification. The following block of text should be prepended to the top of all files. This includes .cpp, .h, .cu, .py, and any other source files which are compiled or interpreted.


@@ -88,13 +88,13 @@ enable_chunked_prefill: false
Run the command with the config file:
```bash
-trtllm-bench/trtllm-serve --model <model_path> --extra_llm_api_options extra_config.yaml ...
+trtllm-bench/trtllm-serve --model <model_path> --config extra_config.yaml ...
```
For example, users can evaluate a model with trtllm-eval on the LongBenchV2 task like this:
```bash
-trtllm-eval --model <path_to_model> --extra_llm_api_options extra_config.yaml longbench_v2 --max_output_length 1024 ...
+trtllm-eval --model <path_to_model> --config extra_config.yaml longbench_v2 --max_output_length 1024 ...
```
## Developer Guide


@@ -83,7 +83,7 @@ TRTLLM_ENABLE_PDL=1 trtllm-serve <model_path> \
--port 8000 \
--backend _autodeploy \
--trust_remote_code \
- --extra_llm_api_options nano_v3.yaml
+ --config nano_v3.yaml
# OR you can launch trtllm-serve to support reasoning content parsing.
TRTLLM_ENABLE_PDL=1 trtllm-serve <model_path> \
@@ -92,7 +92,7 @@ TRTLLM_ENABLE_PDL=1 trtllm-serve <model_path> \
--backend _autodeploy \
--trust_remote_code \
--reasoning_parser nano-v3 \
- --extra_llm_api_options nano_v3.yaml
+ --config nano_v3.yaml
# OR you can launch trtllm-serve to support tool-calling.
TRTLLM_ENABLE_PDL=1 trtllm-serve <model_path> \
@@ -102,7 +102,7 @@ TRTLLM_ENABLE_PDL=1 trtllm-serve <model_path> \
--trust_remote_code \
--reasoning_parser nano-v3 \
--tool_parser qwen3_coder \
- --extra_llm_api_options nano_v3.yaml
+ --config nano_v3.yaml
```
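
The contents of `nano_v3.yaml` are not shown in this hunk; as a purely hypothetical sketch, a file passed via `--config` carries LLM API options along these lines:

```yaml
# nano_v3.yaml -- hypothetical contents for illustration; the actual file may
# use different keys and values.
kv_cache_config:
  free_gpu_memory_fraction: 0.9
enable_chunked_prefill: true
```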
For the client: