TensorRT-LLM/tests/unittest/api_stability/references
Latest commit 89dabf5aa1 by Erin Ho, 2025-12-11 09:33:25 -08:00:
[TRTLLM-9736][feat] AsyncLLM and verl integ (#9353)
Signed-off-by: Liwei Ma <liweim@nvidia.com>
Signed-off-by: Yuan Tong <13075180+tongyuantongyu@users.noreply.github.com>
Signed-off-by: Superjomn <328693+Superjomn@users.noreply.github.com>
Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com>
Co-authored-by: Liwei Ma <liweim@nvidia.com>
Co-authored-by: Yuan Tong <13075180+tongyuantongyu@users.noreply.github.com>
Co-authored-by: Superjomn <328693+Superjomn@users.noreply.github.com>
File                           Last commit                                                                                           Updated
batched_logits_processor.yaml  test: [TRTLLM-4334] Create 1.0 criteria scope from API stability references (#3069)                   2025-03-26 18:14:35 +08:00
calib_config.yaml              test: [TRTLLM-4334] Create 1.0 criteria scope from API stability references (#3069)                   2025-03-26 18:14:35 +08:00
completion_output.yaml         [TRTLLM-4517] [feat] Additional model outputs (#7206)                                                 2025-10-13 15:33:18 +02:00
guided_decoding_params.yaml    feat: Support the Structural Tag in guided decoding (#4066)                                           2025-05-12 17:24:50 +08:00
llm.yaml                       [TRTLLM-9736][feat] AsyncLLM and verl integ (#9353)                                                   2025-12-11 09:33:25 -08:00
logits_processor.yaml          feat: LogitsProcessor in PyTorch backend (#3145)                                                      2025-05-01 14:15:30 -07:00
quant_config.yaml              [TRTLLM-6174][feat] Enable FP32 mamba ssm cache (#6574)                                               2025-08-10 16:27:51 -04:00
request_output.yaml            [None][feat] Add opentelemetry tracing (#5897)                                                        2025-10-27 18:51:07 +08:00
sampling_params.yaml           [None][feat] Support ignored prompt length for penalties via new sampling config parameter (#8127)   2025-10-27 13:12:31 -04:00
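The YAML files above appear to serve as committed snapshots of the public API surface (classes such as LLM, SamplingParams, and QuantConfig) that the api_stability unit tests diff against the live code to catch unintended signature changes. The sketch below illustrates that general technique only; it is a minimal, hypothetical example, and the function names and YAML schema here are invented, not the actual TensorRT-LLM test harness.

```python
# Minimal, hypothetical sketch of an API-stability check against a YAML
# reference file. All names here (snapshot_api, check_api_stability, the
# flat {method: [param, ...]} schema) are invented for illustration and do
# not reflect the real TensorRT-LLM api_stability harness.
import inspect

import yaml  # PyYAML


def snapshot_api(cls) -> dict:
    """Record the parameter names of each public callable on `cls`."""
    return {
        name: list(inspect.signature(member).parameters)
        for name, member in inspect.getmembers(cls, callable)
        if not name.startswith("_")
    }


def check_api_stability(cls, reference_path: str) -> list:
    """Diff the live class against a committed YAML snapshot; return drift."""
    with open(reference_path) as f:
        reference = yaml.safe_load(f)
    current = snapshot_api(cls)
    drift = []
    for name, params in reference.items():
        if name not in current:
            drift.append(f"removed: {name}")
        elif current[name] != params:
            drift.append(f"changed: {name}: {params} -> {current[name]}")
    return drift


# Usage (hypothetical): an empty list means the public API still matches the
# committed reference, e.g. references/sampling_params.yaml.
# assert check_api_stability(SamplingParams, "sampling_params.yaml") == []
```

Under this scheme, an intentional API change is made by regenerating the reference file in the same commit, so a stale snapshot, rather than a reviewer's memory, is what flags accidental breakage.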