Name | Last commit | Last commit date
serve | [None][fix] Fix iteration stats for spec-dec (#9855) | 2025-12-16 14:11:38 -08:00
run_llm_fp8_quant_llama_70b.py | [TRTLLM-5208][BREAKING CHANGE] chore: make pytorch LLM the default (#5312) | 2025-06-20 03:01:10 +08:00
run_llm_quickstart_atexit.py | [TRTLLM-5277] chore: refine llmapi examples for 1.0 (part1) (#5431) | 2025-07-01 19:06:41 +08:00
test_ad_guided_decoding.py | [#8245][feat] Autodeploy: Guided Decoding Support (#8551) | 2025-10-28 09:29:57 +08:00
test_ad_speculative_decoding.py | [#9241][feat] AutoDeploy: Support Eagle3 Speculative Decoding (#9869) | 2025-12-24 23:30:42 -05:00
test_bert.py | test: skip post blackwell (#6357) | 2025-08-01 13:10:14 -04:00
test_bindings.py | test: skip post blackwell (#6357) | 2025-08-01 13:10:14 -04:00
test_chatglm.py | test: skip post blackwell (#6357) | 2025-08-01 13:10:14 -04:00
test_commandr.py | [https://nvbugs/5410279][test] resubmit timeout refactor (#6337) | 2025-08-05 16:39:25 +08:00
test_draft_target_model.py | test: skip post blackwell (#6357) | 2025-08-01 13:10:14 -04:00
test_eagle.py | test: skip post blackwell (#6357) | 2025-08-01 13:10:14 -04:00
test_enc_dec.py | test: skip post blackwell (#6357) | 2025-08-01 13:10:14 -04:00
test_exaone.py | [https://nvbugs/5410279][test] resubmit timeout refactor (#6337) | 2025-08-05 16:39:25 +08:00
test_flux.py | [None][chore] Update the Flux autodeploy example (#8434) | 2025-11-18 14:16:04 -08:00
test_gemma.py | [TRTLLM-8638][fix] fix test issues (#8557) | 2025-10-24 02:16:55 -04:00
test_gpt.py | [https://nvbugs/5552132][fix] Enable LoRa for GPT OSS Torch (#8253) | 2025-12-03 15:42:15 +01:00
test_gptj.py | test: skip post blackwell (#6357) | 2025-08-01 13:10:14 -04:00
test_granite.py | test: skip post blackwell (#6357) | 2025-08-01 13:10:14 -04:00
test_internlm.py | test: skip post blackwell (#6357) | 2025-08-01 13:10:14 -04:00
test_llama.py | [None][fix] Waive gb200 (#9580) | 2025-12-02 12:09:21 +08:00
test_llm_api_with_mpi.py | Update (#2978) | 2025-03-23 16:39:35 +08:00
test_mamba.py | test: skip post blackwell (#6357) | 2025-08-01 13:10:14 -04:00
test_medusa.py | test: skip post blackwell (#6357) | 2025-08-01 13:10:14 -04:00
test_mistral.py | [TRTLLM-6496][feat] Add LoRa Torch tests for the latest NIM model list (#6806) | 2025-10-03 12:10:48 -07:00
test_mixtral.py | test: skip post blackwell (#6357) | 2025-08-01 13:10:14 -04:00
test_multimodal.py | test: skip post blackwell (#6357) | 2025-08-01 13:10:14 -04:00
test_nemotron_nas.py | [TRTLLM-6496][feat] Add LoRa Torch tests for the latest NIM model list (#6806) | 2025-10-03 12:10:48 -07:00
test_nemotron.py | test: skip post blackwell (#6357) | 2025-08-01 13:10:14 -04:00
test_ngram.py | test: skip post blackwell (#6357) | 2025-08-01 13:10:14 -04:00
test_openai.py | Update (#2978) | 2025-03-23 16:39:35 +08:00
test_phi.py | [TRTLLM-6496][feat] Add LoRa Torch tests for the latest NIM model list (#6806) | 2025-10-03 12:10:48 -07:00
test_qwen2audio.py | move the reset models into examples/models/core directory (#3555) | 2025-04-19 20:48:59 -07:00
test_qwen.py | test: skip post blackwell (#6357) | 2025-08-01 13:10:14 -04:00
test_qwenvl.py | test: skip post blackwell (#6357) | 2025-08-01 13:10:14 -04:00
test_ray.py | [TRTLLM-9737][chore] Add rl perf reproduce script and enhance the robustness of Ray tests (#9939) | 2025-12-24 15:27:01 +08:00
test_recurrentgemma.py | test: skip post blackwell (#6357) | 2025-08-01 13:10:14 -04:00
test_redrafter.py | test: skip post blackwell (#6357) | 2025-08-01 13:10:14 -04:00
test_whisper.py | [https://nvbugs/5747930][fix] Use offline tokenizer for whisper models. (#10121) | 2025-12-20 09:42:07 +08:00