TensorRT-LLM/tests/integration/test_lists/qa

Latest commit: 5eefdf2c75 — tests: Add llama4 functional cases (#6392)
Author: Ivy Zhang
Signed-off-by: Ivy Zhang <25222398+crazydemo@users.noreply.github.com>
Date: 2025-08-04 11:19:58 +08:00
File | Last commit | Date
.gitignore | Update (#2978) | 2025-03-23 16:39:35 +08:00
benchmark_test_list.txt | tests: add TestNemotronH cuda graph tests (#6390) | 2025-07-30 18:45:58 +10:00
examples_test_list.txt | tests: Add llama4 functional cases (#6392) | 2025-08-04 11:19:58 +08:00
llm_multinodes_function_test.txt | tests: add multi nodes tests (#5196) | 2025-06-18 18:08:04 +08:00
llm_release_digits_func.txt | [None][infra] add eagle3 one model accuracy tests (#6264) | 2025-08-02 16:07:46 -07:00
llm_release_digits_perf.txt | [TRTLLM-5366][feat] Add support for sm121 (#5524) | 2025-07-08 14:27:00 -07:00
llm_release_gb20x.txt | [NVBUG-5304516/5319741] Qwen2.5VL FP8 support (#5029) | 2025-07-09 23:16:42 +08:00
llm_release_perf_multinode_test.txt | chore: Mass integration of release/0.18 (#3421) | 2025-04-16 10:03:29 +08:00
llm_release_rtx_pro_6000.txt | [https://nvbugs/5340941][https://nvbugs/5375785] - fix: Wrap attentio… (#6355) | 2025-08-01 07:38:06 -04:00
llm_sanity_test.txt | [TRTLLM-6473][test] add speculative decoding and ep load balance cases into QA test list (#6436) | 2025-08-03 22:11:26 -04:00
llm_triton_integration_test.txt | chore: Mass integration of release/0.20 (#4898) | 2025-06-08 23:26:26 +08:00
trt_llm_integration_perf_sanity_test.yml | [TRTLLM-5171] chore: Remove GptSession/V1 from TRT workflow (#4092) | 2025-05-14 23:10:04 +02:00
trt_llm_integration_perf_test.yml | [TRTLLM-6657][feat] Add LoRA support for Gemma3 (#6371) | 2025-08-01 09:19:54 -04:00
trt_llm_release_perf_cluster_test.yml | test: organize perf cases and add missing perflab cases in qa test list (#6283) | 2025-07-28 20:33:32 +10:00
trt_llm_release_perf_l2_test.yml | test: fix some test failures and add llama_nemotron models in perf sanity test, add more torch cases (#5693) | 2025-07-14 17:17:30 +08:00
trt_llm_release_perf_sanity_test.yml | test: organize perf cases and add missing perflab cases in qa test list (#6283) | 2025-07-28 20:33:32 +10:00
trt_llm_release_perf_test.yml | test: [nvbug 5415268] add kv_cache_free_gpu_mem_fraction param and llama4 rcca cases (#6430) | 2025-07-29 15:52:45 +10:00