TensorRT-LLM/tests/integration/test_lists/qa
Latest commit: 9632dba02e by Wanli Jiang, 2025-07-30 09:20:16 -07:00
feat: TRTLLM-6450 update long rope for phi3.5/phi4-mini/phi4-mm (#6353)
Signed-off-by: Wanli Jiang <35160485+Wanli-Jiang@users.noreply.github.com>
Name                                      Last commit (date)
.gitignore                                Update (#2978)  2025-03-23 16:39:35 +08:00
benchmark_test_list.txt                   tests: add TestNemotronH cuda graph tests (#6390)  2025-07-30 18:45:58 +10:00
examples_test_list.txt                    feat: TRTLLM-6450 update long rope for phi3.5/phi4-mini/phi4-mm (#6353)  2025-07-30 09:20:16 -07:00
llm_multinodes_function_test.txt          tests: add multi nodes tests (#5196)  2025-06-18 18:08:04 +08:00
llm_release_digits_func.txt               [NVBUG-5304516/5319741] Qwen2.5VL FP8 support (#5029)  2025-07-09 23:16:42 +08:00
llm_release_digits_perf.txt               [TRTLLM-5366][feat] Add support for sm121 (#5524)  2025-07-08 14:27:00 -07:00
llm_release_gb20x.txt                     [NVBUG-5304516/5319741] Qwen2.5VL FP8 support (#5029)  2025-07-09 23:16:42 +08:00
llm_release_perf_multinode_test.txt       chore: Mass integration of release/0.18 (#3421)  2025-04-16 10:03:29 +08:00
llm_release_rtx_pro_6000.txt              test: update test list for RTX6KD (#6213)  2025-07-22 18:55:24 +08:00
llm_sanity_test.txt                       fix: support mixture of text & multimodal prompts (#6345)  2025-07-30 08:52:31 +08:00
llm_triton_integration_test.txt           chore: Mass integration of release/0.20 (#4898)  2025-06-08 23:26:26 +08:00
trt_llm_integration_perf_sanity_test.yml  [TRTLLM-5171] chore: Remove GptSession/V1 from TRT workflow (#4092)  2025-05-14 23:10:04 +02:00
trt_llm_integration_perf_test.yml         [TRTLLM-5171] chore: Remove GptSession/V1 from TRT workflow (#4092)  2025-05-14 23:10:04 +02:00
trt_llm_release_perf_cluster_test.yml     test: organize perf cases and add missing perflab cases in qa test list (#6283)  2025-07-28 20:33:32 +10:00
trt_llm_release_perf_l2_test.yml          test: fix some test failure and add llama_nemotron models in perf sanity test, add more torch cases (#5693)  2025-07-14 17:17:30 +08:00
trt_llm_release_perf_sanity_test.yml      test: organize perf cases and add missing perflab cases in qa test list (#6283)  2025-07-28 20:33:32 +10:00
trt_llm_release_perf_test.yml             test: [nvbug 5415268] add kv_cache_free_gpu_mem_fraction param and llama4 rcca cases (#6430)  2025-07-29 15:52:45 +10:00