Mirror of https://github.com/NVIDIA/TensorRT-LLM.git (synced 2026-01-14 06:27:45 +08:00)
Latest commit:

* add cases for rtx_pro_6000 and update test filter
* amend a typo in model llama_v3.1_405b_instruct fp4 and add more cases for rtx pro 6000 and waive_list

Signed-off-by: Ruodi <200874449+ruodil@users.noreply.github.com>
Co-authored-by: Larry <197874197+LarryXFly@users.noreply.github.com>
| File |
|---|
| .gitignore |
| examples_test_list.txt |
| llm_multinodes_function_test.txt |
| llm_release_gb20x.txt |
| llm_release_perf_multinode_test.txt |
| llm_release_rtx_pro_6000.txt |
| llm_sanity_test.txt |
| trt_llm_integration_perf_sanity_test.yml |
| trt_llm_integration_perf_test.yml |
| trt_llm_release_perf_cluster_test.yml |
| trt_llm_release_perf_sanity_test.yml |
| trt_llm_release_perf_test.yml |