TensorRT-LLM/tests/integration/test_lists
2ez4bz 2480aedb73 [TRTLLM-5252][feat] Add fp8 support for Mistral Small 3.1 (#6731)
This commit adds initial FP8 support to Mistral Small 3.1 by:

* disabling quantization for the vision sub-model, since `modelopt` does not
  support quantizing it (yet) — see the sketch after this list.
* extending existing accuracy tests to use a modelopt-produced FP8
  checkpoint.

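As a rough illustration of the vision-sub-model exclusion described above, the sketch below shows one way to disable quantizers for a vision tower via `modelopt`'s pattern-based quantization config. It is a minimal sketch under assumptions: the wildcard pattern `*vision*`, the Hugging Face loader and checkpoint name, and the calibration stub are illustrative, not the exact code in this PR.

```python
# Hedged sketch: keep the language model in FP8 while leaving the vision
# sub-model unquantized, using modelopt's pattern-based quant config.
# The "*vision*" pattern, model loader, and checkpoint name are assumptions
# for illustration, not the code from this commit.
import copy

import modelopt.torch.quantization as mtq
from transformers import AutoModelForImageTextToText  # assumed loader

model = AutoModelForImageTextToText.from_pretrained(
    "mistralai/Mistral-Small-3.1-24B-Instruct-2503"  # assumed checkpoint id
)

# Start from modelopt's default FP8 recipe and disable every quantizer whose
# module name matches the vision tower.
quant_cfg = copy.deepcopy(mtq.FP8_DEFAULT_CFG)
quant_cfg["quant_cfg"]["*vision*"] = {"enable": False}  # assumed name pattern


def calibrate(m):
    """Run a handful of calibration batches through the model (omitted here)."""
    pass


model = mtq.quantize(model, quant_cfg, forward_loop=calibrate)
```

Disabling by name pattern keeps the language-model weights in FP8 while the vision tower stays in its original precision, which matches the intent of the first bullet.
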
Signed-off-by: William Zhang <133824995+2ez4bz@users.noreply.github.com>
Signed-off-by: Wangshanshan <30051912+dominicshanshan@users.noreply.github.com>
2025-09-01 11:02:31 +08:00
dev Update (#2978) 2025-03-23 16:39:35 +08:00
qa [TRTLLM-5252][feat] Add fp8 support for Mistral Small 3.1 (#6731) 2025-09-01 11:02:31 +08:00
test-db [TRTLLM-5252][feat] Add fp8 support for Mistral Small 3.1 (#6731) 2025-09-01 11:02:31 +08:00
waives.txt [https://nvbugs/5375594][fix] fix oom issue on structural_tag test case (#6838) 2025-09-01 11:02:31 +08:00