TensorRT-LLMs/tests/unittest/trt/attention
Pamela Peng 6cdfc54883
feat: Add FP8 support for SM 120 (#3248)
* Allow FP8 on SM120
* fix sm121
* fix
* fix pre-commit
* review update

Signed-off-by: Pamela Peng <179191831+pamelap-nvidia@users.noreply.github.com>
Co-authored-by: Sharan Chetlur <116769508+schetlur-nv@users.noreply.github.com>
2025-04-14 16:05:41 -07:00
test_bert_attention.py        — test: reorganize tests folder hierarchy (#2996) — 2025-03-27 12:07:53 +08:00
test_gpt_attention_IFB.py     — test: reorganize tests folder hierarchy (#2996) — 2025-03-27 12:07:53 +08:00
test_gpt_attention_no_cache.py — test: reorganize tests folder hierarchy (#2996) — 2025-03-27 12:07:53 +08:00
test_gpt_attention.py         — feat: Add FP8 support for SM 120 (#3248) — 2025-04-14 16:05:41 -07:00
test_sage_attention.py        — test: reorganize tests folder hierarchy (#2996) — 2025-03-27 12:07:53 +08:00