TensorRT-LLM/cpp/tensorrt_llm/kernels/decoderMaskedMultiheadAttention/decoderXQAImplJIT
Latest commit 6edaa23c1c: [None][feat] Multi-block mode for Hopper spec dec XQA kernel (#4416)
Author: Jhao-Ting Chen <jhaotingc@nvidia.com>
Date: 2025-08-03 14:31:33 -07:00
Name                      Last commit                                                                       Date
nvrtcWrapper/             [TRTLLM-5366][feat]Add support for sm121 (#5524)                                  2025-07-08 14:27:00 -07:00
compileEngine.cpp         [feat] Support XQA-based MLA on SM120 (#4858)                                     2025-06-06 22:32:49 +08:00
compileEngine.h           refactor: Clean up CMakeLists.txt (#3479)                                         2025-04-18 14:39:29 +08:00
cubinObj.cpp              [nvbug 5333996 ][fix] Unload XQA cubins early to avoid static lifetime (#5133)   2025-06-13 15:53:29 +08:00
cubinObj.h                [nvbug 5333996 ][fix] Unload XQA cubins early to avoid static lifetime (#5133)   2025-06-13 15:53:29 +08:00
cubinObjRegistry.h        fix: segfault in cudaDriverWrapper (#3017)                                        2025-04-02 08:55:19 +02:00
decoderXQAImplJIT.cpp     [None][feat] Multi-block mode for Hopper spec dec XQA kernel (#4416)              2025-08-03 14:31:33 -07:00
decoderXQAImplJIT.h       [nvbug 5333996 ][fix] Unload XQA cubins early to avoid static lifetime (#5133)   2025-06-13 15:53:29 +08:00
kernelUtils.cpp           [feat] Support XQA-based MLA on SM120 (#4858)                                     2025-06-06 22:32:49 +08:00
kernelUtils.h             [feat] Support XQA-based MLA on SM120 (#4858)                                     2025-06-06 22:32:49 +08:00
serializationUtils.h      Update TensorRT-LLM (#1554)                                                       2024-05-07 23:34:28 +08:00
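
Taken together, these files outline the JIT path for the XQA decoder kernels: nvrtcWrapper/ wraps runtime compilation, compileEngine drives it, and cubinObj plus cubinObjRegistry cache the compiled cubins so each kernel configuration is built only once. The snippet below is a minimal, hypothetical sketch of that general compile-and-cache pattern using the public NVRTC API; XqaKernelKey, compileToCubin, getOrCompile, and the cache layout are invented for the example and do not reflect the actual TensorRT-LLM sources.

```cpp
// Minimal sketch of a "compile once, cache the cubin" pattern with NVRTC.
// Hypothetical: XqaKernelKey, compileToCubin and getOrCompile are invented
// names; the real cubinObjRegistry in this directory is more involved.
#include <nvrtc.h>

#include <map>
#include <stdexcept>
#include <string>
#include <tuple>
#include <vector>

struct XqaKernelKey // hypothetical key: one cubin per (SM version, head size)
{
    int smVersion;
    int headSize;

    bool operator<(XqaKernelKey const& other) const
    {
        return std::tie(smVersion, headSize) < std::tie(other.smVersion, other.headSize);
    }
};

// Compile CUDA source to a cubin for the given SM version via NVRTC.
std::vector<char> compileToCubin(std::string const& src, int smVersion)
{
    nvrtcProgram prog{};
    if (nvrtcCreateProgram(&prog, src.c_str(), "xqa_kernel.cu", 0, nullptr, nullptr) != NVRTC_SUCCESS)
        throw std::runtime_error("nvrtcCreateProgram failed");

    std::string archOpt = "--gpu-architecture=sm_" + std::to_string(smVersion);
    char const* opts[] = {archOpt.c_str()};
    if (nvrtcCompileProgram(prog, 1, opts) != NVRTC_SUCCESS)
    {
        nvrtcDestroyProgram(&prog);
        throw std::runtime_error("nvrtcCompileProgram failed");
    }

    size_t cubinSize = 0;
    nvrtcGetCUBINSize(prog, &cubinSize);
    std::vector<char> cubin(cubinSize);
    nvrtcGetCUBIN(prog, cubin.data());
    nvrtcDestroyProgram(&prog);
    return cubin;
}

// Registry lookup: compile on the first request for a key, reuse afterwards.
std::vector<char> const& getOrCompile(
    std::map<XqaKernelKey, std::vector<char>>& registry, XqaKernelKey const& key, std::string const& src)
{
    auto it = registry.find(key);
    if (it == registry.end())
        it = registry.emplace(key, compileToCubin(src, key.smVersion)).first;
    return it->second;
}
```

Loading the cached cubin into a CUmodule with cuModuleLoadData and deciding when to unload it (the concern behind the "Unload XQA cubins early to avoid static lifetime" fix listed above) is deliberately left out of the sketch.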