TensorRT-LLM/cpp/tensorrt_llm/kernels/decoderMaskedMultiheadAttention/decoderXQAImplJIT
Latest commit 51652b9b2b by dongjiyingdjy
feat: add PositionEmbeddingType=0 to xqa support (#4934)
Signed-off-by: Jiying Dong <87510204+dongjiyingdjy@users.noreply.github.com>
2025-06-05 21:50:42 +08:00
| Name                  | Last commit                                                                 | Date                      |
| --------------------- | --------------------------------------------------------------------------- | ------------------------- |
| nvrtcWrapper          | infra: open source XQA kernels (#3762)                                      | 2025-04-30 18:05:15 +08:00 |
| compileEngine.cpp     | feat: add PositionEmbeddingType=0 to xqa support (#4934)                    | 2025-06-05 21:50:42 +08:00 |
| compileEngine.h       | refactor: Clean up CMakeLists.txt (#3479)                                   | 2025-04-18 14:39:29 +08:00 |
| cubinObj.cpp          | fix: rename some terms (#4534)                                              | 2025-05-23 23:23:49 +08:00 |
| cubinObj.h            | Update TensorRT-LLM (#2755)                                                 | 2025-02-11 03:01:00 +00:00 |
| cubinObjRegistry.h    | fix: segfault in cudaDriverWrapper (#3017)                                  | 2025-04-02 08:55:19 +02:00 |
| decoderXQAImplJIT.cpp | fix: update checks that broke medusa tests when use_py_session=True (#4339) | 2025-05-15 15:47:28 -07:00 |
| decoderXQAImplJIT.h   | Support speculative decoding with Hopper XQA (#3269)                        | 2025-04-07 17:14:34 +08:00 |
| kernelUtils.cpp       | feat: add PositionEmbeddingType=0 to xqa support (#4934)                    | 2025-06-05 21:50:42 +08:00 |
| kernelUtils.h         | Update TensorRT-LLM (#2755)                                                 | 2025-02-11 03:01:00 +00:00 |
| serializationUtils.h  | Update TensorRT-LLM (#1554)                                                 | 2024-05-07 23:34:28 +08:00 |