TensorRT-LLM/cpp/tensorrt_llm/kernels/decoderMaskedMultiheadAttention/decoderXQAImplJIT
Ming Wei ed887940d4
infra: open source XQA kernels (#3762)
Replace libtensorrt_llm_nvrtc_wrapper.so with its source code, which
consists of two parts:

1. NVRTC glue code
2. XQA kernel code

During the TensorRT-LLM build, the XQA kernel code is embedded as C++ arrays via
gen_cpp_header.py and passed to NVRTC for JIT compilation (see the sketch below).

Signed-off-by: Ming Wei <2345434+ming-wei@users.noreply.github.com>
2025-04-30 18:05:15 +08:00
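
As an illustration of that flow, here is a minimal sketch, not taken from the TensorRT-LLM sources: a kernel source string embedded as a C++ array (standing in for the headers that gen_cpp_header.py generates) is handed to NVRTC at runtime and JIT-compiled to a cubin. The dummy_xqa_kernel body, the sm_90 architecture flag, and the NVRTC_CHECK macro are illustrative placeholders, not names from this directory.

// Minimal NVRTC JIT sketch (illustrative only; assumes CUDA 11.2+ for cubin output).
#include <nvrtc.h>
#include <cstdio>
#include <cstdlib>
#include <vector>

#define NVRTC_CHECK(call)                                                            \
    do {                                                                             \
        nvrtcResult status_ = (call);                                                \
        if (status_ != NVRTC_SUCCESS) {                                              \
            std::fprintf(stderr, "NVRTC error: %s\n", nvrtcGetErrorString(status_)); \
            std::exit(1);                                                            \
        }                                                                            \
    } while (0)

// Stand-in for a generated header: the kernel source embedded as a C++ array.
static const char kXqaKernelSource[] = R"(
extern "C" __global__ void dummy_xqa_kernel(float* out, const float* in)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    out[i] = in[i];
}
)";

int main()
{
    // Wrap the embedded source in an NVRTC program object.
    nvrtcProgram prog;
    NVRTC_CHECK(nvrtcCreateProgram(&prog, kXqaKernelSource, "xqa_kernel.cu", 0, nullptr, nullptr));

    // Real code would pick the architecture from the runtime device; sm_90 is a placeholder.
    const char* opts[] = {"--gpu-architecture=sm_90"};
    nvrtcResult compileResult = nvrtcCompileProgram(prog, 1, opts);

    // The compilation log carries the diagnostics when the JIT compile fails.
    size_t logSize = 0;
    NVRTC_CHECK(nvrtcGetProgramLogSize(prog, &logSize));
    if (logSize > 1)
    {
        std::vector<char> log(logSize);
        NVRTC_CHECK(nvrtcGetProgramLog(prog, log.data()));
        std::fprintf(stderr, "%s\n", log.data());
    }
    if (compileResult != NVRTC_SUCCESS)
    {
        return 1;
    }

    // Retrieve the JIT-compiled cubin; a cache such as the one suggested by
    // cubinObjRegistry.h below would keep this and load it via the CUDA driver API.
    size_t cubinSize = 0;
    NVRTC_CHECK(nvrtcGetCUBINSize(prog, &cubinSize));
    std::vector<char> cubin(cubinSize);
    NVRTC_CHECK(nvrtcGetCUBIN(prog, cubin.data()));

    NVRTC_CHECK(nvrtcDestroyProgram(&prog));
    std::printf("Compiled cubin: %zu bytes\n", cubinSize);
    return 0;
}

Building this sketch requires linking against the NVRTC library (e.g. -lnvrtc). The actual kernels in this directory are far more involved, and their embedded source headers are produced by gen_cpp_header.py at build time rather than written by hand.
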
File                  | Last commit                                                  | Date
nvrtcWrapper          | infra: open source XQA kernels (#3762)                       | 2025-04-30 18:05:15 +08:00
compileEngine.cpp     | Revert "infra: move nvrtc_wrapper to conan (#3282)" (#3573)  | 2025-04-15 22:45:13 +08:00
compileEngine.h       | refactor: Clean up CMakeLists.txt (#3479)                    | 2025-04-18 14:39:29 +08:00
cubinObj.cpp          | Update TensorRT-LLM (#2873)                                  | 2025-03-11 21:13:42 +08:00
cubinObj.h            | Update TensorRT-LLM (#2755)                                  | 2025-02-11 03:01:00 +00:00
cubinObjRegistry.h    | fix: segfault in cudaDriverWrapper (#3017)                   | 2025-04-02 08:55:19 +02:00
decoderXQAImplJIT.cpp | feat: add CGA reduction fmha kernels on Blackwell. (#3763)   | 2025-04-29 10:43:54 +08:00
decoderXQAImplJIT.h   | Support speculative decoding with Hopper XQA (#3269)         | 2025-04-07 17:14:34 +08:00
kernelUtils.cpp       | Support speculative decoding with Hopper XQA (#3269)         | 2025-04-07 17:14:34 +08:00
kernelUtils.h         | Update TensorRT-LLM (#2755)                                  | 2025-02-11 03:01:00 +00:00
serializationUtils.h  | Update TensorRT-LLM (#1554)                                  | 2024-05-07 23:34:28 +08:00