TensorRT-LLM/cpp/tensorrt_llm/kernels/decoderMaskedMultiheadAttention/decoderXQAImplJIT
Latest commit: 10d5af06e0 — [NVBUG-5291971] JIT path for XQA (#4675)
Signed-off-by: Faraz Khoubsirat <58580514+farazkh80@users.noreply.github.com>
2025-06-02 16:24:59 +02:00
File                   Last commit                                                                   Date
nvrtcWrapper/          [NVBUG-5291971] JIT path for XQA (#4675)                                      2025-06-02 16:24:59 +02:00
compileEngine.cpp      Revert "infra: move nvrtc_wrapper to conan (#3282)" (#3573)                   2025-04-15 22:45:13 +08:00
compileEngine.h        refactor: Clean up CMakeLists.txt (#3479)                                     2025-04-18 14:39:29 +08:00
cubinObj.cpp           Update TensorRT-LLM (#2873)                                                   2025-03-11 21:13:42 +08:00
cubinObj.h             Update TensorRT-LLM (#2755)                                                   2025-02-11 03:01:00 +00:00
cubinObjRegistry.h     fix: segfault in cudaDriverWrapper (#3017)                                    2025-04-02 08:55:19 +02:00
decoderXQAImplJIT.cpp  fix: update checks that broke medusa tests when use_py_session=True (#4339)   2025-05-15 15:47:28 -07:00
decoderXQAImplJIT.h    Support speculative decoding with Hopper XQA (#3269)                          2025-04-07 17:14:34 +08:00
kernelUtils.cpp        Support speculative decoding with Hopper XQA (#3269)                          2025-04-07 17:14:34 +08:00
kernelUtils.h          Update TensorRT-LLM (#2755)                                                   2025-02-11 03:01:00 +00:00
serializationUtils.h   Update TensorRT-LLM (#1554)                                                   2024-05-07 23:34:28 +08:00