XQA - A set of optimized kernels for generation-phase MQA/GQA

Dependency

If you want to build & run unit tests, you need libgtest-dev and libeigen3-dev.
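On Debian/Ubuntu, for example, both packages can be installed with apt (an illustrative command, assuming an apt-based system):

  sudo apt-get install libgtest-dev libeigen3-dev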

Options

Kernel compile-time options can be found in defines.h; see the code comments for details. Runtime options for the unit tests can be modified in test.cpp.
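As a minimal sketch (assuming the chosen macro is #ifndef-guarded in defines.h; the macro name below is only an illustration, so check defines.h for the real option names), a compile-time option can be changed either by editing defines.h or by passing an extra preprocessor define to the CUDA compiler via CMake:

  # hypothetical example: override a kernel compile-time macro at configure time
  cmake .. -DCMAKE_BUILD_TYPE=Release -DCMAKE_CUDA_FLAGS="-DTOKENS_PER_PAGE=64"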

Build & run unit tests

You need to install libgtest-dev and libeigen3-dev before building. To build, use the normal cmake build steps:

  • mkdir build
  • cd build
  • cmake .. -DCMAKE_BUILD_TYPE=Release
  • cmake --build . -j

To run the unit tests, run ./unitTests. A few runtime options can be controlled with environment variables (see the example after this list):

  • XQA_ZERO_FILL: Set this to 1 to initialize input data with zeros instead of random numbers. This is useful for running perf tests quickly because it skips the slow random-data generation step; note that it may affect the measured performance.
  • XQA_USE_QGMMA: On Hopper, the TMA+QGMMA kernel (mha_sm90.cu) is used by default when possible. Set this to 0 to force using mha.cu instead.
  • XQA_NB_SUB_SEQ: The number of CUDA thread blocks used to handle one K/V head. A reasonable default is chosen automatically, but you can override it with this variable.
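For example (illustrative values only; 4 sub-sequences is a placeholder, not a recommendation):

  XQA_ZERO_FILL=1 XQA_USE_QGMMA=0 XQA_NB_SUB_SEQ=4 ./unitTests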

Generating cubins used in TensorRT-LLM

Run gen_cubins.py in the repo workspace.
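For example (a sketch assuming the script takes no required arguments; check the script source for details):

  python3 gen_cubins.py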