XQA - A set of optimized kernels for generation-phase MQA/GQA

Dependencies

If you want to build & run unit tests, you need libgtest-dev and libeigen3-dev.
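
On Debian-based systems, for example, both packages can be installed with apt:

  • sudo apt-get install libgtest-dev libeigen3-dev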

Options

Kernel compile-time options are listed in defines.h; see the code comments for details. Runtime options for the unit tests can be modified in test.cpp.

Build & run unit tests

You need to install libgtest-dev and libeigen3-dev before building. To build, use the normal CMake steps:

  • mkdir build
  • cd build
  • cmake .. -DCMAKE_BUILD_TYPE=Release
  • cmake --build . -j
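
Equivalently, with CMake 3.13 or newer, you can configure and build from the repo root without changing directories:

  • cmake -S . -B build -DCMAKE_BUILD_TYPE=Release
  • cmake --build build -j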

To run the unit tests, run ./unitTests. There are a few runtime options that can be controlled with environment variables; a sample invocation follows the list:

  • XQA_ZERO_FILL: Set this to 1 to initialize input data with zeros (instead of random numbers). This is useful if you want to run perf tests quickly and skip the slow random data generation step. Note that this affects measured performance.
  • XQA_USE_QGMMA: On Hopper, we try to use the TMA+QGMMA kernel (mha_sm90.cu) by default if possible. To force using mha.cu instead, set this to 0.
  • XQA_NB_SUB_SEQ: The number of CUDA thread blocks used to handle one K/V head. The default is usually reasonable, but you can override it with this variable.
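
For example, to run the tests quickly with zero-filled inputs while forcing the mha.cu kernel:

  • XQA_ZERO_FILL=1 XQA_USE_QGMMA=0 ./unitTests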

Generating cubins used in TensorRT-LLM

Run gen_cubins.py in the repo workspace.
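
For example (assuming a Python 3 interpreter; the exact invocation may differ in your environment):

  • python3 gen_cubins.py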