# XQA - A set of optimized kernels for generation-phase MQA/GQA

## Dependency
If you want to build & run unit tests, you need libgtest-dev and libeigen3-dev.
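For example, on a Debian/Ubuntu system (assumed here; package names may differ on other distributions), the dependencies can be installed with apt:

```bash
# Assumes Debian/Ubuntu; use your distribution's package manager otherwise.
sudo apt-get update
sudo apt-get install -y libgtest-dev libeigen3-dev
```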
## Options

Kernel compile-time options are in defines.h; see the code comments there for details. Runtime options for the unit tests can be modified in test.cpp.
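As a sketch of how one might inspect and adjust the compile-time options (the macro name below is only an illustrative assumption; check defines.h for the actual macros, their guards, and their defaults):

```bash
# Inspect the available compile-time macros and their defaults.
grep -n "#define" defines.h

# If a macro in defines.h is guarded by #ifndef (verify this first), it can be
# overridden at configure time. SPEC_DEC is used here purely as an example name.
cmake .. -DCMAKE_BUILD_TYPE=Release -DCMAKE_CUDA_FLAGS="-DSPEC_DEC=1"
```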
## Build & run unit tests

Install libgtest-dev and libeigen3-dev first (see Dependency above), then use the normal CMake build steps:

```bash
mkdir build
cd build
cmake .. -DCMAKE_BUILD_TYPE=Release
cmake --build . -j
```
To run the unit tests, run ./unitTests. A few runtime options can be controlled with environment variables (see the example after this list):

- XQA_ZERO_FILL: Set this to 1 to initialize input data with zeros instead of random numbers. This is useful for running perf tests quickly because it skips the slow random data generation step; note that it does affect the measured performance.
- XQA_USE_QGMMA: On Hopper, we try to use the TMA+QGMMA kernel (mha_sm90.cu) by default if possible. To force using mha.cu, set this to 0.
- XQA_NB_SUB_SEQ: The number of CUDA thread blocks used to handle one K/V head. We pick a reasonable default, but you can use this variable to override it.
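For example (the specific values below are arbitrary illustrations):

```bash
# Zero-filled inputs and the non-QGMMA path (mha.cu), for a quick run.
XQA_ZERO_FILL=1 XQA_USE_QGMMA=0 ./unitTests

# Manually choose the number of thread blocks per K/V head.
XQA_NB_SUB_SEQ=4 ./unitTests

# The binary links GoogleTest, so standard gtest flags also work, e.g.:
./unitTests --gtest_list_tests
```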
## Generating cubins used in TensorRT-LLM

Run gen_cubins.py in the repo workspace.
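A minimal invocation sketch, assuming the script is run from this kernel directory (cpp/kernels/xqa in the TensorRT-LLM tree) and needs no mandatory arguments; check the script itself for any required options or environment setup:

```bash
# Generate the cubins consumed by TensorRT-LLM (path and arguments assumed).
python3 gen_cubins.py
```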