TensorRT-LLM/cpp/tests/unit_tests/kernels
zhhuang-nv 97bc680cd8
feat: support kv cache reuse for MLA (#3571)
* support kv cache reuse for MLA

load compressed_kv and k_pe from the reused cache and perform the up-projection
use the 192/128 head-size MLA context kernel
currently supports Blackwell and Hopper
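The reuse scheme described above can be sketched in plain Python. This is an illustrative toy, not the kernel's actual interface: the dimension constants, weight layout, and the `load_and_upproject`/`matvec` helpers are all hypothetical. The idea it demonstrates is that MLA caches only the low-rank `compressed_kv` plus the small RoPE part `k_pe`, and full per-head K/V are rebuilt on load via an up-projection.

```python
# Hypothetical toy dimensions (real MLA models use much larger values,
# e.g. a 512-wide latent and a 64-wide RoPE part).
KV_LORA_RANK = 4   # width of compressed_kv stored in the cache
ROPE_DIM = 2       # width of k_pe stored alongside it
HEAD_DIM = 3       # per-head K/V width after up-projection
NUM_HEADS = 2

def matvec(w, x):
    """w: rows x cols matrix (list of lists), x: vector -> w @ x."""
    return [sum(wij * xj for wij, xj in zip(row, x)) for row in w]

def load_and_upproject(cache_entry, w_uk, w_uv):
    """Rebuild per-head K/V from one cached (compressed_kv, k_pe) token entry.

    Only compressed_kv (KV_LORA_RANK floats) and k_pe (ROPE_DIM floats) live
    in the cache, so a reused block is small; full heads are materialized
    on demand with per-head up-projection weights w_uk / w_uv.
    """
    compressed_kv, k_pe = cache_entry
    k_heads, v_heads = [], []
    for h in range(NUM_HEADS):
        k_nope = matvec(w_uk[h], compressed_kv)  # positional-encoding-free K part
        v = matvec(w_uv[h], compressed_kv)       # up-projected V
        k_heads.append(k_nope + k_pe)            # k_pe (already RoPE'd) is shared across heads
        v_heads.append(v)
    return k_heads, v_heads

# Example with identity-like weights so the result is easy to read.
w_uk = [[[1.0, 0.0, 0.0, 0.0],
         [0.0, 1.0, 0.0, 0.0],
         [0.0, 0.0, 1.0, 0.0]] for _ in range(NUM_HEADS)]
w_uv = [[[0.0, 0.0, 0.0, 1.0],
         [0.0, 0.0, 1.0, 0.0],
         [0.0, 1.0, 0.0, 0.0]] for _ in range(NUM_HEADS)]
k_heads, v_heads = load_and_upproject(([1.0, 2.0, 3.0, 4.0], [0.5, -0.5]), w_uk, w_uv)
```

Note how each reconstructed K head is `HEAD_DIM + ROPE_DIM` wide, which is why an asymmetric (e.g. 192/128) head-size context kernel is needed: K is wider than V.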

Signed-off-by: Zhen Huang <145532724+zhhuang-nv@users.noreply.github.com>

* add CI test

* fix: set k_pe head_num to 1 for kernel 2 and kernel 2V2

Signed-off-by: Mingyang Jiang <13463932+jmydurant@users.noreply.github.com>

* resolve comments

* use GPTJ style RoPE for MLA

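For context on the bullet above: GPT-J style RoPE rotates adjacent element pairs `(x[2i], x[2i+1])` in place, whereas GPT-NeoX style pairs `x[i]` with `x[i + d//2]`. A minimal pure-Python sketch of the GPT-J (interleaved) variant, illustrative only:

```python
import math

def rope_gptj(x, pos, theta=10000.0):
    """Apply GPT-J (interleaved) RoPE to one head vector x at position pos.

    Adjacent pairs (x[2i], x[2i+1]) are rotated by angle pos / theta**(2i/d);
    NeoX-style RoPE would instead pair x[i] with x[i + d//2].
    """
    d = len(x)
    out = list(x)
    for i in range(d // 2):
        angle = pos / theta ** (2 * i / d)
        c, s = math.cos(angle), math.sin(angle)
        x0, x1 = x[2 * i], x[2 * i + 1]
        out[2 * i] = x0 * c - x1 * s
        out[2 * i + 1] = x0 * s + x1 * c
    return out
```

At position 0 every rotation angle is zero, so the transform is the identity, and since each step is a 2-D rotation the vector's norm is always preserved.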
* fix rebase error and some docs

* fix kv_lens

* tiny fix

* fix torch compile

* fix: use normal device memory instead of pinned memory for unit test

* fix L0 tests

* fix torch compile after rebase

* resolve comments

* resolve comments again

---------

Signed-off-by: Zhen Huang <145532724+zhhuang-nv@users.noreply.github.com>
Signed-off-by: Mingyang Jiang <13463932+jmydurant@users.noreply.github.com>
Signed-off-by: zhhuang-nv <145532724+zhhuang-nv@users.noreply.github.com>
Co-authored-by: Mingyang Jiang <13463932+jmydurant@users.noreply.github.com>
2025-05-15 15:22:21 +08:00
allReduce/                      feat: support add internal cutlass kernels as subproject (#3658)    2025-05-06 11:35:07 +08:00
cudaCoreGemm/                   Update TensorRT-LLM (#2755)                                         2025-02-11 03:01:00 +00:00
fused_gated_gemm/               Update TensorRT-LLM (#2755)                                         2025-02-11 03:01:00 +00:00
sampling/                       test: Test OOB access issue in penaltyKernel for endId=-1 (#4035)   2025-05-05 10:24:28 -07:00
smoothQuant/                    Update TensorRT-LLM (#2755)                                         2025-02-11 03:01:00 +00:00
weightOnly/                     Update TensorRT-LLM (#2755)                                         2025-02-11 03:01:00 +00:00
banRepeatNGramsKernelsTest.cpp  chore: remove usernames from comments (#3291)                       2025-04-05 13:44:28 +08:00
CMakeLists.txt                  feat: support kv cache reuse for MLA (#3571)                        2025-05-15 15:22:21 +08:00
decodingKernelTest.cpp          chore: remove usernames from comments (#3291)                       2025-04-05 13:44:28 +08:00
logitsBitmaskTest.cpp           Update TensorRT-LLM (#2755)                                         2025-02-11 03:01:00 +00:00
mixtureOfExpertsTest.cu         feat: support add internal cutlass kernels as subproject (#3658)    2025-05-06 11:35:07 +08:00
mlaPreprocessTest.cu            feat: support kv cache reuse for MLA (#3571)                        2025-05-15 15:22:21 +08:00
ropeTest.cu                     feat: Add FP8 support for SM 120 (#3248)                            2025-04-14 16:05:41 -07:00
shiftKCacheKernelTest.cu        Update TensorRT-LLM (#2755)                                         2025-02-11 03:01:00 +00:00
stopCriteriaKernelsTest.cpp     chore: remove usernames from comments (#3291)                       2025-04-05 13:44:28 +08:00