TensorRT-LLM/cpp/tests/unit_tests/kernels
zhhuang-nv 8452775db8
[TRTLLM-5070][feat] Support FP8 KV Cache Reuse for MLA (#4535)
* optimize kv cache reuse workflow for MLA

write the KV cache first and call the up-projection GEMM only once
relax the contiguity requirement on k/v when setting the paged KV cache
return two contiguous tensors when loading the MLA KV cache

Signed-off-by: Zhen Huang <145532724+zhhuang-nv@users.noreply.github.com>

* support fp8 kv cache for MLA kv cache reuse

* resolve comments

2025-05-23 19:47:50 +08:00
allReduce fix potential issues in allreduce fusion kernel and ut (#4226) 2025-05-19 17:38:29 +08:00
cudaCoreGemm
fused_gated_gemm
sampling
smoothQuant
weightOnly
banRepeatNGramsKernelsTest.cpp
CMakeLists.txt
decodingKernelTest.cpp
logitsBitmaskTest.cpp
mixtureOfExpertsTest.cu [https://nvbugs/5297775] fix: Correct memory guard for large MOE tests to account for TP space (#4553) 2025-05-23 14:57:49 +12:00
mlaPreprocessTest.cu [TRTLLM-5070][feat] Support FP8 KV Cache Reuse for MLA (#4535) 2025-05-23 19:47:50 +08:00
ropeTest.cu
shiftKCacheKernelTest.cu
stopCriteriaKernelsTest.cpp