TensorRT-LLM/cpp/kernels/xqa/test

Latest commit: c0e25e5418 by Pengbo Wang
[TRTLLM-10022][feat] Add hopper xqa decode support for skip softmax attention (#10264)
Signed-off-by: Pengbo Wang <221450789+pengbowang-nv@users.noreply.github.com>
2026-01-11 19:26:10 -05:00
refAttention.cpp   [TRTLLM-10022][feat] Add hopper xqa decode support for skip softmax attention (#10264)   2026-01-11 19:26:10 -05:00
refAttention.h     [TRTLLM-10022][feat] Add hopper xqa decode support for skip softmax attention (#10264)   2026-01-11 19:26:10 -05:00
test.cpp           [TRTLLM-10022][feat] Add hopper xqa decode support for skip softmax attention (#10264)   2026-01-11 19:26:10 -05:00
warmup.cu          [None][feat] Add vLLM KV Pool support for XQA mla kernel (#8560)                          2025-10-22 14:12:57 +08:00