kanshan/TensorRT-LLMs
Mirror of https://github.com/NVIDIA/TensorRT-LLM.git, synced 2026-01-14 06:27:45 +08:00
TensorRT-LLMs/cpp/kernels/xqa/test at commit 432f185dee
Latest commit c0e25e5418 by Pengbo Wang:
[TRTLLM-10022][feat] Add hopper xqa decode support for skip softmax attention (#10264)
Signed-off-by: Pengbo Wang <221450789+pengbowang-nv@users.noreply.github.com>
2026-01-11 19:26:10 -05:00
refAttention.cpp    [TRTLLM-10022][feat] Add hopper xqa decode support for skip softmax attention (#10264)    2026-01-11 19:26:10 -05:00
refAttention.h      [TRTLLM-10022][feat] Add hopper xqa decode support for skip softmax attention (#10264)    2026-01-11 19:26:10 -05:00
test.cpp            [TRTLLM-10022][feat] Add hopper xqa decode support for skip softmax attention (#10264)    2026-01-11 19:26:10 -05:00
warmup.cu           [None][feat] Add vLLM KV Pool support for XQA mla kernel (#8560)                          2025-10-22 14:12:57 +08:00