kanshan / TensorRT-LLMs
Mirror of https://github.com/NVIDIA/TensorRT-LLM.git (synced 2026-01-14 06:27:45 +08:00)
Commit 55a7b4db1d
TensorRT-LLMs / cpp / tensorrt_llm / kernels / decoderMaskedMultiheadAttention / decoderXQAImplJIT / nvrtcWrapper
Latest commit: c0e25e5418 by Pengbo Wang, 2026-01-11 19:26:10 -05:00
[TRTLLM-10022][feat] Add hopper xqa decode support for skip softmax attention (#10264)
Signed-off-by: Pengbo Wang <221450789+pengbowang-nv@users.noreply.github.com>
Name             Last commit message                                                                        Last commit date
..
include          [TRTLLM-10022][feat] Add hopper xqa decode support for skip softmax attention (#10264)    2026-01-11 19:26:10 -05:00
src              [TRTLLM-10022][feat] Add hopper xqa decode support for skip softmax attention (#10264)    2026-01-11 19:26:10 -05:00
CMakeLists.txt   [https://nvbugs/4141427][chore] Add more details to LICENSE file (#9881)                  2025-12-13 08:35:31 +08:00