TensorRT-LLM/cpp/tensorrt_llm
commit 1c5b0d6a13
Author: Perkz Zheng
Date:   2025-05-19 09:57:10 -07:00

[Feat] add chunked-attention kernels on Hopper (for llama4) (#4291)

* update cubins
* add mtp for fmha_v2 MLA kernels and add chunked-attention support for hopper fmha kernels

Signed-off-by: Perkz Zheng <67892460+PerkzZheng@users.noreply.github.com>
Co-authored-by: Sharan Chetlur <116769508+schetlur-nv@users.noreply.github.com>
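For context on the headline commit: chunked attention (used by Llama-4-style models on their local-attention layers) restricts each query token to keys inside its own fixed-size chunk, on top of the usual causal mask. The sketch below shows only that mask predicate; it is illustrative, not the fused Hopper FMHA kernel code from #4291, and the function name, standalone form, and chunk size are assumptions.

```cpp
#include <cstdint>
#include <cstdio>

// Chunked causal mask: query qIdx may attend to key kvIdx only if
//  (1) kvIdx <= qIdx (causal), and
//  (2) both indices fall in the same fixed-size chunk.
// Hypothetical helper for illustration; real kernels fuse this check
// into the attention inner loop rather than calling a predicate.
bool chunkedCausalMask(int64_t qIdx, int64_t kvIdx, int64_t chunkSize)
{
    bool const causal = kvIdx <= qIdx;
    bool const sameChunk = (qIdx / chunkSize) == (kvIdx / chunkSize);
    return causal && sameChunk;
}

int main()
{
    int64_t const chunkSize = 4; // production chunk sizes are much larger (e.g. 8192)
    // Print the 8x8 mask: 'x' = attended, '.' = masked out.
    // The result is a block-diagonal causal pattern.
    for (int64_t q = 0; q < 8; ++q)
    {
        for (int64_t kv = 0; kv < 8; ++kv)
        {
            std::printf("%c", chunkedCausalMask(q, kv, chunkSize) ? 'x' : '.');
        }
        std::printf("\n");
    }
    return 0;
}
```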
batch_manager refactor: Copy sequence lengths once in decoder setup (#4102) 2025-05-16 22:03:55 +08:00
common [Feat] add chunked-attention kernels on Hopper (for llama4) (#4291) 2025-05-19 09:57:10 -07:00
cutlass_extensions/include/cutlass_extensions [TRTLLM-3330][feat] Support DeepSeek-R1 W4A8 on Hopper (#4123) 2025-05-14 15:48:07 +08:00
executor feat: NIXL interface integration (#3934) 2025-05-19 18:18:22 +08:00
executor_worker
kernels [Feat] add chunked-attention kernels on Hopper (for llama4) (#4291) 2025-05-19 09:57:10 -07:00
layers Feat: Variable-Beam-Width-Search (VBWS) part4 (#3979) 2025-05-12 22:32:29 +02:00
plugins Fix bias shape in weightOnlyGroupwiseQuantMatmulPlugin for TRT workflow (#4348) 2025-05-16 10:02:30 +08:00
pybind refactor: Copy sequence lengths once in decoder setup (#4102) 2025-05-16 22:03:55 +08:00
runtime [TRTLLM-5171] chore: Remove GptSession/V1 from TRT workflow (#4092) 2025-05-14 23:10:04 +02:00
testing refactor: Move ModelSpec to core library (#3980) 2025-05-04 01:39:09 +08:00
thop [https://nvbugs/5123103][fix] Fix torch compile for DeepSeekV3 (#3952) 2025-05-19 22:12:25 +08:00
CMakeLists.txt feat: NIXL interface integration (#3934) 2025-05-19 18:18:22 +08:00