TensorRT-LLM/cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha
Latest commit 0d20a8fd61 by Fanrong Li (2025-10-14 08:23:16 -07:00)
[TRTLLM-8536][feat] Add the sparse attention framework and one use case--RocketKV support (#8086)
Signed-off-by: Fanrong Li <23290157+lfr-0531@users.noreply.github.com>
Signed-off-by: yuhangh <58161490+heyuhhh@users.noreply.github.com>
Co-authored-by: yuhangh <58161490+heyuhhh@users.noreply.github.com>
cubin/ [TRTLLM-8536][feat] Update trtllm gen fmha kernels to support block sparse attention (#8301) 2025-10-13 05:54:48 -07:00
CMakeLists.txt
fmhaKernels.h [TRTLLM-8536][feat] Add the sparse attention framework and one use case--RocketKV support (#8086) 2025-10-14 08:23:16 -07:00
fmhaReduction.cu [None][feat] support gpt-oss with fp8 kv cache (#7612) 2025-09-15 02:17:37 +08:00
fmhaReduction.h
fmhaRunner.cpp
fmhaRunner.h
fmhaRunnerParams.h [TRTLLM-8536][feat] Update trtllm gen fmha kernels to support block sparse attention (#8301) 2025-10-13 05:54:48 -07:00
kernelParams.h [TRTLLM-8536][feat] Update trtllm gen fmha kernels to support block sparse attention (#8301) 2025-10-13 05:54:48 -07:00
kernelUtils.h