TensorRT-LLM/cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha (directory listing; latest commit 2025-11-17 09:01:53 +08:00)
Name                  Last updated                Last commit
cubin                 2025-11-13 12:41:07 +08:00  [TRTLLM-8816][feat] add optimized trtllm-gen attention kernels on sm103 (#9081)
CMakeLists.txt
fmhaKernels.h         2025-11-17 09:01:53 +08:00  [TRTLLM-8778][feat] Add tree attention support for blackwell arch (#8975)
fmhaReduction.cu      2025-11-02 22:26:59 -08:00  [None][update] optimized sparse mla kernels && fix unspecified cuda launch (#8866)
fmhaReduction.h       2025-09-09 16:58:44 +08:00  [None][feat] Optimize MLA kernels with separate reduction kernels (#7597)
fmhaRunner.cpp        2025-09-07 10:04:10 +08:00  [TRTLLM-4629] [feat] trtllm-gen kernels support sm103 (#7570)
fmhaRunner.h          2025-04-29 14:17:07 +08:00  optimize cudaMemGetInfo for TllmGenFmhaRunner (#3907)
fmhaRunnerParams.h    2025-11-17 09:01:53 +08:00  [TRTLLM-8778][feat] Add tree attention support for blackwell arch (#8975)
kernelParams.h        2025-11-05 09:32:34 -08:00  [None][feat] add swapsMmaAb sparseMla kernels (#8913)
kernelUtils.h         2025-09-09 16:58:44 +08:00  [None][feat] Optimize MLA kernels with separate reduction kernels (#7597)
prepareCustomMask.cu  2025-11-17 09:01:53 +08:00  [TRTLLM-8778][feat] Add tree attention support for blackwell arch (#8975)
prepareCustomMask.h   2025-11-17 09:01:53 +08:00  [TRTLLM-8778][feat] Add tree attention support for blackwell arch (#8975)
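For orientation, the split shown above (a host-side runner in fmhaRunner.h/.cpp, parameter structs in fmhaRunnerParams.h and kernelParams.h, and auxiliary kernels such as fmhaReduction.cu and prepareCustomMask.cu) follows a common runner-plus-params layout. The snippet below is a minimal, hypothetical sketch of that layout only; apart from the class name TllmGenFmhaRunner, which appears in the commit log above, every type, field, and method signature here is an assumption for illustration and does not reproduce the actual headers.

```cpp
// Hypothetical sketch of the runner-plus-params pattern; the real declarations
// live in fmhaRunnerParams.h / fmhaRunner.h and differ in detail.
#include <cstdint>
#include <stdexcept>

struct FmhaRunnerParamsSketch // stand-in for the params struct (assumed fields)
{
    void const* qPtr = nullptr;  // device pointer to Q
    void const* kvPtr = nullptr; // device pointer to K/V
    void* oPtr = nullptr;        // device pointer to the attention output
    int32_t batchSize = 0;
    int32_t numHeadsQ = 0;
    int32_t headDim = 0;
    void* stream = nullptr;      // cudaStream_t in real code; void* keeps this sketch host-only
};

class FmhaRunnerSketch // stand-in for TllmGenFmhaRunner (name taken from the commit log)
{
public:
    // Constructed once per configuration and reused, so kernel-selection cost is amortized.
    explicit FmhaRunnerSketch(int32_t headDim) : mHeadDim(headDim) {}

    // Check that a kernel exists for this problem shape before attempting a launch.
    bool isSupported(FmhaRunnerParamsSketch const& p) const
    {
        return p.headDim == mHeadDim && p.batchSize > 0 && p.numHeadsQ > 0;
    }

    // Pick a kernel and launch it; an MLA path might additionally launch a separate reduction kernel.
    void run(FmhaRunnerParamsSketch const& p) const
    {
        if (!isSupported(p))
        {
            throw std::runtime_error("unsupported FMHA configuration");
        }
        // Kernel selection against the precompiled cubin table and the actual launch
        // would happen here in the real runner.
    }

private:
    int32_t mHeadDim;
};
```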