TensorRT-LLM/cpp/tensorrt_llm/kernels/cutlass_kernels/include

Last commit: c076a02b38 by xiweny, 2025-09-16 09:56:18 +08:00
[TRTLLM-4629] [feat] Add support of CUDA13 and sm103 devices (#7568)

Signed-off-by: Xiwen Yu <13230610+VALLIS-NERIA@users.noreply.github.com>
Signed-off-by: Tian Zheng <29906817+Tom-Zheng@users.noreply.github.com>
Signed-off-by: Daniel Stokes <dastokes@nvidia.com>
Signed-off-by: Zhanrui Sun <zhanruis@nvidia.com>
Signed-off-by: Xiwen Yu <xiweny@nvidia.com>
Signed-off-by: Jiagan Cheng <jiaganc@nvidia.com>
Signed-off-by: Yiqing Yan <yiqingy@nvidia.com>
Signed-off-by: Bo Deng <deemod@nvidia.com>
Signed-off-by: ZhanruiSunCh <184402041+ZhanruiSunCh@users.noreply.github.com>
Signed-off-by: xiweny <13230610+VALLIS-NERIA@users.noreply.github.com>
Co-authored-by: Tian Zheng <29906817+Tom-Zheng@users.noreply.github.com>
Co-authored-by: Daniel Stokes <dastokes@nvidia.com>
Co-authored-by: Zhanrui Sun <zhanruis@nvidia.com>
Co-authored-by: Jiagan Cheng <jiaganc@nvidia.com>
Co-authored-by: Yiqing Yan <yiqingy@nvidia.com>
Co-authored-by: Bo Deng <deemod@nvidia.com>
Co-authored-by: Zhanrui Sun <184402041+ZhanruiSunCh@users.noreply.github.com>
File                      | Last commit                                                               | Date
--------------------------|---------------------------------------------------------------------------|---------------------------
allreduce_gemm_runner.h   | opensource: Opensource MOE MXFP8-MXFP4 implementation (#5222)             | 2025-06-26 12:18:19 +08:00
common.h                  | [None] [feat] Add model gpt-oss (#6645)                                   | 2025-08-07 03:04:18 -04:00
cutlass_kernel_selector.h | opensource: Opensource MOE MXFP8-MXFP4 implementation (#5222)             | 2025-06-26 12:18:19 +08:00
fp4_gemm.h                | feat: Add w4a8_mxfp4_fp8 quantization recipe. (#4867)                     | 2025-06-16 11:30:57 +08:00
low_latency_gemm.h        | refactoring: port customized kernels with public cutlass version (#5027)  | 2025-06-13 16:19:31 +08:00
moe_gemm_kernels.h        | [TRTLLM-4629] [feat] Add support of CUDA13 and sm103 devices (#7568)      | 2025-09-16 09:56:18 +08:00
moe_kernels.h             | [None][perf] Disable Swap AB when num tokens exceeds N dimension (#7104)  | 2025-08-28 21:29:55 -04:00
moe_util_kernels.h        | [TRTLLM-7319][perf] Fuse slicing into MoE. (#6728)                        | 2025-08-25 16:52:30 -04:00