| Name | Latest commit | Commit date |
|---|---|---|
| allreduce_gemm | fix TMA error with GEMM+AR on TP=2 (#6075) | 2025-07-18 10:26:08 +08:00 |
| fp4_gemm | [None][chroe] Rename TensorRT-LLM to TensorRT LLM for source code. (#7851) | 2025-09-25 21:02:35 +08:00 |
| fp8_blockscale_gemm | [None][perf] Use fp8 quant kernel in DS3.2 indexer module (#8701) | 2025-10-29 12:45:09 +08:00 |
| fp8_rowwise_gemm | [None][feat] Add FP8 rowwise GEMMs for B200 (#8332) | 2025-10-27 16:33:14 -04:00 |
| fpA_intB_gemm | [None][chroe] Rename TensorRT-LLM to TensorRT LLM for source code. (#7851) | 2025-09-25 21:02:35 +08:00 |
| fused_gated_gemm | [None][chroe] Rename TensorRT-LLM to TensorRT LLM for source code. (#7851) | 2025-09-25 21:02:35 +08:00 |
| include | [None][fix] Fix the performance issue of FP8 blockwise grouped GEMM when using attention DP (#8501) | 2025-10-27 10:18:19 +08:00 |
| int8_gemm | [None][chroe] Rename TensorRT-LLM to TensorRT LLM for source code. (#7851) | 2025-09-25 21:02:35 +08:00 |
| low_latency_gemm | [None][chroe] Rename TensorRT-LLM to TensorRT LLM for source code. (#7851) | 2025-09-25 21:02:35 +08:00 |
| moe_gemm | [https://nvbugs/5575687][fix] fix moe_gemm's preexit position that cause illegal memory access (#8786) | 2025-10-31 09:08:23 +08:00 |
| python | [None][feat] GPT-OSS Sm120/Sm121 Support (#7937) | 2025-10-06 16:59:06 -04:00 |
| CMakeLists.txt | [None][feat] Add FP8 rowwise GEMMs for B200 (#8332) | 2025-10-27 16:33:14 -04:00 |
| cutlass_heuristic.cpp | [None][feat] GPT-OSS Sm120/Sm121 Support (#7937) | 2025-10-06 16:59:06 -04:00 |
| cutlass_heuristic.h | [None][perf] Add MOE support for dynamic cluster shapes and custom epilogue schedules (#6126) | 2025-09-02 21:54:43 -04:00 |
| cutlass_preprocessors.cpp | [TRTLLM-5366][feat] Add support for sm121 (#5524) | 2025-07-08 14:27:00 -07:00 |
| cutlass_preprocessors.h | Update TensorRT-LLM (#1492) | 2024-04-24 14:44:22 +08:00 |
| cutlass_type_conversion.h | chore: cutlass cleanup (#3165) | 2025-04-01 13:57:38 +08:00 |