TensorRT-LLM/cpp/tensorrt_llm
Latest commit: 316e5c3be3 (Void), 2025-04-08 19:33:52 +08:00
feat: fix and improve allreduce and fusion kernels (#3064)
Signed-off-by: Yilin Zhang <18275976+yilin-void@users.noreply.github.com>
Name | Last commit message | Last commit date
batch_manager | feat: use cudaMalloc to allocate kvCache (#3303) | 2025-04-08 10:59:14 +08:00
common | feat: Add support for FP8 MLA on Hopper and Blackwell. (#3190) | 2025-04-07 15:14:13 +08:00
cutlass_extensions/include/cutlass_extensions | feat: Update cutlass (#2981) | 2025-03-26 22:36:27 +08:00
executor | ucx interface (#3306) | 2025-04-07 08:44:34 +08:00
executor_worker | Update TensorRT-LLM (#2792) | 2025-02-18 21:27:39 +08:00
kernels | feat: fix and improve allreduce and fusion kernels (#3064) | 2025-04-08 19:33:52 +08:00
layers | chore: remove usernames from comments (#3291) | 2025-04-05 13:44:28 +08:00
plugins | chore: remove usernames from comments (#3291) | 2025-04-05 13:44:28 +08:00
pybind | feat: fix and improve allreduce and fusion kernels (#3064) | 2025-04-08 19:33:52 +08:00
runtime | feat: Introduce UB allocator for pytorch flow (#3257) | 2025-04-08 18:39:49 +08:00
thop | feat: fix and improve allreduce and fusion kernels (#3064) | 2025-04-08 19:33:52 +08:00
CMakeLists.txt | [feat] open source fp8_blockscale_gemm (#3071) | 2025-04-02 12:12:52 +08:00