TensorRT-LLM/cpp/tensorrt_llm
Chang Liu 389b73c349
[None][fix] Remove FP8 K/V buffer from TRTLLM sparse MLA attention kernel (#9529)
Signed-off-by: Chang Liu (Enterprise Products) <9713593+chang-l@users.noreply.github.com>
2025-11-28 15:26:52 +08:00
batch_manager [https://nvbugs/5680310][fix] Fix ctx only timed out test (#9410) 2025-11-27 11:21:21 +08:00
common [None][fix] Remove FP8 K/V buffer from TRTLLM sparse MLA attention kernel (#9529) 2025-11-28 15:26:52 +08:00
cutlass_extensions/include/cutlass_extensions [None] [chore] Update to cutlass 4.3 (#8637) 2025-11-28 08:54:34 +08:00
deep_ep [TRTLLM-9197][infra] Move thirdparty stuff to it's own listfile (#8986) 2025-11-20 16:44:23 -08:00
deep_gemm [TRTLLM-9211][infra] Minor fixes to 3rdparty/CMakelists (#9365) 2025-11-23 22:57:02 -08:00
executor [TRTLLM-9197][infra] Move thirdparty stuff to it's own listfile (#8986) 2025-11-20 16:44:23 -08:00
executor_worker
flash_mla [TRTLLM-9211][infra] Minor fixes to 3rdparty/CMakelists (#9365) 2025-11-23 22:57:02 -08:00
kernels [None][feat] Support MLA chunked prefill for DeepSeek V3.2 model (#9376) 2025-11-26 16:38:25 +08:00
layers [None][feat] Support ignored prompt length for penalties via new sampling config parameter (#8127) 2025-10-27 13:12:31 -04:00
nanobind [TRTLLM-9389][chore] Rename AlltoAll backend names (#9329) 2025-11-23 13:52:57 -08:00
plugins [None][fix] Fix the performance issue of FP8 blockwise grouped GEMM when using attention DP (#8501) 2025-10-27 10:18:19 +08:00
pybind [TRTLLM-9389][chore] Rename AlltoAll backend names (#9329) 2025-11-23 13:52:57 -08:00
runtime [None][refactor] decoding inputs, part 2 (#5799) 2025-11-18 14:38:51 +01:00
testing fix: Improve chunking test and skip empty kernel calls (#5710) 2025-07-04 09:08:15 +02:00
thop [https://nvbugs/5647400] [fix] Enlarged the AllReduce workspace size to 64MB. Added AllReduce strategy to AD config. (#9145) 2025-11-25 10:56:07 -08:00
CMakeLists.txt [TRTLLM-9286][feat] Integration of CuteDSL NVFP4 grouped GEMM (#8880) 2025-11-18 17:40:12 -08:00