TensorRT-LLM/cpp/tensorrt_llm
Latest commit: a370643b26 by dongxuy04, 2025-11-14 08:37:20 +08:00
[None][fix] support topk autotuner input for expert slot per group larger than 32 (#9087)
Signed-off-by: Dongxu Yang <78518666+dongxuy04@users.noreply.github.com>
| Name | Last commit | Date |
| --- | --- | --- |
| batch_manager | [TRTLLM-8540][feat] Add support for disagg in DSv3.2 (#8735) | 2025-11-12 08:21:11 -08:00 |
| common | [TRTLLM-8803][feat] Add rope and uk-bgemm overlap for mla generation (#8495) | 2025-11-06 17:39:57 +08:00 |
| cutlass_extensions/include/cutlass_extensions | [None][feat] GPT-OSS Sm120/Sm121 Support (#7937) | 2025-10-06 16:59:06 -04:00 |
| deep_ep | [TRTLLM-6589][feat] Support CUDA graph for DeepEP (#7514) | 2025-10-02 10:13:24 -07:00 |
| deep_gemm | [https://nvbugs/5433581][fix] DeepGEMM installation on SBSA (#6588) | 2025-08-06 16:44:21 +08:00 |
| executor | [TRTLLM-8540][feat] Add support for disagg in DSv3.2 (#8735) | 2025-11-12 08:21:11 -08:00 |
| executor_worker | Update TensorRT-LLM (#2792) | 2025-02-18 21:27:39 +08:00 |
| flash_mla | [TRTLLM-8535][feat] Support DeepSeek V3.2 with FP8 + BF16 KV cache/NVFP4 + BF16 KV cache (#8405) | 2025-10-24 13:40:41 -04:00 |
| kernels | [None][fix] support topk autotuner input for expert slot per group larger than 32 (#9087) | 2025-11-14 08:37:20 +08:00 |
| layers | [None][feat] Support ignored prompt length for penalties via new sampling config parameter (#8127) | 2025-10-27 13:12:31 -04:00 |
| nanobind | [TRTLLM-8803][feat] Add rope and uk-bgemm overlap for mla generation (#8495) | 2025-11-06 17:39:57 +08:00 |
| plugins | [None][fix] Fix the performance issue of FP8 blockwise grouped GEMM when using attention DP (#8501) | 2025-10-27 10:18:19 +08:00 |
| pybind | [TRTLLM-8803][feat] Add rope and uk-bgemm overlap for mla generation (#8495) | 2025-11-06 17:39:57 +08:00 |
| runtime | [None][feat] add flag for EPLB to force using GDRCopy (#8650) | 2025-10-29 13:33:26 +08:00 |
| testing | fix: Improve chunking test and skip empty kernel calls (#5710) | 2025-07-04 09:08:15 +02:00 |
| thop | [None][fix] Remove unnecessary attention workspace memory check (#9064) | 2025-11-12 11:18:50 +08:00 |
| CMakeLists.txt | [TRTLLM-8535][feat] Support DeepSeek V3.2 with FP8 + BF16 KV cache/NVFP4 + BF16 KV cache (#8405) | 2025-10-24 13:40:41 -04:00 |