TensorRT-LLM/cpp/tensorrt_llm
Latest commit: 60101eb8a5 by Perkz Zheng (2025-09-24 04:13:36 -07:00)
[None][fix] trtllm-gen cubins compiled with wrong arch. (#7953)
Signed-off-by: Perkz Zheng <67892460+PerkzZheng@users.noreply.github.com>
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| batch_manager | [TRTLLM-6341][feature] Support SWA KV cache reuse (#6768) | 2025-09-24 14:28:24 +08:00 |
| common | [TRTLLM-6994][feat] FP8 Context MLA integration (Cherry-pick https://github.com/NVIDIA/TensorRT-LLM/pull/6059 from release/1.1.0rc2) (#7610) | 2025-09-19 09:40:49 +08:00 |
| cutlass_extensions/include/cutlass_extensions | [TRTLLM-6286] [perf] Add NoSmem epilogue schedule and dynamic cluster shape for sm10x group gemm (#7757) | 2025-09-21 11:38:17 +08:00 |
| deep_ep | [TRTLLM-4629] [feat] Add support of CUDA13 and sm103 devices (#7568) | 2025-09-16 09:56:18 +08:00 |
| deep_gemm | [https://nvbugs/5433581][fix] DeepGEMM installation on SBSA (#6588) | 2025-08-06 16:44:21 +08:00 |
| executor | [TRTLLM-7989][infra] Bundle UCX and NIXL libs in the TRTLLM python package (#7766) | 2025-09-22 16:43:35 +08:00 |
| executor_worker | Update TensorRT-LLM (#2792) | 2025-02-18 21:27:39 +08:00 |
| kernels | [None][fix] trtllm-gen cubins compiled with wrong arch. (#7953) | 2025-09-24 04:13:36 -07:00 |
| layers | refactor: Remove enforced sorted order of batch slots (#3502) | 2025-07-14 17:23:02 +02:00 |
| nanobind | [TRTLLM-6341][feature] Support SWA KV cache reuse (#6768) | 2025-09-24 14:28:24 +08:00 |
| plugins | [None][feat] support gpt-oss with fp8 kv cache (#7612) | 2025-09-15 02:17:37 +08:00 |
| pybind | [TRTLLM-5966][feat] Helix: add custom position ids to MLA kernels (#6904) | 2025-09-19 20:55:32 +08:00 |
| runtime | [https://nvbugs/5489015][fix] Support communicator split in MNNVL allreduce and fix the binding issues. (#7387) | 2025-09-17 07:43:20 +08:00 |
| testing | fix: Improve chunking test and skip empty kernel calls (#5710) | 2025-07-04 09:08:15 +02:00 |
| thop | [https://nvbugs/5532248][fix] Fix fused_moe OOM (#7931) | 2025-09-24 02:22:38 -07:00 |
| CMakeLists.txt | [https://nvbugs/5453827][fix] Fix RPATH of th_common shared library to find pip-installed NCCL (#6984) | 2025-08-21 17:58:30 +08:00 |