TensorRT-LLM/cpp/tensorrt_llm
Latest commit: eb4ed18a63 — [None][fix] max_num_sequences argument in nanobind (#6862)
Author: Linda-Stadter <57756729+Linda-Stadter@users.noreply.github.com>
Date: 2025-08-13 19:16:17 -04:00
| Name | Last commit | Last updated |
|---|---|---|
| batch_manager | [None][refactor] Simplify decoder state initialization (#6559) | 2025-08-12 21:44:41 +02:00 |
| common | [None][feat] CUTLASS MoE FC2+Finalize fusion (#3294) | 2025-08-12 15:56:48 +08:00 |
| cutlass_extensions/include/cutlass_extensions | [None][feat] CUTLASS MoE FC2+Finalize fusion (#3294) | 2025-08-12 15:56:48 +08:00 |
| deep_ep | [None][feat] DeepEP LL combine FP4 (#6822) | 2025-08-13 04:20:21 -04:00 |
| deep_gemm | [https://nvbugs/5433581][fix] DeepGEMM installation on SBSA (#6588) | 2025-08-06 16:44:21 +08:00 |
| executor | [TRTLLM-6881][feat] Include attention dp rank info with KV cache events (#6563) | 2025-08-07 14:17:07 +02:00 |
| executor_worker | Update TensorRT-LLM (#2792) | 2025-02-18 21:27:39 +08:00 |
| kernels | [https://nvbugs/5394685][fix] the bug with spec-decoding + SWA && an accuracy issue related to 2CTA MLA (#6834) | 2025-08-13 13:55:56 -07:00 |
| layers | refactor: Remove enforced sorted order of batch slots (#3502) | 2025-07-14 17:23:02 +02:00 |
| nanobind | [None][fix] max_num_sequences argument in nanobind (#6862) | 2025-08-13 19:16:17 -04:00 |
| plugins | [https://nvbugs/5302040][feat] Add whisper support (Bert Attention on SM100 and GPTAttention for cross attention on SM100) (#5527) | 2025-08-13 11:19:13 -07:00 |
| pybind | [None][refactor] Simplify decoder state initialization (#6559) | 2025-08-12 21:44:41 +02:00 |
| runtime | [None][refactor] Simplify decoder state initialization (#6559) | 2025-08-12 21:44:41 +02:00 |
| testing | fix: Improve chunking test and skip empty kernel calls (#5710) | 2025-07-04 09:08:15 +02:00 |
| thop | [TRTLLM-6906][chore] Using pybind to bind functions in thop/attentionOp (#6745) | 2025-08-12 16:45:16 +08:00 |
| CMakeLists.txt | [https://nvbugs/5433581][fix] DeepGEMM installation on SBSA (#6588) | 2025-08-06 16:44:21 +08:00 |
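
The latest commit listed above fixes the `max_num_sequences` keyword argument in the nanobind bindings. Below is a minimal, hypothetical sketch, not taken from the TensorRT-LLM sources, of how a C++ option like this is typically exposed through nanobind; the `SchedulerConfig` type, the `example_bindings` module name, and the `make_config` helper are assumptions for illustration only.

```cpp
// Illustrative only -- not the actual TensorRT-LLM nanobind code.
#include <cstdint>
#include <nanobind/nanobind.h>

namespace nb = nanobind;

// Hypothetical config struct standing in for a batch-manager/executor option.
struct SchedulerConfig
{
    int32_t maxNumSequences = 0;
};

NB_MODULE(example_bindings, m)
{
    // Expose the struct and its field under the Python-side snake_case name.
    nb::class_<SchedulerConfig>(m, "SchedulerConfig")
        .def(nb::init<>())
        .def_rw("max_num_sequences", &SchedulerConfig::maxNumSequences);

    // A factory that accepts the value as a named keyword argument.
    // Omitting or misspelling the nb::arg(...) annotation is the usual way
    // a keyword argument like this breaks on the Python side.
    m.def(
        "make_config",
        [](int32_t max_num_sequences)
        {
            SchedulerConfig c;
            c.maxNumSequences = max_num_sequences;
            return c;
        },
        nb::arg("max_num_sequences"));
}
```

With such a binding in place, the option is addressable by keyword from Python, e.g. `example_bindings.make_config(max_num_sequences=64)`.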