TensorRT-LLM/cpp/tensorrt_llm
Latest commit: 8ec3b1de10 by peaceh-nv, 2025-08-07 16:16:34 +08:00
[None][feat] : Add FP8 context MLA support for SM120 (#6059)
Signed-off-by: peaceh <103117813+peaceh-nv@users.noreply.github.com>
Name | Last commit message | Last commit date
batch_manager | [TRTLLM-6683][feat] Support LoRA reload CPU cache evicted adapter (#6510) | 2025-08-07 09:05:36 +03:00
common | [None][feat] : Add FP8 context MLA support for SM120 (#6059) | 2025-08-07 16:16:34 +08:00
cutlass_extensions/include/cutlass_extensions | [None] [feat] Add model gpt-oss (#6645) | 2025-08-07 03:04:18 -04:00
deep_ep | DeepEP LL dispatch FP4 (#6296) | 2025-07-28 11:25:42 +08:00
deep_gemm | [https://nvbugs/5433581][fix] DeepGEMM installation on SBSA (#6588) | 2025-08-06 16:44:21 +08:00
executor | [TRTLLM-6683][feat] Support LoRA reload CPU cache evicted adapter (#6510) | 2025-08-07 09:05:36 +03:00
executor_worker | Update TensorRT-LLM (#2792) | 2025-02-18 21:27:39 +08:00
kernels | [None][feat] : Add FP8 context MLA support for SM120 (#6059) | 2025-08-07 16:16:34 +08:00
layers | refactor: Remove enforced sorted order of batch slots (#3502) | 2025-07-14 17:23:02 +02:00
nanobind | [None] [feat] Add model gpt-oss (#6645) | 2025-08-07 03:04:18 -04:00
plugins | [None] [feat] Add model gpt-oss (#6645) | 2025-08-07 03:04:18 -04:00
pybind | [None] [feat] Add model gpt-oss (#6645) | 2025-08-07 03:04:18 -04:00
runtime | [TRTLLM-4406][feat] LLM sleep & wakeup Part 1: virtual device memory (#5034) | 2025-08-04 13:51:01 +08:00
testing | fix: Improve chunking test and skip empty kernel calls (#5710) | 2025-07-04 09:08:15 +02:00
thop | [None][feat] : Add FP8 context MLA support for SM120 (#6059) | 2025-08-07 16:16:34 +08:00
CMakeLists.txt | [https://nvbugs/5433581][fix] DeepGEMM installation on SBSA (#6588) | 2025-08-06 16:44:21 +08:00