| Name | Last commit message | Last commit date |
| --- | --- | --- |
| batch_manager | [TRTLLM-6549][fix] add kv cache time output back (#7798) | 2025-09-23 14:12:42 -04:00 |
| common | [TRTLLM-6994][feat] FP8 Context MLA integration (Cherry-pick https://github.com/NVIDIA/TensorRT-LLM/pull/6059 from release/1.1.0rc2) (#7610) | 2025-09-19 09:40:49 +08:00 |
| cutlass_extensions/include/cutlass_extensions | [TRTLLM-6286][perf] Add NoSmem epilogue schedule and dynamic cluster shape for sm10x group gemm (#7757) | 2025-09-21 11:38:17 +08:00 |
| deep_ep | [TRTLLM-4629][feat] Add support of CUDA13 and sm103 devices (#7568) | 2025-09-16 09:56:18 +08:00 |
| deep_gemm | [https://nvbugs/5433581][fix] DeepGEMM installation on SBSA (#6588) | 2025-08-06 16:44:21 +08:00 |
| executor | [TRTLLM-7989][infra] Bundle UCX and NIXL libs in the TRTLLM python package (#7766) | 2025-09-22 16:43:35 +08:00 |
| executor_worker | | |
| kernels | [None][feat] support JIT mha.cu for SPEC_DEC in runtime (#6078) | 2025-09-23 14:56:17 -07:00 |
| layers | refactor: Remove enforced sorted order of batch slots (#3502) | 2025-07-14 17:23:02 +02:00 |
| nanobind | [TRTLLM-5966][feat] Helix: add custom position ids to MLA kernels (#6904) | 2025-09-19 20:55:32 +08:00 |
| plugins | [None][feat] support gpt-oss with fp8 kv cache (#7612) | 2025-09-15 02:17:37 +08:00 |
| pybind | [TRTLLM-5966][feat] Helix: add custom position ids to MLA kernels (#6904) | 2025-09-19 20:55:32 +08:00 |
| runtime | [https://nvbugs/5489015][fix] Support communicator split in MNNVL allreduce and fix the binding issues. (#7387) | 2025-09-17 07:43:20 +08:00 |
| testing | fix: Improve chunking test and skip empty kernel calls (#5710) | 2025-07-04 09:08:15 +02:00 |
| thop | [None][feat] Enable run_post_quant_allgather for MoE TRTLLM backend (#6794) | 2025-09-23 08:24:21 +08:00 |
| CMakeLists.txt | [https://nvbugs/5453827][fix] Fix RPATH of th_common shared library to find pip-installed NCCL (#6984) | 2025-08-21 17:58:30 +08:00 |