| Name | Last commit | Last commit date |
|------|-------------|-------------------|
| attention_backend | [None][feat] Initial PR for trtllm-gen attention backend (#10784) | 2026-02-11 17:16:52 +08:00 |
| auto_deploy | [#11203][feat] AutoDeploy: Refactor node caching and improve engine build time (#11250) | 2026-02-10 13:35:44 -08:00 |
| compilation | [None][chore] Mass merge commits from release/1.2.0rc6.post1 branch (#11384) | 2026-02-10 14:00:42 +08:00 |
| configs | [TRTLLM-8535][feat] Support DeepSeek V3.2 with FP8 + BF16 KV cache/NVFP4 + BF16 KV cache (#8405) | 2025-10-24 13:40:41 -04:00 |
| cuda_tile_kernels | [None][feat] Integrate cuda.tile RMS norm kernels (#9725) | 2026-02-02 19:44:27 +08:00 |
| custom_ops | [None][feat] Remove the hard code for activation type definition in T… (#11164) | 2026-02-11 21:50:45 +08:00 |
| cute_dsl_kernels | [TRTLLM-9457][feat] Add cute dsl fp8 gemm for Blackwell (#10130) | 2026-02-06 09:49:30 +08:00 |
| debug | Add debug hook to support dumping tensor data and add new debug functions easily (#5182) | 2025-06-24 17:45:28 +08:00 |
| disaggregation | [None][fix] Avoid reserved filename on Windows (#11382) | 2026-02-10 11:22:59 +08:00 |
| distributed | [TRTLLM-10264][feat] Support attention DP + Helix CP (#10477) | 2026-01-29 02:57:13 -05:00 |
| models | [None][chore] Merge residual+hidden into layer norm at the end of each NemotronH MTP, and remove a % operation (#11406) | 2026-02-11 12:01:36 -05:00 |
| modules | [TRTLLM-10273][feat] Move MambaCacheManager from Python to C++ (#10540) | 2026-02-10 07:20:56 -08:00 |
| peft | [https://nvbugs/5322131][feat] Multi-LoRA serving with CUDA Graph (#8279) | 2026-01-22 14:01:18 +01:00 |
| pyexecutor | [None][chore] Introducing an abstract WaitingQueue interface to decouple the request scheduling logic from specific queue implementations (#11330) | 2026-02-12 09:18:24 +08:00 |
| shared_tensor | [1/N][TRTLLM-5195][feat] Share PyTorch tensor between processes (#5396) | 2025-07-10 05:12:53 +09:00 |
| speculative | [https://nvbugs/5853720][fix] Disable cutedsl argmax kernel to fix perf regression (#11403) | 2026-02-10 18:10:38 +08:00 |
| __init__.py | [TRTLLM-9212][chore] move MoeLoadBalancerConfig to llm_args.py (#9002) | 2025-11-13 10:47:35 +08:00 |
| async_llm.py | [TRTLLM-9736][feat] AsyncLLM and verl integ (#9353) | 2025-12-11 09:33:25 -08:00 |
| autotuner.py | [TRTLLM-10264][feat] Support attention DP + Helix CP (#10477) | 2026-01-29 02:57:13 -05:00 |
| cublaslt_utils.py | [https://nvbugs/5451205][feat] Add cuBLASLt NVFP4 GEMM backend support (#7943) | 2025-10-23 15:55:10 +08:00 |
| cuda_tile_utils.py | [None][feat] Integrate cuda.tile RMS norm kernels (#9725) | 2026-02-02 19:44:27 +08:00 |
| cute_dsl_utils.py | [None][chore] polish error message in cute_dsl_utils.py (#7852) | 2025-09-19 12:05:11 +08:00 |
| device_mesh.py | [TRTLLM-9465][fix] Swap TP-CP grouping order (#10350) | 2026-01-05 20:08:03 +08:00 |
| expert_statistic.py | [TRTLLM-8831][feat] Enable early exit with overlap scheduler (#8587) | 2025-11-17 18:07:13 +01:00 |
| flashinfer_utils.py | [TRTLLM-9578][feat] make PDL enabled by default (#9695) | 2025-12-25 07:15:24 -05:00 |
| hostfunc.py | [TRTLLM-7028][feat] Enable guided decoding with speculative decoding (part 2: one-model engine) (#6948) | 2025-09-03 15:16:11 -07:00 |
| llm.py | [TRTLLM-5208][BREAKING CHANGE] chore: make pytorch LLM the default (#5312) | 2025-06-20 03:01:10 +08:00 |
| memory_buffer_utils.py | [https://nvbugs/5811697][fix] Fix buffer reuse. (#10716) | 2026-01-25 18:12:21 +08:00 |
| metadata.py | [None][feat] Use Separate QKV Input Layout for Context MLA (#6538) | 2025-08-19 22:04:48 +08:00 |
| model_config.py | [TRTLLM-9457][feat] Add cute dsl fp8 gemm for Blackwell (#10130) | 2026-02-06 09:49:30 +08:00 |
| utils.py | [None][feat] Remove the hard code for activation type definition in T… (#11164) | 2026-02-11 21:50:45 +08:00 |
| virtual_memory.py | [TRTLLM-9736][feat] AsyncLLM and verl integ (#9353) | 2025-12-11 09:33:25 -08:00 |