| Name | Last commit message | Last commit date |
| --- | --- | --- |
| attention_backend | [TRTLLM-9766][feat] Integration of the KVCacheManager V2 to TRTLLM Runtime (#10659) | 2026-02-02 14:29:02 +08:00 |
| auto_deploy | [#8242][feat] Add int4 GPTQ support for AutoDeploy (#8248) | 2026-01-30 23:07:24 -08:00 |
| compilation | [TRTLLM-8821][feat] Apply AutoTuner to AllReduce Op for strategy tuning. (#8531) | 2026-01-05 15:44:37 +08:00 |
| configs | [TRTLLM-8535][feat] Support DeepSeek V3.2 with FP8 + BF16 KV cache/NVFP4 + BF16 KV cache (#8405) | 2025-10-24 13:40:41 -04:00 |
| custom_ops | [TRTLLM-10398][feat] Enable TRTLLM moe backend for Nemotron Super (#10791) | 2026-01-31 13:48:25 +08:00 |
| cute_dsl_kernels | [TRTLLM-9831][perf] Use TMA.RED to improve effective memory bandwidth (#10987) | 2026-01-27 16:15:32 +08:00 |
| debug | Add debug hook to support dump tensor data and add new debug functions easily (#5182) | 2025-06-24 17:45:28 +08:00 |
| disaggregation | [TRTLLM-9527][feat] Python transceiver components (step 2) (#10494) | 2026-01-22 10:14:50 -08:00 |
| distributed | [TRTLLM-10264][feat] Support attention DP + Helix CP (#10477) | 2026-01-29 02:57:13 -05:00 |
| models | [https://nvbugs/5691730][fix] Have LoRa bf16 ckpts work with Llama 3.3-70B-fp8 (#9808) | 2026-02-02 16:26:46 +08:00 |
| modules | [https://nvbugs/5691730][fix] Have LoRa bf16 ckpts work with Llama 3.3-70B-fp8 (#9808) | 2026-02-02 16:26:46 +08:00 |
| peft | [https://nvbugs/5322131][feat] Multi-LoRA serving with CUDA Graph (#8279) | 2026-01-22 14:01:18 +01:00 |
| pyexecutor | [TRTLLM-9766][feat] Integration of the KVCacheManager V2 to TRTLLM Runtime (#10659) | 2026-02-02 14:29:02 +08:00 |
| shared_tensor | [1/N][TRTLLM-5195][feat] Share PyTorch tensor between processes (#5396) | 2025-07-10 05:12:53 +09:00 |
| speculative | [TRTLLM-10312][perf] Improve performance of _write_finish_reasons in TorchSampler (#10459) | 2026-01-29 11:06:09 -05:00 |
| __init__.py | [TRTLLM-9212][chore] move MoeLoadBalancerConfig to llm_args.py (#9002) | 2025-11-13 10:47:35 +08:00 |
| async_llm.py | [TRTLLM-9736][feat] AsyncLLM and verl integ (#9353) | 2025-12-11 09:33:25 -08:00 |
| autotuner.py | [TRTLLM-10264][feat] Support attention DP + Helix CP (#10477) | 2026-01-29 02:57:13 -05:00 |
| cublaslt_utils.py | [https://nvbugs/5451205][feat] Add cuBLASLt NVFP4 GEMM backend support (#7943) | 2025-10-23 15:55:10 +08:00 |
| cute_dsl_utils.py | [None][chore] polish error message in cute_dsl_utils.py (#7852) | 2025-09-19 12:05:11 +08:00 |
| device_mesh.py | [TRTLLM-9465][fix] Swap TP-CP grouping order (#10350) | 2026-01-05 20:08:03 +08:00 |
| expert_statistic.py | [TRTLLM-8831][feat] Enable early exit with overlap scheduler (#8587) | 2025-11-17 18:07:13 +01:00 |
| flashinfer_utils.py | [TRTLLM-9578][feat] make PDL enabled by default (#9695) | 2025-12-25 07:15:24 -05:00 |
| hostfunc.py | [TRTLLM-7028][feat] Enable guided decoding with speculative decoding (part 2: one-model engine) (#6948) | 2025-09-03 15:16:11 -07:00 |
| llm.py | [TRTLLM-5208][BREAKING CHANGE] chore: make pytorch LLM the default (#5312) | 2025-06-20 03:01:10 +08:00 |
| memory_buffer_utils.py | [https://nvbugs/5811697][fix] Fix buffer reuse. (#10716) | 2026-01-25 18:12:21 +08:00 |
| metadata.py | [None][feat] Use Separate QKV Input Layout for Context MLA (#6538) | 2025-08-19 22:04:48 +08:00 |
| model_config.py | [TRTLLM-9771][feat] Allow overriding quantization configs (#11062) | 2026-01-31 10:48:51 -05:00 |
| utils.py | [TRTLLM-9771][feat] Support partial update weight for fp8 (#10456) | 2026-01-22 14:46:05 +08:00 |
| virtual_memory.py | [TRTLLM-9736][feat] AsyncLLM and verl integ (#9353) | 2025-12-11 09:33:25 -08:00 |