TensorRT-LLM/tensorrt_llm/_torch
Yuewei Na 0d18b2d7a4
[None][feat] Add priority-based KV cache offload filtering support (#10751)
Signed-off-by: Yuewei Na <yna@nvidia.com>
Signed-off-by: Yuewei Na <nv-yna@users.noreply.github.com>
Co-authored-by: Yuewei Na <nv-yna@users.noreply.github.com>
2026-02-05 05:22:56 -05:00
attention_backend [https://nvbugs/5624818][fix] Work around accuracy issue by enforcing paged_context_fmha on Hopper for fmha_v2 (#11192) 2026-02-04 19:21:50 +08:00
auto_deploy [TRTLLM-10673][feat] Improved layer classification for sharding (#10718) 2026-02-04 18:06:10 +01:00
compilation [None][feat] Integrate cuda.tile RMS norm kernels (#9725) 2026-02-02 19:44:27 +08:00
configs [TRTLLM-8535][feat] Support DeepSeek V3.2 with FP8 + BF16 KV cache/NVFP4 + BF16 KV cache (#8405) 2025-10-24 13:40:41 -04:00
cuda_tile_kernels [None][feat] Integrate cuda.tile RMS norm kernels (#9725) 2026-02-02 19:44:27 +08:00
custom_ops [None][feat] Integrate cuda.tile RMS norm kernels (#9725) 2026-02-02 19:44:27 +08:00
cute_dsl_kernels [https://nvbugs/5854860][fix] Fix cutedsl argmax on sm120 (#11181) 2026-02-04 17:15:31 -05:00
debug Add debug hook to support dump tensor data and add new debug functions easily (#5182) 2025-06-24 17:45:28 +08:00
disaggregation [TRTLLM-9527][feat] Python transceiver components (step 2) (#10494) 2026-01-22 10:14:50 -08:00
distributed [TRTLLM-10264][feat] Support attention DP + Helix CP (#10477) 2026-01-29 02:57:13 -05:00
models [None][feat] Nemotron H: Eagle3 support (#11131) 2026-02-02 10:26:25 -05:00
modules [None][chore] Pass without_comm to cutlass and deepgemm (#11229) 2026-02-05 02:07:59 -05:00
peft [https://nvbugs/5322131][feat] Multi-LoRA serving with CUDA Graph (#8279) 2026-01-22 14:01:18 +01:00
pyexecutor [None][feat] Add priority-based KV cache offload filtering support (#10751) 2026-02-05 05:22:56 -05:00
shared_tensor [1/N][TRTLLM-5195][feat] Share PyTorch tensor between processes (#5396) 2025-07-10 05:12:53 +09:00
speculative [None][fix] Fix MTP 1-model sampler (#10369) 2026-02-02 16:26:46 +08:00
__init__.py [TRTLLM-9212][chore] move MoeLoadBalancerConfig to llm_args.py (#9002) 2025-11-13 10:47:35 +08:00
async_llm.py [TRTLLM-9736][feat] AsyncLLM and verl integ (#9353) 2025-12-11 09:33:25 -08:00
autotuner.py [TRTLLM-10264][feat] Support attention DP + Helix CP (#10477) 2026-01-29 02:57:13 -05:00
cublaslt_utils.py [https://nvbugs/5451205][feat] Add cuBLASLt NVFP4 GEMM backend support (#7943) 2025-10-23 15:55:10 +08:00
cuda_tile_utils.py [None][feat] Integrate cuda.tile RMS norm kernels (#9725) 2026-02-02 19:44:27 +08:00
cute_dsl_utils.py [None][chore] polish error message in cute_dsl_utils.py (#7852) 2025-09-19 12:05:11 +08:00
device_mesh.py [TRTLLM-9465][fix] Swap TP-CP grouping order (#10350) 2026-01-05 20:08:03 +08:00
expert_statistic.py [TRTLLM-8831][feat] Enable early exit with overlap scheduler (#8587) 2025-11-17 18:07:13 +01:00
flashinfer_utils.py [TRTLLM-9578][feat] make PDL enabled by default (#9695) 2025-12-25 07:15:24 -05:00
hostfunc.py [TRTLLM-7028][feat] Enable guided decoding with speculative decoding (part 2: one-model engine) (#6948) 2025-09-03 15:16:11 -07:00
llm.py [TRTLLM-5208][BREAKING CHANGE] chore: make pytorch LLM the default (#5312) 2025-06-20 03:01:10 +08:00
memory_buffer_utils.py [https://nvbugs/5811697][fix] Fix buffer reuse. (#10716) 2026-01-25 18:12:21 +08:00
metadata.py [None][feat] Use Separate QKV Input Layout for Context MLA (#6538) 2025-08-19 22:04:48 +08:00
model_config.py [TRTLLM-9771][feat] Allow overriding quantization configs (#11062) 2026-01-31 10:48:51 -05:00
utils.py [TRTLLM-9771][feat] Support partial update weight for fp8 (#10456) 2026-01-22 14:46:05 +08:00
virtual_memory.py [TRTLLM-9736][feat] AsyncLLM and verl integ (#9353) 2025-12-11 09:33:25 -08:00