TensorRT-LLM/tensorrt_llm/_torch
Latest commit: adc0d82500 by mpikulski, 2026-02-10 10:55:29 +01:00
[https://nvbugs/5791242][chore] remove obsolete code (#11388)
| Name | Last commit | Date |
|---|---|---|
| attention_backend/ | [TRTLLM-10321][feat] Support different KV cache layout for one-model spec dec (#10502) | 2026-02-10 05:16:02 +08:00 |
| auto_deploy/ | [#11032][feat] MLA revisited and GLM 4.7 Flash support (#11324) | 2026-02-09 23:26:51 -05:00 |
| compilation/ | [None][chore] Mass merge commits from release/1.2.0rc6.post1 branch (#11384) | 2026-02-10 14:00:42 +08:00 |
| configs/ | [TRTLLM-8535][feat] Support DeepSeek V3.2 with FP8 + BF16 KV cache/NVFP4 + BF16 KV cache (#8405) | 2025-10-24 13:40:41 -04:00 |
| cuda_tile_kernels/ | [None][feat] Integrate cuda.tile RMS norm kernels (#9725) | 2026-02-02 19:44:27 +08:00 |
| custom_ops/ | [TRTLLM-9457][feat] Add cute dsl fp8 gemm for Blackwell (#10130) | 2026-02-06 09:49:30 +08:00 |
| cute_dsl_kernels/ | [TRTLLM-9457][feat] Add cute dsl fp8 gemm for Blackwell (#10130) | 2026-02-06 09:49:30 +08:00 |
| debug/ | Add debug hook to support dump tensor data and add new debug functions easily (#5182) | 2025-06-24 17:45:28 +08:00 |
| disaggregation/ | [None][fix] Avoid reserved filename on Windows (#11382) | 2026-02-10 11:22:59 +08:00 |
| distributed/ | [TRTLLM-10264][feat] Support attention DP + Helix CP (#10477) | 2026-01-29 02:57:13 -05:00 |
| models/ | [TRTLLM-10321][feat] Support different KV cache layout for one-model spec dec (#10502) | 2026-02-10 05:16:02 +08:00 |
| modules/ | [TRTLLM-9771][feat] Make update_weights compatible with CUDA Graph (#11267) | 2026-02-10 01:12:49 -05:00 |
| peft/ | [https://nvbugs/5322131][feat] Multi-LoRA serving with CUDA Graph (#8279) | 2026-01-22 14:01:18 +01:00 |
| pyexecutor/ | [https://nvbugs/5791242][chore] remove obsolete code (#11388) | 2026-02-10 10:55:29 +01:00 |
| shared_tensor/ | [1/N][TRTLLM-5195][feat] Share PyTorch tensor between processes (#5396) | 2025-07-10 05:12:53 +09:00 |
| speculative/ | [TRTLLM-10321][feat] Support different KV cache layout for one-model spec dec (#10502) | 2026-02-10 05:16:02 +08:00 |
| __init__.py | [TRTLLM-9212][chore] move MoeLoadBalancerConfig to llm_args.py (#9002) | 2025-11-13 10:47:35 +08:00 |
| async_llm.py | [TRTLLM-9736][feat] AsyncLLM and verl integ (#9353) | 2025-12-11 09:33:25 -08:00 |
| autotuner.py | [TRTLLM-10264][feat] Support attention DP + Helix CP (#10477) | 2026-01-29 02:57:13 -05:00 |
| cublaslt_utils.py | [https://nvbugs/5451205][feat] Add cuBLASLt NVFP4 GEMM backend support (#7943) | 2025-10-23 15:55:10 +08:00 |
| cuda_tile_utils.py | [None][feat] Integrate cuda.tile RMS norm kernels (#9725) | 2026-02-02 19:44:27 +08:00 |
| cute_dsl_utils.py | [None][chore] polish error message in cute_dsl_utils.py (#7852) | 2025-09-19 12:05:11 +08:00 |
| device_mesh.py | [TRTLLM-9465][fix] Swap TP-CP grouping order (#10350) | 2026-01-05 20:08:03 +08:00 |
| expert_statistic.py | [TRTLLM-8831][feat] Enable early exit with overlap scheduler (#8587) | 2025-11-17 18:07:13 +01:00 |
| flashinfer_utils.py | [TRTLLM-9578][feat] make PDL enabled by default (#9695) | 2025-12-25 07:15:24 -05:00 |
| hostfunc.py | [TRTLLM-7028][feat] Enable guided decoding with speculative decoding (part 2: one-model engine) (#6948) | 2025-09-03 15:16:11 -07:00 |
| llm.py | [TRTLLM-5208][BREAKING CHANGE] chore: make pytorch LLM the default (#5312) | 2025-06-20 03:01:10 +08:00 |
| memory_buffer_utils.py | [https://nvbugs/5811697][fix] Fix buffer reuse. (#10716) | 2026-01-25 18:12:21 +08:00 |
| metadata.py | [None][feat] Use Separate QKV Input Layout for Context MLA (#6538) | 2025-08-19 22:04:48 +08:00 |
| model_config.py | [TRTLLM-9457][feat] Add cute dsl fp8 gemm for Blackwell (#10130) | 2026-02-06 09:49:30 +08:00 |
| utils.py | [None][feat] Fully non-blocking pipeline parallelism executor loop. (#10349) | 2026-02-10 15:43:28 +08:00 |
| virtual_memory.py | [TRTLLM-9736][feat] AsyncLLM and verl integ (#9353) | 2025-12-11 09:33:25 -08:00 |
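As the llm.py entry above notes, the PyTorch LLM in this tree became the default backend (#5312), so this directory is what ordinarily serves requests made through the public `tensorrt_llm.LLM` API. A minimal sketch of exercising it, following the documented quickstart interface; the model name is illustrative and any Hugging Face ID or local checkpoint path should work:

```python
# Minimal sketch, assuming the public tensorrt_llm LLM API from the
# project quickstart; the model name below is only an example.
from tensorrt_llm import LLM, SamplingParams

# With the PyTorch backend as the default (#5312), this constructs the
# engine implemented under tensorrt_llm/_torch.
llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")
params = SamplingParams(max_tokens=32, temperature=0.8)

for output in llm.generate(["Hello, my name is"], params):
    print(output.outputs[0].text)
```

Roughly, a call like this is scheduled and executed by pyexecutor/, with attention kernels selected from attention_backend/ and the model definitions drawn from models/.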