TensorRT-LLM/tensorrt_llm/_torch
Latest commit b77f4ffe54 by brb-nv: [TRTLLM-5971][feat] Integrate helix parallelism (#9342), 2025-11-29 15:17:30 -08:00
Signed-off-by: Balaram Buddharaju <169953907+brb-nv@users.noreply.github.com>
Name | Last commit message | Last commit date
attention_backend | [TRTLLM-5971][feat] Integrate helix parallelism (#9342) | 2025-11-29 15:17:30 -08:00
auto_deploy | [#8948][feat] Support custom sharding config (#9143) | 2025-11-29 05:28:05 +08:00
compilation | [https://nvbugs/5546510][fix] Move torch.cuda.Stream out of torch com… (#8494) | 2025-11-20 12:43:13 -05:00
configs | [TRTLLM-8535][feat] Support DeepSeek V3.2 with FP8 + BF16 KV cache/NVFP4 + BF16 KV cache (#8405) | 2025-10-24 13:40:41 -04:00
custom_ops | [None][fix] Fix on-disk cache and revise logger/statistics for AutoTuner. (#9211) | 2025-11-28 13:32:21 +08:00
cute_dsl_kernels | [None][chore] Upgrade CuteDSL to 4.3.0 (#9444) | 2025-11-26 14:53:09 +08:00
debug | Add debug hook to support dump tensor data and add new debug functions easily (#5182) | 2025-06-24 17:45:28 +08:00
distributed | [TRTLLM-5971][feat] Integrate helix parallelism (#9342) | 2025-11-29 15:17:30 -08:00
models | [TRTLLM-5971][feat] Integrate helix parallelism (#9342) | 2025-11-29 15:17:30 -08:00
modules | [TRTLLM-5971][feat] Integrate helix parallelism (#9342) | 2025-11-29 15:17:30 -08:00
peft | [TRTLLM-7346][fix] Improve performance of PyTorchModelEngine._get_lora_params_from_requests (#7033) | 2025-08-25 10:37:40 +03:00
pyexecutor | [TRTLLM-5971][feat] Integrate helix parallelism (#9342) | 2025-11-29 15:17:30 -08:00
shared_tensor | [1/N][TRTLLM-5195][feat] Share PyTorch tensor between processes (#5396) | 2025-07-10 05:12:53 +09:00
speculative | [None][feat] Add environment variable to force spec-dec number of accepted tokens (#9371) | 2025-11-26 07:22:16 -08:00
__init__.py | [TRTLLM-9212][chore] move MoeLoadBalancerConfig to llm_args.py (#9002) | 2025-11-13 10:47:35 +08:00
autotuner.py | [None][fix] Fix on-disk cache and revise logger/statistics for AutoTuner. (#9211) | 2025-11-28 13:32:21 +08:00
cublaslt_utils.py | [https://nvbugs/5451205][feat] Add cuBLASLt NVFP4 GEMM backend support (#7943) | 2025-10-23 15:55:10 +08:00
cute_dsl_utils.py | [None][chore] polish error message in cute_dsl_utils.py (#7852) | 2025-09-19 12:05:11 +08:00
device_mesh.py | [TRTLLM-8682][chore] Remove auto_parallel module (#8329) | 2025-10-22 20:53:08 -04:00
expert_statistic.py | [TRTLLM-8831][feat] Enable early exit with overlap scheduler (#8587) | 2025-11-17 18:07:13 +01:00
flashinfer_utils.py | [None][ci] move unittests to sub-directories (#6635) | 2025-08-20 05:42:22 -04:00
hostfunc.py | [TRTLLM-7028][feat] Enable guided decoding with speculative decoding (part 2: one-model engine) (#6948) | 2025-09-03 15:16:11 -07:00
llm.py | [TRTLLM-5208][BREAKING CHANGE] chore: make pytorch LLM the default (#5312) | 2025-06-20 03:01:10 +08:00
memory_buffer_utils.py | [https://nvbugs/5629833][fix] Don't fill tensors with 0 (#9296) | 2025-11-21 20:50:05 +08:00
metadata.py | [None][feat] Use Separate QKV Input Layout for Context MLA (#6538) | 2025-08-19 22:04:48 +08:00
model_config.py | [None][fix] Update the attention layers counting for Qwen3-next. (#9072) | 2025-11-16 11:52:56 -08:00
utils.py | [https://nvbugs/5667687][fix] Set correct lm_head_tp_size_upper_bound (#9300) | 2025-11-20 00:41:00 -08:00
virtual_memory.py | [TRTLLM-8511][feat] Add update_weights and sleep_wakeup support for rl integration (#8302) | 2025-11-04 10:19:24 -08:00