TensorRT-LLM/tensorrt_llm/_torch
Latest commit: e7ae5e2824 by pcastonguay (2025-07-30 09:42:13 -04:00)
feat: Add support for disaggregation with pp with pytorch backend (#6369)
Signed-off-by: Patrice Castonguay <55748270+pcastonguay@users.noreply.github.com>
Signed-off-by: raayandhar <rdhar@nvidia.com>
Signed-off-by: Lizhi Zhou <1432185+reasonsolo@users.noreply.github.com>
Signed-off-by: pcastonguay <55748270+pcastonguay@users.noreply.github.com>
Co-authored-by: raayandhar <rdhar@nvidia.com>
Co-authored-by: Lizhi Zhou <1432185+reasonsolo@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
| Name | Last commit | Last updated |
| --- | --- | --- |
| `attention_backend` | [TRTLLM-6650][feat] Enhance beam search support with CUDA graph integration (#6217) | 2025-07-24 18:04:41 +02:00 |
| `auto_deploy` | [AutoDeploy] merge feat/ad-2025-07-07 (#6196) | 2025-07-23 05:11:04 +08:00 |
| `compilation` | [https://nvbugs/5340941] - fix: Correct custom ops used by Qwen3 Moe … (#6285) | 2025-07-25 14:49:45 +08:00 |
| `custom_ops` | [fix] Fix perf regression caused by MoE autotuner when using DeepEPLowLatency (#6288) | 2025-07-28 01:37:11 -04:00 |
| `debug` | Add debug hook to support dump tensor data and add new debug functions easily (#5182) | 2025-06-24 17:45:28 +08:00 |
| `distributed` | [fix][nvbugs/5399355] Fix Lamport buffer clear issue for MNNVL TwoShot Allreduce and add FP16 support. (#6237) | 2025-07-25 08:01:40 +08:00 |
| `models` | [nvbug 5380101][fix] Fix nemotronNAS loading for TP>1 (#6447) | 2025-07-30 07:22:32 -04:00 |
| `modules` | [fix] Fix wide EP when using DeepEP with online EPLB (#6429) | 2025-07-30 00:13:18 -04:00 |
| `peft` | feat: support multi lora adapters and TP (#3885) | 2025-05-08 23:45:45 +08:00 |
| `pyexecutor` | feat: Add support for disaggregation with pp with pytorch backend (#6369) | 2025-07-30 09:42:13 -04:00 |
| `shared_tensor` | [1/N][TRTLLM-5195][feat] Share PyTorch tensor between processes (#5396) | 2025-07-10 05:12:53 +09:00 |
| `speculative` | fix: remove cudaStreamSynchronize when using relaxed acceptance (#5262) | 2025-07-28 09:18:41 +08:00 |
| `__init__.py` | [nvbugs/5401156][fix] Avoid import all models when import trtllm._common (#6266) | 2025-07-27 23:29:21 -04:00 |
| `autotuner.py` | [TRTLLM-5770] feat: Integrate TRT-LLM Gen FP8 block scale MoE with Pytorch workflow kernel autotuner (#5207) | 2025-06-17 21:01:56 +08:00 |
| `expert_statistic.py` | Add MTP support for Online EPLB (#5213) | 2025-06-25 07:58:13 +08:00 |
| `llm.py` | [TRTLLM-5208][BREAKING CHANGE] chore: make pytorch LLM the default (#5312) | 2025-06-20 03:01:10 +08:00 |
| `metadata.py` | feat: no-cache attention in PyTorch workflow (#3085) | 2025-04-05 01:54:32 +08:00 |
| `model_config.py` | Add basic Nemo Ckpt Lora Loading in pytorch flow (#6019) | 2025-07-22 19:42:45 -07:00 |
| `utils.py` | [fix] Fix perf regression caused by MoE autotuner when using DeepEPLowLatency (#6288) | 2025-07-28 01:37:11 -04:00 |