TensorRT-LLM/tensorrt_llm/_torch
Yukun He cbca6505ff
[nvbugs/5268808][fix] Fix the list-out-of-range access issue of AllReduce workspace on multi-node. (#4159)
This issue was found with tp=ep=8 on a multi-node machine and is caused by inconsistent PP sizes.
* Rework the workspace allocation implementation to avoid list-out-of-range access (a hedged sketch follows after this commit summary).
* Disable min_latency_mode in multi-node scenarios to avoid illegal memory access.

Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
2025-05-13 17:17:25 +08:00
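To make the two bullets above concrete, here is a minimal, hypothetical Python sketch of the shape of such a fix. The names (`get_allreduce_workspace`, `use_min_latency_mode`) and the dict-based cache are illustrative assumptions, not TensorRT-LLM's actual implementation: the idea is to key workspaces by the communication group rather than index a flat per-rank list, and to force min_latency_mode off whenever a job spans more than one node.

```python
# Hypothetical sketch only; names and structure are assumptions,
# not the actual TensorRT-LLM AllReduce workspace code.
from typing import Dict, Tuple

import torch

# Cache one workspace per TP group instead of using a flat list indexed
# by rank. A flat list sized for one PP stage can be indexed out of range
# when PP sizes differ across nodes, the failure mode described above.
_workspaces: Dict[Tuple[int, ...], torch.Tensor] = {}


def get_allreduce_workspace(tp_group: Tuple[int, ...],
                            size_bytes: int) -> torch.Tensor:
    """Return (and grow if needed) the workspace for this TP group."""
    ws = _workspaces.get(tp_group)
    if ws is None or ws.numel() < size_bytes:
        ws = torch.empty(size_bytes, dtype=torch.uint8, device="cuda")
        _workspaces[tp_group] = ws
    return ws


def use_min_latency_mode(requested: bool, num_nodes: int) -> bool:
    """Honor the request only on a single node; per the commit message,
    min-latency kernels triggered illegal memory access on multi-node."""
    return requested and num_nodes == 1
```

Keying the cache by the group itself removes any assumption that every node runs the same PP size, which is exactly the inconsistency the commit message identifies.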
attention_backend fix: Reset planned states to avoid memory leak in TrtllmAttentionWrapper (#4227) 2025-05-12 23:25:54 +08:00
auto_deploy [TRTLLM-5188] fix: [AutoDeploy] update output shape of prepare_fused_mha_metadata_fake (#4199) 2025-05-12 11:11:40 -04:00
compilation [TRTLLM-3105][feat] Add Piecewise CUDA Graph Support (#3804) 2025-05-09 11:04:01 +08:00
custom_ops feat: Add heuristic for GroupRMSNorm kernel selection. (#4047) 2025-05-13 08:52:53 +08:00
distributed [nvbugs/5268808][fix] Fix the list-out-of-range access issue of AllReduce workspace on multi-node. (#4159) 2025-05-13 17:17:25 +08:00
models [nvbugs/5268808][fix] Fix the list-out-of-range access issue of AllReduce workspace on multi-node. (#4159) 2025-05-13 17:17:25 +08:00
modules feat: Add heuristic for GroupRMSNorm kernel selection. (#4047) 2025-05-13 08:52:53 +08:00
peft feat: support multi lora adapters and TP (#3885) 2025-05-08 23:45:45 +08:00
pyexecutor fix: [https://nvbugspro.nvidia.com/bug/5238626] illegal memory access when running llama 4 with cuda graph enabled (#4101) 2025-05-13 14:58:54 +08:00
speculative [fix] Fix relaxed acceptance to support enabling it in context phase (#4126) 2025-05-09 14:11:14 +08:00
__init__.py Update TensorRT-LLM (#2755) 2025-02-11 03:01:00 +00:00
autotuner.py feat: Apply the new torch-flow compatible AutoTuner to both Fused MoE and NVFP4 Linear operators. (#3151) 2025-04-08 14:28:36 +08:00
llm.py test: [TRTLLM-4334] Create 1.0 criteria scope from API stability references (#3069) 2025-03-26 18:14:35 +08:00
metadata.py feat: no-cache attention in PyTorch workflow (#3085) 2025-04-05 01:54:32 +08:00
model_config.py feat: support multi lora adapters and TP (#3885) 2025-05-08 23:45:45 +08:00
pipeline_interface.py chore: bump version to 0.19.0 (#3598) (#3841) 2025-04-29 16:57:22 +08:00
utils.py [TRTLLM-3105][feat] Add Piecewise CUDA Graph Support (#3804) 2025-05-09 11:04:01 +08:00