TensorRT-LLM/tensorrt_llm/_torch/auto_deploy
Suyog Gupta f94af0fb86
[AutoDeploy] Make all ranks agree on kv-cache size (#4007)
* make all ranks agree on kv-cache size
* lint
* minor cleanups
* use all_gather_object wrapper

Signed-off-by: Suyog Gupta <suyogg@nvidia.com>
2025-05-02 04:07:28 +08:00
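The commit above ("make all ranks agree on kv-cache size" via an all_gather_object wrapper) describes a common pattern in distributed inference: each rank computes a kv-cache size from its own free GPU memory, the values are exchanged across all ranks, and every rank adopts the minimum so the allocation is identical everywhere. A minimal sketch of that pattern using `torch.distributed` is below; the function name `agree_on_kv_cache_size` and its signature are illustrative, not the actual TensorRT-LLM API.

```python
# Hypothetical sketch: reconcile a per-rank kv-cache size across all ranks.
# Each rank contributes its locally computed size; all ranks then agree on
# the minimum, so no rank allocates a cache larger than the weakest rank can hold.
import torch.distributed as dist


def agree_on_kv_cache_size(local_size: int) -> int:
    # Single-process fallback: nothing to reconcile.
    if not (dist.is_available() and dist.is_initialized()):
        return local_size
    # Gather every rank's proposed size into a list (one slot per rank).
    gathered = [None] * dist.get_world_size()
    dist.all_gather_object(gathered, local_size)
    # The minimum is the largest size every rank can satisfy.
    return min(gathered)
```

Using `all_gather_object` (rather than a tensor collective) keeps the wrapper simple for a single Python int, at the cost of pickling overhead, which is negligible for a one-off handshake at engine build time.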
compile feat: [AutoDeploy] generalizing cudagraph to multiple dynamic inputs (#3589) 2025-04-23 03:38:51 +08:00
custom_ops feat: [AutoDeploy] Enhance RoPE support (#3115) 2025-04-11 23:51:24 +08:00
distributed [AutoDeploy] Make all ranks agree on kv-cache size (#4007) 2025-05-02 04:07:28 +08:00
models fix: [AutoDeploy] update hf loading for e_score_correction_bias (#3847) 2025-04-26 02:03:47 +08:00
shim chore: move all distributed related codes into _torch.distributed directory (#3511) 2025-04-15 08:39:17 +08:00
transformations [AutoDeploy] Make all ranks agree on kv-cache size (#4007) 2025-05-02 04:07:28 +08:00
utils feat: [AutoDeploy] Enhance RoPE support (#3115) 2025-04-11 23:51:24 +08:00
__init__.py Update TensorRT-LLM (#2820) 2025-02-25 21:21:49 +08:00