TensorRT-LLM/tensorrt_llm/_torch/auto_deploy
Latest commit 990e674b71 by Eran Geva, 2025-11-07 06:49:44 +02:00:
[None][fix] Switch AD AllReduce strategy to NCCL (#8979)
Signed-off-by: Eran Geva <19514940+MrGeva@users.noreply.github.com>
Name         Last updated                  Last commit
compile      2025-10-28 10:52:43 -07:00    [https://nvbugs/5606166][fix] AutoDeploy: use tuples for cudagraph shape lookup (#8658)
config       2025-11-06 11:00:10 -08:00    [TRTLLM-8814][feat] AutoDeploy: Use TRTLLM kernels for FP8 linear (#8820)
custom_ops   2025-11-06 11:00:10 -08:00    [TRTLLM-8814][feat] AutoDeploy: Use TRTLLM kernels for FP8 linear (#8820)
distributed  2025-11-07 06:49:44 +02:00    [None][fix] Switch AD AllReduce strategy to NCCL (#8979)
export       2025-11-05 13:29:20 -08:00    [#8924][fix] Fix AutoDeploy pattern matcher for torch 2.9 (#8920)
models       2025-11-06 12:40:19 -08:00    [None][feat] AutoDeploy: Support Latent MOE for Nemotron (#8955)
shim         2025-11-01 05:26:06 -07:00    [TRTLLM-8836][chore] Create ModelEngine from LlmArgs (#8600)
transform    2025-11-06 11:00:10 -08:00    [TRTLLM-8814][feat] AutoDeploy: Use TRTLLM kernels for FP8 linear (#8820)
utils        2025-11-05 12:35:29 -08:00    [TRTLLM-8201][feat] Nemotron H MoE Sharding (#8744)
__init__.py  2025-07-23 05:11:04 +08:00    [AutoDeploy] merge feat/ad-2025-07-07 (#6196)
llm_args.py  2025-10-28 09:17:26 -07:00    [TRTLLM-8684][chore] Migrate BuildConfig to Pydantic, add a Python wrapper for KVCacheType enum (#8330)
llm.py       2025-11-03 22:29:21 -08:00    [None][fix] InputProcessor config naming convention fix (#8705)