TensorRT-LLM/tensorrt_llm/_torch/auto_deploy
Latest commit 88ea2c4ee9 by Jonas Yang CN, 2025-10-04 08:12:24 +08:00
[TRTLLM-7349][feat] Adding new orchestrator type -- ray (#7520)
Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com>
Co-authored-by: Yuan Tong <13075180+tongyuantongyu@users.noreply.github.com>
Co-authored-by: Erin Ho <14718778+hchings@users.noreply.github.com>
compile [None][feat] AutoDeploy: compiler backends based on nn.Module (#8126) 2025-10-03 12:14:21 -04:00
config [None][feat] AutoDeploy: graph/module inputs with kwargs instead of args (#8137) 2025-10-03 16:53:42 -07:00
custom_ops [None][feat] AutoDeploy: graph/module inputs with kwargs instead of args (#8137) 2025-10-03 16:53:42 -07:00
distributed [TRTLLM-7349][feat] Adding new orchestrator type -- ray (#7520) 2025-10-04 08:12:24 +08:00
export [None][feat] AutoDeploy: graph/module inputs with kwargs instead of args (#8137) 2025-10-03 16:53:42 -07:00
models [None][feat] AutoDeploy: graph/module inputs with kwargs instead of args (#8137) 2025-10-03 16:53:42 -07:00
shim [None][feat] AutoDeploy: graph/module inputs with kwargs instead of args (#8137) 2025-10-03 16:53:42 -07:00
transform [None][feat] AutoDeploy: graph/module inputs with kwargs instead of args (#8137) 2025-10-03 16:53:42 -07:00
transformations [None][feat] AutoDeploy: graph/module inputs with kwargs instead of args (#8137) 2025-10-03 16:53:42 -07:00
utils [None][feat] AutoDeploy: compiler backends based on nn.Module (#8126) 2025-10-03 12:14:21 -04:00
__init__.py [AutoDeploy] merge feat/ad-2025-07-07 (#6196) 2025-07-23 05:11:04 +08:00
llm_args.py [None][feat] AutoDeploy: dive deeper into token generation bugs + enable_block_reuse (#8108) 2025-10-03 04:57:26 -07:00
llm.py [#4593][feat] AutoDeploy: Linear Attention Support (SSM + causal_conv + Bamba + Nemotron-H) (#8068) 2025-09-29 22:41:06 -04:00
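The llm.py and llm_args.py entries above are the package's user-facing entry points. A minimal usage sketch is shown below; the import path, the model name, and the generate()/SamplingParams pattern are assumptions carried over from the general TensorRT-LLM LLM API, not details confirmed by this listing.

```python
# Minimal sketch, assuming the AutoDeploy LLM class defined in llm.py is
# re-exported at the package level and mirrors the standard tensorrt_llm.LLM
# interface. The model id and sampling settings are placeholders.
from tensorrt_llm import SamplingParams
from tensorrt_llm._torch.auto_deploy import LLM  # assumed re-export of llm.py's LLM

llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")  # hypothetical model id

prompts = ["The capital of France is"]
sampling = SamplingParams(max_tokens=32, temperature=0.0)

# generate() is assumed to return one result per prompt with an .outputs list,
# matching the LLM API used elsewhere in TensorRT-LLM.
for result in llm.generate(prompts, sampling):
    print(result.outputs[0].text)
```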