Directory: TensorRT-LLM/tensorrt_llm/_torch/auto_deploy

Latest commit: 6bf4e59267 — [#8763][feature] AutoDeploy: configurable dtype for caching (#8812)
Author: Lucas Liebenwein
Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>
Date: 2025-11-10 22:17:14 -08:00
Name           Last commit message                                                              Last commit date
compile
config
custom_ops     [#8763][feature] AutoDeploy: configurable dtype for caching (#8812)              2025-11-10 22:17:14 -08:00
distributed    [None][fix] Switch AD AllReduce strategy to NCCL (#8979)                         2025-11-07 06:49:44 +02:00
export
models
shim           [TRTLLM-9065][chore] remove PyTorchConfig completely (#8856)                     2025-11-06 22:37:03 -08:00
transform      [#8763][feature] AutoDeploy: configurable dtype for caching (#8812)              2025-11-10 22:17:14 -08:00
utils          [https://nvbugs/5625972][fix] Add context manager to fix FakeTensorProp (#9047)  2025-11-10 16:25:58 -08:00
__init__.py
llm_args.py    [TRTLLM-9065][chore] remove PyTorchConfig completely (#8856)                     2025-11-06 22:37:03 -08:00
llm.py         [TRTLLM-9065][chore] remove PyTorchConfig completely (#8856)                     2025-11-06 22:37:03 -08:00