TensorRT-LLM / tensorrt_llm / _torch / auto_deploy

Latest commit: f28cd3056e by Wei-Ming Chen
feat: AutoDeploy fp8 quantization support for bmm (#3849)
Signed-off-by: Wei-Ming Chen <17592131+meenchen@users.noreply.github.com>
2025-06-30 12:36:34 -04:00
Name              Last commit                                                                                  Date
compile           [AutoDeploy] feat: Add an AutoDeploy compile backend that only calls torch.compile (#4240)  2025-05-16 08:38:15 +08:00
custom_ops        feat: AutoDeploy fp8 quantization support for bmm (#3849)                                   2025-06-30 12:36:34 -04:00
distributed       Use backend to replace macro to control enablement of MNNVL all reduce (#4635)              2025-06-12 11:22:49 +08:00
models            [AutoDeploy] merge feat/ad-2025-06-13 (#5556)                                               2025-06-29 03:52:14 +08:00
shim              [AutoDeploy] merge feat/ad-2025-06-13 (#5556)                                               2025-06-29 03:52:14 +08:00
transformations   feat: AutoDeploy fp8 quantization support for bmm (#3849)                                   2025-06-30 12:36:34 -04:00
utils             feat: AutoDeploy fp8 quantization support for bmm (#3849)                                   2025-06-30 12:36:34 -04:00
__init__.py       [AutoDeploy] merge feat/ad-2025-06-13 (#5556)                                               2025-06-29 03:52:14 +08:00
llm_args.py       [AutoDeploy] merge feat/ad-2025-06-13 (#5556)                                               2025-06-29 03:52:14 +08:00
llm.py            [AutoDeploy] merge feat/ad-2025-06-13 (#5556)                                               2025-06-29 03:52:14 +08:00