TensorRT-LLM/tensorrt_llm/_torch/auto_deploy
Latest commit: ef3fdc8051 by Tracin, "feat: Add w4a8_mxfp4_fp8 quantization recipe. (#4867)", 2025-06-16 11:30:57 +08:00
| Name | Last commit | Last commit date |
| --- | --- | --- |
| compile | [AutoDeploy]feat: Add an AutoDeploy compile backend that only calls torch.compile (#4240) (see sketch below) | 2025-05-16 08:38:15 +08:00 |
| custom_ops | feat: Add w4a8_mxfp4_fp8 quantization recipe. (#4867) | 2025-06-16 11:30:57 +08:00 |
| distributed | Use backend to replace macro to control enablement of MNNVL all reduce (#4635) | 2025-06-12 11:22:49 +08:00 |
| models | chore: Change the type annotations of input_ids and position_ids to int32. (#4632) | 2025-06-07 16:10:47 +08:00 |
| shim | [AutoDeploy] _AutoDeployLlmArgs as primary config object (#4891) | 2025-06-05 17:20:55 +08:00 |
| transformations | fix: build_config in TorchLlmArgs and avoid arbitrary args (#4972) | 2025-06-15 17:51:56 -07:00 |
| utils | [nvbugs/5331013] fix AutoDeploy for PyTorch 25.05 dependency upgrade (#5106) | 2025-06-12 13:07:27 +08:00 |
| __init__.py | Update TensorRT-LLM (#2820) | 2025-02-25 21:21:49 +08:00 |
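
The latest commit on the `compile` subdirectory describes a backend that does nothing more than call `torch.compile` on the model it is given. Below is a minimal sketch of such a pass-through backend; the function name, signature, and usage are illustrative assumptions, not the actual AutoDeploy API.

```python
# Minimal sketch of a "torch.compile-only" backend: it forwards the model (and
# any compile options) straight to torch.compile and performs no other work.
# `torch_compile_backend` is a hypothetical name, not the AutoDeploy API.
import torch
import torch.nn as nn


def torch_compile_backend(model: nn.Module, **compile_kwargs) -> nn.Module:
    """Return the model wrapped by torch.compile, with no other transformations."""
    return torch.compile(model, **compile_kwargs)


# Example usage on a toy model.
if __name__ == "__main__":
    toy = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
    compiled = torch_compile_backend(toy, mode="reduce-overhead")
    print(compiled(torch.randn(2, 16)).shape)  # torch.Size([2, 4])
```

This only illustrates the idea of a thin, pass-through compile backend; the actual implementation lives under `compile/` and is wired into AutoDeploy's own configuration.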