TensorRT-LLMs/tensorrt_llm/_torch/auto_deploy

Latest commit: f6f6e1f25d by Chenghao Zhang, 2025-11-13 23:55:45 -08:00
[#9102][feat] AutoDeploy: Support fp8 kv cache (#9107)
Signed-off-by: Chenghao Zhang <211069071+nvchenghaoz@users.noreply.github.com>
compile      [None][feat] Autodeploy add triton configs and optimize mamba prefill (#9083)           2025-11-13 19:15:43 -08:00
config       [None][autodeploy] minor refactor to rmsnorm transforms (#8657)                         2025-11-13 13:13:58 -08:00
custom_ops   [#9102][feat] AutoDeploy: Support fp8 kv cache (#9107)                                  2025-11-13 23:55:45 -08:00
distributed  [None][fix] Switch AD AllReduce strategy to NCCL (#8979)                                2025-11-07 06:49:44 +02:00
export       [#8924][fix] Fix AutoDeploy pattern matcher for torch 2.9 (#8920)                       2025-11-05 13:29:20 -08:00
models       [None][feat] Autodeploy add triton configs and optimize mamba prefill (#9083)           2025-11-13 19:15:43 -08:00
shim         [None][feat] Autodeploy add triton configs and optimize mamba prefill (#9083)           2025-11-13 19:15:43 -08:00
transform    [#8732][feat] Update TRTLLM Cutlass MoE kernels with ReLU2 (#9011)                      2025-11-13 16:54:45 -08:00
utils        [None][autodeploy] fix weight extraction for graph based quantized checkpoints (#9109)  2025-11-13 13:14:24 -08:00
__init__.py  [AutoDeploy] merge feat/ad-2025-07-07 (#6196)                                           2025-07-23 05:11:04 +08:00
llm_args.py  [TRTLLM-9065][chore] remove PyTorchConfig completely (#8856)                            2025-11-06 22:37:03 -08:00
llm.py       [TRTLLM-9065][chore] remove PyTorchConfig completely (#8856)                            2025-11-06 22:37:03 -08:00