TensorRT-LLM/tensorrt_llm/_torch/auto_deploy
Latest commit: e051a05e6c by Eran Geva, 2025-10-28 13:21:43 +02:00
[#8694][fix] fix AutoDeploy cuda memory access failure in nvidia/NVIDIA-Nemotron-Nano-31B-A3-v3 (#8696)
Signed-off-by: Eran Geva <19514940+MrGeva@users.noreply.github.com>
Name | Last commit message | Last commit date
compile/ | [None][feat] AutoDeploy: compiler backends based on nn.Module (#8126) | 2025-10-03 12:14:21 -04:00
config/ | [None][feat] AutoDeploy: Add FP8 MOE for Nemotron (#8599) | 2025-10-25 15:26:45 -04:00
custom_ops/ | [#8694][fix] fix AutoDeploy cuda memory access failure in nvidia/NVIDIA-Nemotron-Nano-31B-A3-v3 (#8696) | 2025-10-28 13:21:43 +02:00
distributed/ | [#8272][feat] Enable chunked prefill for SSMs in AutoDeploy (#8477) | 2025-10-20 15:31:52 -07:00
export/ | [#4585][feat] Replace unified attention before export (#8303) | 2025-10-23 18:02:04 -04:00
models/ | [#8245][feat] Autodeploy: Guided Decoding Support (#8551) | 2025-10-28 09:29:57 +08:00
shim/ | [#8245][feat] Autodeploy: Guided Decoding Support (#8551) | 2025-10-28 09:29:57 +08:00
transform/ | [None][feat] AutoDeploy: Add FP8 MOE for Nemotron (#8599) | 2025-10-25 15:26:45 -04:00
utils/ | [None][chore] AutoDeploy: cleanup old inference optimizer configs (#8039) | 2025-10-17 15:55:57 -04:00
__init__.py | [AutoDeploy] merge feat/ad-2025-07-07 (#6196) | 2025-07-23 05:11:04 +08:00
llm_args.py | [TRTLLM-8682][chore] Remove auto_parallel module (#8329) | 2025-10-22 20:53:08 -04:00
llm.py | [#4593][feat] AutoDeploy: Linear Attention Support (SSM + causal_conv + Bamba + Nemotron-H) (#8068) | 2025-09-29 22:41:06 -04:00
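
For orientation, __init__.py and llm.py expose AutoDeploy's high-level LLM entry point. Below is a minimal usage sketch, assuming the LLM class exported from this package follows the standard tensorrt_llm LLM API (generate() plus SamplingParams); the model id and sampling settings are placeholders, not values taken from this listing.

```python
# Minimal AutoDeploy usage sketch.
# Assumptions: the LLM class re-exported by this package mirrors the standard
# tensorrt_llm LLM API; the model id below is a placeholder for any Hugging
# Face checkpoint supported by AutoDeploy.
from tensorrt_llm import SamplingParams
from tensorrt_llm._torch.auto_deploy import LLM


def main():
    # Build the model directly from a Hugging Face checkpoint; AutoDeploy
    # handles export, graph transforms, and backend compilation internally.
    llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")  # placeholder model id

    sampling_params = SamplingParams(max_tokens=32, temperature=0.8)
    outputs = llm.generate(["The capital of France is"], sampling_params)

    for output in outputs:
        print(output.outputs[0].text)


if __name__ == "__main__":
    main()
```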