TensorRT-LLM/tensorrt_llm/_torch/auto_deploy
Latest commit: 23920223ab by h-guo18
[#4585][feat] Replace unified attention before export (#8303)
Signed-off-by: h-guo18 <67671475+h-guo18@users.noreply.github.com>
2025-10-23 18:02:04 -04:00
compile [None][feat] AutoDeploy: compiler backends based on nn.Module (#8126) 2025-10-03 12:14:21 -04:00
config [None][feat] Enable rms norm fusion for Nemotron MOE (#8563) 2025-10-23 00:09:42 -04:00
custom_ops [None][feat] Enable rms norm fusion for Nemotron MOE (#8563) 2025-10-23 00:09:42 -04:00
distributed [#8272][feat] Enable chunked prefill for SSMs in AutoDeploy (#8477) 2025-10-20 15:31:52 -07:00
export [#4585][feat] Replace unified attention before export (#8303) 2025-10-23 18:02:04 -04:00
models [#4585][feat] Replace unified attention before export (#8303) 2025-10-23 18:02:04 -04:00
shim [TRTLLM-8483][chore] Refine scheduler_config and peft_cache_config in create_py_executor (#8451) 2025-10-22 08:33:48 +08:00
transform [#4585][feat] Replace unified attention before export (#8303) 2025-10-23 18:02:04 -04:00
utils [None][chore] AutoDeploy: cleanup old inference optimizer configs (#8039) 2025-10-17 15:55:57 -04:00
__init__.py [AutoDeploy] merge feat/ad-2025-07-07 (#6196) 2025-07-23 05:11:04 +08:00
llm_args.py [TRTLLM-8682][chore] Remove auto_parallel module (#8329) 2025-10-22 20:53:08 -04:00
llm.py [#4593][feat] AutoDeploy: Linear Attention Support (SSM + causal_conv + Bamba + Nemotron-H) (#8068) 2025-09-29 22:41:06 -04:00