TensorRT-LLM/tensorrt_llm/_torch/auto_deploy
Latest commit: 7050b1ea49 by Suyog Gupta (2025-10-20 15:31:52 -07:00)
[#8272][feat] Enable chunked prefill for SSMs in AutoDeploy (#8477)
Signed-off-by: Suyog Gupta <41447211+suyoggupta@users.noreply.github.com>
compile [None][feat] AutoDeploy: compiler backends based on nn.Module (#8126) 2025-10-03 12:14:21 -04:00
config [None][chore] AutoDeploy: cleanup old inference optimizer configs (#8039) 2025-10-17 15:55:57 -04:00
custom_ops [#8272][feat] Enable chunked prefill for SSMs in AutoDeploy (#8477) 2025-10-20 15:31:52 -07:00
distributed [#8272][feat] Enable chunked prefill for SSMs in AutoDeploy (#8477) 2025-10-20 15:31:52 -07:00
export [None][chore] AutoDeploy: cleanup old inference optimizer configs (#8039) 2025-10-17 15:55:57 -04:00
models [#8272][feat] Enable chunked prefill for SSMs in AutoDeploy (#8477) 2025-10-20 15:31:52 -07:00
shim [TRTLLM-8436][feat] batched sampling and top-k logprobs improvements (#8398) 2025-10-20 11:15:41 +02:00
transform [#8272][feat] Enable chunked prefill for SSMs in AutoDeploy (#8477) 2025-10-20 15:31:52 -07:00
utils [None][chore] AutoDeploy: cleanup old inference optimizer configs (#8039) 2025-10-17 15:55:57 -04:00
__init__.py [AutoDeploy] merge feat/ad-2025-07-07 (#6196) 2025-07-23 05:11:04 +08:00
llm_args.py [#8461][feat] AutoDeploy: trtllm-serve bug fix + unit test (#8462) 2025-10-20 16:06:39 -04:00
llm.py [#4593][feat] AutoDeploy: Linear Attention Support (SSM + causal_conv + Bamba + Nemotron-H) (#8068) 2025-09-29 22:41:06 -04:00