TensorRT-LLM/tensorrt_llm/_torch/auto_deploy
Lucas Liebenwein be916b19e0
feat: [AutoDeploy] unfusing attention for native support (#3668)
* [AutoDeploy] unfused streamlined attention + caching
* improved unit testing
* reviewer feedback
* some updates to attn_mask handling
* updated manual benchmarking and cudagraph capture

---------

Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>
2025-05-02 09:06:49 +08:00
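The headline change, unfusing attention, keeps attention in the exported graph as plain q/k/v tensor math rather than a fused kernel, presumably so the graph transformations in this package can match the pattern and attach their own caching or fused backend. A minimal sketch of what such an unfused op could look like (hypothetical code, not the actual tensorrt_llm implementation; the function name and shapes are assumptions):

```python
# Hypothetical sketch of an "unfused" attention op: plain scaled dot-product
# attention written out as matmuls/softmax so graph passes can recognize it.
# Not the TensorRT-LLM AutoDeploy code; names and shapes are assumptions.
import math
import torch

def unfused_sdpa(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    """Causal scaled dot-product attention over [batch, heads, seq, head_dim]."""
    scale = 1.0 / math.sqrt(q.shape[-1])
    scores = q @ k.transpose(-2, -1) * scale  # [b, h, s_q, s_k]
    # Causal mask: each query position attends only to itself and earlier keys.
    s_q, s_k = scores.shape[-2:]
    causal = torch.ones(s_q, s_k, dtype=torch.bool, device=q.device).tril()
    scores = scores.masked_fill(~causal, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v

q = k = v = torch.randn(1, 8, 16, 64)
out = unfused_sdpa(q, k, v)
print(out.shape)  # torch.Size([1, 8, 16, 64])
```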
compile          feat: [AutoDeploy] generalizing cudagraph to multiple dynamic inputs (#3589)  2025-04-23 03:38:51 +08:00
custom_ops       feat: [AutoDeploy] unfusing attention for native support (#3668)              2025-05-02 09:06:49 +08:00
distributed      [AutoDeploy] Make all ranks agree on kv-cache size (#4007)                    2025-05-02 04:07:28 +08:00
models           feat: [AutoDeploy] unfusing attention for native support (#3668)              2025-05-02 09:06:49 +08:00
shim             feat: [AutoDeploy] unfusing attention for native support (#3668)              2025-05-02 09:06:49 +08:00
transformations  feat: [AutoDeploy] unfusing attention for native support (#3668)              2025-05-02 09:06:49 +08:00
utils            feat: [AutoDeploy] unfusing attention for native support (#3668)              2025-05-02 09:06:49 +08:00
__init__.py      Update TensorRT-LLM (#2820)                                                   2025-02-25 21:21:49 +08:00