TensorRT-LLM/tensorrt_llm/_torch/auto_deploy
Latest commit 21dbd163a7 by Fridah-nv, 2025-05-14 10:40:12 +08:00
[TRTLLM-5188] fix: [AutoDeploy] unwaive AD build test (#4273)

* unwaive the small build test
* unwaive multi-GPU/integration tests
* fix for torch.compile + flashinfer attention

Signed-off-by: Ubuntu <201670829+Fridah-nv@users.noreply.github.com>
| Name | Last commit | Date |
|------|-------------|------|
| compile | feat: [AutoDeploy] generalizing cudagraph to multiple dynamic inputs (#3589) | 2025-04-23 03:38:51 +08:00 |
| custom_ops | [TRTLLM-5188] fix: [AutoDeploy] unwaive AD build test (#4273) | 2025-05-14 10:40:12 +08:00 |
| distributed | [AutoDeploy] Make all ranks agree on kv-cache size (#4007) | 2025-05-02 04:07:28 +08:00 |
| models | feat: [AutoDeploy] unfusing attention for native support (#3668) | 2025-05-02 09:06:49 +08:00 |
| shim | [AutoDeploy][perf] Further optimize flashinfer backend in AutoDeploy (#4024) | 2025-05-06 10:46:36 +08:00 |
| transformations | feat: [AutoDeploy] unfusing attention for native support (#3668) | 2025-05-02 09:06:49 +08:00 |
| utils | feat: [AutoDeploy] unfusing attention for native support (#3668) | 2025-05-02 09:06:49 +08:00 |
| __init__.py | Update TensorRT-LLM (#2820) | 2025-02-25 21:21:49 +08:00 |
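The latest commit in the `compile` subdirectory refers to generalizing CUDA-graph capture to multiple dynamic inputs. For orientation only, the sketch below shows the generic PyTorch pattern such work builds on: capture a callable once against static placeholder buffers, then copy fresh data into those buffers before each replay. The helper name `make_graphed_callable` and the example shapes are illustrative assumptions, not AutoDeploy's actual implementation.

```python
# Minimal sketch (not the AutoDeploy code): CUDA-graph replay with multiple
# inputs via static placeholder buffers that are refilled before each replay.
import torch


def make_graphed_callable(fn, example_inputs):
    """Capture `fn` into a CUDA graph and return a replay wrapper.

    `example_inputs` is a list of CUDA tensors whose shapes/dtypes fix the
    captured graph; every later call must pass tensors of the same shapes.
    Intended for inference-style callables.
    """
    static_inputs = [t.clone() for t in example_inputs]

    # Warm up on a side stream so lazy CUDA state is initialized before capture.
    side_stream = torch.cuda.Stream()
    side_stream.wait_stream(torch.cuda.current_stream())
    with torch.cuda.stream(side_stream):
        for _ in range(3):
            fn(*static_inputs)
    torch.cuda.current_stream().wait_stream(side_stream)

    # Record the kernels once; tensors created here live in the graph's pool.
    graph = torch.cuda.CUDAGraph()
    with torch.cuda.graph(graph):
        static_output = fn(*static_inputs)

    def replay(*inputs):
        for dst, src in zip(static_inputs, inputs):
            dst.copy_(src)       # feed new data into the captured input buffers
        graph.replay()           # re-run the recorded kernels
        return static_output     # output buffer is overwritten in place

    return replay


if __name__ == "__main__":
    model = torch.nn.Linear(64, 64).cuda().eval()
    x = torch.randn(8, 64, device="cuda")
    graphed = make_graphed_callable(lambda t: model(t), [x])
    y = graphed(torch.randn(8, 64, device="cuda"))
    print(y.shape)
```

The key design point of this pattern is that the graph is shape-specialized: supporting genuinely dynamic inputs requires either padding to the captured shapes or capturing one graph per shape bucket, which is the kind of bookkeeping the commit above alludes to.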