TensorRT-LLM/cpp/tensorrt_llm
Latest commit 92daec1115 by Dom Brown (2025-08-20 10:11:25 -04:00):
[TRTLLM-7348] [feat] Enable Cross-Attention to use XQA kernels for Whisper (#7035)
Signed-off-by: Dom Brown <3886319+DomBrown@users.noreply.github.com>
| Path | Latest commit | Date |
| --- | --- | --- |
| batch_manager | [TRTLLM-6341][chore] Preliminary refactors on the kv cache manager before supporting swa kv cache reuse (#6767) | 2025-08-20 13:57:57 +08:00 |
| common | [TRTLLM-7348] [feat] Enable Cross-Attention to use XQA kernels for Whisper (#7035) | 2025-08-20 10:11:25 -04:00 |
| cutlass_extensions/include/cutlass_extensions | [None][fix] Fix W4A8 MoE kernel issue (#7072) | 2025-08-20 06:52:47 -04:00 |
| deep_ep | [None][feat] DeepEP LL combine FP4 (#6822) | 2025-08-13 04:20:21 -04:00 |
| deep_gemm | [https://nvbugs/5433581][fix] DeepGEMM installation on SBSA (#6588) | 2025-08-06 16:44:21 +08:00 |
| executor | [None][fix] acceptance rate calculation fix in benchmark_serving (#6746) | 2025-08-19 17:29:36 +08:00 |
| executor_worker | Update TensorRT-LLM (#2792) | 2025-02-18 21:27:39 +08:00 |
| kernels | [TRTLLM-7348] [feat] Enable Cross-Attention to use XQA kernels for Whisper (#7035) | 2025-08-20 10:11:25 -04:00 |
| layers | refactor: Remove enforced sorted order of batch slots (#3502) | 2025-07-14 17:23:02 +02:00 |
| nanobind | [None][feat] Use Separate QKV Input Layout for Context MLA (#6538) | 2025-08-19 22:04:48 +08:00 |
| plugins | [None][feat] Use Separate QKV Input Layout for Context MLA (#6538) | 2025-08-19 22:04:48 +08:00 |
| pybind | [None][feat] Use Separate QKV Input Layout for Context MLA (#6538) | 2025-08-19 22:04:48 +08:00 |
| runtime | [None][refactor] Simplify decoder state initialization (#6559) | 2025-08-12 21:44:41 +02:00 |
| testing | fix: Improve chunking test and skip empty kernel calls (#5710) | 2025-07-04 09:08:15 +02:00 |
| thop | [None][feat] Use Separate QKV Input Layout for Context MLA (#6538) | 2025-08-19 22:04:48 +08:00 |
| CMakeLists.txt | [https://nvbugs/5433581][fix] DeepGEMM installation on SBSA (#6588) | 2025-08-06 16:44:21 +08:00 |