Wanli Jiang | e080294725 | 2025-09-15 17:19:44 +08:00
  [TRTLLM-7918][feat] Revert "Support kvcache reuse for phi4mm (#7563)" (#7722)
  Signed-off-by: Wanli Jiang <35160485+Wanli-Jiang@users.noreply.github.com>

Wanli Jiang | fc9f4c9295 | 2025-09-15 15:47:00 +08:00
  [TRTLLM-7918][feat] Support kvcache reuse for phi4mm (#7563)
  Signed-off-by: Wanli Jiang <35160485+Wanli-Jiang@users.noreply.github.com>

Chang Liu | 47e37755a3 | 2025-09-14 20:10:10 -07:00
  [TRTLLM-6903][feat] Support chunked prefill for multimodal models (#6843)
  Signed-off-by: Chang Liu (Enterprise Products) <9713593+chang-l@users.noreply.github.com>

Chang Liu | faa2f46554 | 2025-09-09 14:51:36 -04:00
  [TRTLLM-5059][feat] Enable KV-cache reuse and add E2E tests for llava-next (#7349)
  Signed-off-by: Chang Liu (Enterprise Products) <9713593+chang-l@users.noreply.github.com>

dongfengy | 367ff88a5e | 2025-08-28 13:22:19 -07:00
  [None][feat] Refactor llama4 for multimodal encoder IFB (#6844)
  Signed-off-by: Dongfeng Yu <dongfengy@nvidia.com>

Chang Liu | 9687bb42b5 | 2025-08-08 02:20:29 -04:00
  [None][doc] Add doc for multimodal feature support matrix (#6619)
  Signed-off-by: Chang Liu <9713593+chang-l@users.noreply.github.com>