TensorRT-LLM/tensorrt_llm/models
Netanel Haber 3c52ac098f
feat: allocate minimal blocks per window size (#3028)
* implement variable window attention by breaking the block manager into window block managers per window size

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>
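The split described above — one block manager per distinct attention window size — can be sketched roughly as follows. This is a hedged Python sketch with hypothetical names (`WindowBlockManager`, `add_sequence`); the real implementation lives in the C++ batch manager:

```python
class WindowBlockManager:
    """Manages KV-cache blocks for all layers sharing one attention window size."""

    def __init__(self, window_size: int, num_blocks: int):
        self.window_size = window_size
        self.free_blocks = list(range(num_blocks))
        self.allocated = {}  # request id -> block ids

    def add_sequence(self, request_id: int, input_length: int, tokens_per_block: int) -> None:
        # A window never holds more than window_size tokens, so a smaller
        # window needs fewer blocks for the same sequence - this is the
        # "minimal blocks per window size" idea.
        needed_tokens = min(input_length, self.window_size)
        needed_blocks = -(-needed_tokens // tokens_per_block)  # ceiling division
        self.allocated[request_id] = [self.free_blocks.pop() for _ in range(needed_blocks)]


class BlockManager:
    """Dispatches to one WindowBlockManager per distinct window size."""

    def __init__(self, window_sizes, blocks_per_window):
        self.managers = {
            w: WindowBlockManager(w, blocks_per_window[w]) for w in set(window_sizes)
        }

    def add_sequence(self, request_id: int, input_length: int, tokens_per_block: int) -> None:
        # input_length is deliberately not shadowed: every window-size
        # iteration must see the original function-scope value.
        for manager in self.managers.values():
            manager.add_sequence(request_id, input_length, tokens_per_block)
```

With a 512-token and a 4096-token window, a 1000-token prompt consumes blocks for only 512 tokens in the small-window heap but for all 1000 tokens in the large one.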

* revert isCyclic to be true if the min attention window is reached, not per window size

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* add explanatory comment to mCyclicThreshold

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* load correct gemma config

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* don't shadow inputLength in addSequence - it should remain the function-scope input length across window-size loop iterations

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* fix KVCacheManagerVariableWindowAttentionWithReuseTest for multiple window block managers

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* if TYPE_CHECKING

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>
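The `if TYPE_CHECKING` fix above refers to the standard `typing` idiom for imports that only annotations need: the guarded import runs for static type checkers but never at runtime, avoiding import cycles and startup cost. A minimal sketch (the `PretrainedConfig` import path is illustrative):

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Evaluated only by type checkers, never at runtime, so circular or
    # heavyweight imports stay off the import path.
    from tensorrt_llm.models.modeling_utils import PretrainedConfig


def describe(config: "PretrainedConfig") -> str:
    # The quoted annotation is not evaluated at runtime, so the guarded
    # import above is never actually needed to call this function.
    return type(config).__name__
```

At runtime `TYPE_CHECKING` is `False`, so the module works even when the annotated type's module is unavailable or would cycle.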

* set temp_attention_window_inputs to None explicitly

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* set temp_attention_window_inputs to None explicitly

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* pass dtype as well

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* test_gemma variable sliding window attention

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* allot a fraction of primary/secondary blocks to the different window-size heaps, in proportion to each window size's total contribution to the KV-cache size (i.e., summed over all its layers)

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>
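The allotment rule in the bullet above can be sketched as: weight each window size by `window_size * num_layers_using_it`, then split the block budget by those weights. A hypothetical helper, not the real C++:

```python
def allot_blocks(total_blocks: int, layers_per_window: dict) -> dict:
    """Split a block budget across window sizes, weighted by each window's
    total contribution to KV-cache size (window size times layer count)."""
    weights = {w: w * n for w, n in layers_per_window.items()}
    total_weight = sum(weights.values())
    shares = {w: (total_blocks * wt) // total_weight for w, wt in weights.items()}
    # Hand blocks lost to flooring to the largest window so the budget is exact.
    leftover = total_blocks - sum(shares.values())
    shares[max(shares)] += leftover
    return shares
```

For example, 10 layers with a 512 window and 2 layers with a 4096 window give weights 5120 and 8192, so the larger window receives the bigger share of blocks despite having fewer layers.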

* remove `|| mEnableBlockReuse`, which erroneously triggers beam-search code on the cyclic variable-attention-window path

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* turn off request delaying for MaxUtil

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* improve comments

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* windowSizesTotalSum using std::accumulate

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* fix error handling in forwardAsync - the cleanup code in forwardAsync's catch-all handler runs terminateRequest, which can itself fail and must also be caught

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>
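The forwardAsync fix above is an instance of a general rule: cleanup run from a catch-all handler can itself throw, and must be guarded so the original error still propagates. A minimal Python sketch of the shape (the real code is C++; names are illustrative):

```python
def forward_async(requests, terminate_request, do_forward):
    try:
        do_forward(requests)
    except Exception:
        # Cleanup may also fail (e.g. a manager is already in a bad state).
        # Swallow per-request termination errors so the original exception,
        # not the cleanup failure, is the one that reaches the caller.
        for req in requests:
            try:
                terminate_request(req)
            except Exception:
                pass
        raise
```

Without the inner `try`, a throwing `terminate_request` would replace the original forward error and leave later requests unterminated.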

* fix comments

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* remove assert that kills disagg tests, since it isn't necessary

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* fix corrupted expression: 'isNewTask && (peftCacheManager ?' -> '(isNewTask && peftCacheManager) ?', which garbled the boolean logic. Main is correct

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* add Gemma3 to SUPPORTED_HF_ARCHITECTURES

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* support Gemma3

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* finally fix test_gemma - always spread at least {} into generate_summary_cmd, never None

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>
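The test_gemma fix above hinges on a Python detail: unpacking with `**None` raises a `TypeError`, so an optional keyword dict must fall back to `{}` before being spread. Sketched with a hypothetical command builder (the real `generate_summary_cmd` lives in the test utilities):

```python
def generate_summary_cmd(model_dir, extra_args=None):
    # Spread at least {} - never None - so ** always receives a mapping.
    kwargs = {"model_dir": model_dir, **(extra_args or {})}
    return [f"--{key}={value}" for key, value in sorted(kwargs.items())]
```

`extra_args or {}` normalizes both `None` and an empty dict to `{}`, so callers can omit the argument entirely.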

* finally fix test_gemma - always spread at least {} into generate_summary_cmd, never None

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* fix kvfactor field for deepseek

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* fix comment

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* fix gemma-3 entries in testlist to include vswa

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* only quantize gemma2 VSWA

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* remove misleading comment

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* fix test_gemma

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* fix test_gemma

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* fix test_gemma

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* in sendRequestInfo, fromOldAllocatedBlockIds->fromOldAllocatedBlockIds, like in main

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* fix: disable KV cache reuse if using attention sink (#3021)

* fix: disable KV cache reuse if using attention sink

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>

* fix: disable KV cache reuse if sink bubble

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>

* add comment

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>

---------

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>

---------

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
Co-authored-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
2025-04-17 16:04:57 +08:00
baichuan Update TensorRT-LLM (#2820) 2025-02-25 21:21:49 +08:00
bert Update TensorRT-LLM (#2502) 2024-11-26 16:51:34 +08:00
bloom Update TensorRT-LLM 2024-08-20 18:55:15 +08:00
chatglm Update TensorRT-LLM (#2820) 2025-02-25 21:21:49 +08:00
clip Update (#2978) 2025-03-23 16:39:35 +08:00
cogvlm Update TensorRT-LLM (#2562) 2024-12-11 00:31:05 -08:00
commandr Update TensorRT-LLM (#2562) 2024-12-11 00:31:05 -08:00
dbrx Update TensorRT-LLM (#1793) 2024-06-18 18:18:23 +08:00
deepseek_v1 Update TensorRT-LLM (#2755) 2025-02-11 03:01:00 +00:00
deepseek_v2 Update TensorRT-LLM (#2783) 2025-02-13 18:40:22 +08:00
dit Update TensorRT-LLM (#2215) 2024-09-10 18:21:22 +08:00
eagle Update TensorRT-LLM (#2783) 2025-02-13 18:40:22 +08:00
enc_dec fix: nvbugs/5075538: fix cross attention mask when decoder input len > 1 (#3585) 2025-04-16 08:31:33 +08:00
falcon Update TensorRT-LLM (#2562) 2024-12-11 00:31:05 -08:00
gemma feat: allocate minimal blocks per window size (#3028) 2025-04-17 16:04:57 +08:00
gpt Support prequantized fp8 ckpt for nemotron-mini-4b-instruct (#3046) 2025-04-01 14:52:09 +08:00
gptj Update TensorRT-LLM (#2562) 2024-12-11 00:31:05 -08:00
gptneox Update TensorRT-LLM (#1891) 2024-07-04 14:37:19 +08:00
grok Update TensorRT-LLM (#2562) 2024-12-11 00:31:05 -08:00
llama feat: Add NVFP4 UB pattern optimization pass in torch compile (#3371) 2025-04-11 21:25:29 +08:00
mamba Update TensorRT-LLM (#2755) 2025-02-11 03:01:00 +00:00
medusa Update TensorRT-LLM (#2783) 2025-02-13 18:40:22 +08:00
mllama chore: remove usernames from comments (#3291) 2025-04-05 13:44:28 +08:00
mmdit_sd3 Update TensorRT-LLM (#2849) 2025-03-04 18:44:00 +08:00
mpt Update TensorRT-LLM (#1763) 2024-06-11 16:59:02 +08:00
multimodal_encoders Update (#2978) 2025-03-23 16:39:35 +08:00
nemotron_nas Update TensorRT-LLM (#2820) 2025-02-25 21:21:49 +08:00
opt Add initial EAGLE-3 implementation (#3035) 2025-03-29 22:31:24 +08:00
phi Update (#2978) 2025-03-23 16:39:35 +08:00
phi3 Add support for Phi-4-mini (#2990) 2025-04-02 08:34:39 +08:00
qwen Update TensorRT-LLM (#2849) 2025-03-04 18:44:00 +08:00
recurrentgemma Update TensorRT-LLM (#2755) 2025-02-11 03:01:00 +00:00
redrafter fix: redrafter sampling (#3278) 2025-04-08 07:49:32 +08:00
stdit Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
unet chore: remove usernames from comments (#3291) 2025-04-05 13:44:28 +08:00
__init__.py Add support for Phi-4-MM (#3296) 2025-04-14 14:24:10 +08:00
automodel.py Update TensorRT-LLM (#2783) 2025-02-13 18:40:22 +08:00
convert_utils.py Update TensorRT-LLM (#2532) 2024-12-04 21:16:56 +08:00
generation_mixin.py feat: allocate minimal blocks per window size (#3028) 2025-04-17 16:04:57 +08:00
model_weights_loader.py Add support for Phi-4-mini (#2990) 2025-04-02 08:34:39 +08:00
modeling_utils.py feat: Add Gemma3 text-only model support (#3247) 2025-04-10 12:34:58 +08:00