from dataclasses import dataclass
from enum import Enum
from typing import List, Optional

import torch


@dataclass
class KVCacheParams:
    """
    Parameters for the key-value cache.
    """
    # Whether to use the cache or not.
    use_cache: bool

    # The number of cached tokens for each sequence.
    num_cached_tokens_per_seq: Optional[List[int]] = None
    # Block IDs of each sequence.
    # The shape depends on the cache type:
    # - LINEAR: (1)
    # - PAGED: (num_pages)
    # - PER_TOKEN: (num_tokens)
    # The dtype is int64.
    block_ids_per_seq: Optional[List[list]] = None

    # The maximum attention window size for each layer.
    host_max_attention_window_sizes: Optional[torch.Tensor] = None
    # The number of sink tokens for each layer.
    host_sink_token_length: Optional[torch.Tensor] = None
    # The number of extra KV tokens reserved for draft tokens.
    num_extra_kv_tokens: int = 0


class CacheType(Enum):
    # Linear KV cache stores all the cached tokens of a sequence in a single page.
    LINEAR = 0
    # Paged KV cache stores the cached tokens of a sequence in multiple pages.
    PAGED = 1
    # Per-token KV cache stores each token's cached value separately.
    PER_TOKEN = 2
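

# ---------------------------------------------------------------------------
# Usage sketch (not part of the original file): a minimal illustration of how
# a caller might populate KVCacheParams for a paged KV cache and for the
# no-cache attention path. The sequence lengths, page IDs, window sizes, and
# host-tensor dtypes below are made-up example values, not values produced by
# any TensorRT-LLM API.
# ---------------------------------------------------------------------------
if __name__ == "__main__":
    # Paged cache: two sequences that already hold some cached tokens.
    # block_ids_per_seq follows the PAGED layout, i.e. one list of page IDs
    # (num_pages) per sequence.
    paged_params = KVCacheParams(
        use_cache=True,
        num_cached_tokens_per_seq=[16, 48],
        block_ids_per_seq=[[0, 1], [2, 3, 4]],
        host_max_attention_window_sizes=torch.tensor([4096], dtype=torch.int32),
        host_sink_token_length=torch.tensor([0], dtype=torch.int32),
    )
    print(CacheType.PAGED, paged_params)

    # No-cache attention (e.g. an encoder or reward-model forward pass): only
    # the flag is required; the remaining fields keep their defaults.
    no_cache_params = KVCacheParams(use_cache=False)
    print(no_cache_params)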