TensorRT-LLM/cpp/include/tensorrt_llm/batch_manager/kvCacheUtils.h
Netanel Haber 3c52ac098f
feat: allocate minimal blocks per window size (#3028)

* implement variable window attention by breaking the block manager into per-window-size window block managers
* revert isCyclic to be true if the min attention window is reached, not per window size
* add explanatory comment to mCyclicThreshold
* load correct gemma config
* don't shadow inputLength in addSequence - it should remain the function-scope input length between window-size loop iterations
* fix KVCacheManagerVariableWindowAttentionWithReuseTest for multiple window block managers
* if TYPE_CHECKING
* set temp_attention_window_inputs to None explicitly
* pass dtype as well
* test_gemma variable sliding window attention
* allot a fraction of primary/secondary blocks to each window-size heap, proportional to that window size's total contribution to the KV cache size (i.e., across all layers)
* remove || mEnableBlockReuse, which erroneously triggered beam-search code for the cyclic variable attention window path
* turn off request delaying for MaxUtil
* improve comments
* compute windowSizesTotalSum using std::accumulate
* fix error handling of forwardAsync - the catch-all cleanup code that runs terminateRequest can itself fail and must be caught
* remove assert that kills disagg tests, since it isn't necessary
* fix corrupted expression: 'isNewTask && (peftCacheManager ?' -> '(isNewTask && peftCacheManager) ?', which changed the boolean logic; main is correct
* add Gemma3 to SUPPORTED_HF_ARCHITECTURES
* support Gemma3
* fix test_gemma - always spread at least {} into generate_summary_cmd, never None
* fix kvfactor field for deepseek
* fix gemma-3 entries in testlist to include vswa
* only quantize gemma2 VSWA; remove misleading comment
* in sendRequestInfo, fromOldAllocatedBlockIds->fromOldAllocatedBlockIds, like in main
* fix: disable KV cache reuse if using attention sink or sink bubble (#3021)

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
Co-authored-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
2025-04-17 16:04:57 +08:00


/*
* Copyright (c) 2022-2024, NVIDIA CORPORATION. All rights reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#pragma once

#include "tensorrt_llm/batch_manager/kvCacheManager.h"

#include <cassert>
#include <cstddef>
#include <iterator>
#include <utility>
#include <vector>

namespace tensorrt_llm::batch_manager::kv_cache_manager
{
class BlockIterator;
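// BlockRange represents the sequence of KV cache blocks allocated to one
// request for a single attention window size, and exposes each block as a
// tensor slice of the corresponding primary pool. A usage sketch (cacheManager
// and requestId are assumed to exist; names are illustrative, not a fixed API
// contract):
//
//   auto range = BlockRange::fromOldAllocatedBlockIds(cacheManager, requestId);
//   for (auto const& blockTensor : range)
//   {
//       // blockTensor is an ITensor view of one cache block in the pool
//   }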
class BlockRange
{
public:
// C++20 std::default_sentinel_t equivalent
struct Sentinel
{
};
static BlockRange fromOldAllocatedBlockIds(BaseKVCacheManager const& cacheManager,
LlmRequest::RequestIdType requestId, SizeType32 beam = kFIRST_AND_ONLY_BEAM)
{
assert(kFIRST_AND_ONLY_BEAM == beam);
auto const windowSize = firstWindowSize(cacheManager);
auto const blockIds = cacheManager.getSequence(requestId).getCacheBlockIds(windowSize).at(kFIRST_AND_ONLY_BEAM);
return BlockRange(cacheManager, blockIds, requestId);
}
static BlockRange fromNewlyAllocatedBlockIds(
BaseKVCacheManager const& cacheManager, LlmRequest::RequestIdType requestId)
{
auto const windowSize = firstWindowSize(cacheManager);
auto const blockIds = cacheManager.getNewlyAllocatedBlockIds(requestId, windowSize);
return BlockRange(cacheManager, blockIds, requestId);
}
BlockRange(runtime::ITensor::SharedPtr pool, std::vector<SizeType32> const& blockIds) // Only used in tests
: mManager{nullptr}
, mPool{std::move(pool)}
, mWindowSize{0}
, mRequestId{0}
, mBlockIds{blockIds}
{
TLLM_CHECK(mPool);
}
[[nodiscard]] BlockIterator begin() const;
[[nodiscard]] Sentinel end() const
{
return {};
}
[[nodiscard]] size_t size() const
{
return mBlockIds.size();
}
[[nodiscard]] std::vector<SizeType32> const& getBlockIds() const
{
return mBlockIds;
}
void setBlockIds(std::vector<SizeType32> blockIds)
{
mBlockIds = std::move(blockIds);
}
[[nodiscard]] std::vector<size_t> getBlockHashes() const
{
TLLM_CHECK(mManager);
std::vector<size_t> blockHashes;
blockHashes.reserve(mBlockIds.size());
auto& blockManager = mManager->getBlockManager();
for (auto id : mBlockIds)
{
blockHashes.emplace_back(blockManager.getBlockById(id, mWindowSize)->getHash());
}
return blockHashes;
}
void updatePoolIdx(SizeType32 poolIdx)
{
TLLM_CHECK(mManager);
mPool = mManager->getBlockManager().getPrimaryPool(poolIdx);
auto const newWindowSize = mManager->getBlockManager().getPoolWindowSize(poolIdx);
if (newWindowSize != mWindowSize)
{
mWindowSize = newWindowSize;
mBlockIds = mManager->getSequence(mRequestId).getCacheBlockIds(mWindowSize).at(kFIRST_AND_ONLY_BEAM);
}
}
friend class BlockIterator;
private:
BlockRange(
BaseKVCacheManager const& cacheManager, std::vector<SizeType32> blockIds, LlmRequest::RequestIdType requestId)
: mManager(&cacheManager)
, mPool(cacheManager.getBlockManager().getPrimaryPool(kFIRST_POOL_INDEX))
, mWindowSize(firstWindowSize(cacheManager))
, mRequestId(requestId)
, mBlockIds(std::move(blockIds))
{
}
static SizeType32 firstWindowSize(BaseKVCacheManager const& cacheManager)
{
constexpr SizeType32 FIRST_POOL_IDX = 0;
return cacheManager.getBlockManager().getPoolWindowSize(FIRST_POOL_IDX);
}
private:
BaseKVCacheManager const* mManager;
runtime::ITensor::SharedPtr mPool;
SizeType32 mWindowSize;
const LlmRequest::RequestIdType mRequestId;
std::vector<SizeType32> mBlockIds;
static constexpr SizeType32 kFIRST_AND_ONLY_BEAM = 0;
static constexpr SizeType32 kFIRST_POOL_INDEX = 0;
};
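// BlockIterator walks the block ids of a BlockRange, materializing each block
// as a tensor view via ITensor::slice on the range's current pool. Since a
// variable-window model has one pool per window size, a full traversal
// re-points the range at each pool in turn. A sketch (numPools is an assumed
// count obtained from the block manager, e.g. getNumPools()):
//
//   for (SizeType32 poolIdx = 0; poolIdx < numPools; ++poolIdx)
//   {
//       range.updatePoolIdx(poolIdx); // swaps pool and, if the window size
//                                     // changed, the block id list as well
//       for (auto it = range.begin(); it != range.end(); ++it)
//       {
//           runtime::ITensor::SharedPtr block = it; // implicit conversion
//       }
//   }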
class BlockIterator
{
public:
using iterator_category = std::forward_iterator_tag;
using value_type = runtime::ITensor;
using pointer = runtime::ITensor::SharedPtr;
using reference = value_type&;
using SizeType32 = tensorrt_llm::runtime::SizeType32;
BlockIterator(BlockRange const* range, size_t idx)
: mRange{range}
, mIdx{idx}
{
TLLM_CHECK(mIdx == 0 || mIdx < mRange->mBlockIds.size());
update();
}
[[nodiscard]] pointer operator->()
{
return mCurrent;
}
[[nodiscard]] reference operator*()
{
return *mCurrent;
}
BlockIterator& operator++()
{
mIdx++;
update();
return *this;
}
BlockIterator operator++(int)
{
auto ret = *this;
mIdx++;
update();
return ret;
}
operator runtime::ITensor::SharedPtr()
{
return mCurrent;
}
[[nodiscard]] bool operator==(BlockIterator const& other) const
{
return mIdx == other.mIdx && mRange == other.mRange;
}
[[nodiscard]] bool operator==(BlockRange::Sentinel other) const
{
return mIdx == mRange->mBlockIds.size();
}
template <class T>
[[nodiscard]] bool operator!=(T const& other) const
{
return !(*this == other);
}
private:
void update()
{
if (mIdx < mRange->mBlockIds.size())
{
mCurrent = runtime::ITensor::slice(mRange->mPool, mRange->mBlockIds.at(mIdx), 1);
}
}
BlockRange const* mRange;
runtime::ITensor::SharedPtr mCurrent;
size_t mIdx;
};
inline BlockIterator BlockRange::begin() const
{
return {this, 0};
}
} // namespace tensorrt_llm::batch_manager::kv_cache_manager