TensorRT-LLM/tensorrt_llm/_torch/pyexecutor
Latest commit: 9d64b6b890 by yuxianq, 2025-04-14 11:19:09 +08:00
Cache sin cos in model instead of global LRU cache. (#3378)
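The commit above moves the RoPE sin/cos tables from a process-wide LRU cache into the model itself, so the cached tensors share the model's lifetime and device placement instead of lingering in a global cache. A minimal, generic sketch of that pattern (not the actual TensorRT-LLM implementation; the class and parameter names below are illustrative):

```python
import torch
from torch import nn


class RotaryEmbedding(nn.Module):
    """Precompute RoPE sin/cos tables once and keep them on the module.

    Registering the tables as buffers ties them to the model instance: they move
    with .to(device) and are freed with the model, unlike entries held by a
    global functools.lru_cache shared across otherwise independent models.
    """

    def __init__(self, head_dim: int, max_positions: int = 4096, base: float = 10000.0):
        super().__init__()
        inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
        t = torch.arange(max_positions).float()
        freqs = torch.outer(t, inv_freq)  # [max_positions, head_dim // 2]
        # Cached on the module; not saved in checkpoints (persistent=False).
        self.register_buffer("sin", freqs.sin(), persistent=False)
        self.register_buffer("cos", freqs.cos(), persistent=False)

    def forward(self, position_ids: torch.Tensor):
        # Look up the precomputed tables for the requested positions.
        return self.sin[position_ids], self.cos[position_ids]
```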
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| __init__.py | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| _util.py | fix: add kv memory size per token of draft model to calculate max number of tokens of kv cache (#3497) | 2025-04-13 23:02:14 +08:00 |
| config.py | remove useless max_num_tokens member in PyTorchConfig (#3493) | 2025-04-12 21:09:58 +08:00 |
| cuda_graph_runner.py | feat: Run PyExecutor's inference flow to estimate max_num_tokens for kv_cache_manager (#3092) | 2025-04-10 18:29:40 +08:00 |
| decoder.py | refactor: decoder buffers (#3307) | 2025-04-12 11:41:24 +02:00 |
| distributed.py | Only gather responses on rank 0 (#3040) | 2025-03-24 21:54:51 -07:00 |
| guided_decoder.py | Update (#2978) | 2025-03-23 16:39:35 +08:00 |
| kv_cache_transceiver.py | Update TensorRT-LLM (#2849) | 2025-03-04 18:44:00 +08:00 |
| layerwise_nvtx_marker.py | Update TensorRT-LLM (#2849) | 2025-03-04 18:44:00 +08:00 |
| llm_request.py | feat: Run PyExecutor's inference flow to estimate max_num_tokens for kv_cache_manager (#3092) | 2025-04-10 18:29:40 +08:00 |
| model_engine.py | Cache sin cos in model instead of global LRU cache. (#3378) | 2025-04-14 11:19:09 +08:00 |
| py_executor_creator.py | fix: don't perform memory estimation for start_attention (#3485) | 2025-04-12 11:34:46 +08:00 |
| py_executor.py | fix: Fixing issue with first gen token being returned twice in streaming (#3427) | 2025-04-13 22:45:09 -04:00 |
| resource_manager.py | feat: Support PeftCacheManager in Torch (#3186) | 2025-04-04 12:38:08 +08:00 |
| scheduler.py | Update TensorRT-LLM (#2936) | 2025-03-18 21:25:19 +08:00 |