TensorRT-LLM/tensorrt_llm/_torch/pyexecutor
Latest commit: 13b61405e8 by ixlmar, 2025-05-16 16:28:10 +01:00

fix: improve PyExecutor resource allocations (#4299)
- chore: restore symmetry of worker start/shutdown
- chore: fix return type of cal_max_tokens
- chore: type some more return values
- fix: free resources before re-claiming

Signed-off-by: ixlmar <206748156+ixlmar@users.noreply.github.com>
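The commit items above describe two recurring patterns rather than a single code path: making shutdown mirror start, and releasing previously held resources before allocating replacements. The sketch below is a minimal, hypothetical illustration of both patterns; the `WorkerPool`, `start`, `shutdown`, and `_release_buffers` names are assumptions for illustration only and are not the actual PyExecutor API.

```python
# Hypothetical sketch (not the actual PyExecutor code): shows symmetric
# start/shutdown and freeing resources before re-claiming them.
from __future__ import annotations

import threading
from typing import List, Optional


class WorkerPool:
    """Toy executor whose start() and shutdown() mirror each other."""

    def __init__(self, num_workers: int) -> None:
        self._num_workers = num_workers
        self._threads: List[threading.Thread] = []
        self._stop = threading.Event()
        self._buffers: Optional[list] = None  # stand-in for device buffers

    def start(self) -> None:
        # Free anything still held from a previous run before re-claiming
        # new buffers ("free resources before re-claiming").
        self._release_buffers()
        self._buffers = [bytearray(1024) for _ in range(self._num_workers)]

        self._stop.clear()
        for i in range(self._num_workers):
            t = threading.Thread(target=self._loop, name=f"worker-{i}", daemon=True)
            t.start()
            self._threads.append(t)

    def shutdown(self) -> None:
        # Tear down in the reverse order of start(): stop workers first,
        # then release the buffers they were using.
        self._stop.set()
        for t in self._threads:
            t.join()
        self._threads.clear()
        self._release_buffers()

    def _release_buffers(self) -> None:
        self._buffers = None

    def _loop(self) -> None:
        while not self._stop.wait(timeout=0.01):
            pass  # real workers would pull requests and run inference steps


if __name__ == "__main__":
    pool = WorkerPool(num_workers=2)
    pool.start()
    pool.shutdown()
    pool.start()      # safe to restart: stale buffers are released first
    pool.shutdown()
```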
File | Last commit message | Last commit date
__init__.py | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00
_util.py | fix: improve PyExecutor resource allocations (#4299) | 2025-05-16 16:28:10 +01:00
config_utils.py | feat: support kv cache reuse for MLA (#3571) | 2025-05-15 15:22:21 +08:00
config.py | Breaking change: perf: Enable scheduling overlap by default (#4174) | 2025-05-15 14:27:36 +08:00
cuda_graph_runner.py | feat: [nvbugs/5261055][nvbugs/5170160] non-invasive pipeline parallelism (#4034) | 2025-05-16 04:16:53 +08:00
decoder.py | refactor: Copy sequence lengths once in decoder setup (#4102) | 2025-05-16 22:03:55 +08:00
guided_decoder.py | feat: Support the Structural Tag in guided decoding (#4066) | 2025-05-12 17:24:50 +08:00
kv_cache_transceiver.py | cacheTransceiver buffer manager (#3798) | 2025-04-27 11:48:15 +08:00
layerwise_nvtx_marker.py | Update TensorRT-LLM (#2849) | 2025-03-04 18:44:00 +08:00
llm_request.py | chore: Mass Integration 0.19 (#4255) | 2025-05-16 10:53:25 +02:00
model_engine.py | chore: Mass Integration 0.19 (#4255) | 2025-05-16 10:53:25 +02:00
py_executor_creator.py | fix: improve PyExecutor resource allocations (#4299) | 2025-05-16 16:28:10 +01:00
py_executor.py | chore: Mass Integration 0.19 (#4255) | 2025-05-16 10:53:25 +02:00
resource_manager.py | chore: Mass Integration 0.19 (#4255) | 2025-05-16 10:53:25 +02:00
scheduler.py | refactor: collect executor and decoder states into dataclass (#3234) | 2025-04-15 16:31:45 +08:00
seq_slot_manager.py | fix: skip add new slot if request has slot 0 (#3991) | 2025-05-06 07:46:39 +02:00