Directory: TensorRT-LLM/tensorrt_llm/_torch/pyexecutor

Latest commit: d8acea1db3 by shuyixiong
[TRTLLM-9293][feat] Enable partial weight loading to support streaming update weights (#9224)
Signed-off-by: shuyix <219646547+shuyixiong@users.noreply.github.com>
2025-11-26 10:59:06 +08:00
| File | Last commit | Date |
|------|-------------|------|
| __init__.py | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| _util.py | [TRTLLM-909][feat] Overlap context chunks in pipeline parallel mode (#9308) | 2025-11-25 22:11:51 +01:00 |
| config_utils.py | [None][fix] fix config loading for DeepSeek-V3.2 in trtllm-bench (#8729) | 2025-10-29 05:17:16 -07:00 |
| cuda_graph_runner.py | [TRTLLM-8831][feat] Enable early exit with overlap scheduler (#8587) | 2025-11-17 18:07:13 +01:00 |
| executor_request_queue.py | [None][chore] Reduce nested nvtx ranges. (#9347) | 2025-11-25 09:58:41 +08:00 |
| finish_reason.py | [TRTLLM-5974][feat] Support disaggregated serving in TRTLLM Sampler (#5328) | 2025-06-25 17:41:36 +02:00 |
| grammar_matcher.py | [TRTLLM-8763][chore] Deprecate pybind based GuidedDecodingConfig usage in torch backend (#8717) | 2025-10-29 20:37:14 +08:00 |
| guided_decoder.py | [TRTLLM-8763][chore] Deprecate pybind based GuidedDecodingConfig usage in torch backend (#8717) | 2025-10-29 20:37:14 +08:00 |
| handle_additional_outputs.py | [TRTLLM-8831][feat] Enable early exit with overlap scheduler (#8587) | 2025-11-17 18:07:13 +01:00 |
| handle_logits.py | [TRTLLM-8831][feat] Enable early exit with overlap scheduler (#8587) | 2025-11-17 18:07:13 +01:00 |
| kv_cache_connector.py | [None][feat] Support KV Connector with Disagg Prefill Worker (#8246) | 2025-10-24 11:09:06 -07:00 |
| kv_cache_transceiver.py | [None][feat] Have ability to cancel disagg request if KV cache resource are exhausted (#9155) | 2025-11-18 20:59:17 -05:00 |
| layerwise_nvtx_marker.py | Update TensorRT-LLM (#2849) | 2025-03-04 18:44:00 +08:00 |
| llm_request.py | [TRTLLM-8831][feat] Enable early exit with overlap scheduler (#8587) | 2025-11-17 18:07:13 +01:00 |
| make_decoding_batch_input_output.py | [None][refactor] decoding inputs, part 2 (#5799) | 2025-11-18 14:38:51 +01:00 |
| mamba_cache_manager.py | [https://nvbugs/5537996][fix] Let KV cache manager block initialization be aware whether it is doing a dry run or not (#9093) | 2025-11-25 17:27:11 +08:00 |
| model_engine.py | [TRTLLM-9293][feat] Enable partial weight loading to support streaming update weights (#9224) | 2025-11-26 10:59:06 +08:00 |
| model_loader.py | [TRTLLM-9293][feat] Enable partial weight loading to support streaming update weights (#9224) | 2025-11-26 10:59:06 +08:00 |
| py_executor_creator.py | [https://nvbugs/5685143][fix] avoid cudaFree overlap with cuda graph (#9438) | 2025-11-25 16:20:29 -08:00 |
| py_executor.py | [TRTLLM-909][feat] Overlap context chunks in pipeline parallel mode (#9308) | 2025-11-25 22:11:51 +01:00 |
| resource_manager.py | [TRTLLM-8160][feat] Add draft token tree runtime on CDL (#8586) | 2025-11-25 09:40:55 -05:00 |
| sampler.py | [TRTLLM-8160][feat] Add draft token tree runtime on CDL (#8586) | 2025-11-25 09:40:55 -05:00 |
| sampling_utils_flashinfer.py | [TRTLLM-9490][feat] use FlashInfer's top_k_sampling_from_probs (#9457) | 2025-11-25 18:53:53 +01:00 |
| sampling_utils.py | [TRTLLM-8376][feat] top-p optimization (removes redundant softmax) (#9411) | 2025-11-25 18:46:48 +01:00 |
| scheduler.py | [TRTLLM-8483][chore] Refine scheduler_config and peft_cache_config in create_py_executor (#8451) | 2025-10-22 08:33:48 +08:00 |
| seq_slot_manager.py | [https://nvbugs/5394392][fix] Enlarge scheduler capacity under disagg bs == 1 (#6537) | 2025-08-15 09:52:06 -07:00 |