TensorRT-LLM/tensorrt_llm/_torch/pyexecutor
Stefan Niebler 0cfd08745c
[TRTLLM-9735][feat] Add processed logprobs functionality to TorchSampler (#9675)
Signed-off-by: Stefan Niebler <82932102+stnie@users.noreply.github.com>
Signed-off-by: Yuan Tong <13075180+tongyuantongyu@users.noreply.github.com>
Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com>
Co-authored-by: Yuan Tong <13075180+tongyuantongyu@users.noreply.github.com>
Co-authored-by: Erin Ho <14718778+hchings@users.noreply.github.com>
2026-01-16 10:52:41 -08:00
__init__.py Update TensorRT-LLM (#2755) 2025-02-11 03:01:00 +00:00
_util.py [https://nvbugs/5717993][fix] Add execution_stream across PyExecutor, KVCacheManager, PeftCacheManager to ensure proper CUDA stream synchronization between KV cache transfer operations and model forward kernels. (#10060) 2025-12-31 09:22:54 -08:00
config_utils.py [None][feat] Eagle: MLA Based Eagle (#9677) 2026-01-02 13:45:07 -05:00
cuda_graph_runner.py [None][refactor] Unify the usage of MPIDist and TorchDist. (#10380) 2026-01-14 14:05:47 +08:00
executor_request_queue.py [https://nvbugs/5791830][fix] fix pp loop hang caused by i-sending new requests (#10665) 2026-01-15 16:33:55 +08:00
finish_reason.py [TRTLLM-5974][feat] Support disaggregated serving in TRTLLM Sampler (#5328) 2025-06-25 17:41:36 +02:00
grammar_matcher.py [TRTLLM-8763][chore] Deprecate pybind based GuidedDecodingConfig usage in torch backend (#8717) 2025-10-29 20:37:14 +08:00
guided_decoder.py [https://nvbugs/5669671][fix] Support GuidedDecoder with sharded logits (#10698) 2026-01-16 11:04:26 +08:00
handle_additional_outputs.py [TRTLLM-8831][feat] Enable early exit with overlap scheduler (#8587) 2025-11-17 18:07:13 +01:00
handle_logits.py [TRTLLM-8831][feat] Enable early exit with overlap scheduler (#8587) 2025-11-17 18:07:13 +01:00
hang_detector.py [None][feat] Hang detection for executor loop and worker. (#10480) 2026-01-13 02:34:32 -05:00
kv_cache_connector.py [None][feat] Support KV Connector with Disagg Prefill Worker (#8246) 2025-10-24 11:09:06 -07:00
kv_cache_transceiver.py [TRTLLM-9942][feat] new request states and kvcache transceiver APIs in generation-first disagg (#10406) 2026-01-15 19:18:21 +08:00
layerwise_nvtx_marker.py Update TensorRT-LLM (#2849) 2025-03-04 18:44:00 +08:00
llm_request.py [TRTLLM-9735][feat] Add processed logprobs functionality to TorchSampler (#9675) 2026-01-16 10:52:41 -08:00
make_decoding_batch_input_output.py [None][refactor] decoding inputs, part 2 (#5799) 2025-11-18 14:38:51 +01:00
mamba_cache_manager.py [TRTLLM-10060][feat] Enable attention dp for Nemotron Super v3. (#10347) 2026-01-13 17:13:55 +08:00
model_engine.py [TRTLLM-10305][feat] Support customized seq len larger than model config (#10600) 2026-01-16 16:07:36 +08:00
model_loader.py [None][feat] Auto download speculative models from HF for pytorch backend, add speculative_model field alias (#10099) 2026-01-14 21:06:07 -08:00
py_executor_creator.py [https://nvbugs/5669671][fix] Support GuidedDecoder with sharded logits (#10698) 2026-01-16 11:04:26 +08:00
py_executor.py [https://nvbugs/5787566][fix] Only keep a limited number of performance statistic data (#10569) 2026-01-14 07:53:01 -05:00
resource_manager.py [None][refactor] Unify the usage of MPIDist and TorchDist. (#10380) 2026-01-14 14:05:47 +08:00
sampler.py [TRTLLM-9735][feat] Add processed logprobs functionality to TorchSampler (#9675) 2026-01-16 10:52:41 -08:00
sampling_utils_flashinfer.py [TRTLLM-9735][feat] Add processed logprobs functionality to TorchSampler (#9675) 2026-01-16 10:52:41 -08:00
sampling_utils.py [TRTLLM-9735][feat] Add processed logprobs functionality to TorchSampler (#9675) 2026-01-16 10:52:41 -08:00
scheduler.py [https://nvbugs/5677746][fix] Use first PP rank's schedule result in other PP ranks to fix PP hang (#9659) 2025-12-08 18:43:52 -08:00
seq_slot_manager.py [https://nvbugs/5394392][fix] Enlarge scheduler capacity under disagg bs == 1 (#6537) 2025-08-15 09:52:06 -07:00