TensorRT-LLM/tensorrt_llm/_torch/pyexecutor
Latest commit c076a02b38 by xiweny, 2025-09-16 09:56:18 +08:00:
[TRTLLM-4629] [feat] Add support of CUDA13 and sm103 devices (#7568)
Signed-off-by: Xiwen Yu <13230610+VALLIS-NERIA@users.noreply.github.com>
Signed-off-by: Tian Zheng <29906817+Tom-Zheng@users.noreply.github.com>
Signed-off-by: Daniel Stokes <dastokes@nvidia.com>
Signed-off-by: Zhanrui Sun <zhanruis@nvidia.com>
Signed-off-by: Xiwen Yu <xiweny@nvidia.com>
Signed-off-by: Jiagan Cheng <jiaganc@nvidia.com>
Signed-off-by: Yiqing Yan <yiqingy@nvidia.com>
Signed-off-by: Bo Deng <deemod@nvidia.com>
Signed-off-by: ZhanruiSunCh <184402041+ZhanruiSunCh@users.noreply.github.com>
Signed-off-by: xiweny <13230610+VALLIS-NERIA@users.noreply.github.com>
Co-authored-by: Tian Zheng <29906817+Tom-Zheng@users.noreply.github.com>
Co-authored-by: Daniel Stokes <dastokes@nvidia.com>
Co-authored-by: Zhanrui Sun <zhanruis@nvidia.com>
Co-authored-by: Jiagan Cheng <jiaganc@nvidia.com>
Co-authored-by: Yiqing Yan <yiqingy@nvidia.com>
Co-authored-by: Bo Deng <deemod@nvidia.com>
Co-authored-by: Zhanrui Sun <184402041+ZhanruiSunCh@users.noreply.github.com>
File        Last commit        Date
__init__.py Update TensorRT-LLM (#2755) 2025-02-11 03:01:00 +00:00
_util.py [None][chore] remove executor config in kv cache creator (#7526) 2025-09-10 21:14:44 +08:00
config_utils.py [None][fix] fix hunyuan_moe init bug (#7502) 2025-09-04 03:06:00 -04:00
config.py [None][chore] expose tokens_per_block into KvCacheConfig (#5911) 2025-09-07 21:14:10 -04:00
cuda_graph_runner.py [https://nvbugs/5467981][fix] Fix Qwen2.5-VL fails with cuda graph padding (#7122) 2025-09-15 15:02:34 +08:00
executor_request_queue.py [None][opt] Balance the request based on number of tokens in AttentionDP (#7183) 2025-08-27 11:16:12 +08:00
finish_reason.py [TRTLLM-5974][feat] Support disaggregated serving in TRTLLM Sampler (#5328) 2025-06-25 17:41:36 +02:00
grammar_matcher.py [TRTLLM-7028][feat] Enable guided decoding with speculative decoding (part 2: one-model engine) (#6948) 2025-09-03 15:16:11 -07:00
guided_decoder.py [TRTLLM-7027][feat] Fuse d2t to logitsBitmaskKernel and fix a race condition in one-model spec (#7481) 2025-09-04 23:30:14 +08:00
handle_logits.py [TRTLLM-7155][feat] Unify sampler handle logits implementation. (#6867) 2025-08-22 08:09:30 +02:00
kv_cache_connector.py [None][chore] add TorchLlmArgs to the connector api (#7493) 2025-09-09 09:05:59 -04:00
kv_cache_transceiver.py [TRTLLM-7361][feat] KV cache transfer for uneven pp (#7117) 2025-09-08 13:37:46 -04:00
layerwise_nvtx_marker.py Update TensorRT-LLM (#2849) 2025-03-04 18:44:00 +08:00
llm_request.py [None][fix] using arrival time in llmapi when creating LlmRequest in pytorch workflow (#7553) 2025-09-15 07:26:01 -04:00
make_decoding_batch_input_output.py feat: Optimize TRTLLM Sampler perf single beam single step (#5550) 2025-07-07 15:44:47 +02:00
mamba_cache_manager.py [None] [chore] Mamba cache in separate file (#6796) 2025-08-15 13:42:51 +03:00
model_engine.py [TRTLLM-6668][feat] Enable overlap scheduler for two-model spec decoding (#7651) 2025-09-16 07:33:44 +08:00
py_executor_creator.py [TRTLLM-4629] [feat] Add support of CUDA13 and sm103 devices (#7568) 2025-09-16 09:56:18 +08:00
py_executor.py [TRTLLM-6668][feat] Enable overlap scheduler for two-model spec decoding (#7651) 2025-09-16 07:33:44 +08:00
resource_manager.py [https://nvbugs/5480289][fix] release slot manager in mtp MTPHiddenStatesManager (#7340) 2025-09-02 19:37:51 -07:00
sampler_utils.py [TRTLLM-7153] [feat] Move stop_criteria to sample_async (#7041) 2025-09-07 17:36:49 +03:00
sampler.py [None][chore] remove executor config in instantiate sampler (#7516) 2025-09-08 09:02:40 -07:00
scheduler.py [TRTLLM-6392][feat] Support turning on/off spec decoding dynamically (#6363) 2025-07-31 15:31:39 -04:00
seq_slot_manager.py [https://nvbugs/5394392][fix] Enlarge scheduler capacity under disagg bs == 1 (#6537) 2025-08-15 09:52:06 -07:00
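For context, the headline commit above adds CUDA 13 and sm103 device support. As a minimal, hypothetical sketch (not code from this directory; the helper name is illustrative, and only public PyTorch APIs are assumed), a capability gate for sm103-class devices could look like:

    import torch

    def supports_sm103() -> bool:
        # Illustrative helper, not part of pyexecutor: report whether the
        # current CUDA device advertises compute capability 10.3 (sm103)
        # or newer.
        if not torch.cuda.is_available():
            return False
        major, minor = torch.cuda.get_device_capability()
        return (major, minor) >= (10, 3)

    if __name__ == "__main__":
        print("sm103 or newer:", supports_sm103())

torch.cuda.get_device_capability() returns the (major, minor) compute capability of the active device, so the tuple comparison covers sm103 and any later architecture.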