TensorRT-LLM/tensorrt_llm/llmapi
Latest commit: 126cd707e3 by yunruis, 2025-09-23 10:27:37 +08:00
[None][opt] Add batch waiting when scheduling (#7416)
Signed-off-by: yunruis <205571022+yunruis@users.noreply.github.com>
Co-authored-by: Tao Li @ NVIDIA <tali@nvidia.com>
File | Last commit | Date
__init__.py | [TRTLLM-5930][doc] 1.0 Documentation. (#6696) | 2025-09-09 12:16:03 +08:00
build_cache.py | [None][fix] Refactoring to avoid circular import when importing torch models (#6720) | 2025-08-11 18:00:42 -04:00
disagg_utils.py | [None][feat] Add logging for OAI disagg server (#7232) | 2025-08-26 21:02:03 -07:00
llm_args.py | [None][opt] Add batch waiting when scheduling (#7416) | 2025-09-23 10:27:37 +08:00
llm_utils.py | [https://nvbugs/5448754][fix] Download HF model for all nodes. (#6824) | 2025-09-22 14:28:38 +08:00
llm.py | [TRTLLM-7328][feat] E-PD Disagg Support via llmapi (3/N) (#7577) | 2025-09-22 19:07:18 -07:00
mgmn_leader_node.py | fix[nvbug5298640]: trtllm-llmapi-launch multiple LLM instances (#4727) | 2025-06-19 06:13:53 +08:00
mgmn_worker_node.py | Update TensorRT-LLM (#2333) | 2024-10-15 15:28:40 +08:00
mm_encoder.py | [None][chore] Create PyExecutor from TorchLlmArgs Part 1 (#7105) | 2025-08-26 10:42:01 +08:00
mpi_session.py | [None][chore] Mass integration of release/1.0 - 3rd (#7519) | 2025-09-08 14:03:04 +08:00
reasoning_parser.py | feat: add deepseek-r1 reasoning parser to trtllm-serve (#3354) | 2025-05-06 08:13:04 +08:00
tokenizer.py | [None] [feat] Add Tencent HunYuanMoEV1 model support (#5521) | 2025-08-15 06:56:44 +08:00
tracer.py | Update TensorRT-LLM (#2413) | 2024-11-05 16:27:06 +08:00
trtllm-llmapi-launch | [TRTLLM-6295][test] Exit as early as possible and propagate exit status correctly for multi-node testing (#7739) | 2025-09-16 09:59:18 +08:00
utils.py | [None][doc] Enhance api reference doc by labeling stable APIs (#7751) | 2025-09-22 14:28:38 +08:00
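For orientation, `llm.py` holds the high-level `LLM` entry point of the LLM API and `llm_args.py` its argument definitions. A minimal usage sketch, assuming the publicly documented `tensorrt_llm.LLM` / `SamplingParams` interface; the model id below is only an illustrative placeholder:

```python
# Sketch only: assumes the documented high-level LLM API re-exported by the
# tensorrt_llm package (backed by llmapi/llm.py). Model id is an example.
from tensorrt_llm import LLM, SamplingParams


def main():
    # Load a Hugging Face checkpoint; engine build/load happens behind the scenes.
    llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

    # Basic sampling settings; see llm_args.py / SamplingParams for the full set.
    params = SamplingParams(max_tokens=32, temperature=0.8)

    outputs = llm.generate(["Hello, my name is"], params)
    for output in outputs:
        print(output.outputs[0].text)


if __name__ == "__main__":
    main()
```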