TensorRT-LLM/tensorrt_llm/llmapi
Latest commit: 62042a9733 by Kaiyu Xie, 2025-09-17 09:41:32 +08:00
[TRTLLM-6741] [feat] enable LM tp for MTP, under attention dp case (cherry-pick #7128) (#7571)
Signed-off-by: Cheng Hang <chang@nvidia.com>
Co-authored-by: Cheng Hang <chang@nvidia.com>
| File | Last commit | Date |
|------|-------------|------|
| __init__.py | [TRTLLM-5930][doc] 1.0 Documentation. (#6696) | 2025-09-09 12:16:03 +08:00 |
| build_cache.py | [None][fix] Refactoring to avoid circular import when importing torch models (#6720) | 2025-08-11 18:00:42 -04:00 |
| disagg_utils.py | [None][feat] Add logging for OAI disagg server (#7232) | 2025-08-26 21:02:03 -07:00 |
| llm_args.py | [TRTLLM-6741] [feat] enable LM tp for MTP, under attention dp case (cherry-pick #7128) (#7571) | 2025-09-17 09:41:32 +08:00 |
| llm_utils.py | [None][feat] Support NVFP4 KV Cache (#6244) | 2025-09-01 09:24:52 +08:00 |
| llm.py | [None][fix] using arrival time in llmapi when creating LlmRequest in pytorch workflow (#7553) | 2025-09-15 07:26:01 -04:00 |
| mgmn_leader_node.py | fix[nvbug5298640]: trtllm-llmapi-launch multiple LLM instances (#4727) | 2025-06-19 06:13:53 +08:00 |
| mgmn_worker_node.py | Update TensorRT-LLM (#2333) | 2024-10-15 15:28:40 +08:00 |
| mm_encoder.py | [None][chore] Create PyExecutor from TorchLlmArgs Part 1 (#7105) | 2025-08-26 10:42:01 +08:00 |
| mpi_session.py | [None][chore] Mass integration of release/1.0 - 3rd (#7519) | 2025-09-08 14:03:04 +08:00 |
| reasoning_parser.py | feat: add deepseek-r1 reasoning parser to trtllm-serve (#3354) | 2025-05-06 08:13:04 +08:00 |
| tokenizer.py | [None] [feat] Add Tencent HunYuanMoEV1 model support (#5521) | 2025-08-15 06:56:44 +08:00 |
| tracer.py | Update TensorRT-LLM (#2413) | 2024-11-05 16:27:06 +08:00 |
| trtllm-llmapi-launch | [TRTLLM-6295][test] Exit as early as possible and propagate exit status correctly for multi-node testing (#7739) | 2025-09-16 09:59:18 +08:00 |
| utils.py | [None][chore] Mass integration of release/1.0 - 3rd (#7519) | 2025-09-08 14:03:04 +08:00 |