TensorRT-LLM/tensorrt_llm/llmapi
Latest commit: ce580ce4f5 by Richard Huo (2025-08-28 23:09:27 -04:00)
[None][feat] KV Cache Connector API (#7228)
Signed-off-by: jthomson04 <jwillthomson19@gmail.com>
Signed-off-by: richardhuo-nv <rihuo@nvidia.com>
Co-authored-by: jthomson04 <jwillthomson19@gmail.com>
Co-authored-by: Iman Tabrizian <10105175+Tabrizian@users.noreply.github.com>
Co-authored-by: Sharan Chetlur <116769508+schetlur-nv@users.noreply.github.com>
File | Last commit | Date
__init__.py | [TRTLLM-7326][feat] Add standalone multimodal encoder (#6743) | 2025-08-19 21:42:50 -07:00
build_cache.py | [None][fix] Refactoring to avoid circular import when importing torch models (#6720) | 2025-08-11 18:00:42 -04:00
disagg_utils.py | [None][feat] Add logging for OAI disagg server (#7232) | 2025-08-26 21:02:03 -07:00
llm_args.py | [None][feat] KV Cache Connector API (#7228) | 2025-08-28 23:09:27 -04:00
llm_utils.py | [None] [feat] Add model gpt-oss (#6645) | 2025-08-07 03:04:18 -04:00
llm.py | [None][feat] KV Cache Connector API (#7228) | 2025-08-28 23:09:27 -04:00
mgmn_leader_node.py | fix[nvbug5298640]: trtllm-llmapi-launch multiple LLM instances (#4727) | 2025-06-19 06:13:53 +08:00
mgmn_worker_node.py | Update TensorRT-LLM (#2333) | 2024-10-15 15:28:40 +08:00
mm_encoder.py | [None][chore] Create PyExecutor from TorchLlmArgs Part 1 (#7105) | 2025-08-26 10:42:01 +08:00
mpi_session.py | fix[nvbug5298640]: trtllm-llmapi-launch multiple LLM instances (#4727) | 2025-06-19 06:13:53 +08:00
reasoning_parser.py | feat: add deepseek-r1 reasoning parser to trtllm-serve (#3354) | 2025-05-06 08:13:04 +08:00
tokenizer.py | [None] [feat] Add Tencent HunYuanMoEV1 model support (#5521) | 2025-08-15 06:56:44 +08:00
tracer.py | Update TensorRT-LLM (#2413) | 2024-11-05 16:27:06 +08:00
trtllm-llmapi-launch | fix[nvbug5298640]: trtllm-llmapi-launch multiple LLM instances (#4727) | 2025-06-19 06:13:53 +08:00
utils.py | [TRTLLM-7157][feat] BREAKING CHANGE Introduce sampler_type, detect sampler according to options (#6831) | 2025-08-16 00:27:24 -04:00
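Of the files listed above, llm.py carries the high-level LLM entry point, with llm_args.py holding its configuration options and tokenizer.py the tokenizer wrappers. For orientation, here is a minimal usage sketch of that API following the project's quickstart pattern; the TinyLlama model id is illustrative, and any supported Hugging Face checkpoint or local path should work.

```python
# Minimal sketch of the llmapi entry point (tensorrt_llm.LLM), assuming the
# tensorrt_llm package is installed and a compatible GPU is available.
from tensorrt_llm import LLM, SamplingParams


def main():
    # Model id is illustrative; any supported HF checkpoint or local path works.
    llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

    prompts = ["Hello, my name is", "The capital of France is"]
    sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

    # generate() returns one result per prompt, each holding the original
    # prompt and the generated completions.
    for output in llm.generate(prompts, sampling_params):
        print(f"{output.prompt!r} -> {output.outputs[0].text!r}")


if __name__ == "__main__":
    main()
```

The mgmn_leader_node.py / mgmn_worker_node.py helpers and the trtllm-llmapi-launch script appear to exist to run this same API under MPI across multiple GPUs or nodes (see the trtllm-llmapi-launch commit above about launching multiple LLM instances).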