TensorRT-LLM/tensorrt_llm/llmapi
Latest commit: 5bc3a15f10 by Yechan Kim, 2025-07-07 18:03:12 -07:00
feat: add MultimodalParams & putting all multimodal params into it and refactor HyperCLOVAX & Qwen2/2.5-VL (#5522)
Signed-off-by: yechank <161688079+yechank-nvidia@users.noreply.github.com>
__init__.py            — [TRTLLM-6291] feat: Add user-provided speculative decoding support (#5204) — 2025-07-07 16:30:43 +02:00
build_cache.py         — Update TensorRT-LLM (#2413) — 2024-11-05 16:27:06 +08:00
disagg_utils.py        — feat: Dynamically remove servers in PD (#5270) — 2025-06-25 09:50:04 +08:00
llm_args.py            — chore: [Breaking Change] Rename cuda_graph_config padding_enabled fie… (#5795) — 2025-07-08 08:52:36 +08:00
llm_utils.py           — [TRTLLM-6291] feat: Add user-provided speculative decoding support (#5204) — 2025-07-07 16:30:43 +02:00
llm.py                 — feat: add MultimodalParams & putting all multimodal params into it and refactor HyperCLOVAX & Qwen2/2.5-VL (#5522) — 2025-07-07 18:03:12 -07:00
mgmn_leader_node.py    — fix[nvbug5298640]: trtllm-llmapi-launch multiple LLM instances (#4727) — 2025-06-19 06:13:53 +08:00
mgmn_worker_node.py    — Update TensorRT-LLM (#2333) — 2024-10-15 15:28:40 +08:00
mpi_session.py         — fix[nvbug5298640]: trtllm-llmapi-launch multiple LLM instances (#4727) — 2025-06-19 06:13:53 +08:00
reasoning_parser.py    — feat: add deepseek-r1 reasoning parser to trtllm-serve (#3354) — 2025-05-06 08:13:04 +08:00
tokenizer.py           — perf: Use tokenizers API to optimize incremental detokenization perf (#5574) — 2025-07-01 09:35:25 -04:00
tracer.py              — Update TensorRT-LLM (#2413) — 2024-11-05 16:27:06 +08:00
trtllm-llmapi-launch   — fix[nvbug5298640]: trtllm-llmapi-launch multiple LLM instances (#4727) — 2025-06-19 06:13:53 +08:00
utils.py               — chore [BREAKING CHANGE]: Flatten PyTorchConfig knobs into TorchLlmArgs (#4603) — 2025-05-28 18:43:04 +08:00