TensorRT-LLM/tensorrt_llm/llmapi
Latest commit: 3e0fb60e50 by liji-nv (2025-07-21 19:10:22 +08:00)
[TRTLLM-4279] feat: Multistream initial support for torch compile flow (#5847)
Signed-off-by: Jin Li <59594262+liji-nv@users.noreply.github.com>
| File | Last commit | Date |
|------|-------------|------|
| __init__.py | chore: [Breaking Change] Rename cuda_graph_config padding_enabled fie… (#6003) | 2025-07-15 15:50:03 +09:00 |
| build_cache.py | Update TensorRT-LLM (#2413) | 2024-11-05 16:27:06 +08:00 |
| disagg_utils.py | [Disaggregated] Add retry knobs and handling (#5808) | 2025-07-19 07:27:59 +08:00 |
| llm_args.py | [TRTLLM-4279] feat: Multistream initial support for torch compile flow (#5847) | 2025-07-21 19:10:22 +08:00 |
| llm_utils.py | feat: add support for Modelopt fp8_pb_wo quantization scheme (#6106) | 2025-07-18 10:35:12 +08:00 |
| llm.py | [fix]: Skip prompt length checking for generation only requests (#6146) | 2025-07-19 21:26:37 +08:00 |
| mgmn_leader_node.py | fix[nvbug5298640]: trtllm-llmapi-launch multiple LLM instances (#4727) | 2025-06-19 06:13:53 +08:00 |
| mgmn_worker_node.py | Update TensorRT-LLM (#2333) | 2024-10-15 15:28:40 +08:00 |
| mpi_session.py | fix[nvbug5298640]: trtllm-llmapi-launch multiple LLM instances (#4727) | 2025-06-19 06:13:53 +08:00 |
| reasoning_parser.py | feat: add deepseek-r1 reasoning parser to trtllm-serve (#3354) | 2025-05-06 08:13:04 +08:00 |
| tokenizer.py | [NvBug 5370718, 5371538] fix: Fix incremental detokenization (#5825) | 2025-07-10 16:30:00 +08:00 |
| tracer.py | Update TensorRT-LLM (#2413) | 2024-11-05 16:27:06 +08:00 |
| trtllm-llmapi-launch | fix[nvbug5298640]: trtllm-llmapi-launch multiple LLM instances (#4727) | 2025-06-19 06:13:53 +08:00 |
| utils.py | chore [BREAKING CHANGE]: Flatten PyTorchConfig knobs into TorchLlmArgs (#4603) | 2025-05-28 18:43:04 +08:00 |