TensorRT-LLM/tensorrt_llm/serve

Latest commit: e44f7687af by Yi Zhang (2025-06-18 13:37:31 +08:00)
feat: Add no_kv_cache_reuse option and streaming support for trtllm serve bench (#4971)
Signed-off-by: Yi Zhang <187001205+yizhang-nv@users.noreply.github.com>
Name                    | Last commit                                                                               | Date
------------------------|-------------------------------------------------------------------------------------------|---------------------------
scripts/                | feat: Add no_kv_cache_reuse option and streaming support for trtllm serve bench (#4971)   | 2025-06-18 13:37:31 +08:00
__init__.py             | Update TensorRT-LLM (#2820)                                                               | 2025-02-25 21:21:49 +08:00
chat_utils.py           | [TRTLLM-5053] Refactoring and Unifying the Multimodal input preparation (#4506)           | 2025-06-03 12:02:07 -07:00
metadata_server.py      | feat: Add integration of etcd (#3738)                                                     | 2025-06-03 20:01:44 +08:00
openai_disagg_server.py | chore: gracefully exit disagg process in tests; better startup and logging (#5109)        | 2025-06-13 14:03:55 +08:00
openai_protocol.py      | [fix]: Fall back to HMAC to Avoid IPC Serialization Churn (#5074)                         | 2025-06-13 11:37:50 +08:00
openai_server.py        | chore: Include prompt_token_ids only for context-only disagg requests (#5055)             | 2025-06-12 15:00:08 -04:00
postprocess_handlers.py | feat: add deepseek-r1 reasoning parser to trtllm-serve (#3354)                            | 2025-05-06 08:13:04 +08:00
router.py               | feat: Add integration of etcd (#3738)                                                     | 2025-06-03 20:01:44 +08:00