TensorRT-LLM / triton_backend

Latest commit: 308776442a by Chang Liu, 2025-07-12 12:56:37 +09:00
[nvbug/5308432] fix: extend triton exit time for test_llava (#5971)
Signed-off-by: Chang Liu <9713593+chang-l@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
| Path | Last commit | Date |
| --- | --- | --- |
| all_models | chore: remove support for llmapi + TRT backend in Triton (#5856) | 2025-07-09 21:30:34 -07:00 |
| ci | Move Triton backend to TRT-LLM main (#3549) | 2025-05-16 07:15:23 +08:00 |
| inflight_batcher_llm | [nvbugs/5309940] Add support for input output token counts (#5445) | 2025-06-28 04:39:39 +08:00 |
| scripts | [nvbug/5308432] fix: extend triton exit time for test_llava (#5971) | 2025-07-12 12:56:37 +09:00 |
| tools | [nvbug 5283506] fix: Fix spec decode triton test (#4845) | 2025-06-09 08:40:17 -04:00 |
| requirements.txt | Move Triton backend to TRT-LLM main (#3549) | 2025-05-16 07:15:23 +08:00 |