TensorRT-LLM/triton_backend
Path | Latest commit | Date
all_models | [https://nvbugs/5556475] [fix] Fix the tensorrt_llm_bls model to correctly return the outputs for num_input_tokens and num_output_tokens (#8150) | 2025-10-27 21:06:28 -07:00
ci | [None][doc] Rename TensorRT-LLM to TensorRT LLM for homepage and the … (#7850) | 2025-09-25 21:02:35 +08:00
inflight_batcher_llm | [TRTLLM-4629] [feat] Add support of CUDA13 and sm103 devices (#7568) | 2025-09-16 09:56:18 +08:00
scripts | [nvbug/5308432] fix: extend triton exit time for test_llava (#5971) | 2025-07-12 12:56:37 +09:00
tools | [None][chore] Rename TensorRT-LLM to TensorRT LLM for source code. (#7851) | 2025-09-25 21:02:35 +08:00
requirements.txt | [None][bug] Fix transformers version for Triton backend (#7964) | 2025-09-24 12:55:52 -04:00