TensorRT-LLM/cpp/include/tensorrt_llm
Latest commit: 1ebceb790d by ixlmar (2025-08-05 18:27:43 +01:00): [TRTLLM-5508][feat] check input tokens + improve error handling (#5170)
Directory | Last commit | Date
batch_manager | [TRTLLM-5508][feat] check input tokens + improve error handling (#5170) | 2025-08-05 18:27:43 +01:00
common | [TRTLLM-5366][feat]Add support for sm121 (#5524) | 2025-07-08 14:27:00 -07:00
deep_gemm | fix: fix license bug (#5200) | 2025-06-13 18:58:15 +08:00
executor | [nvbug/5374773] chore: Add a runtime flag to enable fail fast when attn window is too large to fit at least one sequence in KV cache (#5974) | 2025-07-25 18:10:40 -04:00
kernels | fix: compatibility with CUDA < 12.9 on __CUDA_ARCH_SPECIFIC__ macro (#5917) | 2025-07-28 16:02:26 +08:00
layers | v1.2 (#3082) | 2025-03-26 23:31:29 +08:00
plugins/api | Update TensorRT-LLM (#2532) | 2024-12-04 21:16:56 +08:00
runtime | [TRTLLM-4406][feat] LLM sleep & wakeup Part 1: virtual device memory (#5034) | 2025-08-04 13:51:01 +08:00