TensorRT-LLM/cpp/include/tensorrt_llm
Latest commit: 64c878818b by amitz-nv
[TRTLLM-6683][feat] Support LoRA reload CPU cache evicted adapter (#6786)
Signed-off-by: Amit Zuker <203509407+amitz-nv@users.noreply.github.com>
2025-08-11 14:31:39 -04:00
Directory      Latest commit                                                                  Date
batch_manager  [TRTLLM-6683][feat] Support LoRA reload CPU cache evicted adapter (#6786)      2025-08-11 14:31:39 -04:00
common         [TRTLLM-5366][feat] Add support for sm121 (#5524)                              2025-07-08 14:27:00 -07:00
deep_gemm      fix: fix license bug (#5200)                                                   2025-06-13 18:58:15 +08:00
executor       [nvbug/5374773] chore: Add a runtime flag to enable fail fast when attn window is too large to fit at least one sequence in KV cache (#5974)  2025-07-25 18:10:40 -04:00
kernels        fix: compatibility with CUDA < 12.9 on __CUDA_ARCH_SPECIFIC__ macro (#5917)    2025-07-28 16:02:26 +08:00
layers         v1.2 (#3082)                                                                   2025-03-26 23:31:29 +08:00
plugins/api    Update TensorRT-LLM (#2532)                                                    2024-12-04 21:16:56 +08:00
runtime        [TRTLLM-4406][feat] LLM sleep & wakeup Part 1: virtual device memory (#5034)   2025-08-04 13:51:01 +08:00