TensorRT-LLM/cpp/include/tensorrt_llm

Latest commit `ded694b1aa` by Zheng Duan (2025-06-04 09:56:31 +08:00):
feat: cache reuse support (selective cache transfer) in mla cache formatter (#4749)
Signed-off-by: Zheng Duan <200704041+zhengd-nv@users.noreply.github.com>
| Directory | Last commit | Date |
| --- | --- | --- |
| batch_manager | feat: cache reuse support (selective cache transfer) in mla cache formatter (#4749) | 2025-06-04 09:56:31 +08:00 |
| common | feat: NIXL interface integration (#3934) | 2025-05-19 18:18:22 +08:00 |
| deep_gemm | [perf] Reduce the workspace size of FP4 activation scales for MoE (#4303) | 2025-05-30 09:03:52 +08:00 |
| executor | feature: KV Cache GPUDirect Storage (#3209) | 2025-05-28 23:27:43 +00:00 |
| kernels | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| layers | v1.2 (#3082) | 2025-03-26 23:31:29 +08:00 |
| plugins/api | Update TensorRT-LLM (#2532) | 2024-12-04 21:16:56 +08:00 |
| runtime | refactor: Separate DecoderState from GptDecoderBatched (#4700) | 2025-06-03 09:42:01 +02:00 |