TensorRT-LLM/cpp/tensorrt_llm
Latest commit: ded694b1aa feat: cache reuse support (selective cache transfer) in mla cache formatter (#4749)
Author: Zheng Duan
Signed-off-by: Zheng Duan <200704041+zhengd-nv@users.noreply.github.com>
Date: 2025-06-04 09:56:31 +08:00
| Name | Last commit | Date |
|------|-------------|------|
| batch_manager | feat: cache reuse support (selective cache transfer) in mla cache formatter (#4749) | 2025-06-04 09:56:31 +08:00 |
| common | feat: cache reuse support (selective cache transfer) in mla cache formatter (#4749) | 2025-06-04 09:56:31 +08:00 |
| cutlass_extensions/include/cutlass_extensions | chore: guardword clean for header file. (#4540) | 2025-05-23 10:08:14 +08:00 |
| executor | feature: KV Cache GPUDirect Storage (#3209) | 2025-05-28 23:27:43 +00:00 |
| executor_worker | Update TensorRT-LLM (#2792) | 2025-02-18 21:27:39 +08:00 |
| kernels | Replace memset with data initialization within kernels (#4851) | 2025-06-04 08:56:46 +08:00 |
| layers | Feat: Variable-Beam-Width-Search (VBWS) part4 (#3979) | 2025-05-12 22:32:29 +02:00 |
| plugins | [perf] Reduce the workspace size of FP4 activation scales for MoE (#4303) | 2025-05-30 09:03:52 +08:00 |
| pybind | refactor: Separate DecoderState from GptDecoderBatched (#4700) | 2025-06-03 09:42:01 +02:00 |
| runtime | refactor: Separate DecoderState from GptDecoderBatched (#4700) | 2025-06-03 09:42:01 +02:00 |
| testing | refactor: Move ModelSpec to core library (#3980) | 2025-05-04 01:39:09 +08:00 |
| thop | feat: update DeepSeek FP8 TRT-LLM Gen cubins (#4643) | 2025-06-03 14:07:54 -07:00 |
| CMakeLists.txt | feat: update DeepSeek FP8 TRT-LLM Gen cubins (#4643) | 2025-06-03 14:07:54 -07:00 |