TensorRT-LLMs/cpp

Latest commit: 83dbc6c75d by bhsueh_NV, 2025-08-11 16:14:52 +08:00
[TRTLLM-5532][feat] store the block of context request into kv cache (#6683)
Signed-off-by: bhsueh <11360707+byshiue@users.noreply.github.com>
Path                   Last updated                 Last commit
cmake                  2025-07-04 14:37:49 +08:00   feat: reduce unnecessary kernel generation (#5476)
include/tensorrt_llm   2025-08-10 09:10:10 -04:00   [None][chore][kv cache manager] Dead code elimination, we no longer record/fetch through WindowBlockManager::mContextBlocksByHash (#6249)
kernels                2025-08-07 16:29:55 -04:00   [None][fix] Migrate to new cuda binding package name (#6700)
micro_benchmarks       2025-08-08 11:13:42 +08:00   [TRTLLM-6744][feat] Remove input_sf swizzle for module WideEPMoE (#6231)
tensorrt_llm           2025-08-11 16:14:52 +08:00   [TRTLLM-5532][feat] store the block of context request into kv cache (#6683)
tests                  2025-08-10 09:10:10 -04:00   [None][chore][kv cache manager] Dead code elimination, we no longer record/fetch through WindowBlockManager::mContextBlocksByHash (#6249)
CMakeLists.txt         2025-08-06 16:44:21 +08:00   [https://nvbugs/5433581][fix] DeepGEMM installation on SBSA (#6588)
conandata.yml          2025-04-30 11:53:14 -07:00   infra: add conan (#3744)
conanfile.py           2025-06-08 10:25:18 +08:00   feat: large-scale EP (part 6: Online EP load balancer integration for GB200 nvfp4) (#4818)
libnuma_conan.py       2025-06-10 15:27:39 +08:00   fix cuda driver link issue with driver version less than 12.3 (#5025)