TensorRT-LLM/cpp
Latest commit: fix: remove duplicate layer multiplication in KV cache size calculation (#6481), by Jaedeok Kim (fbee279909), 2025-07-31 22:34:34 -04:00
| Name | Last commit | Date |
| --- | --- | --- |
| cmake | feat: reduce unnecessary kernel generation (#5476) | 2025-07-04 14:37:49 +08:00 |
| include/tensorrt_llm | fix: compatibility with CUDA < 12.9 on `__CUDA_ARCH_SPECIFIC__` macro (#5917) | 2025-07-28 16:02:26 +08:00 |
| kernels | hopper-style context MLA (#5713) | 2025-07-23 14:37:20 +08:00 |
| micro_benchmarks | feat: Add support for benchmarking individual gemms in MOE benchmark (#6080) | 2025-07-18 09:00:12 +12:00 |
| tensorrt_llm | fix: remove duplicate layer multiplication in KV cache size calculation (#6481) | 2025-07-31 22:34:34 -04:00 |
| tests | fix: remove duplicate layer multiplication in KV cache size calculation (#6481) | 2025-07-31 22:34:34 -04:00 |
| CMakeLists.txt | feat: nanobind bindings (#6185) | 2025-07-21 08:56:57 +01:00 |
| conandata.yml | infra: add conan (#3744) | 2025-04-30 11:53:14 -07:00 |
| conanfile.py | feat: large-scale EP (part 6: Online EP load balancer integration for GB200 nvfp4) (#4818) | 2025-06-08 10:25:18 +08:00 |
| libnuma_conan.py | fix: cuda driver link issue with driver version less than 12.3 (#5025) | 2025-06-10 15:27:39 +08:00 |