TensorRT-LLM/cpp
Latest commit a30d3b7419 by Bo Deng, 2026-01-05 13:20:38 +08:00
[TRTLLM-9752][fix] disable PDL for quant kernels (#10288)
Signed-off-by: Bo Deng <deemod@nvidia.com>
Signed-off-by: Yiqing Yan <yiqingy@nvidia.com>
Signed-off-by: Yanchao Lu <yanchaol@nvidia.com>
Co-authored-by: Yiqing Yan <yiqingy@nvidia.com>
Co-authored-by: Yanchao Lu <yanchaol@nvidia.com>
Name | Last commit | Last updated
cmake | [None][doc] Rename TensorRT-LLM to TensorRT LLM for homepage and the … (#7850) | 2025-09-25 21:02:35 +08:00
include/tensorrt_llm | [TRTLLM-8511][feat] Add update_weights and sleep_wakeup support for rl integration (#8302) | 2025-11-04 10:19:24 -08:00
kernels | [None][feat] Fix attention sink load in xqa (#8836) | 2025-11-03 09:39:45 +08:00
micro_benchmarks | [None][fix] Fix the performance issue of FP8 blockwise grouped GEMM when using attention DP (#8501) | 2025-10-27 10:18:19 +08:00
tensorrt_llm | [TRTLLM-9752][fix] disable PDL for quant kernels (#10288) | 2026-01-05 13:20:38 +08:00
tests | [TRTLLM-7731][feat] Avoid over-allocation of KV cache for transmission in disagg with CP (#8145) | 2025-10-31 17:32:39 -07:00
CMakeLists.txt | [TRTLLM-8535][feat] Support DeepSeek V3.2 with FP8 + BF16 KV cache/NVFP4 + BF16 KV cache (#8405) | 2025-10-24 13:40:41 -04:00
conandata.yml | infra: add conan (#3744) | 2025-04-30 11:53:14 -07:00
conanfile.py | feat: large-scale EP (part 6: Online EP load balancer integration for GB200 nvfp4) (#4818) | 2025-06-08 10:25:18 +08:00
libnuma_conan.py | fix cuda driver link issue with driver version less than 12.3 (#5025) | 2025-06-10 15:27:39 +08:00