TensorRT-LLM/cpp
Latest commit: 6c1862fb33 by NVShreyas, 2026-01-27 12:23:02 -08:00
[TRTLLM-10197][chore] Refactor to setup for RNN cache transceiver (#10957)
Signed-off-by: Shreyas Misra <shreyasm@nvidia.com>
| Name | Last commit | Date |
|---|---|---|
| cmake | [None][doc] Rename TensorRT-LLM to TensorRT LLM for homepage and the … (#7850) | 2025-09-25 21:02:35 +08:00 |
| include/tensorrt_llm | [TRTLLM-9527][feat] change context params and disagg params (step3) (#10495) | 2026-01-27 16:34:17 +08:00 |
| kernels | [None][feat] Use XQA JIT impl by default and mitigate perf loss with sliding window (#10335) | 2026-01-15 15:47:00 +08:00 |
| micro_benchmarks | [TRTLLM-9197][infra] Move thirdparty stuff to it's own listfile (#8986) | 2025-11-20 16:44:23 -08:00 |
| tensorrt_llm | [TRTLLM-10197][chore] Refactor to setup for RNN cache transceiver (#10957) | 2026-01-27 12:23:02 -08:00 |
| tests | [TRTLLM-9527][feat] change context params and disagg params (step3) (#10495) | 2026-01-27 16:34:17 +08:00 |
| CMakeLists.txt | [None][chore] Removing pybind11 bindings and references (#10550) | 2026-01-26 08:19:12 -05:00 |
| conan.lock | [None][infra] Regenerate out dated lock file (#10940) | 2026-01-23 09:21:03 -08:00 |
| conandata.yml | infra: add conan (#3744) | 2025-04-30 11:53:14 -07:00 |
| conanfile.py | feat: large-scale EP(part 6: Online EP load balancer integration for GB200 nvfp4) (#4818) | 2025-06-08 10:25:18 +08:00 |
| libnuma_conan.py | fix cuda driver link issue with driver version less than 12.3 (#5025) | 2025-06-10 15:27:39 +08:00 |