TensorRT-LLM/cpp
Latest commit: 7d16f3a28b by Void
[https://nvbugs/5788127][fix] Use uint64_t as the dtype of lamport_buffer_size to avoid overflow (#10499)
Signed-off-by: Yilin Zhang <18275976+yilin-void@users.noreply.github.com>
2026-01-13 17:16:22 +08:00
cmake
include/tensorrt_llm | [https://nvbugs/5689235][fix] Fix cancellation+chunked prefill+disagg (#10111) | 2026-01-12 18:23:26 -05:00
kernels | [TRTLLM-10022][feat] Add hopper xqa decode support for skip softmax attention (#10264) | 2026-01-11 19:26:10 -05:00
micro_benchmarks | [TRTLLM-9197][infra] Move thirdparty stuff to it's own listfile (#8986) | 2025-11-20 16:44:23 -08:00
tensorrt_llm | [https://nvbugs/5788127][fix] Use uint64_t as the dtype of lamport_buffer_size to avoid overflow (#10499) | 2026-01-13 17:16:22 +08:00
tests | [https://nvbugs/5689235][fix] Fix cancellation+chunked prefill+disagg (#10111) | 2026-01-12 18:23:26 -05:00
CMakeLists.txt | [TRTLLM-9805][feat] Skip Softmax Attention. (#9821) | 2025-12-21 02:52:42 -05:00
conandata.yml
conanfile.py
libnuma_conan.py