TensorRT-LLM/cpp
Latest commit: 5efee01da1 by Tian Zheng (2026-01-26 16:46:33 +08:00)
[None][feat] Add Skip Softmax MLA kernels for Blackwell and Fix an accuracy bug of NVFP4 KV (#10813)
Signed-off-by: Tian Zheng <29906817+Tom-Zheng@users.noreply.github.com>
Name | Last commit | Last commit date
cmake | [None][doc] Rename TensorRT-LLM to TensorRT LLM for homepage and the … (#7850) | 2025-09-25 21:02:35 +08:00
include/tensorrt_llm | [TRTLLM-9527][feat] Python transceiver components (step 2) (#10494) | 2026-01-22 10:14:50 -08:00
kernels | [None][feat] Use XQA JIT impl by default and mitigate perf loss with sliding window (#10335) | 2026-01-15 15:47:00 +08:00
micro_benchmarks | [TRTLLM-9197][infra] Move thirdparty stuff to it's own listfile (#8986) | 2025-11-20 16:44:23 -08:00
tensorrt_llm | [None][feat] Add Skip Softmax MLA kernels for Blackwell and Fix an accuracy bug of NVFP4 KV (#10813) | 2026-01-26 16:46:33 +08:00
tests | [TRTLLM-9527][feat] Python transceiver components (step 2) (#10494) | 2026-01-22 10:14:50 -08:00
CMakeLists.txt | [TRTLLM-9805][feat] Skip Softmax Attention. (#9821) | 2025-12-21 02:52:42 -05:00
conan.lock | [None][infra] Regenerate out dated lock file (#10940) | 2026-01-23 09:21:03 -08:00
conandata.yml
conanfile.py
libnuma_conan.py