TensorRT-LLM/cpp

Latest commit 52465216f4 by Pamela Peng (2025-05-29 09:50:47 -07:00):
[https://nvbugs/5295389][fix]fix moe fp4 on sm120 (#4624)
Signed-off-by: Pamela Peng <179191831+pamelap-nvidia@users.noreply.github.com>
| Path | Last commit | Date |
|------|-------------|------|
| cmake | feat: NIXL interface integration (#3934) | 2025-05-19 18:18:22 +08:00 |
| include/tensorrt_llm | fix: [nvbugs/5287097] Align PP layer distribution between pytorch and TRT flow. (#4399) | 2025-05-19 14:25:36 -07:00 |
| kernels | [Feat] add chunked-attention kernels on Hopper (for llama4) (#4291) | 2025-05-19 09:57:10 -07:00 |
| micro_benchmarks | feat: support add internal cutlass kernels as subproject (#3658) | 2025-05-06 11:35:07 +08:00 |
| tensorrt_llm | [https://nvbugs/5295389][fix]fix moe fp4 on sm120 (#4624) | 2025-05-29 09:50:47 -07:00 |
| tests | [https://nvbugs/5289907][fix] Restore per-channel pre-quant (#4545) | 2025-05-23 19:46:53 +08:00 |
| CMakeLists.txt | fix: better method to help torch find nvtx3 (#4110) | 2025-05-15 16:42:30 +08:00 |
| conandata.yml | infra: add conan (#3744) | 2025-04-30 11:53:14 -07:00 |
| conanfile.py | infra: add conan (#3744) | 2025-04-30 11:53:14 -07:00 |