TensorRT-LLM/cpp/tensorrt_llm
Latest commit: 1b2b112d44 by Jackch-NV, 2025-04-18 09:28:04 +08:00
fix sage attention headsize check error in bertAttentionPlugin.cpp (#3660)
Signed-off-by: Jackch-NV <69230184+Jackch-NV@users.noreply.github.com>
| Name | Last commit | Last commit date |
| --- | --- | --- |
| batch_manager | feat: allocate minimal blocks per window size (#3028) | 2025-04-17 16:04:57 +08:00 |
| common | feat: Add FP8 support for SM 120 (#3248) | 2025-04-14 16:05:41 -07:00 |
| cutlass_extensions/include/cutlass_extensions | feat: Update cutlass (#2981) | 2025-03-26 22:36:27 +08:00 |
| executor | chore: Clean up cpp runtime (#3537) | 2025-04-15 16:06:14 +08:00 |
| executor_worker | Update TensorRT-LLM (#2792) | 2025-02-18 21:27:39 +08:00 |
| kernels | add support for smaller hidden_dim (#3609) | 2025-04-17 12:00:32 +08:00 |
| layers | fix: Eagle decoding (#3456) | 2025-04-11 22:06:38 +08:00 |
| plugins | fix sage attention headsize check error in bertAttentionPlugin.cpp (#3660) | 2025-04-18 09:28:04 +08:00 |
| pybind | feat: allocate minimal blocks per window size (#3028) | 2025-04-17 16:04:57 +08:00 |
| runtime | feat: allocate minimal blocks per window size (#3028) | 2025-04-17 16:04:57 +08:00 |
| thop | added loraOp into lora layer + test for mlp and comparison to lora plugin (#3455) | 2025-04-17 12:48:27 +08:00 |
| CMakeLists.txt | feat: Adding FP8 BMM from Codegen (#3541) | 2025-04-16 10:37:15 +02:00 |
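The headline commit (also reflected in the plugins row) corrects a head-size validation error for SageAttention in bertAttentionPlugin.cpp. As a purely hypothetical sketch of what such a guard can look like, the function name and the set of supported head sizes below are assumptions for illustration, not the plugin's actual code:

```cpp
#include <iostream>
#include <set>
#include <stdexcept>
#include <string>

// Hypothetical illustration only; not the actual bertAttentionPlugin.cpp code.
// The supported head sizes below are assumptions for this sketch.
static const std::set<int> kSageAttentionHeadSizes = {64, 128};

// Validate the head size only when the SageAttention path is enabled;
// checking it unconditionally would reject configurations that never
// take the SageAttention code path.
void checkSageAttentionHeadSize(int headSize, bool sageAttentionEnabled)
{
    if (sageAttentionEnabled && kSageAttentionHeadSizes.count(headSize) == 0)
    {
        throw std::invalid_argument(
            "Unsupported head size for SageAttention: " + std::to_string(headSize));
    }
}

int main()
{
    checkSageAttentionHeadSize(64, /*sageAttentionEnabled=*/true); // passes

    try
    {
        checkSageAttentionHeadSize(96, /*sageAttentionEnabled=*/true); // throws
    }
    catch (const std::invalid_argument& e)
    {
        std::cout << e.what() << '\n';
    }
    return 0;
}
```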