TensorRT-LLMs/cpp/tensorrt_llm

Latest commit: dccbfc8b1e by WeiHaocheng
fix: Set init value for moe expert id (#5660)
Signed-off-by: Fred Wei <20514172+WeiHaocheng@users.noreply.github.com>
2025-07-03 07:05:31 -04:00
Name | Last commit | Date
batch_manager | [TRTLLM-1316] refactor: Remove unnecessary pipeline parallelism logic from postProcessRequest (#5489) | 2025-07-02 10:13:31 +02:00
common | feat: chunked prefill for MLA (Blackwell) (#4651) | 2025-06-26 09:01:00 +08:00
cutlass_extensions/include/cutlass_extensions | opensource: Opensource MOE MXFP8-MXFP4 implementation (#5222) | 2025-06-26 12:18:19 +08:00
executor | [TRTLLM-5000][feat] NGrams V2 (#4569) | 2025-06-27 23:00:17 +08:00
executor_worker | Update TensorRT-LLM (#2792) | 2025-02-18 21:27:39 +08:00
kernels | fix: Set init value for moe expert id (#5660) | 2025-07-03 07:05:31 -04:00
layers | Feat: Variable-Beam-Width-Search (VBWS) part4 (#3979) | 2025-05-12 22:32:29 +02:00
plugins | opensource: Opensource MOE MXFP8-MXFP4 implementation (#5222) | 2025-06-26 12:18:19 +08:00
pybind | Feat/pytorch vswa kvcachemanager (#5151) | 2025-07-02 15:58:00 +08:00
runtime | refactor: Clean up DecodingInput and DecodingOutput (#5617) | 2025-07-01 14:31:42 +02:00
testing | refactor: Move ModelSpec to core library (#3980) | 2025-05-04 01:39:09 +08:00
thop | [https://nvbugspro.nvidia.com/bug/5329655] [feat] Pytorch path add spec dec param to attention op (#5146) | 2025-07-02 04:54:43 -04:00
CMakeLists.txt | refactoring: port customized kernels with public cutlass version (#5027) | 2025-06-13 16:19:31 +08:00