TensorRT-LLM/cpp/tensorrt_llm
Latest commit: e692779ead by Netanel Haber, 2025-06-12 12:12:46 +08:00: Solve underallocation in VSWA+/VGQA (#4667)
Name | Last commit | Last commit date
batch_manager | Solve underallocation in VSWA+/VGQA (#4667) | 2025-06-12 12:12:46 +08:00
common | Mxfp8xmxfp4 quant mode (#4978) | 2025-06-10 22:01:37 +08:00
cutlass_extensions/include/cutlass_extensions | chore: guardword clean for header file. (#4540) | 2025-05-23 10:08:14 +08:00
executor | [TRTLLM-5007][feat] Add multimodal hashing support (image hashing) (#4145) | 2025-06-10 01:59:56 +08:00
executor_worker | Update TensorRT-LLM (#2792) | 2025-02-18 21:27:39 +08:00
kernels | fix: XQA is not enabled when history_length < kMinHistoryTokensPerBlock. (#4264) | 2025-06-11 09:38:10 +08:00
layers | Feat: Variable-Beam-Width-Search (VBWS) part4 (#3979) | 2025-05-12 22:32:29 +02:00
plugins | Mxfp8xmxfp4 quant mode (#4978) | 2025-06-10 22:01:37 +08:00
pybind | Solve underallocation in VSWA+/VGQA (#4667) | 2025-06-12 12:12:46 +08:00
runtime | fix cuda driver link issue with driver version less than 12.3 (#5025) | 2025-06-10 15:27:39 +08:00
testing | refactor: Move ModelSpec to core library (#3980) | 2025-05-04 01:39:09 +08:00
thop | Use backend to replace macro to control enablement of MNNVL all reduce (#4635) | 2025-06-12 11:22:49 +08:00
CMakeLists.txt | chore: cleanup GDS Cmake interface (#4928) | 2025-06-10 17:25:43 +08:00