TensorRT-LLM/cpp/tensorrt_llm
Latest commit: 7137cc8f67 by dongxuy04
fix cuda driver link issue with driver version less than 12.3 (#5025)
Signed-off-by: Dongxu Yang <78518666+dongxuy04@users.noreply.github.com>
2025-06-10 15:27:39 +08:00
Directory / file | Last commit | Date
batch_manager | perf: Removing initializing ptuning buffers to zero (#4915) | 2025-06-09 21:57:21 -04:00
common | [feat] Support XQA-based MLA on SM120 (#4858) | 2025-06-06 22:32:49 +08:00
cutlass_extensions/include/cutlass_extensions | chore: guardword clean for header file. (#4540) | 2025-05-23 10:08:14 +08:00
executor | [TRTLLM-5007][feat] Add multimodal hashing support (image hashing) (#4145) | 2025-06-10 01:59:56 +08:00
executor_worker | Update TensorRT-LLM (#2792) | 2025-02-18 21:27:39 +08:00
kernels | [TRTLLM-5589] feat: Integrate TRT-LLM Gen FP8 Batched GEMM with Pytorch workflow kernel autotuner (#4872) | 2025-06-09 11:02:48 +01:00
layers | Feat: Variable-Beam-Width-Search (VBWS) part4 (#3979) | 2025-05-12 22:32:29 +02:00
plugins | feat: Add Mixture of Experts FP8xMXFP4 support (#4750) | 2025-06-09 13:25:04 +08:00
pybind | feat: port MakeDecodingBatchInputOutput to python in TRTLLMSampler (#4828) | 2025-06-10 07:28:34 +08:00
runtime | fix cuda driver link issue with driver version less than 12.3 (#5025) | 2025-06-10 15:27:39 +08:00
testing | refactor: Move ModelSpec to core library (#3980) | 2025-05-04 01:39:09 +08:00
thop | [TRTLLM-5589] feat: Integrate TRT-LLM Gen FP8 Batched GEMM with Pytorch workflow kernel autotuner (#4872) | 2025-06-09 11:02:48 +01:00
CMakeLists.txt | feat: update DeepSeek FP8 TRT-LLM Gen cubins (#4643) | 2025-06-03 14:07:54 -07:00