TensorRT-LLM/cpp/tensorrt_llm/common
Latest commit: c076a02b38 by xiweny
[TRTLLM-4629] [feat] Add support of CUDA13 and sm103 devices (#7568)
Signed-off-by: Xiwen Yu <13230610+VALLIS-NERIA@users.noreply.github.com>
Signed-off-by: Tian Zheng <29906817+Tom-Zheng@users.noreply.github.com>
Signed-off-by: Daniel Stokes <dastokes@nvidia.com>
Signed-off-by: Zhanrui Sun <zhanruis@nvidia.com>
Signed-off-by: Xiwen Yu <xiweny@nvidia.com>
Signed-off-by: Jiagan Cheng <jiaganc@nvidia.com>
Signed-off-by: Yiqing Yan <yiqingy@nvidia.com>
Signed-off-by: Bo Deng <deemod@nvidia.com>
Signed-off-by: ZhanruiSunCh <184402041+ZhanruiSunCh@users.noreply.github.com>
Signed-off-by: xiweny <13230610+VALLIS-NERIA@users.noreply.github.com>
Co-authored-by: Tian Zheng <29906817+Tom-Zheng@users.noreply.github.com>
Co-authored-by: Daniel Stokes <dastokes@nvidia.com>
Co-authored-by: Zhanrui Sun <zhanruis@nvidia.com>
Co-authored-by: Jiagan Cheng <jiaganc@nvidia.com>
Co-authored-by: Yiqing Yan <yiqingy@nvidia.com>
Co-authored-by: Bo Deng <deemod@nvidia.com>
Co-authored-by: Zhanrui Sun <184402041+ZhanruiSunCh@users.noreply.github.com>
2025-09-16 09:56:18 +08:00
File | Last commit message | Last commit date
assert.cpp Update TensorRT-LLM (#1725) 2024-06-04 20:26:32 +08:00
attentionOp.cpp [TRTLLM-4629] [feat] Add support of CUDA13 and sm103 devices (#7568) 2025-09-16 09:56:18 +08:00
attentionOp.h [TRTLLM-7192][feat] optimize MLA chunked prefill && support fp8 mla chunked prefill (#7477) 2025-09-15 21:43:49 +08:00
CMakeLists.txt feat: reduce unnecessary kernel generation (#5476) 2025-07-04 14:37:49 +08:00
cublasMMWrapper.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
cublasMMWrapper.h Update TensorRT-LLM (#2582) 2024-12-16 21:50:47 -08:00
cublasVersionCheck.h Initial commit 2023-09-20 00:29:41 -07:00
cudaBf16Fallbacks.cuh Update TensorRT-LLM (20240116) (#891) 2024-01-16 20:03:11 +08:00
cudaBufferUtils.cuh Update TensorRT-LLM (#2783) 2025-02-13 18:40:22 +08:00
cudaDriverWrapper.cpp [nvbug 5333996 ][fix] Unload XQA cubins early to avoid static lifetime (#5133) 2025-06-13 15:53:29 +08:00
cudaDriverWrapper.h [TRTLLM-4406][feat] LLM sleep & wakeup Part 1: virtual device memory (#5034) 2025-08-04 13:51:01 +08:00
cudaFp8Utils.cu Add Llama 4 (#3302) 2025-04-09 03:35:21 +08:00
cudaProfilerUtils.cpp Update TensorRT-LLM (#1954) 2024-07-16 15:30:25 +08:00
cudaTypeUtils.cuh Update TensorRT-LLM (#2008) 2024-07-23 23:05:09 +08:00
customAllReduceUtils.h Update TensorRT-LLM (#2755) 2025-02-11 03:01:00 +00:00
envUtils.cpp [None][feat] CUTLASS MoE FC2+Finalize fusion (#3294) 2025-08-12 15:56:48 +08:00
envUtils.h [None][feat] CUTLASS MoE FC2+Finalize fusion (#3294) 2025-08-12 15:56:48 +08:00
jsonSerializeOptional.h Update TensorRT-LLM (#2436) 2024-11-12 15:27:49 +08:00
logger.cpp chore: improve log-level setting UX (#4352) 2025-05-16 09:47:44 +01:00
mathUtils.h Update TensorRT-LLM (#2094) 2024-08-07 16:44:43 +08:00
mcastDevMemUtils.cpp Adding two-shot allreduce kernel and mnnvl multicasting buffer (#4216) 2025-05-22 03:42:36 +08:00
mcastDevMemUtils.h Adding two-shot allreduce kernel and mnnvl multicasting buffer (#4216) 2025-05-22 03:42:36 +08:00
memoryUtils.cu feat: Add group_rms_norm kernel to normalize multiple inputs in a single operator. (#3438) 2025-05-02 13:25:30 +08:00
memoryUtils.h Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
nvtxUtils.h Update TensorRT-LLM (#2755) 2025-02-11 03:01:00 +00:00
opUtils.cpp feat: forward exceptions to Python and catch OOMs (#4497) 2025-05-28 11:58:10 +02:00
opUtils.h feat: forward exceptions to Python and catch OOMs (#4497) 2025-05-28 11:58:10 +02:00
quantTypeUtils.cuh Update TensorRT-LLM (#2008) 2024-07-23 23:05:09 +08:00
reduceKernelUtils.cuh Update TensorRT-LLM (#2783) 2025-02-13 18:40:22 +08:00
safetensors.cpp Update TensorRT-LLM (#2792) 2025-02-18 21:27:39 +08:00
safetensors.h Update TensorRT-LLM (#2110) 2024-08-13 22:34:33 +08:00
stlUtils.h Update TensorRT-LLM (#1763) 2024-06-11 16:59:02 +08:00
stringUtils.cpp [None][fix] Using RAII to automatically manage the allocation and release of va_list for potential resource leak (#6758) 2025-08-16 15:19:19 +08:00
timestampUtils.cpp Update TensorRT-LLM (#1954) 2024-07-16 15:30:25 +08:00
timestampUtils.h Update TensorRT-LLM (#1954) 2024-07-16 15:30:25 +08:00
tllmException.cpp [None][feat] Add Request specific exception (#6931) 2025-09-04 18:43:42 -04:00
workspace.h [https://nvbugs/5415862][fix] Update cublas as 12.9.1 and cuda memory alignment as 256 (#6501) 2025-08-15 11:10:59 +08:00