| File | Last commit | Date |
|------|-------------|------|
| assert.cpp | Update TensorRT-LLM (#1725) | 2024-06-04 20:26:32 +08:00 |
| attentionOp.cpp | [TRTLLM-8541][feat] Add trtllm-gen sparse MLA kernels to support per-Tensor FP8 KV Cache (#8692) | 2025-10-31 14:38:31 -07:00 |
| attentionOp.h | [TRTLLM-8541][feat] Add trtllm-gen sparse MLA kernels to support per-Tensor FP8 KV Cache (#8692) | 2025-10-31 14:38:31 -07:00 |
| CMakeLists.txt | [https://nvbugs/5451205][feat] Add cuBLASLt NVFP4 GEMM backend support (#7943) | 2025-10-23 15:55:10 +08:00 |
| cublasMMWrapper.cpp | [https://nvbugs/5451205][feat] Add cuBLASLt NVFP4 GEMM backend support (#7943) | 2025-10-23 15:55:10 +08:00 |
| cublasMMWrapper.h | [https://nvbugs/5451205][feat] Add cuBLASLt NVFP4 GEMM backend support (#7943) | 2025-10-23 15:55:10 +08:00 |
| cublasVersionCheck.h | Initial commit | 2023-09-20 00:29:41 -07:00 |
| cudaBf16Fallbacks.cuh | Update TensorRT-LLM (20240116) (#891) | 2024-01-16 20:03:11 +08:00 |
| cudaBufferUtils.cuh | Update TensorRT-LLM (#2783) | 2025-02-13 18:40:22 +08:00 |
| cudaDriverWrapper.cpp | [nvbug 5333996][fix] Unload XQA cubins early to avoid static lifetime (#5133) | 2025-06-13 15:53:29 +08:00 |
| cudaDriverWrapper.h | [TRTLLM-4406][feat] LLM sleep & wakeup Part 1: virtual device memory (#5034) | 2025-08-04 13:51:01 +08:00 |
| cudaFp8Utils.cu | Add Llama 4 (#3302) | 2025-04-09 03:35:21 +08:00 |
| cudaProfilerUtils.cpp | Update TensorRT-LLM (#1954) | 2024-07-16 15:30:25 +08:00 |
| cudaTypeUtils.cuh | Update TensorRT-LLM (#2008) | 2024-07-23 23:05:09 +08:00 |
| customAllReduceUtils.h | [TRTLLM-8129][feat] Allreduce tuning and benchmark script revising (#7870) | 2025-11-04 16:42:31 +08:00 |
| envUtils.cpp | [TRTLLM-7731][feat] Avoid over-allocation of KV cache for transmission in disagg with CP (#8145) | 2025-10-31 17:32:39 -07:00 |
| envUtils.h | [TRTLLM-7731][feat] Avoid over-allocation of KV cache for transmission in disagg with CP (#8145) | 2025-10-31 17:32:39 -07:00 |
| jsonSerializeOptional.h | Update TensorRT-LLM (#2436) | 2024-11-12 15:27:49 +08:00 |
| lamportUtils.cuh | [None][feat] MNNVLAllreduce Kernel Refactor (#8018) | 2025-11-05 08:49:47 +08:00 |
| logger.cpp | chore: improve log-level setting UX (#4352) | 2025-05-16 09:47:44 +01:00 |
| mathUtils.h | Update TensorRT-LLM (#2094) | 2024-08-07 16:44:43 +08:00 |
| mcastDevMemUtils.cpp | Adding two-shot allreduce kernel and mnnvl multicasting buffer (#4216) | 2025-05-22 03:42:36 +08:00 |
| mcastDevMemUtils.h | Adding two-shot allreduce kernel and mnnvl multicasting buffer (#4216) | 2025-05-22 03:42:36 +08:00 |
| memoryUtils.cu | feat: Add group_rms_norm kernel to normalize multiple inputs in a single operator. (#3438) | 2025-05-02 13:25:30 +08:00 |
| memoryUtils.h | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| nvtxUtils.h | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| opUtils.cpp | [None][bug] Set NCCL_GRAPH_REGISTER to false to avoid hang (#8413) | 2025-10-16 18:59:18 +02:00 |
| opUtils.h | feat: forward exceptions to Python and catch OOMs (#4497) | 2025-05-28 11:58:10 +02:00 |
| quantTypeUtils.cuh | Update TensorRT-LLM (#2008) | 2024-07-23 23:05:09 +08:00 |
| reduceKernelUtils.cuh | Update TensorRT-LLM (#2783) | 2025-02-13 18:40:22 +08:00 |
| safetensors.cpp | Update TensorRT-LLM (#2792) | 2025-02-18 21:27:39 +08:00 |
| safetensors.h | Update TensorRT-LLM (#2110) | 2024-08-13 22:34:33 +08:00 |
| stlUtils.h | Update TensorRT-LLM (#1763) | 2024-06-11 16:59:02 +08:00 |
| stringUtils.cpp | [None][fix] Using RAII to automatically manage the allocation and release of va_list for potential resource leak (#6758) | 2025-08-16 15:19:19 +08:00 |
| timestampUtils.cpp | Update TensorRT-LLM (#1954) | 2024-07-16 15:30:25 +08:00 |
| timestampUtils.h | Update TensorRT-LLM (#1954) | 2024-07-16 15:30:25 +08:00 |
| tllmException.cpp | [None][feat] Add Request specific exception (#6931) | 2025-09-04 18:43:42 -04:00 |
| vec_dtypes.cuh | [TRTLLM-7318][feat] MnnvlThroughput AlltoAll implementation. (#7499) | 2025-10-27 13:23:06 -04:00 |
| workspace.h | [https://nvbugs/5415862][fix] Update cublas as 12.9.1 and cuda memory alignment as 256 (#6501) | 2025-08-15 11:10:59 +08:00 |