| File | Last commit | Last updated |
|---|---|---|
| moeLoadBalancer | [TRTLLM-9108][feat] Add test configurable moe module multi gpu (#10699) | 2026-01-23 10:16:58 +08:00 |
| utils | [https://nvbugs/5825514][fix] Add null pointer check to parseNpyHeader (#10944) | 2026-01-30 03:01:33 -05:00 |
| bufferManager.cpp | [TRTLLM-4406][feat] LLM sleep & wakeup Part 1: virtual device memory (#5034) | 2025-08-04 13:51:01 +08:00 |
| bufferView.h | [None] [refactor] Minor cleanup and improvements (#7619) | 2025-10-03 11:40:06 +02:00 |
| CMakeLists.txt | [TRTLLM-7349][feat] Adding new orchestrator type -- ray (#7520) | 2025-10-04 08:12:24 +08:00 |
| cudaMemPool.cpp | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| cudaMemPool.h | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| decoderState.cpp | [None][refactor] Simplify decoder state initialization for speculative decoding (#6869) | 2025-08-22 18:44:17 +02:00 |
| decodingLayerWorkspace.cpp | Update TensorRT-LLM (#2184) | 2024-09-03 12:14:23 +02:00 |
| decodingLayerWorkspace.h | Update TensorRT-LLM (#2436) | 2024-11-12 15:27:49 +08:00 |
| decodingOutput.cpp | Update (#2978) | 2025-03-23 16:39:35 +08:00 |
| eagleBuffers.cpp | fix: Eagle decoding in TRT flow (#4229) | 2025-05-14 16:10:49 +02:00 |
| explicitDraftTokensBuffers.cpp | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| explicitDraftTokensModule.h | Update TensorRT-LLM (#1763) | 2024-06-11 16:59:02 +08:00 |
| gptDecoder.cpp | [None][feat] Support ignored prompt length for penalties via new sampling config parameter (#8127) | 2025-10-27 13:12:31 -04:00 |
| gptDecoderBatched.cpp | [None][fix] Introduce inline namespace to avoid symbol collision (#9541) | 2025-12-12 23:32:15 +08:00 |
| gptJsonConfig.cpp | Feat: Variable-Beam-Width-Search (VBWS) part3 (#3338) | 2025-04-08 23:51:27 +08:00 |
| iBuffer.cpp | [TRTLLM-4629] [feat] Add support of CUDA13 and sm103 devices (#7568) | 2025-09-16 09:56:18 +08:00 |
| ipcNvlsMemory.cu | [None][fix] default disable gemm+allreduce fusion (#10656) | 2026-01-20 12:31:17 +08:00 |
| ipcSocket.cpp | Fix GEMM+AR fusion on blackwell (#5563) | 2025-07-09 08:48:47 +08:00 |
| ipcSocket.h | Fix GEMM+AR fusion on blackwell (#5563) | 2025-07-09 08:48:47 +08:00 |
| ipcUtils.cpp | Cherry pick feat/llama4 to main (#4739) | 2025-05-30 05:28:40 +08:00 |
| iTensor.cpp | Update TensorRT-LLM (#2849) | 2025-03-04 18:44:00 +08:00 |
| jsonSerialization.h | Update TensorRT-LLM (#2436) | 2024-11-12 15:27:49 +08:00 |
| layerProfiler.cpp | Update TensorRT-LLM (#1554) | 2024-05-07 23:34:28 +08:00 |
| layerProfiler.h | Update TensorRT-LLM (#1554) | 2024-05-07 23:34:28 +08:00 |
| lookaheadBuffers.cpp | Feat: Variable-Beam-Width-Search (VBWS) part3 (#3338) | 2025-04-08 23:51:27 +08:00 |
| loraCache.cpp | fix: [nvbugs/5287097] Align PP layer distribution between pytorch and TRT flow. (#4399) | 2025-05-19 14:25:36 -07:00 |
| loraManager.cpp | fix: [nvbugs/5287097] Align PP layer distribution between pytorch and TRT flow. (#4399) | 2025-05-19 14:25:36 -07:00 |
| loraManager.h | [https://nvbugs/5322131][feat] Multi-LoRA serving with CUDA Graph (#8279) | 2026-01-22 14:01:18 +01:00 |
| loraModule.cpp | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| loraUtils.cpp | Update TensorRT-LLM (#2820) | 2025-02-25 21:21:49 +08:00 |
| loraUtils.h | Update TensorRT-LLM (#2820) | 2025-02-25 21:21:49 +08:00 |
| mcastDeviceMemory.cpp | [https://nvbugs/5782112][fix] Fix hanging issue for MNNVL Allreduce under PP (#10633) | 2026-01-16 13:03:36 +08:00 |
| mcastDeviceMemory.h | [https://nvbugs/5782112][fix] Fix hanging issue for MNNVL Allreduce under PP (#10633) | 2026-01-16 13:03:36 +08:00 |
| mcastGPUBuffer.h | [https://nvbugs/5782112][fix] Fix hanging issue for MNNVL Allreduce under PP (#10633) | 2026-01-16 13:03:36 +08:00 |
| memoryCounters.cpp | Update TensorRT-LLM (#2110) | 2024-08-13 22:34:33 +08:00 |
| ncclCommunicator.cpp | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| ncclCommunicator.h | Update TensorRT-LLM (#2792) | 2025-02-18 21:27:39 +08:00 |
| promptTuningParams.cpp | Feat: Variable-Beam-Width-Search (VBWS) part4 (#3979) | 2025-05-12 22:32:29 +02:00 |
| runtimeKernels.cu | refactor: Remove enforced sorted order of batch slots (#3502) | 2025-07-14 17:23:02 +02:00 |
| runtimeKernels.h | refactor: Remove enforced sorted order of batch slots (#3502) | 2025-07-14 17:23:02 +02:00 |
| tensorView.h | Update TensorRT-LLM (#1793) | 2024-06-18 18:18:23 +08:00 |
| tllmBuffers.cpp | Update TensorRT-LLM (#2792) | 2025-02-18 21:27:39 +08:00 |
| tllmBuffers.h | [TRTLLM-4406][feat] LLM sleep & wakeup Part 1: virtual device memory (#5034) | 2025-08-04 13:51:01 +08:00 |
| tllmLogger.cpp | Update TensorRT-LLM (#787) | 2024-01-02 17:54:32 +08:00 |
| tllmRuntime.cpp | Feat: Variable-Beam-Width-Search (VBWS) part4 (#3979) | 2025-05-12 22:32:29 +02:00 |
| tllmRuntime.h | [None][chroe] Rename TensorRT-LLM to TensorRT LLM for source code. (#7851) | 2025-09-25 21:02:35 +08:00 |
| tllmStreamReaders.cpp | feat: Integrate GPUDirect Storage (GDS) into Executor API (#3582) | 2025-04-18 15:59:21 +08:00 |
| tllmStreamReaders.h | feat: Integrate GPUDirect Storage (GDS) into Executor API (#3582) | 2025-04-18 15:59:21 +08:00 |
| torch.h | [None][feat] KV Cache Connector API (#7228) | 2025-08-28 23:09:27 -04:00 |
| torchUtils.h | [TRTLLM-3330][feat] Support DeepSeek-R1 W4A8 on Hopper (#4123) | 2025-05-14 15:48:07 +08:00 |
| torchView.h | Update TensorRT-LLM (#1168) | 2024-02-27 17:37:34 +08:00 |
| virtualMemory.cpp | [None][fix] Correct virtual memory allocation alignment (#9491) | 2025-12-01 10:59:19 +08:00 |
| workerPool.cpp | Update TensorRT-LLM (#2156) | 2024-08-27 18:20:59 +08:00 |
| workerPool.h | Update TensorRT-LLM (#2156) | 2024-08-27 18:20:59 +08:00 |
| worldConfig.cpp | [TRTLLM-9465][fix] Swap TP-CP grouping order (#10350) | 2026-01-05 20:08:03 +08:00 |