| Name | Last commit message | Last commit date |
| --- | --- | --- |
| utils | refactor: unique_ptr instead of shared_ptr (#4697) | 2025-05-29 22:49:35 +02:00 |
| allocateKvCache.cpp | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| assignReqSeqSlots.cpp | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| cacheFormatter.cpp | fix datatype check (#4606) | 2025-05-24 08:36:17 +08:00 |
| cacheFormatter.h | cacheTransceiver buffer manager (#3798) | 2025-04-27 11:48:15 +08:00 |
| cacheTransBuffer.cpp | Fabric Memory for KV Cache Transfer (#4717) | 2025-05-30 15:50:21 +08:00 |
| cacheTransBuffer.h | Fabric Memory for KV Cache Transfer (#4717) | 2025-05-30 15:50:21 +08:00 |
| cacheTransceiver.cpp | Agent interface impl for NIXL (#4125) | 2025-05-22 09:09:41 +08:00 |
| capacityScheduler.cpp | fix: max_num_sequences calculation with overlap scheduling (#4532) | 2025-06-03 09:31:22 +02:00 |
| CMakeLists.txt | feature: KV Cache GPUDirect Storage (#3209) | 2025-05-28 23:27:43 +00:00 |
| contextProgress.cpp | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| createNewDecoderRequests.cpp | chore: Clean up cpp runtime (#4449) | 2025-05-28 16:32:59 +02:00 |
| dataTransceiver.cpp | Fabric Memory for KV Cache Transfer (#4717) | 2025-05-30 15:50:21 +08:00 |
| dataTransceiver.h | Agent interface impl for NIXL (#4125) | 2025-05-22 09:09:41 +08:00 |
| dataTransceiverImpl.cpp | fix datatype check (#4606) | 2025-05-24 08:36:17 +08:00 |
| dataTransceiverImpl.h | Agent interface impl for NIXL (#4125) | 2025-05-22 09:09:41 +08:00 |
| decoderBuffers.cpp | refactor: Copy sequence lengths once in decoder setup (#4102) | 2025-05-16 22:03:55 +08:00 |
| encoderBuffers.cpp | Feat: Variable-Beam-Width-Search (VBWS) part4 (#3979) | 2025-05-12 22:32:29 +02:00 |
| encoderBuffers.h | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| evictionPolicy.cpp | [JIRA-5226219][fix] Fix Bug in KV cache manager (#4596) | 2025-05-29 22:03:20 -07:00 |
| guidedDecoder.cpp | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| handleContextLogits.cpp | Feat: Variable-Beam-Width-Search (VBWS) part4 (#3979) | 2025-05-12 22:32:29 +02:00 |
| handleGenerationLogits.cpp | Feat: Variable-Beam-Width-Search (VBWS) part4 (#3979) | 2025-05-12 22:32:29 +02:00 |
| kvCacheEventManager.cpp | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| kvCacheManager.cpp | [JIRA-5226219][fix] Fix Bug in KV cache manager (#4596) | 2025-05-29 22:03:20 -07:00 |
| kvCacheTransferManager.cpp | 'entered copyBlock' format string expects %s, pass string rather than int (#4820) | 2025-06-01 08:54:33 -07:00 |
| llmRequest.cpp | Feat: Variable-Beam-Width-Search (VBWS) part4 (#3979) | 2025-05-12 22:32:29 +02:00 |
| logitsPostProcessor.cpp | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| loraBuffers.cpp | fix: [nvbugs/5287097] Align PP layer distribution between pytorch and TRT flow. (#4399) | 2025-05-19 14:25:36 -07:00 |
| loraBuffers.h | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| makeDecodingBatchInputOutput.cpp | refactor: Copy sequence lengths once in decoder setup (#4102) | 2025-05-16 22:03:55 +08:00 |
| medusaBuffers.cpp | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| microBatchScheduler.cpp | [nvbugs/5274894] fix: Sort requests for functional correctness and performance (adapted from #4608) (#4621) | 2025-05-26 17:10:55 +08:00 |
| mlaCacheFormatter.cpp | fix datatype check (#4606) | 2025-05-24 08:36:17 +08:00 |
| mlaCacheFormatter.h | cacheTransceiver buffer manager (#3798) | 2025-04-27 11:48:15 +08:00 |
| pauseRequests.cpp | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| peftCacheManager.cpp | fix: [nvbugs/5287097] Align PP layer distribution between pytorch and TRT flow. (#4399) | 2025-05-19 14:25:36 -07:00 |
| promptTuningBuffers.cpp | feat: Offloading Multimodal embedding table to CPU in Chunked Prefill Mode (#3380) | 2025-04-21 14:31:01 +08:00 |
| rnnStateBuffers.cpp | [TRTLLM-5171] chore: Remove GptSession/V1 from TRT workflow (#4092) | 2025-05-14 23:10:04 +02:00 |
| rnnStateBuffers.h | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| rnnStateManager.cpp | fix: [nvbugs/5287097] Align PP layer distribution between pytorch and TRT flow. (#4399) | 2025-05-19 14:25:36 -07:00 |
| runtimeBuffers.cpp | [TRTLLM-5171] chore: Remove GptSession/V1 from TRT workflow (#4092) | 2025-05-14 23:10:04 +02:00 |
| sequenceSlotManager.cpp | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| transformerBuffers.cpp | chore: Clean up cpp runtime (#4449) | 2025-05-28 16:32:59 +02:00 |
| trtEncoderModel.cpp | fix: max_num_sequences calculation with overlap scheduling (#4532) | 2025-06-03 09:31:22 +02:00 |
| trtEncoderModel.h | [TRTLLM-3429] feat: Overlap scheduling in C++ runtime (#3625) | 2025-05-06 15:06:46 +02:00 |
| trtGptModel.h | fix: max_num_sequences calculation with overlap scheduling (#4532) | 2025-06-03 09:31:22 +02:00 |
| trtGptModelFactory.h | [TRTLLM-5171] chore: Remove GptSession/V1 from TRT workflow (#4092) | 2025-05-14 23:10:04 +02:00 |
| trtGptModelInflightBatching.cpp | refactor: Separate DecoderState from GptDecoderBatched (#4700) | 2025-06-03 09:42:01 +02:00 |
| trtGptModelInflightBatching.h | refactor: Separate DecoderState from GptDecoderBatched (#4700) | 2025-06-03 09:42:01 +02:00 |
| updateDecoderBuffers.cpp | refactor: Separate DecoderState from GptDecoderBatched (#4700) | 2025-06-03 09:42:01 +02:00 |