TensorRT-LLM/cpp/include/tensorrt_llm/runtime
Latest commit e2f69c5c23 by Robin Kobus, 2025-10-03 11:40:06 +02:00: [None] [refactor] Minor cleanup and improvements (#7619)
utils/ [TRTLLM-6881][feat] Include attention dp rank info with KV cache events (#6563) 2025-08-07 14:17:07 +02:00
bufferManager.h Update TensorRT-LLM (#2755) 2025-02-11 03:01:00 +00:00
common.h [TRTLLM-7398][feat] Support KV cache salting for secure KV cache reuse (#7106) 2025-09-06 17:58:32 -04:00
cudaEvent.h Update TensorRT-LLM (#787) 2024-01-02 17:54:32 +08:00
cudaStream.h fix: Move all casters to customCasters. (#3945) 2025-05-02 19:08:28 +08:00
decoderState.h [None][refactor] Simplify decoder state initialization (#6559) 2025-08-12 21:44:41 +02:00
decodingInput.h [None][refactor] Simplify decoder state initialization for speculative decoding (#6869) 2025-08-22 18:44:17 +02:00
decodingOutput.h refactor: Clean up DecodingInput and DecodingOutput (#5617) 2025-07-01 14:31:42 +02:00
eagleBuffers.h fix: Eagle decoding in TRT flow (#4229) 2025-05-14 16:10:49 +02:00
eagleModule.h Update TensorRT-LLM (#2792) 2025-02-18 21:27:39 +08:00
explicitDraftTokensBuffers.h Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
gptDecoder.h [TRTLLM-6785][feat] BREAKING CHANGE Enable TRTLLM sampler by default (#6216) 2025-08-07 22:19:37 -04:00
gptDecoderBatched.h [TRTLLM-6785][feat] BREAKING CHANGE Enable TRTLLM sampler by default (#6216) 2025-08-07 22:19:37 -04:00
gptJsonConfig.h Update TensorRT-LLM (#2532) 2024-12-04 21:16:56 +08:00
iBuffer.h Update TensorRT-LLM (#2755) 2025-02-11 03:01:00 +00:00
iGptDecoderBatched.h [TRTLLM-6785][feat] BREAKING CHANGE Enable TRTLLM sampler by default (#6216) 2025-08-07 22:19:37 -04:00
ipcNvlsMemory.h chore: Stabilize ABI boundary for internal kernel library (#3117) 2025-04-11 15:07:50 +08:00
ipcUtils.h Cherry pick feat/llama4 to main (#4739) 2025-05-30 05:28:40 +08:00
iTensor.h Update TensorRT-LLM (#2532) 2024-12-04 21:16:56 +08:00
lookaheadBuffers.h Feat: Variable-Beam-Width-Search (VBWS) part3 (#3338) 2025-04-08 23:51:27 +08:00
lookaheadModule.h [None] [refactor] Minor cleanup and improvements (#7619) 2025-10-03 11:40:06 +02:00
loraCache.h Update TensorRT-LLM (#2755) 2025-02-11 03:01:00 +00:00
loraCachePageManagerConfig.h Update TensorRT-LLM (#1598) 2024-05-14 16:43:41 +08:00
loraModule.h Update TensorRT-LLM (#2755) 2025-02-11 03:01:00 +00:00
medusaModule.h Update TensorRT-LLM (#2436) 2024-11-12 15:27:49 +08:00
memoryCounters.h Update TensorRT-LLM (#2110) 2024-08-13 22:34:33 +08:00
modelConfig.h [None] [refactor] Minor cleanup and improvements (#7619) 2025-10-03 11:40:06 +02:00
promptTuningParams.h Feat: Variable-Beam-Width-Search (VBWS) part4 (#3979) 2025-05-12 22:32:29 +02:00
rawEngine.h Update TensorRT-LLM (#2215) 2024-09-10 18:21:22 +08:00
runtimeDefaults.h Update TensorRT-LLM (#2436) 2024-11-12 15:27:49 +08:00
samplingConfig.h Feat: Variable-Beam-Width-Search (VBWS) part4 (#3979) 2025-05-12 22:32:29 +02:00
speculativeDecodingMode.h Update TensorRT-LLM (#2436) 2024-11-12 15:27:49 +08:00
speculativeDecodingModule.h Update TensorRT-LLM (#1763) 2024-06-11 16:59:02 +08:00
tllmLogger.h Update TensorRT-LLM (#787) 2024-01-02 17:54:32 +08:00
virtualMemory.h [TRTLLM-4406][feat] LLM sleep & wakeup Part 1: virtual device memory (#5034) 2025-08-04 13:51:01 +08:00
worldConfig.h bug: Fix hang bug when context server doesn't have enough capacity for KV Cache (#3095) 2025-04-21 15:16:55 +08:00
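
These headers form the public C++ runtime surface of TensorRT-LLM. As a rough orientation, the sketch below shows how three of them (cudaStream.h, bufferManager.h, iTensor.h) are commonly combined to allocate and zero a device tensor on a dedicated CUDA stream. It is an illustrative example only, not taken from this listing; the class and method names reflect the TensorRT-LLM runtime API as commonly documented, and exact signatures may differ between versions.

```cpp
#include "tensorrt_llm/runtime/bufferManager.h"
#include "tensorrt_llm/runtime/cudaStream.h"
#include "tensorrt_llm/runtime/iTensor.h"

#include <memory>

namespace tlr = tensorrt_llm::runtime;

int main()
{
    // BufferManager issues its allocations and copies on a dedicated CUDA stream.
    auto stream = std::make_shared<tlr::CudaStream>();
    tlr::BufferManager manager{stream};

    // Allocate a [batchSize, hiddenSize] float tensor on the GPU and zero it.
    // The shape values here are arbitrary placeholders.
    auto tensor = manager.gpu(tlr::ITensor::makeShape({8, 1024}), nvinfer1::DataType::kFLOAT);
    manager.setZero(*tensor);

    // Wait for the asynchronous work issued above to finish before exiting.
    stream->synchronize();
    return 0;
}
```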