TensorRT-LLM/cpp/tensorrt_llm/executor
katec846 eeb605abd6
feat: Offloading Multimodal embedding table to CPU in Chunked Prefill Mode (#3380)
* Feat: Offload ptable to CPU if enable_chunk_context is set

Signed-off-by: Kate Cheng <yunhsuanc@nvidia.com>

* Feat: Offload ptable to CPU for chunked context mode

Signed-off-by: Kate Cheng <yunhsuanc@nvidia.com>

* Fix and add comments

Signed-off-by: Kate Cheng <yunhsuanc@nvidia.com>

* Update README for multimodal and add a new param mm_embedding_offloading

Signed-off-by: Kate Cheng <yunhsuanc@nvidia.com>

* fix: Correct prompt table offloading condition in PromptTuningBuffers

Signed-off-by: Kate Cheng <yunhsuanc@nvidia.com>

* Clean up the code

Signed-off-by: Kate Cheng <yunhsuanc@nvidia.com>

* Add comments to explain the CPU <-> GPU copy using pinned memory

Signed-off-by: Kate Cheng <yunhsuanc@nvidia.com>

* Fix naming based on review comments

Signed-off-by: Kate Cheng <yunhsuanc@nvidia.com>

* Fix formatting based on pre-commit

Signed-off-by: Kate Cheng <yunhsuanc@nvidia.com>

* Modify --mm_embedding_offloading flag

Signed-off-by: Kate Cheng <yunhsuanc@nvidia.com>

---------

Signed-off-by: Kate Cheng <yunhsuanc@nvidia.com>
Co-authored-by: Haohang Huang <31998628+symphonylyh@users.noreply.github.com>
2025-04-21 14:31:01 +08:00
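
The commit above keeps the multimodal embedding ("prompt") table off the GPU when chunked prefill is enabled, and stages the slice needed by each context chunk through pinned host memory so the host-to-device copy can run asynchronously. Below is a minimal, self-contained C++/CUDA sketch of that idea, not the actual TensorRT-LLM implementation: the class name PromptTableOffloader and its methods are hypothetical, and the tensor layout, dtype, and error handling are simplified.

```cpp
// Sketch: hold the full multimodal embedding table in pinned (page-locked)
// host memory and copy only the rows needed by the current context chunk to
// the GPU before each chunked-prefill step. Hypothetical names throughout.
#include <cuda_runtime.h>
#include <cstddef>
#include <stdexcept>

class PromptTableOffloader
{
public:
    PromptTableOffloader(std::size_t numTokens, std::size_t hiddenSize)
        : mNumTokens(numTokens)
        , mHiddenSize(hiddenSize)
    {
        // Pinned host memory allows cudaMemcpyAsync to be truly asynchronous.
        checkCuda(cudaMallocHost(&mHostTable, numTokens * hiddenSize * sizeof(float)));
        // Device buffer sized for the largest chunk we expect (here: the whole table).
        checkCuda(cudaMalloc(&mDeviceChunk, numTokens * hiddenSize * sizeof(float)));
        checkCuda(cudaStreamCreate(&mStream));
    }

    ~PromptTableOffloader()
    {
        cudaFreeHost(mHostTable);
        cudaFree(mDeviceChunk);
        cudaStreamDestroy(mStream);
    }

    // Copy embeddings for tokens [begin, begin + count) from CPU to GPU for the
    // current context chunk; the copy is issued on a dedicated stream.
    float* fetchChunk(std::size_t begin, std::size_t count)
    {
        std::size_t const bytes = count * mHiddenSize * sizeof(float);
        checkCuda(cudaMemcpyAsync(mDeviceChunk, mHostTable + begin * mHiddenSize, bytes,
            cudaMemcpyHostToDevice, mStream));
        checkCuda(cudaStreamSynchronize(mStream));
        return mDeviceChunk;
    }

    // Host-side table to be filled with the multimodal embeddings once per request.
    float* hostTable() { return mHostTable; }

private:
    static void checkCuda(cudaError_t status)
    {
        if (status != cudaSuccess)
        {
            throw std::runtime_error(cudaGetErrorString(status));
        }
    }

    std::size_t mNumTokens;
    std::size_t mHiddenSize;
    float* mHostTable{nullptr};
    float* mDeviceChunk{nullptr};
    cudaStream_t mStream{};
};
```

The design choice this illustrates: with chunked prefill only a fraction of the prompt table is consumed per step, so keeping the full table resident on the GPU wastes memory; staging per-chunk slices through pinned memory trades a small copy cost for that saving, which is what the new mm_embedding_offloading option toggles.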
cache_transmission chore: exchange connection id with tagSend/tagRecv (#3320) 2025-04-14 09:30:34 +08:00
CMakeLists.txt Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
contextPhaseParams.cpp Update TensorRT-LLM (#2936) 2025-03-18 21:25:19 +08:00
debugConfig.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
decodingConfig.cpp Feat: Variable-Beam-Width-Search (VBWS) part3 (#3338) 2025-04-08 23:51:27 +08:00
disaggServerUtil.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
dynamicBatchConfig.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
dynamicBatchTuner.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
dynamicBatchTuner.h Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
executor.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
executorConfig.cpp feat: Offloading Multimodal embedding table to CPU in Chunked Prefill Mode (#3380) 2025-04-21 14:31:01 +08:00
executorImpl.cpp feat: Offloading Multimodal embedding table to CPU in Chunked Prefill Mode (#3380) 2025-04-21 14:31:01 +08:00
executorImpl.h feat: Offloading Multimodal embedding table to CPU in Chunked Prefill Mode (#3380) 2025-04-21 14:31:01 +08:00
executorKVCacheEventManager.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
extendedRuntimePerfKnobConfig.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
guidedDecodingConfig.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
guidedDecodingParams.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
intervalSet.h Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
jsonSerialization.cpp feat: Add BW measurement (#3070) 2025-03-28 10:53:00 +08:00
kvCacheConfig.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
kvCacheRetentionConfig.cpp chore: Clean up cpp runtime (#3537) 2025-04-15 16:06:14 +08:00
logitsPostProcessorConfig.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
loraConfig.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
model.h Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
mropeConfig.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
orchestratorConfig.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
orchestratorUtils.h Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
outputConfig.cpp feat: Allow individual gatherContext for each additional output (#3374) 2025-04-12 17:00:36 +08:00
parallelConfig.cpp feat: Add numNodes to ParallelConfig (#3346) 2025-04-13 13:55:04 +02:00
peftCacheConfig.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
promptTuningConfig.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
request.cpp Feat: Variable-Beam-Width-Search (VBWS) part3 (#3338) 2025-04-08 23:51:27 +08:00
requestImpl.h Feat: Variable-Beam-Width-Search (VBWS) part3 (#3338) 2025-04-08 23:51:27 +08:00
requestUtils.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
requestUtils.h Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
requestWithId.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
requestWithId.h Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
response.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
responseImpl.h Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
samplingConfig.cpp feat: Variable-Beam-Width-Search (VBWS) Part2 (#3133) 2025-04-02 12:31:28 +08:00
schedulerConfig.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
serialization.cpp feat: Integrate GPUDirect Storage (GDS) into Executor API (#3582) 2025-04-18 15:59:21 +08:00
serializeUtils.h feat: Allow individual gatherContext for each additional output (#3374) 2025-04-12 17:00:36 +08:00
speculativeDecodingConfig.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
tensor.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
types.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00