TensorRT-LLM/cpp/tensorrt_llm/pybind/executor
katec846 eeb605abd6
feat: Offloading Multimodal embedding table to CPU in Chunked Prefill Mode (#3380)
* Feat: Offload ptable to cpu if enable_chunk_context

Signed-off-by: Kate Cheng <yunhsuanc@nvidia.com>

* Feat: offload ptable to cpu for chunk context mode

* Fix and add comment

* Update Readme for multimodal and add a new param mm_embedding_offloading

* fix: Correct prompt table offloading condition in PromptTuningBuffers

* Clean up the code

* Add comments to explain copy from cpu <-> gpu using pinned memory

* Fix namings based on comments

* Fix format based on precommit

* Modify --mm_embedding_offloading flag

---------

Signed-off-by: Kate Cheng <yunhsuanc@nvidia.com>
Co-authored-by: Haohang Huang <31998628+symphonylyh@users.noreply.github.com>
2025-04-21 14:31:01 +08:00
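The commit keeps the multimodal embedding ("prompt") table in host memory during chunked prefill and copies only the slice each chunk needs to the GPU, rather than holding the whole table on device. A minimal Python sketch of that idea, with hypothetical names (`run_chunked_prefill`, `host_ptable`) standing in for the C++ runtime, where the host buffer is pinned so the host-to-device copy can be an async `cudaMemcpyAsync`:

```python
def chunks(seq_len, chunk_size):
    """Yield [start, end) ranges covering the prompt in chunk_size steps."""
    for start in range(0, seq_len, chunk_size):
        yield start, min(start + chunk_size, seq_len)

def run_chunked_prefill(host_ptable, seq_len, chunk_size):
    """Simulate prefill over a prompt whose embedding table lives on the host.

    host_ptable: full embedding table, resident in (pinned) host memory.
    Only the rows needed by the current chunk are staged on the device,
    so device memory holds at most chunk_size rows at a time.
    """
    transfer_sizes = []
    for start, end in chunks(seq_len, chunk_size):
        # Copy just this chunk's rows host -> device (stands in for an
        # async copy from pinned host memory in the real implementation).
        device_slice = list(host_ptable[start:end])
        transfer_sizes.append(len(device_slice))
        # ... run the context (prefill) step for this chunk using device_slice ...
    return transfer_sizes
```

For a 10-token prompt processed in chunks of 4, this stages three slices of 4, 4, and 2 rows, so peak device residency for the table is bounded by the chunk size instead of the full prompt length.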
bindings.cpp feat: Add BW measurement (#3070) 2025-03-28 10:53:00 +08:00
bindings.h Update TensorRT-LLM (#2849) 2025-03-04 18:44:00 +08:00
executor.cpp Update TensorRT-LLM (#2436) 2024-11-12 15:27:49 +08:00
executor.h Update TensorRT-LLM (#2562) 2024-12-11 00:31:05 -08:00
executorConfig.cpp feat: Offloading Multimodal embedding table to CPU in Chunked Prefill Mode (#3380) 2025-04-21 14:31:01 +08:00
executorConfig.h Update TensorRT-LLM (#2849) 2025-03-04 18:44:00 +08:00
request.cpp feat: Allow individual gatherContext for each additional output (#3374) 2025-04-12 17:00:36 +08:00
request.h Update TensorRT-LLM (#2849) 2025-03-04 18:44:00 +08:00
streamCaster.h Update TensorRT-LLM (#2755) 2025-02-11 03:01:00 +00:00
tensorCaster.h Update TensorRT-LLM (#2413) 2024-11-05 16:27:06 +08:00