Mirror of https://github.com/NVIDIA/TensorRT-LLM.git, synced 2026-01-14 06:27:45 +08:00
feat: support kv cache reuse for MLA

* support kv cache reuse for MLA: load compressed_kv and k_pe and do the up-projection; use the 192/128 head size; the MLA context kernel now supports Blackwell and Hopper
* add CI test
* fix: set k_pe head_num to 1 for kernel 2 and kernel 2V2
* resolve comments
* use GPTJ style RoPE for MLA
* fix rebase error and some docs
* fix kv_lens
* tiny fix
* fix torch compile
* fix: use normal device memory instead of pinned memory for unit test
* fix L0 tests
* fix torch compile after rebase
* resolve comments
* resolve comments again

Signed-off-by: Zhen Huang <145532724+zhhuang-nv@users.noreply.github.com>
Signed-off-by: Mingyang Jiang <13463932+jmydurant@users.noreply.github.com>
Signed-off-by: zhhuang-nv <145532724+zhhuang-nv@users.noreply.github.com>
Co-authored-by: Mingyang Jiang <13463932+jmydurant@users.noreply.github.com>
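The commit above describes the MLA kv-cache-reuse scheme: the cache stores only the compressed latent (`compressed_kv`) and the small shared RoPE key (`k_pe`, one head), and full per-head keys and values are recovered by an up-projection at attention time, which is what produces the 192/128 (key/value) head sizes the context kernel handles. Below is a minimal pure-Python sketch of that data flow, not the TensorRT-LLM implementation: the latent rank is deliberately reduced, and the weight names (`w_uk`, `w_uv`) and all sizes except the 128 + 64 key split are illustrative assumptions.

```python
import random

# Illustrative sizes (assumptions, not the real model config);
# only the 128 + 64 = 192 key split mirrors the commit's 192/128 head size.
KV_LORA_RANK = 16   # compressed latent dim (much larger in real models)
ROPE_DIM = 64       # k_pe dim per token, shared by all heads (head_num = 1)
HEAD_DIM = 128      # per-head "nope" dim for keys and values
NUM_HEADS = 2
SEQ_LEN = 3

random.seed(0)
rand_mat = lambda r, c: [[random.uniform(-1, 1) for _ in range(c)] for _ in range(r)]

def matmul(a, b):
    # plain (rows x cols) matrix product over nested lists
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)] for row in a]

# What the kv cache holds per token: compressed_kv and k_pe only.
compressed_kv = rand_mat(SEQ_LEN, KV_LORA_RANK)   # [seq, kv_lora_rank]
k_pe          = rand_mat(SEQ_LEN, ROPE_DIM)       # [seq, rope_dim], 1 head

# Hypothetical up-projection weights that recover per-head keys/values.
w_uk = [rand_mat(KV_LORA_RANK, HEAD_DIM) for _ in range(NUM_HEADS)]
w_uv = [rand_mat(KV_LORA_RANK, HEAD_DIM) for _ in range(NUM_HEADS)]

keys, values = [], []
for h in range(NUM_HEADS):
    k_nope = matmul(compressed_kv, w_uk[h])       # [seq, head_dim]
    v      = matmul(compressed_kv, w_uv[h])       # [seq, head_dim]
    # k_pe is broadcast to every head and concatenated onto k_nope,
    # giving 192-dim keys and 128-dim values per head.
    k = [kn + kp for kn, kp in zip(k_nope, k_pe)]
    keys.append(k)
    values.append(v)

print(len(keys[0][0]), len(values[0][0]))  # → 192 128
```

The point of caching `compressed_kv` instead of full keys/values is that reused cache blocks stay small; the cost is redoing the up-projection whenever a cached prefix is attended over, which is why a dedicated 192/128 context kernel matters.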
| Name | Last commit | Last commit date |
|---|---|---|
| .. | ||
| utils | ||
| allocateKvCache.cpp | ||
| assignReqSeqSlots.cpp | ||
| cacheFormatter.cpp | ||
| cacheFormatter.h | ||
| cacheTransBuffer.cpp | ||
| cacheTransBuffer.h | ||
| cacheTransceiver.cpp | ||
| capacityScheduler.cpp | ||
| CMakeLists.txt | ||
| contextProgress.cpp | ||
| createNewDecoderRequests.cpp | ||
| dataTransceiver.cpp | ||
| dataTransceiver.h | ||
| dataTransceiverImpl.cpp | ||
| dataTransceiverImpl.h | ||
| decoderBuffers.cpp | ||
| encoderBuffers.cpp | ||
| encoderBuffers.h | ||
| evictionPolicy.cpp | ||
| generateRequestOptions.cpp | ||
| guidedDecoder.cpp | ||
| handleContextLogits.cpp | ||
| handleGenerationLogits.cpp | ||
| kvCacheEventManager.cpp | ||
| kvCacheManager.cpp | ||
| kvCacheTransferManager.cpp | ||
| llmRequest.cpp | ||
| logitsPostProcessor.cpp | ||
| loraBuffers.cpp | ||
| loraBuffers.h | ||
| makeDecodingBatchInputOutput.cpp | ||
| medusaBuffers.cpp | ||
| microBatchScheduler.cpp | ||
| mlaCacheFormatter.cpp | ||
| mlaCacheFormatter.h | ||
| pauseRequests.cpp | ||
| peftCacheManager.cpp | ||
| promptTuningBuffers.cpp | ||
| rnnStateBuffers.cpp | ||
| rnnStateBuffers.h | ||
| rnnStateManager.cpp | ||
| runtimeBuffers.cpp | ||
| sequenceSlotManager.cpp | ||
| transformerBuffers.cpp | ||
| trtEncoderModel.cpp | ||
| trtEncoderModel.h | ||
| trtGptModel.h | ||
| trtGptModelFactory.h | ||
| trtGptModelInflightBatching.cpp | ||
| trtGptModelInflightBatching.h | ||
| updateDecoderBuffers.cpp | ||