Mirror of https://github.com/NVIDIA/TensorRT-LLM.git (synced 2026-01-14 06:27:45 +08:00)
* support kv cache reuse for MLA: load compressed_kv and k_pe from the cache and do the up-projection; use the 192/128 head size MLA context kernel, which now supports Blackwell and Hopper (sketched below)
  Signed-off-by: Zhen Huang <145532724+zhhuang-nv@users.noreply.github.com>
* add CI test
  Signed-off-by: Zhen Huang <145532724+zhhuang-nv@users.noreply.github.com>
* fix: set k_pe head_num to 1 for kernel 2 and kernel 2V2
  Signed-off-by: Mingyang Jiang <13463932+jmydurant@users.noreply.github.com>
* resolve comments
  Signed-off-by: Zhen Huang <145532724+zhhuang-nv@users.noreply.github.com>
* use GPT-J style RoPE for MLA
  Signed-off-by: Zhen Huang <145532724+zhhuang-nv@users.noreply.github.com>
* fix rebase error and some docs
  Signed-off-by: Zhen Huang <145532724+zhhuang-nv@users.noreply.github.com>
* fix kv_lens
  Signed-off-by: Zhen Huang <145532724+zhhuang-nv@users.noreply.github.com>
* tiny fix
  Signed-off-by: Zhen Huang <145532724+zhhuang-nv@users.noreply.github.com>
* fix torch compile
  Signed-off-by: Zhen Huang <145532724+zhhuang-nv@users.noreply.github.com>
* fix: use normal device memory instead of pinned memory for unit test
  Signed-off-by: Mingyang Jiang <13463932+jmydurant@users.noreply.github.com>
* fix L0 tests
  Signed-off-by: Zhen Huang <145532724+zhhuang-nv@users.noreply.github.com>
* fix torch compile after rebase
  Signed-off-by: Zhen Huang <145532724+zhhuang-nv@users.noreply.github.com>
* resolve comments
  Signed-off-by: Zhen Huang <145532724+zhhuang-nv@users.noreply.github.com>
* resolve comments again
  Signed-off-by: Zhen Huang <145532724+zhhuang-nv@users.noreply.github.com>

---------

Signed-off-by: Zhen Huang <145532724+zhhuang-nv@users.noreply.github.com>
Signed-off-by: Mingyang Jiang <13463932+jmydurant@users.noreply.github.com>
Signed-off-by: zhhuang-nv <145532724+zhhuang-nv@users.noreply.github.com>
Co-authored-by: Mingyang Jiang <13463932+jmydurant@users.noreply.github.com>
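The first commit in the squashed message is easier to follow with a shape-level sketch: for MLA KV-cache reuse, the cache holds only the low-rank compressed_kv plus the small rotary part k_pe, and full keys/values are re-materialized by an up-projection before the context attention. The snippet below is a hypothetical illustration, not code from this repository: the dimension constants assume DeepSeek-V2-style MLA (kv_lora_rank 512, 128 non-rotary + 64 rotary query/key dims, 128 value dim), and the names `up_project_cached_kv` and `apply_gptj_rope` are invented for the example.

```python
import torch

# Assumed DeepSeek-V2-style MLA dimensions (illustrative, not taken from this repo).
KV_LORA_RANK = 512   # width of compressed_kv stored in the KV cache
QK_NOPE_DIM = 128    # non-rotary part of each K head
QK_ROPE_DIM = 64     # rotary part (k_pe); stored with head_num = 1
V_HEAD_DIM = 128     # value head size
NUM_HEADS = 16

def apply_gptj_rope(x, cos, sin):
    """GPT-J style RoPE: rotate adjacent (even, odd) element pairs,
    rather than the two halves of the head.

    x:        [num_tokens, dim] with dim even
    cos, sin: [num_tokens, dim // 2]
    """
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

def up_project_cached_kv(compressed_kv, k_pe, w_uk, w_uv):
    """Rebuild per-head K (192-dim) and V (128-dim) from the cached tensors.

    compressed_kv: [num_tokens, KV_LORA_RANK]  -- reused from the KV cache
    k_pe:          [num_tokens, QK_ROPE_DIM]   -- rotary part, already RoPE'd
    w_uk:          [KV_LORA_RANK, NUM_HEADS * QK_NOPE_DIM]
    w_uv:          [KV_LORA_RANK, NUM_HEADS * V_HEAD_DIM]
    """
    num_tokens = compressed_kv.shape[0]
    k_nope = (compressed_kv @ w_uk).view(num_tokens, NUM_HEADS, QK_NOPE_DIM)
    v = (compressed_kv @ w_uv).view(num_tokens, NUM_HEADS, V_HEAD_DIM)
    # k_pe is stored with a single head; broadcast it to every attention head.
    k_pe = k_pe.unsqueeze(1).expand(num_tokens, NUM_HEADS, QK_ROPE_DIM)
    k = torch.cat([k_nope, k_pe], dim=-1)  # [num_tokens, NUM_HEADS, 128 + 64]
    return k, v
```

Under these assumptions the re-materialized keys are 192 wide and the values 128 wide, which is what the "192/128 head size MLA context kernel" in the first commit refers to.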
| Name |
|---|
| allgatherOp.cpp |
| allreduceOp.cpp |
| attentionOp.cpp |
| CMakeLists.txt |
| convertSpecDecodingMaskToPackedMaskOp.cpp |
| cublasScaledMM.cpp |
| cutlassScaledMM.cpp |
| dynamicDecodeOp.cpp |
| dynamicDecodeOp.h |
| fmhaPackMaskOp.cpp |
| fp4BatchedQuantize.cpp |
| fp4BlockScaleMoe.cpp |
| fp4Gemm.cpp |
| fp4GemmTrtllmGen.cpp |
| fp4Op.cpp |
| fp4Quantize.cpp |
| fp4Quantize.h |
| fp8BatchedGemmTrtllmGen.cpp |
| fp8BlockScaleMoe.cpp |
| fp8BlockScalingGemm.cpp |
| fp8Op.cpp |
| fp8Op.h |
| fp8PerTensorScaleMoe.cpp |
| fp8PerTensorScalingTrtllmGenGemm.cpp |
| fp8Quantize.cpp |
| fusedTopkSoftmax.cpp |
| gatherTreeOp.cpp |
| groupRmsNormOp.cpp |
| logitsBitmaskOp.cpp |
| loraOp.cpp |
| mambaConv1dOp.cpp |
| mlaPreprocessOp.cpp |
| moeCommOp.cpp |
| moeOp.cpp |
| mtpOp.cpp |
| ncclCommunicatorOp.cpp |
| ncclCommunicatorOp.h |
| noAuxTcOp.cpp |
| parallelDecodeKVCacheUpdateOp.cpp |
| redrafterCurandOp.cpp |
| reducescatterOp.cpp |
| relativeAttentionBiasOp.cpp |
| selectiveScanOp.cpp |
| thUtils.cpp |
| thUtils.h |
| userbuffersFinalizeOp.cpp |
| userbuffersTensor.cpp |
| userbuffersTensor.h |
| weightOnlyQuantOp.cpp |