Mirror of https://github.com/NVIDIA/TensorRT-LLM.git (synced 2026-01-14 06:27:45 +08:00)
Latest commit: Add two-shot allreduce kernel and MNNVL multicast buffer

* Add two-shot allreduce kernel and MNNVL multicast buffer
* Add comments and a unit test for the two-shot kernel
* Update dispatch logic and merge the dispatch logic fix
* Use a CPU barrier instead of a GPU barrier at init
* Update the kernel to use a GPU-managed buffer
* Simplify the AllReduce interface
* Refine, rename, and clean up code; fix compile errors and warnings
* Skip the unit test for no_fusion

Signed-off-by: Shiyu Li <shili@nvidia.com>
Signed-off-by: Zongfei Jing <20381269+zongfeijing@users.noreply.github.com>
Co-authored-by: Shiyu Li <shili@nvidia.com>
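The commit above refers to a two-shot allreduce: shot one is a reduce-scatter in which each rank reduces the chunk it owns across all peers' buffers, and shot two is an all-gather in which every rank copies the reduced chunks back so all ranks end with the full result. The sketch below is a minimal CPU simulation of that pattern, using `std::thread` and a `std::barrier` to stand in for the cross-rank synchronization; it is illustrative only and does not reflect the actual CUDA kernel, MNNVL multicast buffers, or interfaces in allreduceOp.cpp.

```cpp
// Hypothetical CPU sketch of a two-shot allreduce (reduce-scatter + all-gather).
// All names and sizes are illustrative; this is not the TensorRT-LLM kernel.
#include <barrier>
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    constexpr int kRanks = 4;               // number of simulated "GPUs"
    constexpr int kElems = 8;               // elements per rank (divisible by kRanks)
    constexpr int kChunk = kElems / kRanks; // chunk owned by each rank

    // Each rank's local input/output; peers can read each other's buffers,
    // loosely mimicking a peer-accessible (multicast) buffer.
    std::vector<std::vector<float>> input(kRanks, std::vector<float>(kElems));
    std::vector<std::vector<float>> output(kRanks, std::vector<float>(kElems));
    for (int r = 0; r < kRanks; ++r)
        for (int i = 0; i < kElems; ++i)
            input[r][i] = static_cast<float>(r + 1); // rank r contributes (r+1)

    std::barrier sync(kRanks); // stands in for the cross-rank barrier

    auto rankFn = [&](int rank) {
        // Shot 1 (reduce-scatter): this rank owns chunk `rank` and reduces it
        // across every peer's input buffer.
        sync.arrive_and_wait(); // all inputs visible before reading peers
        for (int i = rank * kChunk; i < (rank + 1) * kChunk; ++i) {
            float acc = 0.f;
            for (int peer = 0; peer < kRanks; ++peer) acc += input[peer][i];
            output[rank][i] = acc;
        }
        // Shot 2 (all-gather): copy every other owner's reduced chunk locally.
        sync.arrive_and_wait(); // reduced chunks ready before gathering
        for (int owner = 0; owner < kRanks; ++owner) {
            if (owner == rank) continue; // own chunk is already in place
            for (int i = owner * kChunk; i < (owner + 1) * kChunk; ++i)
                output[rank][i] = output[owner][i];
        }
    };

    std::vector<std::thread> threads;
    for (int r = 0; r < kRanks; ++r) threads.emplace_back(rankFn, r);
    for (auto& t : threads) t.join();

    // Every element should equal 1 + 2 + 3 + 4 = 10 on every rank.
    std::printf("rank0[0] = %.1f\n", output[0][0]);
    return 0;
}
```

Compared with a one-shot allreduce (every rank reading and reducing the full tensor), the two-shot variant splits the reduction work across ranks at the cost of a second synchronization, which typically pays off for larger message sizes.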
| File |
|---|
| allgatherOp.cpp |
| allreduceOp.cpp |
| attentionOp.cpp |
| CMakeLists.txt |
| convertSpecDecodingMaskToPackedMaskOp.cpp |
| cublasScaledMM.cpp |
| cutlassScaledMM.cpp |
| dynamicDecodeOp.cpp |
| dynamicDecodeOp.h |
| fmhaPackMaskOp.cpp |
| fp4BatchedQuantize.cpp |
| fp4BlockScaleMoe.cpp |
| fp4Gemm.cpp |
| fp4GemmTrtllmGen.cpp |
| fp4Op.cpp |
| fp4Quantize.cpp |
| fp4Quantize.h |
| fp8BatchedGemmTrtllmGen.cpp |
| fp8BlockScaleMoe.cpp |
| fp8BlockScalingGemm.cpp |
| fp8Op.cpp |
| fp8Op.h |
| fp8PerTensorScaleMoe.cpp |
| fp8PerTensorScalingTrtllmGenGemm.cpp |
| fp8Quantize.cpp |
| fusedTopkSoftmax.cpp |
| gatherTreeOp.cpp |
| groupRmsNormOp.cpp |
| logitsBitmaskOp.cpp |
| loraOp.cpp |
| mambaConv1dOp.cpp |
| mlaPreprocessOp.cpp |
| moeCommOp.cpp |
| moeLoadBalanceOp.cpp |
| moeOp.cpp |
| mtpOp.cpp |
| ncclCommunicatorOp.cpp |
| ncclCommunicatorOp.h |
| noAuxTcOp.cpp |
| parallelDecodeKVCacheUpdateOp.cpp |
| redrafterCurandOp.cpp |
| reducescatterOp.cpp |
| relativeAttentionBiasOp.cpp |
| selectiveScanOp.cpp |
| thUtils.cpp |
| thUtils.h |
| userbuffersFinalizeOp.cpp |
| userbuffersTensor.cpp |
| userbuffersTensor.h |
| weightOnlyQuantOp.cpp |