Mirror of https://github.com/NVIDIA/TensorRT-LLM.git, synced 2026-01-14 06:27:45 +08:00
* Add TRT-LLM Gen MOE to Deepseek
  * fix fused moe rebase bug.
  * Fix atol in test_fp4_gemm_quantize.py
  * fix fused moe rebase bug.
  * Fix FusedMoe.
  * Disable 2nd routing kernel preexit
  * Bump routing reduction to fp32
  * Disable PDL for fc1
  * [DEBUG] Lift token limit to 16k
  * [Bugfix] Token limit to 16k + fp32 routing + tanh
  * Make fp8 tileN 8
  * Fix FP8 MoE + Remove redundent temp output for FP4
  * [FP8-only] Avoid wasting CTAs for activation kernel
  * fix: unblock FP8 weightloading with trtllm-gen
  * Remove max_token limit for trtllm-gen path
  * perf: avoid type-conversion and fill_ from aten
  * Minor fix

  Signed-off-by: Hao Lu <haolu@nvidia.com>

* Fix rebase issues

  Signed-off-by: Hao Lu <haolu@nvidia.com>

* Fix compile issue

  Signed-off-by: Zongfei Jing <20381269+zongfeijing@users.noreply.github.com>

* CI clean

  Signed-off-by: Zongfei Jing <20381269+zongfeijing@users.noreply.github.com>

---------

Signed-off-by: Hao Lu <haolu@nvidia.com>
Signed-off-by: Zongfei Jing <20381269+zongfeijing@users.noreply.github.com>
Co-authored-by: Zongfei Jing <20381269+zongfeijing@users.noreply.github.com>
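One of the changes above ("Bump routing reduction to fp32") concerns accumulation precision in the MoE routing path. As a minimal sketch only, not the TensorRT-LLM kernel: the kernel name `routingReduceFp32`, the expert count, and the row-major score layout below are assumptions, but the idea of reading fp16 routing scores while accumulating the reduction in fp32 looks like this:

```cpp
// Sketch: reduce per-token expert routing scores with an fp32 accumulator.
// Inputs are fp16 (__half), but summing in float avoids the rounding drift
// of a half-precision running sum. All names here are hypothetical.
#include <cuda_fp16.h>

constexpr int kNumExperts = 256; // assumed expert count

__global__ void routingReduceFp32(__half const* scores, float* sums, int numTokens)
{
    int const token = blockIdx.x * blockDim.x + threadIdx.x;
    if (token >= numTokens)
    {
        return;
    }

    float acc = 0.f; // fp32 accumulator: the point of the change
    for (int e = 0; e < kNumExperts; ++e)
    {
        acc += __half2float(scores[token * kNumExperts + e]);
    }
    sums[token] = acc; // e.g. later used to normalize top-k routing weights
}
```

Keeping the accumulator in fp32 costs little but keeps routing weights stable when many small scores are summed, which is the usual motivation for this kind of precision bump.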
| File |
|---|
| assert.cpp |
| attentionOp.cpp |
| attentionOp.h |
| CMakeLists.txt |
| cublasMMWrapper.cpp |
| cublasMMWrapper.h |
| cublasVersionCheck.h |
| cudaBf16Fallbacks.cuh |
| cudaBufferUtils.cuh |
| cudaDriverWrapper.cpp |
| cudaDriverWrapper.h |
| cudaFp8Utils.cu |
| cudaProfilerUtils.cpp |
| cudaTypeUtils.cuh |
| customAllReduceUtils.h |
| envUtils.cpp |
| envUtils.h |
| jsonSerializeOptional.h |
| logger.cpp |
| mathUtils.h |
| memoryUtils.cu |
| memoryUtils.h |
| nvtxUtils.h |
| opUtils.cpp |
| opUtils.h |
| quantTypeUtils.cuh |
| reduceKernelUtils.cuh |
| safetensors.cpp |
| safetensors.h |
| stlUtils.h |
| stringUtils.cpp |
| timestampUtils.cpp |
| timestampUtils.h |
| tllmException.cpp |
| workspace.h |