Mirror of https://github.com/NVIDIA/TensorRT-LLM.git (synced 2026-02-04 02:02:01 +08:00)
Latest commit (squashed):

* feat: TRT-LLM Gen FP8 MoE Llama4
* feat: TRT-LLM Gen Llama4 MoE Top1 routing
* feat: add per-tensor FP8 TRT-LLM Gen GEMMs
* Add license for cpp/tensorrt_llm/kernels/trtllmGenKernels/blockScaleMoe/gemmCubins
* Add guard for routingIndicesClusterKernel
* Guard sm90+ for routing kernels

Signed-off-by: Nikita Korobov <nkorobov@nvidia.com>
Signed-off-by: Jiqun Tu <jtu@nvidia.com>
Signed-off-by: Chenfei Zhang <chenfeiz@nvidia.com>
Co-authored-by: Nikita Korobov <nkorobov@nvidia.com>
Co-authored-by: Jiqun Tu <jtu@nvidia.com>
| File |
|---|
| deep_gemm_tests.py |
| test_cublas_mm.py |
| test_fp4_bmm_quantize.py |
| test_fp4_gemm_quantize.py |
| test_fp4_linear.py |
| test_fp8_batched_gemm.py |
| test_fp8_block_scale_gemm.py |
| test_fp8_linear.py |
| test_fp8_quantize.py |
| test_logits_bitmask_op.py |
| test_mamba_conv1d_op.py |
| test_moe_alltoall.py |
| test_moe.py |
| test_noaux_tc.py |
| test_scaled_mm.py |
| test_selective_scan_op.py |