Mirror of https://github.com/NVIDIA/TensorRT-LLM.git (synced 2026-01-24 04:33:04 +08:00)
* feat: TRT-LLM Gen FP8 MoE Llama4
* feat: TRT-LLM Gen Llama4 MoE Top-1 routing
* feat: add per-tensor FP8 TRT-LLM Gen GEMMs
* Add license for cpp/tensorrt_llm/kernels/trtllmGenKernels/blockScaleMoe/gemmCubins
* Add guard for routingIndicesClusterKernel
* Guard sm90+ for routing kernels

Signed-off-by: Nikita Korobov <nkorobov@nvidia.com>
Signed-off-by: Jiqun Tu <jtu@nvidia.com>
Signed-off-by: Chenfei Zhang <chenfeiz@nvidia.com>
Co-authored-by: Nikita Korobov <nkorobov@nvidia.com>
Co-authored-by: Jiqun Tu <jtu@nvidia.com>
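The commit above mentions Top-1 routing for the Llama4 MoE: each token's router logits are reduced to a single expert choice plus a gating weight. The actual TRT-LLM Gen implementation is a CUDA kernel (`routingIndicesClusterKernel` and friends, guarded to sm90+); the snippet below is only an illustrative pure-Python sketch of the general top-1 routing idea, not NVIDIA's kernel logic, and the function name `top1_route` is hypothetical.

```python
import math

def top1_route(router_logits):
    """Illustrative top-1 MoE routing: for each token, softmax the router
    logits over the experts and keep only the argmax expert together with
    its softmax probability (the gating weight)."""
    routes = []
    for logits in router_logits:
        # Numerically stable softmax over the expert dimension.
        m = max(logits)
        exps = [math.exp(x - m) for x in logits]
        total = sum(exps)
        probs = [e / total for e in exps]
        # Top-1: the single highest-probability expert wins the token.
        idx = max(range(len(probs)), key=probs.__getitem__)
        routes.append((idx, probs[idx]))
    return routes

# Two tokens, four experts: token 0 goes to expert 1, token 1 to expert 0.
print(top1_route([[0.1, 2.0, 0.3, 0.0],
                  [1.5, 0.2, 0.1, 0.4]]))
```

In a real MoE layer these (expert index, weight) pairs drive a permute/gather so each expert's GEMM only sees the tokens routed to it, which is where the FP8 grouped GEMMs from the commit come in.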
Files in this directory:

* __init__.py
* .gitkeep
* modeling_auto.py
* modeling_bert.py
* modeling_clip.py
* modeling_deepseekv3.py
* modeling_llama.py
* modeling_llava_next.py
* modeling_mamba_hybrid.py
* modeling_mistral.py
* modeling_mixtral.py
* modeling_mllama.py
* modeling_multimodal_encoder.py
* modeling_multimodal_utils.py
* modeling_nemotron_h.py
* modeling_nemotron_nas.py
* modeling_nemotron.py
* modeling_qwen2vl.py
* modeling_qwen3_moe.py
* modeling_qwen3.py
* modeling_qwen_moe.py
* modeling_qwen.py
* modeling_siglip.py
* modeling_utils.py
* modeling_vila.py