Mirror of https://github.com/NVIDIA/TensorRT-LLM.git (synced 2026-01-14 06:27:45 +08:00)
* Add Qwen3 dense model PyTorch backend support (initial commit); fix the incorrect-results issue.
* Add Qwen3 MoE model PyTorch backend support; reformat the code.
* perf: use the FlashInfer RMSNorm kernel for Qwen3.
* feat: support Qwen3 MoE RMSNorm.
* Put the computation of the Q and K norms (in attention) into a single CUDA stream, giving a 5%-8% throughput improvement on Qwen3 4B and Qwen3-MoE 30B-A3B. (A follow-up commit applied the remaining modifications.)
* Fix bugs when running the public Qwen3 models and the FP8 models.
* Fix bugs introduced by the rebase.
* Fix bugs caught by pre-commit.
* Fix a bug in attention.

Signed-off-by: bhsueh <11360707+byshiue@users.noreply.github.com>
Co-authored-by: Keddy Jin <jin.gq@aliyun.com>
Co-authored-by: Jiying Dong <87510204+dongjiyingdjy@users.noreply.github.com>
Co-authored-by: shao <shao@nvidia.com>
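The "perf" bullet replaces the eager PyTorch RMSNorm with FlashInfer's fused kernel. The diff itself is not shown on this page; below is a minimal sketch, assuming the `flashinfer` Python package and its `flashinfer.norm.rmsnorm` entry point (the shapes and dtypes here are illustrative, not taken from the PR):

```python
# Hedged sketch, NOT the TensorRT-LLM implementation: calling FlashInfer's
# fused RMSNorm kernel directly. Requires a CUDA GPU and the flashinfer
# package; tensor sizes below are made up for illustration.
import torch
from flashinfer.norm import rmsnorm

hidden_size = 4096
x = torch.randn(8, hidden_size, device="cuda", dtype=torch.float16)
weight = torch.ones(hidden_size, device="cuda", dtype=torch.float16)

# One fused kernel performs the mean-square reduction, normalization,
# and weight scaling, instead of several separate eager-mode ops.
out = rmsnorm(x, weight, eps=1e-6)
```

The appeal of the fused kernel is launch-overhead reduction: an eager RMSNorm issues several small kernels per call, which adds up in a decoder that normalizes every layer.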
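The single-CUDA-stream bullet describes launching both per-head norms on one auxiliary stream so they can overlap with unrelated work on the main stream. Again the actual diff is not shown here; the following is a minimal sketch of that general pattern, assuming PyTorch >= 2.4 for `nn.RMSNorm` (class and variable names are hypothetical):

```python
# Hedged sketch, NOT the TensorRT-LLM code: Q and K RMSNorm (as in
# Qwen3-style attention) issued on a single auxiliary CUDA stream.
import torch
import torch.nn as nn

class QKNorm(nn.Module):
    def __init__(self, head_dim: int, eps: float = 1e-6):
        super().__init__()
        # One RMSNorm each for Q and K, normalizing over the head dimension.
        self.q_norm = nn.RMSNorm(head_dim, eps=eps)
        self.k_norm = nn.RMSNorm(head_dim, eps=eps)
        self.side_stream = torch.cuda.Stream()

    def forward(self, q: torch.Tensor, k: torch.Tensor):
        main = torch.cuda.current_stream()
        # The side stream must wait until q and k are ready on the main stream.
        self.side_stream.wait_stream(main)
        with torch.cuda.stream(self.side_stream):
            q_out = self.q_norm(q)
            k_out = self.k_norm(k)
        # Tell the caching allocator that q/k are also consumed on the side
        # stream, so their memory is not reused before the norms finish.
        q.record_stream(self.side_stream)
        k.record_stream(self.side_stream)
        # The main stream must not read the outputs before the side stream is done.
        main.wait_stream(self.side_stream)
        return q_out, k_out
```

Keeping both norms on one stream means a single pair of synchronization points per attention call; kernels launched on the main stream between those points can run concurrently with the two small norm kernels.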
Files in this directory:

- __init__.py
- .gitkeep
- modeling_auto.py
- modeling_bert.py
- modeling_deepseekv3.py
- modeling_llama.py
- modeling_llava_next.py
- modeling_mamba_hybrid.py
- modeling_mistral.py
- modeling_mixtral.py
- modeling_mllama.py
- modeling_multimodal_encoder.py
- modeling_multimodal_utils.py
- modeling_nemotron_h.py
- modeling_nemotron_nas.py
- modeling_nemotron.py
- modeling_qwen2vl.py
- modeling_qwen3_moe.py
- modeling_qwen3.py
- modeling_qwen_moe.py
- modeling_qwen.py
- modeling_utils.py
- modeling_vila.py