Mirror of https://github.com/NVIDIA/TensorRT-LLM.git
Latest commit:

* Enable NoPE; fix a rotary embedding bug for gptj_style_rope; address PR comments; properly skip the rotary embedding for Llama4 RoPE layers
* Add support for FP8 checkpoints; fix checkpoint weight loading for FP8
* Temporarily disable min_latency_mode for Llama4

---------

Co-authored-by: Yilin Fan <yilinf@nvidia.com>
Co-authored-by: Sharan Chetlur <116769508+schetlur-nv@users.noreply.github.com>
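The commit's first item concerns conditionally skipping rotary position embedding: Llama4 interleaves NoPE (no positional embedding) layers with RoPE layers, so the attention module must apply RoPE only where it belongs. Below is a minimal, hypothetical sketch of that pattern; `RopeAttention`, `maybe_apply_rope`, and the `rotary_emb(q, k, position_ids)` call signature are illustrative assumptions, not the actual TensorRT-LLM API (see `attention.py` and `rotary_embedding.py` in the listing below for the real modules).

```python
import torch
import torch.nn as nn


class RopeAttention(nn.Module):
    """Toy attention wrapper that conditionally applies rotary embedding.

    Hypothetical sketch only; names and signatures are not TensorRT-LLM's.
    """

    def __init__(self, rotary_emb: nn.Module | None, is_nope_layer: bool):
        super().__init__()
        self.rotary_emb = rotary_emb
        # Llama4 interleaves NoPE layers (no positional encoding) with RoPE
        # layers, so each layer records whether it should skip RoPE.
        self.is_nope_layer = is_nope_layer

    def maybe_apply_rope(
        self,
        q: torch.Tensor,
        k: torch.Tensor,
        position_ids: torch.Tensor,
    ) -> tuple[torch.Tensor, torch.Tensor]:
        # On NoPE layers, return q and k untouched instead of rotating them;
        # this is the "properly skip the rotary embedding" behavior the
        # commit message describes.
        if self.is_nope_layer or self.rotary_emb is None:
            return q, k
        # Assumed interface: the rotary module rotates q and k in place of
        # any absolute position encoding.
        return self.rotary_emb(q, k, position_ids)
```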
Files:

- __init__.py
- attention.py
- decoder_layer.py
- embedding.py
- fused_moe.py
- gated_mlp.py
- linear.py
- logits_procesor.py
- mamba.py
- mlp.py
- rms_norm.py
- rotary_embedding.py