TensorRT-LLM/tensorrt_llm/_torch
Latest commit b8818b45be by Chang Liu (2025-04-18 01:54:59 +00:00):
fix: llama4: address couple of issues in llama4 attention module (#3491)

* Fix the attention module for llama4

* Address review comments

* Rebase to accommodate the latest attention refactor and refactor llama4 attention

* Remove aux_stream from the classic attention path

* Use RMSNorm for L2Norm

* Update tensorrt_llm/_torch/models/modeling_llama.py

* Add typing information for _attn_qkv

* Remove a redundant comment

* Simplify the llama4 DecoderLayer logic

Signed-off-by: Chang Liu <lc9114@gmail.com>
Co-authored-by: hlu1 <14827759+hlu1@users.noreply.github.com>
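The "Use RMSNorm for L2Norm" bullet above likely reflects that a Llama-style L2Norm (x * rsqrt(mean(x^2) + eps), applied without a learnable weight) computes exactly what an RMSNorm with the weight dropped computes, so an existing RMSNorm module can be reused. A minimal, dependency-free sketch of that equivalence (function names are illustrative, not the TensorRT-LLM API):

```python
import math

def rms_norm(x, eps=1e-6):
    # Unweighted RMSNorm: x / sqrt(mean(x^2) + eps)
    rms = math.sqrt(sum(v * v for v in x) / len(x) + eps)
    return [v / rms for v in x]

def l2_norm(x, eps=1e-6):
    # Llama-style "L2Norm" as used on q/k projections:
    # x * rsqrt(mean(x^2) + eps) -- the same formula, so we
    # can simply delegate to the RMSNorm implementation.
    return rms_norm(x, eps)

x = [3.0, 4.0]
# mean(x^2) = 12.5, rms ~= 3.5355
print(rms_norm(x))  # ~[0.8485, 1.1314]
print(l2_norm(x) == rms_norm(x))  # True
```

Note this differs from classic unit-L2 normalization (x / ||x||) by a constant factor of sqrt(d), where d is the vector length; the Llama-style form divides by the root-mean-square rather than the full norm.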
Name                     Last commit                                                                                                 Last commit date
attention_backend/       Support CUDA graphs for EAGLE3 (#3176)                                                                      2025-04-17 04:53:50 +08:00
auto_deploy/             chore: move all distributed related codes into _torch.distributed directory (#3511)                         2025-04-15 08:39:17 +08:00
compilation/             feat: Add NVFP4 UB pattern optimization pass in torch compile (#3371)                                       2025-04-11 21:25:29 +08:00
custom_ops/              Fix fused_moe fallback issue. (#3652)                                                                       2025-04-17 23:17:04 +08:00
distributed/             chore: move all distributed related codes into _torch.distributed directory (#3511)                         2025-04-15 08:39:17 +08:00
models/                  fix: llama4: address couple of issues in llama4 attention module (#3491)                                    2025-04-18 01:54:59 +00:00
modules/                 fix: llama4: address couple of issues in llama4 attention module (#3491)                                    2025-04-18 01:54:59 +00:00
peft/                    added loraOp into lora layer + test for mlp and comparison to lora plugin (#3455)                           2025-04-17 12:48:27 +08:00
pyexecutor/              feat: allocate minimal blocks per window size (#3028)                                                       2025-04-17 16:04:57 +08:00
speculative/             Support CUDA graphs for EAGLE3 (#3176)                                                                      2025-04-17 04:53:50 +08:00
__init__.py              Update TensorRT-LLM (#2755)                                                                                 2025-02-11 03:01:00 +00:00
autotuner.py             feat: Apply the new torch-flow compatible AutoTuner to both Fused MoE and NVFP4 Linear operators. (#3151)   2025-04-08 14:28:36 +08:00
llm.py                   test: [TRTLLM-4334] Create 1.0 criteria scope from API stability references (#3069)                        2025-03-26 18:14:35 +08:00
metadata.py              feat: no-cache attention in PyTorch workflow (#3085)                                                        2025-04-05 01:54:32 +08:00
model_config.py          feat: Support cos_sin_cache in all cases. (#3517)                                                           2025-04-16 13:48:44 +08:00
pipeline_interface.py    Update (#2978)                                                                                              2025-03-23 16:39:35 +08:00
utils.py                 Cache sin cos in model instead of global LRU cache. (#3378)                                                 2025-04-14 11:19:09 +08:00