TensorRT-LLM/tensorrt_llm/_torch/modules
Jin Li d49374bc45
[TRTLLM-7408][feat] Wrap MOE with custom op. (#7277)
Signed-off-by: Jin Li <59594262+liji-nv@users.noreply.github.com>
2025-09-09 12:18:56 -04:00
fused_moe [TRTLLM-7408][feat] Wrap MOE with custom op. (#7277) 2025-09-09 12:18:56 -04:00
mamba [None][fix] enable NvFP4/FP8 quantization for Nemotron-H architecture (#7589) 2025-09-09 11:42:22 +03:00
__init__.py Update TensorRT-LLM (#2755) 2025-02-11 03:01:00 +00:00
attention.py [https://nvbugs/5434424][fix] A quick fix for the wrong output issue of SM89 blocked scaling batched GEMM when the input tensor is non-contiguous. (#7615) 2025-09-09 08:58:15 -04:00
decoder_layer.py chore: Change the type annotations of input_ids and position_ids to int32. (#4632) 2025-06-07 16:10:47 +08:00
embedding.py [TRTLLM-7440][fix] Split fused_input_embed to separate out host sync (#7280) 2025-09-06 23:11:39 -04:00
gated_mlp.py [None][chore] Mass integration of release/1.0 - 3rd (#7519) 2025-09-08 14:03:04 +08:00
layer_norm.py [#6187][feat] add LayerNorm module (#6625) 2025-08-12 21:43:30 +02:00
linear.py [OMNIML-2336][feat] Add NVFP4 x FP8 (#6809) 2025-09-04 09:03:38 -07:00
logits_processor.py feat: LogitsProcessor in PyTorch backend (#3145) 2025-05-01 14:15:30 -07:00
mlp.py feat: add LLmArgs option to force using dynamic quantization (#5346) 2025-07-01 12:16:09 -07:00
multi_stream_utils.py [None][refactor] Refactor Torch Compile Backend, MoeLoadBalancer and warmup Logic (#6615) 2025-08-19 09:58:44 +08:00
qk_norm_attention.py [#6186][feat] Introduce QKNormRoPEAttention module (#6830) 2025-09-05 14:04:41 +02:00
rms_norm.py [None][ci] move unittests to sub-directories (#6635) 2025-08-20 05:42:22 -04:00
rotary_embedding.py [None][ci] move unittests to sub-directories (#6635) 2025-08-20 05:42:22 -04:00
swiglu.py [None][chore] Wrap the swiglu into custom op to avoid redundant device copy. (#7021) 2025-08-27 13:02:10 +08:00
triton_linear.py [None] [feat] Add model gpt-oss (#6645) 2025-08-07 03:04:18 -04:00
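Several of the entries above (fused_moe, swiglu.py) mention wrapping module math in a PyTorch custom op to avoid redundant device copies. Below is a minimal sketch of that general pattern, assuming PyTorch >= 2.4 with torch.library.custom_op; the `demo::swiglu` op name, shapes, and helper code are illustrative only and do not reflect TensorRT-LLM's actual module interfaces.

```python
import torch
import torch.nn.functional as F


# Registering a custom op makes the gating math appear as a single node to the
# tracer/compiler instead of several small eager ops.
@torch.library.custom_op("demo::swiglu", mutates_args=())
def swiglu(x: torch.Tensor) -> torch.Tensor:
    # x holds a fused gate/up projection output; split it and apply SiLU gating.
    gate, up = x.chunk(2, dim=-1)
    return F.silu(gate) * up


@swiglu.register_fake
def _(x: torch.Tensor) -> torch.Tensor:
    # Shape-only ("fake") implementation so the op can be traced without
    # executing the real kernel; the output has half the last dimension.
    return x.new_empty(x.shape[:-1] + (x.shape[-1] // 2,))


if __name__ == "__main__":
    out = swiglu(torch.randn(2, 8))
    print(out.shape)  # torch.Size([2, 4])
```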