TensorRT-LLM / tensorrt_llm / _torch / modules

Latest commit: a419b77fb5 by Nikita Korobov, 2025-08-28 10:08:05 -07:00
[None][fix] mxfp4 padding bug for TRT-LLM and CUTLASS MoE backends (#7214)
| Name                  | Last commit                                                                                     | Date                       |
|-----------------------|-------------------------------------------------------------------------------------------------|----------------------------|
| fused_moe/            | [None][fix] mxfp4 padding bug for TRT-LLM and CUTLASS MoE backends (#7214)                      | 2025-08-28 10:08:05 -07:00 |
| mamba/                | [TRTLLM-4921][feat] Enable chunked prefill for Nemotron-H (#6334)                               | 2025-08-22 12:15:20 -04:00 |
| __init__.py           | Update TensorRT-LLM (#2755)                                                                     | 2025-02-11 03:01:00 +00:00 |
| attention.py          | [None][fix] Remove and fuse some element-wise ops in the ds-r1-fp8 model (#7238)                | 2025-08-27 10:35:38 +08:00 |
| decoder_layer.py      | chore: Change the type annotations of input_ids and position_ids to int32. (#4632)              | 2025-06-07 16:10:47 +08:00 |
| embedding.py          | [None] [feat] Add model gpt-oss (#6645)                                                         | 2025-08-07 03:04:18 -04:00 |
| gated_mlp.py          | [TRTLLM-6898][feat] make fused_moe_cute_dsl work on blackwell (#6616)                           | 2025-08-08 15:03:48 +08:00 |
| layer_norm.py         | [#6187][feat] add LayerNorm module (#6625)                                                      | 2025-08-12 21:43:30 +02:00 |
| linear.py             | [None][feat] Apply AutoTuner to fp8_block_scale_deep_gemm to trigger JIT ahead of time. (#7113) | 2025-08-25 10:48:31 +08:00 |
| logits_processor.py   | feat: LogitsProcessor in PyTorch backend (#3145)                                                | 2025-05-01 14:15:30 -07:00 |
| mlp.py                | feat: add LLmArgs option to force using dynamic quantization (#5346)                            | 2025-07-01 12:16:09 -07:00 |
| multi_stream_utils.py | [None][refactor] Refactor Torch Compile Backend, MoeLoadBalancer and warmup Logic (#6615)       | 2025-08-19 09:58:44 +08:00 |
| rms_norm.py           | [None][ci] move unittests to sub-directories (#6635)                                            | 2025-08-20 05:42:22 -04:00 |
| rotary_embedding.py   | [None][ci] move unittests to sub-directories (#6635)                                            | 2025-08-20 05:42:22 -04:00 |
| swiglu.py             | [None][chore] Wrap the swiglu into custom op to avoid redundant device copy. (#7021)            | 2025-08-27 13:02:10 +08:00 |
| triton_linear.py      | [None] [feat] Add model gpt-oss (#6645)                                                         | 2025-08-07 03:04:18 -04:00 |
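
The swiglu.py entry above describes wrapping SwiGLU in a custom op so the compiler sees one opaque node instead of several element-wise ops. Below is a minimal sketch of that pattern using PyTorch's torch.library.custom_op; the op name example::swiglu, the packed gate/up layout, and the exact gating formula are illustrative assumptions, not taken from swiglu.py.

```python
# Minimal sketch: wrapping a SwiGLU activation in a PyTorch custom op.
# Assumptions (not taken from swiglu.py): the op name "example::swiglu"
# and a packed [gate, up] layout on the last dimension.
import torch
import torch.nn.functional as F


@torch.library.custom_op("example::swiglu", mutates_args=())
def swiglu(x: torch.Tensor) -> torch.Tensor:
    # Split the packed gate/up halves and apply SiLU gating in one call,
    # so tracing sees a single op rather than a chain of element-wise ops.
    gate, up = x.chunk(2, dim=-1)
    return F.silu(gate) * up


@swiglu.register_fake
def _(x: torch.Tensor) -> torch.Tensor:
    # Shape/dtype propagation for torch.compile tracing:
    # the output halves the last dimension.
    return x.new_empty(*x.shape[:-1], x.shape[-1] // 2)
```

With the fake kernel registered, torch.compile can trace shapes without running the real kernel, and the whole activation stays a single graph node; keeping it opaque in this way is one plausible reading of how a custom-op wrapper avoids the redundant device copy the commit message mentions.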