TensorRT-LLM / tensorrt_llm / _torch / modules
Latest commit: 1f8ae2b2db by Yuening Li, 2025-08-15 17:15:49 -04:00
[TRTLLM-5863][feat] Support MoE INT8 Weight-Only-Quantization in PyTorch Workflow (#6629)
Signed-off-by: Yuening Li <62227368+yueningl@users.noreply.github.com>
fused_moe/             [TRTLLM-5863][feat] Support MoE INT8 Weight-Only-Quantization in PyTorch Workflow (#6629)      2025-08-15 17:15:49 -04:00
mamba/                 [TRTLLM-6174][feat] Enable FP32 mamba ssm cache (#6574)                                        2025-08-10 16:27:51 -04:00
__init__.py
attention.py           [https://nvbugs/5427801][fix] Torch compile support for Llama4 and Ea… (#6858)                  2025-08-15 11:14:20 -04:00
decoder_layer.py
embedding.py           [None][feat] Add model gpt-oss (#6645)                                                         2025-08-07 03:04:18 -04:00
gated_mlp.py           [TRTLLM-6898][feat] make fused_moe_cute_dsl work on blackwell (#6616)                          2025-08-08 15:03:48 +08:00
layer_norm.py          [#6187][feat] add LayerNorm module (#6625)                                                     2025-08-12 21:43:30 +02:00
linear.py              [https://nvbugs/5378031][feat] Hopper W4A8 MoE supports ModelOpt ckpt for PyT backend (#6200)  2025-08-13 21:24:40 +08:00
logits_processor.py
mlp.py
multi_stream_utils.py
rms_norm.py            [#6187][feat] add LayerNorm module (#6625)                                                     2025-08-12 21:43:30 +02:00
rotary_embedding.py
swiglu.py              [TRTLLM-6263][feat] Enable fp8 SwiGLU to minimize host overhead (#6540)                        2025-08-06 10:42:19 +08:00
triton_linear.py       [None][feat] Add model gpt-oss (#6645)                                                         2025-08-07 03:04:18 -04:00