TensorRT-LLM / tensorrt_llm / _torch / modules
Last commit 7225bd8b91 by yuxianq: chore: Refine attention backend interface. (#3271)
Signed-off-by: Yuxian Qiu <142763828+yuxianq@users.noreply.github.com>
2025-04-09 02:34:53 +08:00
File                   Last commit                                                          Date
__init__.py            Update TensorRT-LLM (#2755)                                          2025-02-11 03:01:00 +00:00
attention.py           chore: Refine attention backend interface. (#3271)                   2025-04-09 02:34:53 +08:00
decoder_layer.py       Update TensorRT-LLM (#2755)                                          2025-02-11 03:01:00 +00:00
embedding.py           Update (#2978)                                                       2025-03-23 16:39:35 +08:00
fused_moe.py           feat: Apply the new torch-flow compatible AutoTuner to both Fused MoE and NVFP4 Linear operators. (#3151)  2025-04-08 14:28:36 +08:00
gated_mlp.py           perf: Add optimizations for deepseek in min latency mode (#3093)     2025-04-02 09:05:24 +08:00
linear.py              feat: Introduce UB allocator for pytorch flow (#3257)                2025-04-08 18:39:49 +08:00
logits_procesor.py     Update (#2978)                                                       2025-03-23 16:39:35 +08:00
mamba.py               Update TensorRT-LLM (#2820)                                          2025-02-25 21:21:49 +08:00
mlp.py                 Update TensorRT-LLM (#2936)                                          2025-03-18 21:25:19 +08:00
rms_norm.py            Update (#2978)                                                       2025-03-23 16:39:35 +08:00
rotary_embedding.py    Update (#2978)                                                       2025-03-23 16:39:35 +08:00