TensorRT-LLM/tensorrt_llm/_torch
Latest commit 0ae7017342 by Yukun He:
Unify two versions of AllReduce custom op (#3032)
* Rewrite the unit test for the unified allreduce op; remove the legacy unit test.
* Revise formatting and the fusion_op bindings; make all fusion tensors optional inputs.
* Move MoeAllreduceOp into a separate custom op.
* Move all fusion patterns to the new version of the AllReduce fusion kernel; remove the AllReduce strategy config and revise the AllReduce strategy and fusion-pattern definitions.
* Add more TODOs, fix minor bugs, and remove legacy code. (A hedged sketch of the resulting unified interface follows below.)

Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
Committed 2025-04-22 21:58:42 +08:00
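
The commit consolidates the two AllReduce custom-op variants into a single op whose fusion operands are all optional tensors. Below is a minimal sketch of what such a unified interface might look like, not the actual TensorRT-LLM code: `AllReduceFusionOp`, `unified_allreduce`, and every parameter name here are illustrative assumptions.

```python
# Hypothetical sketch of a unified AllReduce op with optional fusion inputs.
# None of these names are the real TensorRT-LLM symbols.
from enum import IntEnum
from typing import Optional

import torch
import torch.distributed as dist


class AllReduceFusionOp(IntEnum):
    """Illustrative fusion patterns; the real definitions live in TensorRT-LLM."""
    NONE = 0               # plain allreduce, no fused epilogue
    RESIDUAL_RMS_NORM = 1  # allreduce + residual add + RMSNorm in one pass


def unified_allreduce(
    x: torch.Tensor,
    fusion_op: AllReduceFusionOp = AllReduceFusionOp.NONE,
    # All fusion operands are optional, mirroring the "all tensors as
    # optional inputs" change described in the commit message above.
    residual: Optional[torch.Tensor] = None,
    norm_weight: Optional[torch.Tensor] = None,
    eps: float = 1e-6,
) -> torch.Tensor:
    # Sum-reduce across ranks when a process group is initialized;
    # otherwise fall through so the sketch also runs single-process.
    if dist.is_available() and dist.is_initialized():
        dist.all_reduce(x)
    if fusion_op == AllReduceFusionOp.RESIDUAL_RMS_NORM:
        assert residual is not None and norm_weight is not None
        x = x + residual
        # RMSNorm over the last dimension.
        x = x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + eps) * norm_weight
    return x


if __name__ == "__main__":
    h = torch.randn(2, 8)
    out = unified_allreduce(
        h,
        fusion_op=AllReduceFusionOp.RESIDUAL_RMS_NORM,
        residual=torch.zeros_like(h),
        norm_weight=torch.ones(8),
    )
    print(out.shape)  # torch.Size([2, 8])
```

Making the fusion operands optional is what lets a single signature cover both the plain allreduce path and every fused epilogue, which is the premise of unifying the two op versions.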
Name | Last commit | Date
attention_backend/ | feat: Introduce feature properties for attention backend. (#3659) | 2025-04-19 12:37:27 +08:00
auto_deploy/ | chore: move all distributed-related code into the _torch.distributed directory (#3511) | 2025-04-15 08:39:17 +08:00
compilation/ | Unify two versions of AllReduce custom op (#3032) | 2025-04-22 21:58:42 +08:00
custom_ops/ | Unify two versions of AllReduce custom op (#3032) | 2025-04-22 21:58:42 +08:00
distributed/ | Unify two versions of AllReduce custom op (#3032) | 2025-04-22 21:58:42 +08:00
models/ | Unify two versions of AllReduce custom op (#3032) | 2025-04-22 21:58:42 +08:00
modules/ | feat: [Deepseek] Add trtllm-gen FP4 MoE backend (#3387) | 2025-04-21 10:01:33 +08:00
peft/ | Add loraOp to the lora layer, plus a test for MLP and a comparison to the lora plugin (#3455) | 2025-04-17 12:48:27 +08:00
pyexecutor/ | chore: remove useless allgather (#3751) | 2025-04-22 21:26:22 +08:00
speculative/ | Support CUDA graphs for EAGLE3 (#3176) | 2025-04-17 04:53:50 +08:00
__init__.py | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00
autotuner.py | feat: Apply the new torch-flow-compatible AutoTuner to both the Fused MoE and NVFP4 Linear operators. (#3151) | 2025-04-08 14:28:36 +08:00
llm.py | test: [TRTLLM-4334] Create 1.0 criteria scope from API stability references (#3069) | 2025-03-26 18:14:35 +08:00
metadata.py | feat: no-cache attention in the PyTorch workflow (#3085) | 2025-04-05 01:54:32 +08:00
model_config.py | feat: [Deepseek] Add trtllm-gen FP4 MoE backend (#3387) | 2025-04-21 10:01:33 +08:00
pipeline_interface.py | Clean up modeling_deepseek.py (#3640) | 2025-04-18 17:54:33 -07:00
utils.py | Cache sin/cos in the model instead of a global LRU cache. (#3378) | 2025-04-14 11:19:09 +08:00