TensorRT-LLM/tests/unittest/_torch/modules
Latest commit: 32dfdfba30 — feat: fuse w4a8 moe pre-quant scale on Hopper (#5613)
Author: Xiaowei Wang
Signed-off-by: Xiaowei Wang <100599594+xiaoweiw-nv@users.noreply.github.com>
Date: 2025-07-01 23:02:41 -04:00
tests_lora_modules         added loraOp into lora layer + test for mlp and comparison to lora plugin (#3455)            2025-04-17 12:48:27 +08:00
test_fused_moe.py          feat: fuse w4a8 moe pre-quant scale on Hopper (#5613)                                         2025-07-01 23:02:41 -04:00
test_moe_host_sharer.py    feat: large-scale EP (part 6: Online EP load balancer integration for GB200 nvfp4) (#4818)   2025-06-08 10:25:18 +08:00
test_moe_load_balancer.py  feat: large-scale EP (part 8: Online EP load balancer integration for PCIe fp8) (#5226)      2025-06-25 22:25:13 -07:00
test_moe_routing.py        [https://nvbugspro.nvidia.com/bug/5332927][fix] Fix the bug in the routing unit test (#5065)  2025-06-11 09:44:35 +08:00