TensorRT-LLM/tests/unittest/_torch/modeling
hlu1 320195dc0d
[Architecture] Refactor FusedMoE (#4790)
Signed-off-by: Hao Lu <14827759+hlu1@users.noreply.github.com>
Co-authored-by: Hao Lu <14827759+hlu1@users.noreply.github.com>
2025-06-03 14:02:19 +08:00
test_modeling_bert.py feat: no-cache attention in PyTorch workflow (#3085) 2025-04-05 01:54:32 +08:00
test_modeling_clip.py feat: add PyTorch support of Vision Encoder for multimodal models (#3791) 2025-05-03 05:13:47 +08:00
test_modeling_deepseek.py [Architecture] Refactor FusedMoE (#4790) 2025-06-03 14:02:19 +08:00
test_modeling_llama.py feat: support multi lora adapters and TP (#3885) 2025-05-08 23:45:45 +08:00
test_modeling_mixtral.py feat: Add FP8 support for SM 120 (#3248) 2025-04-14 16:05:41 -07:00
test_modeling_mllama.py test: reorganize tests folder hierarchy (#2996) 2025-03-27 12:07:53 +08:00
test_modeling_nemotron_h.py [TRTLLM-5085][fix] Nemotron H correctness test (#4444) 2025-05-20 17:55:25 +08:00
test_modeling_nemotron_nas.py feat: Support cos_sin_cache in all cases. (#3517) 2025-04-16 13:48:44 +08:00
test_modeling_nemotron.py test: reorganize tests folder hierarchy (#2996) 2025-03-27 12:07:53 +08:00
test_modeling_out_of_tree.py chore [BREAKING CHANGE]: Flatten PyTorchConfig knobs into TorchLlmArgs (#4603) 2025-05-28 18:43:04 +08:00
test_modeling_qwen_moe.py feat: add qwen2 moe to torch flow; fix wrong imported KvCacheConfig in gpqa… (#3369) 2025-04-10 22:45:57 +08:00
test_modeling_qwen.py feat: no-cache attention in PyTorch workflow (#3085) 2025-04-05 01:54:32 +08:00
test_modeling_siglip.py feat: add PyTorch support of Vision Encoder for multimodal models (#3791) 2025-05-03 05:13:47 +08:00
test_modeling_vila.py feat: llama4 input processor (#3383) 2025-04-25 16:47:14 -07:00
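
The tests in this directory exercise PyTorch-workflow model implementations, and several of the commits above (e.g. the Nemotron H correctness test in #4444 and the FusedMoE refactor in #4790) are correctness checks: run the module under test and compare its output against a straightforward reference within numeric tolerances. As a rough illustration of that pattern only, here is a minimal, self-contained sketch using plain PyTorch; every name, shape, and function in it is hypothetical and is not TensorRT-LLM's actual test API.

```python
# Hypothetical sketch of a MoE correctness test: a naive per-token reference
# implementation is compared against a batched/vectorized one. In a real test
# suite, the optimized (fused) path would replace one side of the comparison.
import torch
import torch.nn.functional as F


def reference_moe(x, gate_w, expert_ws, top_k=2):
    """Naive per-token MoE: route each token to its top-k experts and
    combine expert outputs weighted by the softmaxed router scores."""
    probs = F.softmax(x @ gate_w, dim=-1)            # [tokens, num_experts]
    weights, experts = torch.topk(probs, top_k)      # both [tokens, top_k]
    out = torch.zeros_like(x)
    for t in range(x.shape[0]):
        for w, e in zip(weights[t], experts[t]):
            out[t] += w * (x[t] @ expert_ws[e])
    return out


def batched_moe(x, gate_w, expert_ws, top_k=2):
    """Vectorized MoE computing the same math without Python loops."""
    probs = F.softmax(x @ gate_w, dim=-1)
    weights, experts = torch.topk(probs, top_k)
    w_sel = expert_ws[experts]                       # [tokens, top_k, hidden, hidden]
    y = torch.einsum("th,tkhd->tkd", x, w_sel)       # apply each selected expert
    return (weights.unsqueeze(-1) * y).sum(dim=1)    # weighted combine over top_k


def test_moe_matches_reference():
    torch.manual_seed(0)
    tokens, hidden, num_experts = 8, 16, 4
    x = torch.randn(tokens, hidden)
    gate_w = torch.randn(hidden, num_experts)
    expert_ws = torch.randn(num_experts, hidden, hidden)
    # Loose tolerances absorb reduction-order differences between the paths.
    torch.testing.assert_close(
        batched_moe(x, gate_w, expert_ws),
        reference_moe(x, gate_w, expert_ws),
        rtol=1e-4, atol=1e-4,
    )
```

Run with `pytest` as usual; the design choice worth noting is that the reference path is deliberately slow and obvious, so disagreements point at the optimized implementation rather than at the test itself.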