TensorRT-LLM/tests/unittest/_torch/modeling
Latest commit: 362a8272f8 by milesial, feat: llama4 input processor (#3383), 2025-04-25 16:47:14 -07:00
Signed-off-by: Alexandre Milesi <30204471+milesial@users.noreply.github.com>
Signed-off-by: Haohang Huang <31998628+symphonylyh@users.noreply.github.com>
Co-authored-by: Alexandre Milesi <30204471+milesial@users.noreply.github.com>
Co-authored-by: Haohang Huang <31998628+symphonylyh@users.noreply.github.com>
test_modeling_bert.py            feat: no-cache attention in PyTorch workflow (#3085)  2025-04-05 01:54:32 +08:00
test_modeling_llama.py           add passing E2E LoRA flow (#3788)  2025-04-23 18:38:06 +03:00
test_modeling_mixtral.py         feat: Add FP8 support for SM 120 (#3248)  2025-04-14 16:05:41 -07:00
test_modeling_mllama.py          test: reorganize tests folder hierarchy (#2996)  2025-03-27 12:07:53 +08:00
test_modeling_nemotron_h.py      feat: Nemotron-H model support (#3430)  2025-04-16 14:05:56 -07:00
test_modeling_nemotron_nas.py    feat: Support cos_sin_cache in all cases. (#3517)  2025-04-16 13:48:44 +08:00
test_modeling_nemotron.py        test: reorganize tests folder hierarchy (#2996)  2025-03-27 12:07:53 +08:00
test_modeling_out_of_tree.py     Add thread leak check and fix thread/memory leak issues. (#3270)  2025-04-08 19:03:18 +08:00
test_modeling_qwen_moe.py        feat: add qwen2 moe to torch flow; fix wrong imported KvCacheConfig in gpqa… (#3369)  2025-04-10 22:45:57 +08:00
test_modeling_qwen.py            feat: no-cache attention in PyTorch workflow (#3085)  2025-04-05 01:54:32 +08:00
test_modeling_vila.py            feat: llama4 input processor (#3383)  2025-04-25 16:47:14 -07:00