TensorRT-LLM/tests/unittest/_torch/modeling
Latest commit: 06d9f1e2f6 by tomeras91, 2025-06-12 09:54:46 +03:00
[test] Use LLM API for Nemotron-H correctness test (#5097)
File | Last commit | Last updated
test_modeling_bert.py | feat: no-cache attention in PyTorch workflow (#3085) | 2025-04-05 01:54:32 +08:00
test_modeling_clip.py | feat: add Pytorch support of Vision Encoder for multimodal models (#3791) | 2025-05-03 05:13:47 +08:00
test_modeling_deepseek.py | [Architecture] Refactor FusedMoE (#4790) | 2025-06-03 14:02:19 +08:00
test_modeling_llama_min_latency.py | test: add unit tests for Llama4 min_latency code (#4980) | 2025-06-10 12:10:26 -07:00
test_modeling_llama.py | feat: support multi lora adapters and TP (#3885) | 2025-05-08 23:45:45 +08:00
test_modeling_mixtral.py | feat: Add FP8 support for SM 120 (#3248) | 2025-04-14 16:05:41 -07:00
test_modeling_mllama.py | test: reorganize tests folder hierarchy (#2996) | 2025-03-27 12:07:53 +08:00
test_modeling_nemotron_h.py | [test] Use LLM API for Nemotron-H correctness test (#5097) | 2025-06-12 09:54:46 +03:00
test_modeling_nemotron_nas.py | feat: Support cos_sin_cache in all cases. (#3517) | 2025-04-16 13:48:44 +08:00
test_modeling_nemotron.py | test: reorganize tests folder hierarchy (#2996) | 2025-03-27 12:07:53 +08:00
test_modeling_out_of_tree.py | chore [BREAKING CHANGE]: Flatten PyTorchConfig knobs into TorchLlmArgs (#4603) | 2025-05-28 18:43:04 +08:00
test_modeling_qwen_moe.py | feat: add qwen2 moe to torch flow; fix wrong imported KvCacheConfig in gpqa… (#3369) | 2025-04-10 22:45:57 +08:00
test_modeling_qwen.py | feat: no-cache attention in PyTorch workflow (#3085) | 2025-04-05 01:54:32 +08:00
test_modeling_siglip.py | feat: add Pytorch support of Vision Encoder for multimodal models (#3791) | 2025-05-03 05:13:47 +08:00
test_modeling_vila.py | feat: llama4 input processor (#3383) | 2025-04-25 16:47:14 -07:00
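The latest commit above moves the Nemotron-H correctness test onto the high-level LLM API. As a rough illustration only, a correctness check in that style might look like the sketch below; the checkpoint path, prompt, and expected substring are placeholders and are not taken from the actual test files in this directory.

```python
# Minimal sketch of an LLM-API-based correctness check (not the real
# test_modeling_nemotron_h.py). Checkpoint path, prompt, and expected
# output are placeholders.
from tensorrt_llm import LLM, SamplingParams


def test_model_correctness_sketch():
    prompts = ["The capital of France is"]
    expected_substrings = ["Paris"]  # placeholder reference outputs

    # Placeholder checkpoint path; a real test loads the model under test.
    llm = LLM(model="/path/to/model-checkpoint")

    # Short generations keep the check fast and deterministic enough
    # for a substring comparison against the references.
    sampling_params = SamplingParams(max_tokens=16)
    outputs = llm.generate(prompts, sampling_params)

    for output, ref in zip(outputs, expected_substrings):
        assert ref in output.outputs[0].text
```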