TensorRT-LLM/tests/unittest/_torch/modeling
Luis Vega · 0bda1f9780 · 2025-04-16 14:05:56 -07:00
feat: Nemotron-H model support (#3430)

* added files for nemotron-h
* use try/except to import RMSNorm (see the sketch below)

Signed-off-by: Luis Vega <lvega@nvidia.com>
Co-authored-by: QI JUN <22017000+QiJune@users.noreply.github.com>
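The second bullet in the commit message refers to an import-fallback pattern: try to import an optimized RMSNorm and, if that fails, fall back to a plain PyTorch implementation. The sketch below illustrates the pattern only; the module name `optimized_norm_kernels` and the fallback class are illustrative assumptions, not the actual TensorRT-LLM import path or code.

```python
import torch
from torch import nn

try:
    # Preferred path: an optimized RMSNorm kernel, if it is installed.
    # "optimized_norm_kernels" is a placeholder module name, not the real
    # TensorRT-LLM import location.
    from optimized_norm_kernels import RMSNorm
except ImportError:
    # Fallback: a plain PyTorch RMSNorm so the model definition still imports
    # and runs on systems without the optimized kernel.
    class RMSNorm(nn.Module):
        def __init__(self, hidden_size: int, eps: float = 1e-6):
            super().__init__()
            self.weight = nn.Parameter(torch.ones(hidden_size))
            self.eps = eps

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Normalize by the root mean square of the last dimension,
            # then apply the learned per-channel scale.
            variance = x.pow(2).mean(-1, keepdim=True)
            x = x * torch.rsqrt(variance + self.eps)
            return self.weight * x
```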
| File | Last commit | Last commit date |
| --- | --- | --- |
| test_modeling_bert.py | feat: no-cache attention in PyTorch workflow (#3085) | 2025-04-05 01:54:32 +08:00 |
| test_modeling_llama.py | test: reorganize tests folder hierarchy (#2996) | 2025-03-27 12:07:53 +08:00 |
| test_modeling_mixtral.py | feat: Add FP8 support for SM 120 (#3248) | 2025-04-14 16:05:41 -07:00 |
| test_modeling_mllama.py | test: reorganize tests folder hierarchy (#2996) | 2025-03-27 12:07:53 +08:00 |
| test_modeling_nemotron_h.py | feat: Nemotron-H model support (#3430) | 2025-04-16 14:05:56 -07:00 |
| test_modeling_nemotron_nas.py | feat: Support cos_sin_cache in all cases. (#3517) | 2025-04-16 13:48:44 +08:00 |
| test_modeling_nemotron.py | test: reorganize tests folder hierarchy (#2996) | 2025-03-27 12:07:53 +08:00 |
| test_modeling_out_of_tree.py | Add thread leak check and fix thread/memory leak issues. (#3270) | 2025-04-08 19:03:18 +08:00 |
| test_modeling_qwen_moe.py | feat: add qwen2 moe to torch flow; fix wrong imported KvCacheConfig in gpqa… (#3369) | 2025-04-10 22:45:57 +08:00 |
| test_modeling_qwen.py | feat: no-cache attention in PyTorch workflow (#3085) | 2025-04-05 01:54:32 +08:00 |
| test_modeling_vila.py | fix vila test (#3042) | 2025-04-04 14:30:06 +08:00 |