|
| File | Last commit | Date |
| --- | --- | --- |
| test_modeling_bert.py | feat: no-cache attention in PyTorch workflow (#3085) | 2025-04-05 01:54:32 +08:00 |
| test_modeling_clip.py | feat: add Pytorch support of Vision Encoder for multimodal models (#3791) | 2025-05-03 05:13:47 +08:00 |
| test_modeling_deepseek.py | [TRTLLM-5208][BREAKING CHANGE] chore: make pytorch LLM the default (#5312) | 2025-06-20 03:01:10 +08:00 |
| test_modeling_llama_min_latency.py | test: add unit tests for Llama4 min_latency code (#4980) | 2025-06-10 12:10:26 -07:00 |
| test_modeling_llama.py | feat: support multi lora adapters and TP (#3885) | 2025-05-08 23:45:45 +08:00 |
| test_modeling_mixtral.py | feat: Add FP8 support for SM 120 (#3248) | 2025-04-14 16:05:41 -07:00 |
| test_modeling_mllama.py | test: reorganize tests folder hierarchy (#2996) | 2025-03-27 12:07:53 +08:00 |
| test_modeling_nemotron_h.py | [TRTLLM-5208][BREAKING CHANGE] chore: make pytorch LLM the default (#5312) | 2025-06-20 03:01:10 +08:00 |
| test_modeling_nemotron_nas.py | [fix][test] Speedup Nemotron NAS unittests (#5202) | 2025-06-15 11:26:03 +03:00 |
| test_modeling_nemotron.py | test: reorganize tests folder hierarchy (#2996) | 2025-03-27 12:07:53 +08:00 |
| test_modeling_out_of_tree.py | [TRTLLM-5208][BREAKING CHANGE] chore: make pytorch LLM the default (#5312) | 2025-06-20 03:01:10 +08:00 |
| test_modeling_qwen_moe.py | feat: add qwen2 moe to torch flow; fix wrong imported KvCacheConfig in gpqa… (#3369) | 2025-04-10 22:45:57 +08:00 |
| test_modeling_qwen.py | feat: no-cache attention in PyTorch workflow (#3085) | 2025-04-05 01:54:32 +08:00 |
| test_modeling_siglip.py | feat: add Pytorch support of Vision Encoder for multimodal models (#3791) | 2025-05-03 05:13:47 +08:00 |
| test_modeling_vila.py | feat: llama4 input processor (#3383) | 2025-04-25 16:47:14 -07:00 |