File | Last commit | Date
test_modeling_bert.py | feat: no-cache attention in PyTorch workflow (#3085) | 2025-04-05 01:54:32 +08:00
test_modeling_clip.py | feat: add Pytorch support of Vision Encoder for multimodal models (#3791) | 2025-05-03 05:13:47 +08:00
test_modeling_deepseek.py | Breaking change: perf: Enable scheduling overlap by default (#4174) | 2025-05-15 14:27:36 +08:00
test_modeling_llama.py | feat: support multi lora adapters and TP (#3885) | 2025-05-08 23:45:45 +08:00
test_modeling_mixtral.py | feat: Add FP8 support for SM 120 (#3248) | 2025-04-14 16:05:41 -07:00
test_modeling_mllama.py | test: reorganize tests folder hierarchy (#2996) | 2025-03-27 12:07:53 +08:00
test_modeling_nemotron_h.py | feat: Add pp support for hybrid attn/mamba model (#4358) | 2025-05-19 14:47:45 +08:00
test_modeling_nemotron_nas.py | feat: Support cos_sin_cache in all cases. (#3517) | 2025-04-16 13:48:44 +08:00
test_modeling_nemotron.py | test: reorganize tests folder hierarchy (#2996) | 2025-03-27 12:07:53 +08:00
test_modeling_out_of_tree.py | Breaking change: perf: Enable scheduling overlap by default (#4174) | 2025-05-15 14:27:36 +08:00
test_modeling_qwen_moe.py | feat: add qwen2 moe to torch flow; fix wrong imported KvCacheConfig in gpqa… (#3369) | 2025-04-10 22:45:57 +08:00
test_modeling_qwen.py | feat: no-cache attention in PyTorch workflow (#3085) | 2025-04-05 01:54:32 +08:00
test_modeling_siglip.py | feat: add Pytorch support of Vision Encoder for multimodal models (#3791) | 2025-05-03 05:13:47 +08:00
test_modeling_vila.py | feat: llama4 input processor (#3383) | 2025-04-25 16:47:14 -07:00