| Name | Latest commit | Date |
|------|---------------|------|
| auto_deploy | feat:[AutoDeploy] Enhance RoPE support (#3115) | 2025-04-11 23:51:24 +08:00 |
| compilation | Update (#2978) | 2025-03-23 16:39:35 +08:00 |
| modeling | feat: Nemotron-H model support (#3430) | 2025-04-16 14:05:56 -07:00 |
| modules/tests_lora_modules | added loraOp into lora layer + test for mlp and comparison to lora plugin (#3455) | 2025-04-17 12:48:27 +08:00 |
| multi_gpu | feat: Support cos_sin_cache in all cases. (#3517) | 2025-04-16 13:48:44 +08:00 |
| multi_gpu_modeling | Add Llama 4 (#3302) | 2025-04-09 03:35:21 +08:00 |
| speculative | Support CUDA graphs for EAGLE3 (#3176) | 2025-04-17 04:53:50 +08:00 |
| thop | waive test_fp8_scaled_mm (#3637) | 2025-04-16 15:07:30 -07:00 |
| deep_gemm_tests.py | feat: use NVRTC for DeepGEMM JIT compilation (#3239) | 2025-04-07 20:29:23 +08:00 |
| helpers.py | Update TensorRT-LLM (#2936) | 2025-03-18 21:25:19 +08:00 |
| pattern_watcher.py | Update TensorRT-LLM (#2936) | 2025-03-18 21:25:19 +08:00 |
| test_attention_no_cache.py | feat: no-cache attention in PyTorch workflow (#3085) | 2025-04-05 01:54:32 +08:00 |
| test_attention.py | reduce num layers in attention test (#3509) | 2025-04-14 12:43:59 +08:00 |
| test_autotuner.py | feat: Apply the new torch-flow compatible AutoTuner to both Fused MoE and NVFP4 Linear operators. (#3151) | 2025-04-08 14:28:36 +08:00 |
| test_flashinfer_attention.py | Add thread leak check and fix thread/memory leak issues. (#3270) | 2025-04-08 19:03:18 +08:00 |
| test_flashinfer_star_attn.py | Add thread leak check and fix thread/memory leak issues. (#3270) | 2025-04-08 19:03:18 +08:00 |
| test_fp4_bmm_quantize.py | feat: Apply the new torch-flow compatible AutoTuner to both Fused MoE and NVFP4 Linear operators. (#3151) | 2025-04-08 14:28:36 +08:00 |
| test_fp4_gemm_quantize.py | Fix test_fp4_quantize_gemm_torch (#3551) | 2025-04-14 23:58:31 -07:00 |
| test_fp4_linear.py | feat: Apply the new torch-flow compatible AutoTuner to both Fused MoE and NVFP4 Linear operators. (#3151) | 2025-04-08 14:28:36 +08:00 |
| test_fp8_batched_gemm.py | feat: Adding FP8 BMM from Codegen (#3541) | 2025-04-16 10:37:15 +02:00 |
| test_fp8_block_scale_gemm.py | feat: enable DeepGEMM by default (#3341) | 2025-04-08 13:58:57 +08:00 |
| test_fp8_linear.py | test: reorganize tests folder hierarchy (#2996) | 2025-03-27 12:07:53 +08:00 |
| test_fp8_quantize.py | test: reorganize tests folder hierarchy (#2996) | 2025-03-27 12:07:53 +08:00 |
| test_fused_moe.py | feat: Apply the new torch-flow compatible AutoTuner to both Fused MoE and NVFP4 Linear operators. (#3151) | 2025-04-08 14:28:36 +08:00 |
| test_moe_routing.py | Update TensorRT-LLM (#2936) | 2025-03-18 21:25:19 +08:00 |
| test_moe.py | test: reorganize tests folder hierarchy (#2996) | 2025-03-27 12:07:53 +08:00 |
| test_overlap_scheduler_input.json | Update TensorRT-LLM (#2936) | 2025-03-18 21:25:19 +08:00 |
| test_overlap_scheduler.py | Add thread leak check and fix thread/memory leak issues. (#3270) | 2025-04-08 19:03:18 +08:00 |
| test_pytorch_model_engine.py | chore: move all distributed related codes into _torch.distributed directory (#3511) | 2025-04-15 08:39:17 +08:00 |
| test_resource_manager.py | Feat/ Integrate peftCacheManager in PyExecutor creation (#3372) | 2025-04-15 15:14:43 +08:00 |
| test_vanilla_attention.py | Add thread leak check and fix thread/memory leak issues. (#3270) | 2025-04-08 19:03:18 +08:00 |