| Name | Last commit | Last updated |
| --- | --- | --- |
| auto_deploy | [#5048][enhance] AutoDeploy: Optimize prepare_inputs (#6634) | 2025-08-10 13:55:04 +03:00 |
| compilation | [TRTLLM-3105][feat] Add Piecewise CUDA Graph Support (#3804) | 2025-05-09 11:04:01 +08:00 |
| debugger | Fix: fix nvbug 5356427 (#5464) | 2025-06-25 22:24:26 +08:00 |
| modeling | [TRTLLM-5252][fix] Propagate mapping to intermediate layers (#6611) | 2025-08-08 01:50:36 -04:00 |
| modules | [TRTLLM-6898][feat] make fused_moe_cute_dsl work on blackwell (#6616) | 2025-08-08 15:03:48 +08:00 |
| multi_gpu | [None][feat] Add NCCL Symmetric Integration for All Reduce (#4500) | 2025-08-07 17:28:14 -07:00 |
| multi_gpu_modeling | [TRTLLM-5530][BREAKING CHANGE] refactor: unify KvCacheConfig in LLM class for pytorch backend (#5752) | 2025-07-16 16:42:59 +08:00 |
| multimodal | [TRTLLM-6654][feat] Add support for external multimodal embeddings (#6263) | 2025-07-30 10:00:15 -04:00 |
| speculative | [TRTLLM-6785][feat] BREAKING CHANGE Enable TRTLLM sampler by default (#6216) | 2025-08-07 22:19:37 -04:00 |
| thop | [None][feat] Add model gpt-oss (#6645) | 2025-08-07 03:04:18 -04:00 |
| helpers.py | Deepseek R1 FP8 Support on Blackwell (#6486) | 2025-08-01 10:26:28 +08:00 |
| pattern_watcher.py | [TRTLLM-3105][feat] Add Piecewise CUDA Graph Support (#3804) | 2025-05-09 11:04:01 +08:00 |
| test_attention_mla.py | [None][feat]: Add FP8 context MLA support for SM120 (#6059) | 2025-08-07 16:16:34 +08:00 |
| test_attention_no_cache.py | refactor(test): remove random context sequence lengths and set seed for reproducibility in attention tests (#3919) | 2025-04-29 10:08:04 +08:00 |
| test_attention.py | reduce num layers in attention test (#3509) | 2025-04-14 12:43:59 +08:00 |
| test_autotuner.py | feat: Enhance AutoTuner inference path and code readability (#4466) | 2025-06-04 10:53:11 +08:00 |
| test_beam_search.py | [TRTLLM-6650][fix] Enhance CUDA graph + Beam search to correctly handle padding (#6665) | 2025-08-08 14:00:33 +02:00 |
| test_best_of_n.py | [TRTLLM-6785][feat] BREAKING CHANGE Enable TRTLLM sampler by default (#6216) | 2025-08-07 22:19:37 -04:00 |
| test_custom_ops.py | [None][feat] Add model gpt-oss (#6645) | 2025-08-07 03:04:18 -04:00 |
| test_executor_request_queue.py | [TRTLLM-5271][feat] best_of/n for pytorch workflow (#5997) | 2025-08-04 14:08:06 +02:00 |
| test_flashinfer_attention.py | Add thread leak check and fix thread/memory leak issues. (#3270) | 2025-04-08 19:03:18 +08:00 |
| test_flashinfer_star_attn.py | Add thread leak check and fix thread/memory leak issues. (#3270) | 2025-04-08 19:03:18 +08:00 |
| test_fp8_per_tensor_scale_tllmg_gemm.py | fix: [5328141] increase tolerance for test_fp8_block_scale_gemm (#5849) | 2025-07-22 12:48:00 +08:00 |
| test_group_rmn_norm.py | feat: Add heuristic for GroupRMSNorm kernel selection. (#4047) | 2025-05-13 08:52:53 +08:00 |
| test_mnnvl_memory.py | feat: Add MNNVL MoE A2A support (#3504) | 2025-04-25 17:29:08 +08:00 |
| test_overlap_scheduler_input.json | refactor: Unify request order in TRT and PyTorch workflow (#4096) | 2025-05-20 18:49:27 +02:00 |
| test_overlap_scheduler.py | [TRTLLM-6785][feat] BREAKING CHANGE Enable TRTLLM sampler by default (#6216) | 2025-08-07 22:19:37 -04:00 |
| test_pytorch_model_engine.py | [TRTLLM-5493] Add core infrastructure to enable loading of custom checkpoint formats (#5372) | 2025-07-17 00:50:30 +08:00 |
| test_resource_manager.py | [TRTLLM-6683][feat] Support LoRA reload CPU cache evicted adapter (#6510) | 2025-08-07 09:05:36 +03:00 |
| test_return_logits.py | [TRTLLM-6785][feat] BREAKING CHANGE Enable TRTLLM sampler by default (#6216) | 2025-08-07 22:19:37 -04:00 |
| test_share_tensor.py | [1/N][TRTLLM-5195][feat] Share PyTorch tensor between processes (#5396) | 2025-07-10 05:12:53 +09:00 |
| test_trtllm_sampler.py | [TRTLLM-6785][feat] BREAKING CHANGE Enable TRTLLM sampler by default (#6216) | 2025-08-07 22:19:37 -04:00 |
| test_vanilla_attention.py | Add thread leak check and fix thread/memory leak issues. (#3270) | 2025-04-08 19:03:18 +08:00 |
| test_virtual_memory.py | [TRTLLM-4406][feat] LLM sleep & wakeup Part 1: virtual device memory (#5034) | 2025-08-04 13:51:01 +08:00 |