TensorRT-LLM/tests/unittest/_torch
William Zhang 2dd3ebf037
[#9150][feat] Add code for nano v3 to custom implementation in AD (#9465)
* Why?

We would like to show an alternative to monkey-patching in AutoDeploy.

* What?

This commit builds on the existing custom model implementation for
NemotronH and adds the pieces relevant to MoE layers.

Part of #9150.

Signed-off-by: William Zhang <133824995+2ez4bz@users.noreply.github.com>
2025-12-02 08:56:44 -08:00
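The commit above favors an explicit custom model implementation over monkey-patching. A minimal, self-contained sketch of that contrast is below; the class and registry names (`MoELayer`, `CUSTOM_IMPLS`, `register_custom_impl`) are hypothetical illustrations, not AutoDeploy's actual API.

```python
# Illustrative sketch only: contrasts monkey-patching with an explicit
# registry of custom implementations. All names here are hypothetical,
# not AutoDeploy's real API.

class MoELayer:
    """Stand-in for a model's MoE layer."""
    def forward(self, x):
        return x * 2  # placeholder "reference" computation

# Approach 1: monkey-patching -- rebind the method on the original class.
# Every instance everywhere is silently affected, which is hard to audit.
def _patched_forward(self, x):
    return x * 2 + 1
# MoELayer.forward = _patched_forward  # global, implicit override

# Approach 2: an explicit registry mapping layer names to custom classes.
# The override is opt-in and visible where models are built.
CUSTOM_IMPLS: dict[str, type] = {}

def register_custom_impl(name: str):
    def decorator(cls):
        CUSTOM_IMPLS[name] = cls
        return cls
    return decorator

@register_custom_impl("moe")
class CustomMoELayer(MoELayer):
    def forward(self, x):
        # Custom behavior lives in a subclass instead of a runtime patch.
        return super().forward(x) + 1

def build_layer(name: str) -> MoELayer:
    # Model construction consults the registry, falling back to the default.
    return CUSTOM_IMPLS.get(name, MoELayer)()

print(build_layer("moe").forward(3))        # 7  (custom implementation)
print(build_layer("attention").forward(3))  # 6  (default implementation)
```

The registry keeps overrides scoped and discoverable: a reader can see exactly which layers have custom implementations, whereas a monkey-patch mutates the original class for every caller.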
attention [None] [feat] Optimize the algorithm part of RocketKV (#9333) 2025-12-01 09:04:09 +08:00
auto_deploy [#9150][feat] Add code for nano v3 to custom implementation in AD (#9465) 2025-12-02 08:56:44 -08:00
compilation [TRTLLM-3105][feat] Add Piecewise CUDA Graph Support (#3804) 2025-05-09 11:04:01 +08:00
debugger Fix: fix nvbug 5356427 (#5464) 2025-06-25 22:24:26 +08:00
executor [TRTLLM-5971][feat] Integrate helix parallelism (#9342) 2025-11-29 15:17:30 -08:00
misc [None][fix] Fix on-disk cache and revise logger/statistics for AutoTuner. (#9211) 2025-11-28 13:32:21 +08:00
modeling [FMDL-1328][feat] Add support for nano-v3 and super-v3 with pytorch backend (#9261) 2025-12-02 13:40:20 +08:00
models/checkpoints/hf [None][feat] Skip prefetching consolidated safetensors when appropriate (#7013) 2025-08-25 23:56:21 -04:00
modules [TRTLLM-8958][feat] and [TRTLLM-8960]: create ConfigurableMoE and support TRTLLMGenFusedMoE as backend (#9486) 2025-12-01 08:37:07 +08:00
multi_gpu [https://nvbugs/5515753][ci] Add NCCL_DEBUG=INFO flag to collect more info with CI failure. (#8440) 2025-11-20 12:43:13 -05:00
multi_gpu_modeling [https://nvbugs/5515753][ci] Add NCCL_DEBUG=INFO flag to collect more info with CI failure. (#8440) 2025-11-20 12:43:13 -05:00
multimodal [None][fix] InputProcessor config naming convention fix (#8705) 2025-11-03 22:29:21 -08:00
ray_orchestrator [TRTLLM-9144][fix] enhance RPC robustness (#8711) 2025-12-02 21:37:59 +08:00
sampler [TRTLLM-6756][feat] Add Beam Search to TorchSampler (#8509) 2025-12-01 18:48:04 +01:00
speculative [TRTLLM-6756][feat] Add Beam Search to TorchSampler (#8509) 2025-12-01 18:48:04 +01:00
thop [None][feat] Unify nvfp4 gemm backend (#8963) 2025-12-02 11:03:51 +08:00
helpers.py [TRTLLM-8521][chore] remove circular dependency between model engine and cuda graph runner (#7572) 2025-11-11 10:13:45 -08:00
pattern_watcher.py [TRTLLM-3105][feat] Add Piecewise CUDA Graph Support (#3804) 2025-05-09 11:04:01 +08:00
test_connector.py [None][feat] KV Cache Connector API (#7228) 2025-08-28 23:09:27 -04:00