TensorRT-LLM/tests/unittest/_torch
William Zhang a4049fc557
[#9413][fix] Minor fixes to nemotron H and custom models in AD (#9416)
* Why?

There were a couple of issues with the recently merged custom model
injection for AutoDeploy and with the reference implementation of
Nemotron-H:
- `d_mlp` was left in despite being mathematically always null, which
  could lead to runtime issues during sharding.
- the custom model mapping was inherited by child factories (see the
  sketch below).
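
For illustration, a minimal sketch of that pitfall, assuming the
factories keep the mapping in a mutable class-level attribute (the
class and method names are made up, not the actual AutoDeploy API):

    # A mutable class attribute is shared by every subclass that does not
    # redefine it, so a registration on the base leaks into the children.
    class BaseFactory:
        _custom_models = {}

        @classmethod
        def register(cls, name, impl):
            cls._custom_models[name] = impl

    class ChildFactory(BaseFactory):
        pass

    BaseFactory.register("nemotron_h", object)
    print("nemotron_h" in ChildFactory._custom_models)  # True: mapping inherited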

* What?

This commit fixes these issues and refactors the key of the custom
implementation so that it is also based on the name of the
configuration class.
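
A minimal sketch of what keying the registry on the configuration
class name can look like (the registry and function names below are
hypothetical, not the actual AutoDeploy API):

    from typing import Optional

    # Hypothetical registry keyed on (model name, config class name).
    _CUSTOM_IMPLS: dict = {}

    def register_custom_model(model_name: str, config_cls: type, model_cls: type) -> None:
        # Including the config class name in the key keeps two custom models
        # that share a model name but use different config classes separate.
        _CUSTOM_IMPLS[(model_name, config_cls.__name__)] = model_cls

    def lookup_custom_model(model_name: str, config) -> Optional[type]:
        return _CUSTOM_IMPLS.get((model_name, type(config).__name__))

A lookup then resolves against the concrete config class of the model
being built, which avoids collisions between custom implementations
registered for different configurations.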

Signed-off-by: William Zhang <133824995+2ez4bz@users.noreply.github.com>
2025-11-24 20:17:33 -08:00
attention [TRTLLM-8777][feat] Update DeepGEMM to the latest commit to include optimizations for DeepSeek-v3.2 (#9380) 2025-11-25 08:58:08 +08:00
auto_deploy [#9413][fix] Minor fixes to nemotron H and custom models in AD (#9416) 2025-11-24 20:17:33 -08:00
compilation [TRTLLM-3105][feat] Add Piecewise CUDA Graph Support (#3804) 2025-05-09 11:04:01 +08:00
debugger Fix: fix nvbug 5356427 (#5464) 2025-06-25 22:24:26 +08:00
executor [TRTLLM-8650][fix] beam search request validation (#8433) (#9228) 2025-11-21 04:08:45 -08:00
misc [TRTLLM-7963][feat] Use CUDAGraph to improve the tuning accuracy for AutoTuner. (#9089) 2025-11-20 08:54:29 +08:00
modeling [TRTLLM-7967][feat] Adding Starcoder2 PyTorch Backend Support (#8923) 2025-11-24 11:23:22 -08:00
models/checkpoints/hf [None][feat] Skip prefetching consolidated safetensors when appropriate (#7013) 2025-08-25 23:56:21 -04:00
modules [TRTLLM-9370][feat] Integration of CuteDSL NVFP4 grouped GEMM (Part 2: SwiGLU Fusion and Finalize Fusion) (#9288) 2025-11-21 14:03:38 -08:00
multi_gpu [https://nvbugs/5515753][ci] Add NCCL_DEBUG=INFO flag to collect more info with CI failure. (#8440) 2025-11-20 12:43:13 -05:00
multi_gpu_modeling [https://nvbugs/5515753][ci] Add NCCL_DEBUG=INFO flag to collect more info with CI failure. (#8440) 2025-11-20 12:43:13 -05:00
multimodal [None][fix] InputProcessor config naming convention fix (#8705) 2025-11-03 22:29:21 -08:00
ray_orchestrator [None][chore] Add placement test for ray executor (#9122) 2025-11-14 23:10:59 -08:00
sampler [TRTLLM-9302][chore] Move build config from BaseLlmArgs to TrtLlmArgs (#9249) 2025-11-24 10:54:41 +08:00
speculative [https://nvbugs/5590408][fix] Exclude num of draft tokens from mMaxSeqLenKv (#9210) 2025-11-18 15:41:56 -05:00
thop [None][feat] Support Yarn on QwQ-32B model (#9059) 2025-11-25 07:27:28 +08:00
helpers.py [TRTLLM-8521][chore] remove circular dependency between model engine and cuda graph runner (#7572) 2025-11-11 10:13:45 -08:00
pattern_watcher.py [TRTLLM-3105][feat] Add Piecewise CUDA Graph Support (#3804) 2025-05-09 11:04:01 +08:00
test_connector.py [None][feat] KV Cache Connector API (#7228) 2025-08-28 23:09:27 -04:00