TensorRT-LLM/tensorrt_llm/_torch/auto_deploy
Latest commit: 67208f1512 (Yechan Kim)
[None][fix] InputProcessor config naming convention fix (#8705)
Signed-off-by: yechank <161688079+yechank-nvidia@users.noreply.github.com>
2025-11-03 22:29:21 -08:00
compile/ [https://nvbugs/5606166][fix] AutoDeploy: use tuples for cudagraph shape lookup (#8658) 2025-10-28 10:52:43 -07:00
config/ [None][feat] AutoDeploy: Add FP8 MOE for Nemotron (#8599) 2025-10-25 15:26:45 -04:00
custom_ops/ [None][perf] AutoDeploy optimize _get_unique_value (#8822) 2025-10-31 04:57:10 -07:00
distributed/ [#8781][fix] Cache the AllReduce wrapper to avoid re-allocating workspace which caused a hang (#8803) 2025-11-02 15:30:39 +02:00
export/ [#4585][feat] Replace unified attention before export (#8303) 2025-10-23 18:02:04 -04:00
models/ [#8245][feat] Autodeploy: Guided Decoding Support (#8551) 2025-10-28 09:29:57 +08:00
shim/ [TRTLLM-8836][chore] Create ModelEngine from LlmArgs (#8600) 2025-11-01 05:26:06 -07:00
transform/ [TRTLLM-8734][feat] AutoDeploy: Enable the nvfp4 for Nemotron MOE (#8737) 2025-10-30 12:33:08 -07:00
utils/ [None][chore] AutoDeploy: cleanup old inference optimizer configs (#8039) 2025-10-17 15:55:57 -04:00
__init__.py [AutoDeploy] merge feat/ad-2025-07-07 (#6196) 2025-07-23 05:11:04 +08:00
llm_args.py [TRTLLM-8684][chore] Migrate BuildConfig to Pydantic, add a Python wrapper for KVCacheType enum (#8330) 2025-10-28 09:17:26 -07:00
llm.py [None][fix] InputProcessor config naming convention fix (#8705) 2025-11-03 22:29:21 -08:00