TensorRT-LLM/tensorrt_llm/_torch/auto_deploy
Latest commit: de99e23696 by Frida Hou (Fridah-nv), 2025-10-01 13:13:45 -07:00: [#5860][feat] Add ModelOPT INT4 awq fake quant support in AutoDeploy (#7770)
| Name | Last commit | Date |
| --- | --- | --- |
| compile | [#7675][feat] CapturedGraph to support max_batch_size > max(cuda_graph_batch_sizes) (#7888) | 2025-09-24 10:11:44 -04:00 |
| config | [#5860][feat] Add ModelOPT INT4 awq fake quant support in AutoDeploy (#7770) | 2025-10-01 13:13:45 -07:00 |
| custom_ops | [#5860][feat] Add ModelOPT INT4 awq fake quant support in AutoDeploy (#7770) | 2025-10-01 13:13:45 -07:00 |
| distributed | [#7308][feat] AutoDeploy: graph-less transformers mode for HF (#7635) | 2025-09-18 10:44:24 +08:00 |
| export | [None][chore] Upgrade transformers to 4.56.0 (#7523) | 2025-09-22 22:20:16 +08:00 |
| models | [#5860][feat] Add ModelOPT INT4 awq fake quant support in AutoDeploy (#7770) | 2025-10-01 13:13:45 -07:00 |
| shim | [#4593][feat] AutoDeploy: Linear Attention Support (SSM + causal_conv + Bamba + Nemotron-H) (#8068) | 2025-09-29 22:41:06 -04:00 |
| transform | [#5860][feat] Add ModelOPT INT4 awq fake quant support in AutoDeploy (#7770) | 2025-10-01 13:13:45 -07:00 |
| transformations | [#7308][feat] AutoDeploy: graph-less transformers mode for HF (#7635) | 2025-09-18 10:44:24 +08:00 |
| utils | [#5860][feat] Add ModelOPT INT4 awq fake quant support in AutoDeploy (#7770) | 2025-10-01 13:13:45 -07:00 |
| __init__.py | [AutoDeploy] merge feat/ad-2025-07-07 (#6196) | 2025-07-23 05:11:04 +08:00 |
| llm_args.py | [#4593][feat] AutoDeploy: Linear Attention Support (SSM + causal_conv + Bamba + Nemotron-H) (#8068) | 2025-09-29 22:41:06 -04:00 |
| llm.py | [#4593][feat] AutoDeploy: Linear Attention Support (SSM + causal_conv + Bamba + Nemotron-H) (#8068) | 2025-09-29 22:41:06 -04:00 |
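For orientation, a minimal usage sketch follows. It assumes `llm.py` in this directory exports an `LLM` class compatible with the standard `tensorrt_llm.LLM` API (Hugging Face model id in, `generate()` out); the import path and call signatures are inferred from the file names above, not from documented API, so treat this as a sketch and consult the repository's `examples/auto_deploy` for the supported interface.

```python
# Hypothetical sketch, NOT a documented entry point: assumes llm.py exports
# an LLM class that mirrors the standard tensorrt_llm.LLM interface.
from tensorrt_llm._torch.auto_deploy.llm import LLM  # assumed export of llm.py


def main():
    # Assumed: a Hugging Face model id is accepted, as with tensorrt_llm.LLM.
    llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")

    # Assumed: batched generate() returning RequestOutput-like objects.
    outputs = llm.generate(["Where is Seattle?"])
    for out in outputs:
        print(out.outputs[0].text)  # assumed RequestOutput layout


if __name__ == "__main__":
    main()
```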