TensorRT-LLM/tensorrt_llm
danielafrimi 2b58dba0f6 [https://nvbugs/5524714][fix] Fix TP sharding of fused-QKV weight scales in W4A16 AWQ (#8432)
Signed-off-by: Daniel Afrimi <dafrimi@nvidia.com>
Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com>
2025-11-04 16:42:31 +08:00
_tensorrt_engine [TRTLLM-5208][BREAKING CHANGE] chore: make pytorch LLM the default (#5312) 2025-06-20 03:01:10 +08:00
_torch [https://nvbugs/5524714][fix] Fix TP sharding of fused-QKV weight scales in W4A16 AWQ (#8432) 2025-11-04 16:42:31 +08:00
bench [None][chore] Optimize perf for the RPC executor and add some profile utilities to llm-api (#8415) 2025-11-03 17:59:49 -08:00
commands [TRTLLM-8214][feat] Support Qwen3 tool parser (#8216) 2025-10-29 15:48:29 +08:00
evaluate [TRTLLM-6928][fix] Refactor multimodal unittest (#8453) 2025-11-03 06:01:07 -08:00
executor [None][chore] Optimize perf for the RPC executor and add some profile utilities to llm-api (#8415) 2025-11-03 17:59:49 -08:00
inputs [None][fix] InputProcessor config naming convention fix (#8705) 2025-11-03 22:29:21 -08:00
layers [TRTLLM-5863][feat] Support MoE INT8 Weight-Only-Quantization in PyTorch Workflow (#6629) 2025-08-15 17:15:49 -04:00
llmapi [https://nvbugs/5625380][chore] Remove multimodal related fields from decoder llm input (#8846) 2025-11-02 17:44:08 -08:00
metrics [None][feat] Add opentelemetry tracing (#5897) 2025-10-27 18:51:07 +08:00
models [TRTLLM-8684][chore] Migrate BuildConfig to Pydantic, add a Python wrapper for KVCacheType enum (#8330) 2025-10-28 09:17:26 -07:00
plugin [TRTLLM-8683][chore] Migrate PluginConfig to Pydantic (#8277) 2025-10-17 16:13:22 -04:00
quantization [None][perf] Use fp8 quant kernel in DS3.2 indexer module (#8701) 2025-10-29 12:45:09 +08:00
runtime [TRTLLM-8684][chore] Migrate BuildConfig to Pydantic, add a Python wrapper for KVCacheType enum (#8330) 2025-10-28 09:17:26 -07:00
scaffolding [None][feat] Add benchmark to DeepConf (#8776) 2025-11-03 16:05:50 +08:00
serve [https://nvbugs/5523315][fix] Fix serve benchmark test (#8255) 2025-11-03 00:30:13 -08:00
tools [None][fix] Layer wise benchmarks: use local models, lint (#8799) 2025-10-30 09:47:46 -07:00
__init__.py [TRTLLM-8682][chore] Remove auto_parallel module (#8329) 2025-10-22 20:53:08 -04:00
_common.py [None][chroe] Rename TensorRT-LLM to TensorRT LLM for source code. (#7851) 2025-09-25 21:02:35 +08:00
_dlpack_utils.py linting(python): Enable ruff on more files (wave 1/N) (#5140) 2025-06-14 19:19:34 +08:00
_ipc_utils.py [TRTLLM-7349][feat] Adding new orchestrator type -- ray (#7520) 2025-10-04 08:12:24 +08:00
_mnnvl_utils.py [https://nvbugs/5477730][fix] Fix the alltoall case when tp_size larger than ep_size (#7331) 2025-09-04 08:10:03 -04:00
_ray_utils.py [TRTLLM-8507][fix] Fix ray resource cleanup and error handling in LoRA test (#8175) 2025-10-14 23:46:30 +08:00
_utils.py [None][chore] Optimize perf for the RPC executor and add some profile utilities to llm-api (#8415) 2025-11-03 17:59:49 -08:00
builder.py [TRTLLM-8684][chore] Migrate BuildConfig to Pydantic, add a Python wrapper for KVCacheType enum (#8330) 2025-10-28 09:17:26 -07:00
disaggregated_params.py [TRTLLM-7328][feat] E-PD Disagg Support via llmapi (3/N) (#7577) 2025-09-22 19:07:18 -07:00
functional.py [None][chroe] Rename TensorRT-LLM to TensorRT LLM for source code. (#7851) 2025-09-25 21:02:35 +08:00
graph_rewriting.py linting(python): Enable ruff on more files (wave 1/N) (#5140) 2025-06-14 19:19:34 +08:00
logger.py [None][chore] Mass integration of release/1.0 - 3rd (#7519) 2025-09-08 14:03:04 +08:00
lora_helper.py [TRTLLM-8682][chore] Remove auto_parallel module (#8329) 2025-10-22 20:53:08 -04:00
lora_manager.py [https://nvbugs/5510879][fix] Fix pytorch & TRT-python flows fused LoRA adapter modules weight split with TP>1 (#8063) 2025-10-12 12:29:52 -07:00
mapping.py [TRTLLM-8682][chore] Remove auto_parallel module (#8329) 2025-10-22 20:53:08 -04:00
math_utils.py perf: Optimize swizzle_sf, unswizzle_sf, reswizzle_sf (#5318) 2025-06-26 14:03:56 +08:00
module.py [None][chroe] Rename TensorRT-LLM to TensorRT LLM for source code. (#7851) 2025-09-25 21:02:35 +08:00
network.py [TRTLLM-8682][chore] Remove auto_parallel module (#8329) 2025-10-22 20:53:08 -04:00
parameter.py fix:https://nvbugs/5234033 enable starcoder trt-flow with transforme… (#3909) 2025-05-15 11:16:45 +08:00
profiler.py linting(python): Enable ruff on more files (wave 1/N) (#5140) 2025-06-14 19:19:34 +08:00
prompt_adapter_manager.py linting(python): Enable ruff on more files (wave 1/N) (#5140) 2025-06-14 19:19:34 +08:00
python_plugin.py linting(python): Enable ruff on more files (wave 1/N) (#5140) 2025-06-14 19:19:34 +08:00
ray_stub.py [TRTLLM-8507][fix] Fix ray resource cleanup and error handling in LoRA test (#8175) 2025-10-14 23:46:30 +08:00
sampling_params.py [None][feat] Support ignored prompt length for penalties via new sampling config parameter (#8127) 2025-10-27 13:12:31 -04:00
scheduling_params.py [None][feat] Add support of scheduling attention dp request (#6246) 2025-08-01 20:38:01 -04:00
serialization.py [TRTLLM-8682][chore] Remove auto_parallel module (#8329) 2025-10-22 20:53:08 -04:00
top_model_mixin.py [TRTLLM-8683][chore] Migrate PluginConfig to Pydantic (#8277) 2025-10-17 16:13:22 -04:00
version.py [None][chore] Bump version to 1.2.0rc2 (#8562) 2025-10-22 14:35:05 +08:00