TensorRT-LLM/tensorrt_llm
Latest commit ac0df0a393 by JunyiXu-nv (2025-09-09 19:28:29 +08:00):
[None][feat] Cherry-pick Responses API and multiple postprocess workers support for chat harmony (#7600)
Signed-off-by: Junyi Xu <219237550+JunyiXu-nv@users.noreply.github.com>
Co-authored-by: Pengyun Lin <81065165+LinPoly@users.noreply.github.com>
Co-authored-by: Tao Li @ NVIDIA <tali@nvidia.com>
| Name | Last commit message | Commit date |
| --- | --- | --- |
| `_tensorrt_engine` | [TRTLLM-5208][BREAKING CHANGE] chore: make pytorch LLM the default (#5312) | 2025-06-20 03:01:10 +08:00 |
| `_torch` | [None][chore] Make low_precision_combine as a llm arg (#7598) | 2025-09-08 17:22:04 -04:00 |
| `auto_parallel` | [None][fix] Migrate to new cuda binding package name (#6700) | 2025-08-07 16:29:55 -04:00 |
| `bench` | [None][fix] Fix data type of KV Cache percentage in bench. (#7230) | 2025-08-26 12:28:09 -04:00 |
| `commands` | [None][feat] Add logging for OAI disagg server (#7232) | 2025-08-26 21:02:03 -07:00 |
| `evaluate` | [TRTLLM-6771][feat] Support MMMU for multimodal models (#6828) | 2025-08-21 08:54:12 +08:00 |
| `executor` | [TRTLLM-6994][feat] FP8 Context MLA integration. (#7581) | 2025-09-08 10:10:29 +08:00 |
| `inputs` | [None][fix] Fix mm_placholder_counts extraction issue. (#7118) | 2025-08-22 12:28:30 +08:00 |
| `layers` | [TRTLLM-5863][feat] Support MoE INT8 Weight-Only-Quantization in PyTorch Workflow (#6629) | 2025-08-15 17:15:49 -04:00 |
| `llmapi` | [None][chore] Make low_precision_combine as a llm arg (#7598) | 2025-09-08 17:22:04 -04:00 |
| `metrics` | [None][feat] Core Metrics Implementation (#5785) | 2025-08-09 02:48:53 -04:00 |
| `models` | [None][fix] Refactoring to avoid circular import when importing torch models (#6720) | 2025-08-11 18:00:42 -04:00 |
| `plugin` | feat: Add support for fp8 rowwise quantization (#4876) | 2025-06-14 06:37:48 -07:00 |
| `quantization` | [None][fix] Remove and fuse some element-wise ops in the ds-r1-fp8 model (#7238) | 2025-08-27 10:35:38 +08:00 |
| `runtime` | [None][fix] Migrate to new cuda binding package name (#6700) | 2025-08-07 16:29:55 -04:00 |
| `scaffolding` | [None][docs] Update Dynasor paper info (#7137) | 2025-08-29 18:47:47 -07:00 |
| `serve` | [None][feat] Cherry-pick Responses API and multiple postprocess workers support for chat harmony (#7600) | 2025-09-09 19:28:29 +08:00 |
| `tools` | [None] [feat] nsys profile output kernel classifier (#7020) | 2025-08-23 00:57:37 -04:00 |
| `__init__.py` | [TRTLLM-7326][feat] Add standalone multimodal encoder (#6743) | 2025-08-19 21:42:50 -07:00 |
| `_common.py` | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| `_dlpack_utils.py` | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| `_ipc_utils.py` | [None][fix] Migrate to new cuda binding package name (#6700) | 2025-08-07 16:29:55 -04:00 |
| `_mnnvl_utils.py` | [TRTLLM-6747][feat] Merge add sparse exp and shared exp into local re… (#7422) | 2025-08-31 23:15:05 -07:00 |
| `_utils.py` | [None][feat] Core Metrics Implementation (#5785) | 2025-08-09 02:48:53 -04:00 |
| `builder.py` | [None][fix] Refactoring to avoid circular import when importing torch models (#6720) | 2025-08-11 18:00:42 -04:00 |
| `disaggregated_params.py` | [None][fix] Refactoring to avoid circular import when importing torch models (#6720) | 2025-08-11 18:00:42 -04:00 |
| `functional.py` | [None] [feat] Add Tencent HunYuanMoEV1 model support (#5521) | 2025-08-15 06:56:44 +08:00 |
| `graph_rewriting.py` | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| `logger.py` | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| `lora_helper.py` | [TRTLLM-6825][fix] Update lora for phi4-mm (#6817) | 2025-08-21 22:00:04 -04:00 |
| `lora_manager.py` | [https://nvbugs/5467232][fix] Fix load_torch_hf_lora to override lora_config.trtllm_modules_to_hf_modules with default only when it has no value (#7132) | 2025-08-24 15:00:24 +03:00 |
| `mapping.py` | [TRTLLM-5966][feat] Helix: extend mapping to support different CP types (#6816) | 2025-08-14 09:00:02 -07:00 |
| `math_utils.py` | perf: Optimize swizzle_sf, unswizzle_sf, reswizzle_sf (#5318) | 2025-06-26 14:03:56 +08:00 |
| `module.py` | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| `network.py` | chore: remove usernames from comments (#3291) | 2025-04-05 13:44:28 +08:00 |
| `parameter.py` | fix:https://nvbugs/5234033 enable starcoder trt-flow with transforme… (#3909) | 2025-05-15 11:16:45 +08:00 |
| `profiler.py` | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| `prompt_adapter_manager.py` | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| `python_plugin.py` | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| `sampling_params.py` | [None] [feat] Add model gpt-oss (#6645) | 2025-08-07 03:04:18 -04:00 |
| `scheduling_params.py` | [None][feat] Add support of scheduling attention dp request (#6246) | 2025-08-01 20:38:01 -04:00 |
| `serialization.py` | [None] [feat] Add model gpt-oss (#6645) | 2025-08-07 03:04:18 -04:00 |
| `top_model_mixin.py` | [None][fix] Refactoring to avoid circular import when importing torch models (#6720) | 2025-08-11 18:00:42 -04:00 |
| `version.py` | [None][chore] Bump version to 1.1.0rc2.post2 (#7582) | 2025-09-07 23:09:48 +08:00 |