TensorRT-LLM/tensorrt_llm

Latest commit: 5342c607cd by Yan Chunwei, [https://nvbugs/5516710][fix] fix Llama 3.3 TP PP case (#7717)
Signed-off-by: Yan Chunwei <328693+Superjomn@users.noreply.github.com>
Signed-off-by: Wangshanshan <30051912+dominicshanshan@users.noreply.github.com>
2025-09-25 21:02:35 +08:00
_tensorrt_engine [TRTLLM-5208][BREAKING CHANGE] chore: make pytorch LLM the default (#5312) 2025-06-20 03:01:10 +08:00
_torch [https://nvbugs/5516710][fix] fix Llama 3.3 TP PP case (#7717) 2025-09-25 21:02:35 +08:00
auto_parallel [None][fix] Migrate to new cuda binding package name (#6700) 2025-08-07 16:29:55 -04:00
bench [None][fix] refine backend option handling for commands (#7829) 2025-09-24 10:54:33 +08:00
commands [None][fix] refine backend option handling for commands (#7829) 2025-09-24 10:54:33 +08:00
evaluate [TRTLLM-7728][feat] batched sampling by strategy (supersedes enable_mixed_sampler, cf. TRTLLM-7156) (#7294) 2025-09-23 16:05:05 -07:00
executor [None][fix] Revert "[None][feat] Return topk logprobs in torch backend (#7756)" (#7969) 2025-09-24 15:36:38 -07:00
inputs [TRTLLM-7328][feat] E-PD Disagg Support via llmapi (3/N) (#7577) 2025-09-22 19:07:18 -07:00
layers [TRTLLM-5863][feat] Support MoE INT8 Weight-Only-Quantization in PyTorch Workflow (#6629) 2025-08-15 17:15:49 -04:00
llmapi [None][fix] Revert "[None][feat] Return topk logprobs in torch backend (#7756)" (#7969) 2025-09-24 15:36:38 -07:00
metrics [None][feat] Core Metrics Implementation (#5785) 2025-08-09 02:48:53 -04:00
models [https://nvbugs/5496960][fix] Fix Gemma model forward. (#7509) 2025-09-22 14:28:38 +08:00
plugin feat: Add support for fp8 rowwise quantization (#4876) 2025-06-14 06:37:48 -07:00
quantization [OMNIML-2336][feat] Add NVFP4 x FP8 (#6809) 2025-09-04 09:03:38 -07:00
runtime [None][fix] Migrate to new cuda binding package name (#6700) 2025-08-07 16:29:55 -04:00
scaffolding [None][fix] Revert "[None][feat] Return topk logprobs in torch backend (#7756)" (#7969) 2025-09-24 15:36:38 -07:00
serve [TRTLLM-5235][feat] Enable regex and EBNF grammar in trtllm-serve (#7925) 2025-09-24 18:30:23 +08:00
tools [None] [feat] nsys profile output kernel classifier (#7020) 2025-08-23 00:57:37 -04:00
__init__.py [https://nvbugs/5367180][fix] Fix xgrammar import before loading tensorrt_llm binary (#7906) 2025-09-23 00:29:57 -07:00
_common.py linting(python): Enable ruff on more files (wave 1/N) (#5140) 2025-06-14 19:19:34 +08:00
_dlpack_utils.py linting(python): Enable ruff on more files (wave 1/N) (#5140) 2025-06-14 19:19:34 +08:00
_ipc_utils.py [None][fix] Migrate to new cuda binding package name (#6700) 2025-08-07 16:29:55 -04:00
_mnnvl_utils.py [https://nvbugs/5477730][fix] Fix the alltoall case when tp_size larger than ep_size (#7331) 2025-09-04 08:10:03 -04:00
_utils.py [TRTLLM-4629] [feat] Add support of CUDA13 and sm103 devices (#7568) 2025-09-16 09:56:18 +08:00
builder.py [TRTLLM-5930][doc] 1.0 Documentation. (#6696) 2025-09-09 12:16:03 +08:00
disaggregated_params.py [TRTLLM-7328][feat] E-PD Disagg Support via llmapi (3/N) (#7577) 2025-09-22 19:07:18 -07:00
functional.py [TRTLLM-6341][feature] Support SWA KV cache reuse (#6768) 2025-09-24 14:28:24 +08:00
graph_rewriting.py linting(python): Enable ruff on more files (wave 1/N) (#5140) 2025-06-14 19:19:34 +08:00
logger.py [None][chore] Mass integration of release/1.0 - 3rd (#7519) 2025-09-08 14:03:04 +08:00
lora_helper.py [TRTLLM-6825][fix] Update lora for phi4-mm (#6817) 2025-08-21 22:00:04 -04:00
lora_manager.py [https://nvbugs/5467232][fix] Fix load_torch_hf_lora to override lora_config.trtllm_modules_to_hf_modules with default only when it has no value (#7132) 2025-08-24 15:00:24 +03:00
mapping.py [TRTLLM-6741] [feat] enable LM tp for MTP, under attention dp case (cherry-pick #7128) (#7571) 2025-09-17 09:41:32 +08:00
math_utils.py perf: Optimize swizzle_sf, unswizzle_sf, reswizzle_sf (#5318) 2025-06-26 14:03:56 +08:00
module.py linting(python): Enable ruff on more files (wave 1/N) (#5140) 2025-06-14 19:19:34 +08:00
network.py chore: remove usernames from comments (#3291) 2025-04-05 13:44:28 +08:00
parameter.py fix:https://nvbugs/5234033 enable starcoder trt-flow with transforme… (#3909) 2025-05-15 11:16:45 +08:00
profiler.py linting(python): Enable ruff on more files (wave 1/N) (#5140) 2025-06-14 19:19:34 +08:00
prompt_adapter_manager.py linting(python): Enable ruff on more files (wave 1/N) (#5140) 2025-06-14 19:19:34 +08:00
python_plugin.py linting(python): Enable ruff on more files (wave 1/N) (#5140) 2025-06-14 19:19:34 +08:00
sampling_params.py [TRTLLM-7015] [feat] Enable prompt_logprobs in pytorch backend (#7580) 2025-09-23 18:48:10 -07:00
scheduling_params.py [None][feat] Add support of scheduling attention dp request (#6246) 2025-08-01 20:38:01 -04:00
serialization.py [None] [feat] Add model gpt-oss (#6645) 2025-08-07 03:04:18 -04:00
top_model_mixin.py [None][fix] Refactoring to avoid circular import when importing torch models (#6720) 2025-08-11 18:00:42 -04:00
version.py [None][chore] Version bump for 1.1.0rc6 (#7824) 2025-09-18 11:13:56 +08:00