TensorRT-LLMs/tensorrt_llm

Latest commit 2989bf5b39 by Balaram Buddharaju: [None][feat] Add new helix kernels for MNNVL-based codepath (#11433)
Signed-off-by: Balaram Buddharaju <169953907+brb-nv@users.noreply.github.com>
2026-02-14 09:39:24 +08:00
_tensorrt_engine [TRTLLM-5208][BREAKING CHANGE] chore: make pytorch LLM the default (#5312) 2025-06-20 03:01:10 +08:00
_torch [None][feat] Add new helix kernels for MNNVL-based codepath (#11433) 2026-02-14 09:39:24 +08:00
bench [None][chore] Print correct backend name in benchmark report (#10597) 2026-01-12 14:46:00 -05:00
commands [TRTLLM-10612][feat] Initial support of AIGV models in TRTLLM (#11462) 2026-02-14 06:11:11 +08:00
evaluate [https://nvbugs/5810940][fix] Update lm_eval to 4.9.10 and re-enable Skip Softmax Attention tests on CI. (#11176) 2026-02-11 00:54:40 -05:00
executor [TRTLLM-10612][feat] Initial support of AIGV models in TRTLLM (#11462) 2026-02-14 06:11:11 +08:00
grpc [#11037][fix] Fix proto-to-SamplingParams conversion bugs and add gRPC tests (#11292) 2026-02-05 05:00:29 -05:00
inputs [#11170][fix] Fix for mm placeholder counts (#11461) 2026-02-14 09:12:03 +08:00
layers [None][fix] [Gemma3] Fix RoPE for local attention for Gemma3 (#9961) 2025-12-27 11:50:59 -08:00
llmapi [TRTLLM-10612][feat] Initial support of AIGV models in TRTLLM (#11462) 2026-02-14 06:11:11 +08:00
metrics [None][feat] Add trtllm_ prefix for exposed metrics (#8845) 2025-11-06 15:27:18 +08:00
models [TRTLLM-9465][fix] Swap TP-CP grouping order (#10350) 2026-01-05 20:08:03 +08:00
plugin [https://nvbugs/5788127][fix] Use uint64_t as the dtype of lamport_buffer_size to avoid overflow (#10499) 2026-01-13 17:16:22 +08:00
quantization [None][chore] docs: clarify LoRA is not supported with --use_fp8_rowwise in Fp8RowwiseAttention (see #2603) (#10320) 2026-01-19 04:38:00 -05:00
runtime [None][feat] Enhance support for complex models (#11254) 2026-02-05 17:28:26 +08:00
scaffolding [None][feat] Deep Research Implemented with Scaffolding (#8452) 2025-11-06 10:33:28 +08:00
serve [#11170][fix] Fix for mm placeholder counts (#11461) 2026-02-14 09:12:03 +08:00
tokenizer [https://nvbugs/5684820][fix] fix the detokenizer issue for DeepSeek-v3.2 (#10106) 2025-12-22 10:56:33 +08:00
tools [None][feat] Optimize NemotronH model with elementwise and nvfp4 fusion (#11273) 2026-02-12 09:25:31 -05:00
__init__.py [https://nvbugs/5761391][fix] Include triton-kernels as a packaged dependency (#10471) 2026-01-28 19:56:32 -08:00
_common.py [None][feat] Hang detection for executor loop and worker. (#10480) 2026-01-13 02:34:32 -05:00
_dlpack_utils.py linting(python): Enable ruff on more files (wave 1/N) (#5140) 2025-06-14 19:19:34 +08:00
_ipc_utils.py [None][refactor] Unify the usage of MPIDist and TorchDist. (#10380) 2026-01-14 14:05:47 +08:00
_mnnvl_utils.py [https://nvbugs/5791900][fix] Fix HelixCpMnnvlMemory init with PP (#10533) 2026-01-13 15:48:42 -05:00
_ray_utils.py [TRTLLM-8511][feat] Add update_weights and sleep_wakeup support for rl integration (#8302) 2025-11-04 10:19:24 -08:00
_utils.py [TRTLLM-10487][feat] Add user-provided UUID support for multimodal KV cache identification. (#11075) 2026-02-12 00:48:47 -05:00
builder.py [TRTLLM-8684][chore] Migrate BuildConfig to Pydantic, add a Python wrapper for KVCacheType enum (#8330) 2025-10-28 09:17:26 -07:00
disaggregated_params.py [TRTLLM-8921][feat] implement gen-first disagg_service (#11020) 2026-02-03 15:46:11 -05:00
functional.py [#8921][feat] Added symmetric memory AllReduce strategy (#8919) 2025-12-08 13:12:56 -08:00
graph_rewriting.py linting(python): Enable ruff on more files (wave 1/N) (#5140) 2025-06-14 19:19:34 +08:00
logger.py [None][chore] Mass integration of release/1.0 - 3rd (#7519) 2025-09-08 14:03:04 +08:00
lora_helper.py [TRTLLM-8682][chore] Remove auto_parallel module (#8329) 2025-10-22 20:53:08 -04:00
lora_manager.py [https://nvbugs/5510879][fix] Fix pytorch & TRT-python flows fused LoRA adapter modules weight split with TP>1 (#8063) 2025-10-12 12:29:52 -07:00
mapping.py [None][feat] Adding torch ext API for FusedAddRMSNormQuant kernel (#9905) 2026-01-15 07:29:15 +08:00
math_utils.py perf: Optimize swizzle_sf, unswizzle_sf, reswizzle_sf (#5318) 2025-06-26 14:03:56 +08:00
module.py [None][chore] Rename TensorRT-LLM to TensorRT LLM for source code. (#7851) 2025-09-25 21:02:35 +08:00
network.py [TRTLLM-8682][chore] Remove auto_parallel module (#8329) 2025-10-22 20:53:08 -04:00
parameter.py
profiler.py linting(python): Enable ruff on more files (wave 1/N) (#5140) 2025-06-14 19:19:34 +08:00
prompt_adapter_manager.py linting(python): Enable ruff on more files (wave 1/N) (#5140) 2025-06-14 19:19:34 +08:00
python_plugin.py linting(python): Enable ruff on more files (wave 1/N) (#5140) 2025-06-14 19:19:34 +08:00
ray_stub.py [TRTLLM-10612][feat] Initial support of AIGV models in TRTLLM (#11462) 2026-02-14 06:11:11 +08:00
sampling_params.py [TRTLLM-9735][feat] Add processed logprobs functionality to TorchSampler (#9675) 2026-01-16 10:52:41 -08:00
scheduling_params.py [None][feat] Add support of scheduling attention dp request (#6246) 2025-08-01 20:38:01 -04:00
serialization.py [https://nvbugs/5775021] [fix] Replace pickle.load with restricted Unpickler (#10622) 2026-01-21 11:42:54 +08:00
top_model_mixin.py [TRTLLM-8683][chore] Migrate PluginConfig to Pydantic (#8277) 2025-10-17 16:13:22 -04:00
version.py [None][chore] Bump version to 1.3.0rc4 (#11485) 2026-02-12 16:55:23 -05:00