| Name | Latest commit | Last commit date |
|------|---------------|------------------|
| _tensorrt_engine | [TRTLLM-5208][BREAKING CHANGE] chore: make pytorch LLM the default (#5312) | 2025-06-20 03:01:10 +08:00 |
| _torch | [None][fix] AutoDeploy: Use tmp folder for the load_moe_align (#9101) | 2025-11-12 14:59:49 -08:00 |
| bench | [TRTLLM-9065][chore] remove PyTorchConfig completely (#8856) | 2025-11-06 22:37:03 -08:00 |
| commands | [TRTLLM-8214][feat] Support Qwen3 tool parser (#8216) | 2025-10-29 15:48:29 +08:00 |
| evaluate | [TRTLLM-8119][feat] Update doc/tests/chat_template for nano-v2-vlm (#8840) | 2025-11-11 07:48:23 -08:00 |
| executor | [https://nvbugs/5556998][fix] init_hf_modules in worker_main for models with trust_remote=true (#8931) | 2025-11-11 10:30:37 +08:00 |
| inputs | [TRTLLM-8119][feat] Update doc/tests/chat_template for nano-v2-vlm (#8840) | 2025-11-11 07:48:23 -08:00 |
| layers | [TRTLLM-5863][feat] Support MoE INT8 Weight-Only-Quantization in PyTorch Workflow (#6629) | 2025-08-15 17:15:49 -04:00 |
| llmapi | [TRTLLM-7723][feat] sampling using FlashInfer.sampling (#8581) | 2025-11-11 03:21:19 -08:00 |
| metrics | [None][feat] Add trtllm_ prefix for exposed metrics (#8845) | 2025-11-06 15:27:18 +08:00 |
| models | [TRTLLM-8684][chore] Migrate BuildConfig to Pydantic, add a Python wrapper for KVCacheType enum (#8330) | 2025-10-28 09:17:26 -07:00 |
| plugin | [TRTLLM-8683][chore] Migrate PluginConfig to Pydantic (#8277) | 2025-10-17 16:13:22 -04:00 |
| quantization | [None][perf] Use fp8 quant kernel in DS3.2 indexer module (#8701) | 2025-10-29 12:45:09 +08:00 |
| runtime | [TRTLLM-8684][chore] Migrate BuildConfig to Pydantic, add a Python wrapper for KVCacheType enum (#8330) | 2025-10-28 09:17:26 -07:00 |
| scaffolding | [None][feat] Deep Research Implemented with Scaffolding (#8452) | 2025-11-06 10:33:28 +08:00 |
| serve | [TRTLLM-8598][feat] enable n > 1 in OpenAI API with PyTorch backend (#8951) | 2025-11-07 17:47:35 -08:00 |
| tools | [None][fix] Layer wise benchmarks: use local models, lint (#8799) | 2025-10-30 09:47:46 -07:00 |
| __init__.py | [TRTLLM-8682][chore] Remove auto_parallel module (#8329) | 2025-10-22 20:53:08 -04:00 |
| _common.py | [None][chroe] Rename TensorRT-LLM to TensorRT LLM for source code. (#7851) | 2025-09-25 21:02:35 +08:00 |
| _dlpack_utils.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| _ipc_utils.py | [TRTLLM-7349][feat] Adding new orchestrator type -- ray (#7520) | 2025-10-04 08:12:24 +08:00 |
| _mnnvl_utils.py | [https://nvbugs/5477730][fix] Fix the alltoall case when tp_size larger than ep_size (#7331) | 2025-09-04 08:10:03 -04:00 |
| _ray_utils.py | [TRTLLM-8511][feat] Add update_weights and sleep_wakeup support for rl integration (#8302) | 2025-11-04 10:19:24 -08:00 |
| _utils.py | [TRTLLM-8994][infra] upgrade to DLFW 25.10 and pytorch 2.9.0 / triton 3.5.0 (#8838) | 2025-11-04 18:59:34 +08:00 |
| builder.py | [TRTLLM-8684][chore] Migrate BuildConfig to Pydantic, add a Python wrapper for KVCacheType enum (#8330) | 2025-10-28 09:17:26 -07:00 |
| disaggregated_params.py | [TRTLLM-7328][feat] E-PD Disagg Support via llmapi (3/N) (#7577) | 2025-09-22 19:07:18 -07:00 |
| functional.py | [None][chroe] Rename TensorRT-LLM to TensorRT LLM for source code. (#7851) | 2025-09-25 21:02:35 +08:00 |
| graph_rewriting.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| logger.py | [None][chore] Mass integration of release/1.0 - 3rd (#7519) | 2025-09-08 14:03:04 +08:00 |
| lora_helper.py | [TRTLLM-8682][chore] Remove auto_parallel module (#8329) | 2025-10-22 20:53:08 -04:00 |
| lora_manager.py | [https://nvbugs/5510879][fix] Fix pytorch & TRT-python flows fused LoRA adapter modules weight split with TP>1 (#8063) | 2025-10-12 12:29:52 -07:00 |
| mapping.py | [TRTLLM-8682][chore] Remove auto_parallel module (#8329) | 2025-10-22 20:53:08 -04:00 |
| math_utils.py | perf: Optimize swizzle_sf, unswizzle_sf, reswizzle_sf (#5318) | 2025-06-26 14:03:56 +08:00 |
| module.py | [None][chroe] Rename TensorRT-LLM to TensorRT LLM for source code. (#7851) | 2025-09-25 21:02:35 +08:00 |
| network.py | [TRTLLM-8682][chore] Remove auto_parallel module (#8329) | 2025-10-22 20:53:08 -04:00 |
| parameter.py | fix:https://nvbugs/5234033 enable starcoder trt-flow with transforme… (#3909) | 2025-05-15 11:16:45 +08:00 |
| profiler.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| prompt_adapter_manager.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| python_plugin.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| ray_stub.py | [TRTLLM-8507][fix] Fix ray resource cleanup and error handling in LoRA test (#8175) | 2025-10-14 23:46:30 +08:00 |
| sampling_params.py | [None][feat] Support ignored prompt length for penalties via new sampling config parameter (#8127) | 2025-10-27 13:12:31 -04:00 |
| scheduling_params.py | [None][feat] Add support of scheduling attention dp request (#6246) | 2025-08-01 20:38:01 -04:00 |
| serialization.py | [TRTLLM-8682][chore] Remove auto_parallel module (#8329) | 2025-10-22 20:53:08 -04:00 |
| top_model_mixin.py | [TRTLLM-8683][chore] Migrate PluginConfig to Pydantic (#8277) | 2025-10-17 16:13:22 -04:00 |
| version.py | [None][chore] Bump version to 1.2.0rc3 (#9004) | 2025-11-07 01:24:32 -08:00 |