_tensorrt_engine
[TRTLLM-5208][BREAKING CHANGE] chore: make pytorch LLM the default (#5312)
2025-06-20 03:01:10 +08:00
_torch
[None][feat] Skip prefetching consolidated safetensors when appropriate (#7225)
2025-08-26 09:40:17 -07:00
auto_parallel
[TRTLLM-4971]: Use safe deserialization in ParallelConfig (#4630)
2025-06-27 09:58:41 +08:00
bench
[https://nvbugs/5451342][fix] Use runtime max_batch_size when cuda_graph_config.max_batch_size is not provided in trtllm-bench (#7031)
2025-08-26 08:10:35 -04:00
commands
[TRTLLM-6674][feat] (Breaking Change) Hopper SWA non-cyclic kernels + KV reuse + Spec Dec (#6379)
2025-08-05 07:47:41 +00:00
evaluate
test: Add LLGuidance test and refine guided decoding (#5348)
2025-06-25 14:12:56 +08:00
executor
[https://nvbugs/5451296][fix] zmq nonblock bug with retry (#7019)
2025-08-21 08:34:46 +08:00
inputs
[TRTLLM-6654][feat] Add support for external multimodal embeddings (#6263)
2025-07-30 10:00:15 -04:00
layers
feat: TRTLLM-6450 update long rope for phi3.5/phi4-mini/phi4-mm (#6353)
2025-07-30 09:20:16 -07:00
llmapi
[TRTLLM-7030][fix] BREAKING CHANGE: Mismatch between docs and actual commands (#7191)
2025-08-25 20:21:43 +08:00
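The llmapi directory holds the high-level Python entry point that sits on the now-default PyTorch backend (see the _tensorrt_engine entry above). A minimal generation sketch following the documented quickstart; the model id is illustrative:

```python
from tensorrt_llm import LLM, SamplingParams

# Any supported Hugging Face checkpoint works here; this id is illustrative.
llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

prompts = ["Hello, my name is", "The capital of France is"]
params = SamplingParams(temperature=0.8, top_p=0.95)

# generate() returns one result per prompt, each carrying the decoded text.
for output in llm.generate(prompts, params):
    print(output.prompt, "->", output.outputs[0].text)
```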
models
[None][feat] Add Qwen3 MoE support to TensorRT backend (#6470)
2025-08-06 17:02:35 +08:00
plugin
feat: Add support for fp8 rowwise quantization (#4876)
2025-06-14 06:37:48 -07:00
quantization
Deepseek R1 FP8 Support on Blackwell (#6486)
2025-08-01 10:26:28 +08:00
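The quantization package backs recipes such as the FP8 paths named in the two commits above. A sketch of selecting FP8 through the LLM API, assuming the QuantConfig/QuantAlgo names from the llmapi docs and that LLM accepts a quant_config argument; actual FP8 support depends on the checkpoint and GPU:

```python
from tensorrt_llm import LLM
from tensorrt_llm.llmapi import QuantConfig, QuantAlgo

# Assumption: QuantAlgo.FP8 selects the FP8 recipe; the model id is illustrative.
llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",
    quant_config=QuantConfig(quant_algo=QuantAlgo.FP8),
)
```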
runtime
[nvbug/5374773] chore: Add a runtime flag to enable fail fast when attn window is too large to fit at least one sequence in KV cache (#5974)
2025-07-25 18:10:40 -04:00
scaffolding
[https://nvbugs/5387375] fix(scaffolding): fix scaffolding aime test in test_e2e (#6140)
2025-07-18 10:34:37 +08:00
serve
[https://nvbugs/5450074][fix] Reduce the device memory requirements for testing (#6990)
2025-08-22 17:33:30 +08:00
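serve implements the OpenAI-compatible trtllm-serve frontend. A client-side sketch using the openai Python package, assuming a server already launched locally on its default port; the port and model name are assumptions:

```python
from openai import OpenAI

# Assumption: trtllm-serve is already running locally on port 8000.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-used")

resp = client.chat.completions.create(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # illustrative model id
    messages=[{"role": "user", "content": "Say hello."}],
)
print(resp.choices[0].message.content)
```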
tools
[https://nvbugs/5429689][fix] Fix mllama model structure update with transformers issue (#6699)
2025-08-11 10:48:35 +08:00
__init__.py
feat: TRTLLM-5941 Upgrade xgrammar to 0.1.18 (#5364)
2025-07-01 20:12:55 +08:00
_common.py
linting(python): Enable ruff on more files (wave 1/N) (#5140)
2025-06-14 19:19:34 +08:00
_dlpack_utils.py
linting(python): Enable ruff on more files (wave 1/N) (#5140)
2025-06-14 19:19:34 +08:00
_ipc_utils.py
linting(python): Enable ruff on more files (wave 1/N) (#5140)
2025-06-14 19:19:34 +08:00
_mnnvl_utils.py
[NvBug 5378370] fix: Fix alltoall for llama4 (apply_router_weight_on_input=True) (#5902)
2025-07-12 15:50:31 +09:00
_utils.py
[TRTLLM-6683][feat] Support LoRA reload CPU cache evicted adapter (#6786)
2025-08-11 14:31:39 -04:00
builder.py
feat: nanobind bindings (#6185)
2025-07-21 08:56:57 +01:00
disaggregated_params.py
[fix]: Skip prompt length checking for generation only requests (#6146)
2025-07-19 21:26:37 +08:00
functional.py
feat: TRTLLM-6450 update long rope for phi3.5/phi4-mini/phi4-mm (#6353)
2025-07-30 09:20:16 -07:00
graph_rewriting.py
linting(python): Enable ruff on more files (wave 1/N) (#5140)
2025-06-14 19:19:34 +08:00
logger.py
[None][fix] fix log_once usage (#7210)
2025-08-26 19:13:03 +08:00
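logger.py provides the package-wide logger that the log_once fix above touches. A minimal usage sketch, assuming the set_level severity strings used throughout the repo's examples:

```python
from tensorrt_llm.logger import logger

# Assumption: severities follow the usual ladder
# ("verbose", "info", "warning", "error").
logger.set_level("info")
logger.info("engine warm-up finished")
logger.warning("falling back to default KV-cache settings")
```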
lora_manager.py
[https://nvbugs/5467232][fix] Fix load_torch_hf_lora to override lora_config.trtllm_modules_to_hf_modules with default only when it has no value (#7168)
2025-08-25 15:37:57 +08:00
mapping.py
fix: Mapping rank boundary check bug (#4935)
2025-06-27 07:27:59 +08:00
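mapping.py describes how a world of ranks decomposes into tensor- and pipeline-parallel groups; the boundary-check fix above guards rank against world_size. A sketch using the documented constructor keywords, assuming world_size must equal tp_size * pp_size:

```python
from tensorrt_llm import Mapping

# Rank 3 of an 8-GPU world split as 4-way TP x 2-way PP.
# Assumption: world_size must equal tp_size * pp_size here.
mapping = Mapping(world_size=8, rank=3, tp_size=4, pp_size=2)
print(mapping.tp_rank, mapping.pp_rank)  # rank's coordinates in the 4x2 grid
```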
math_utils.py
perf: Optimize swizzle_sf, unswizzle_sf, reswizzle_sf (#5318)
2025-06-26 14:03:56 +08:00
module.py
linting(python): Enable ruff on more files (wave 1/N) (#5140)
2025-06-14 19:19:34 +08:00
network.py
chore: remove usernames from comments (#3291)
2025-04-05 13:44:28 +08:00
parameter.py
fix: https://nvbugs/5234033 enable starcoder trt-flow with transforme… (#3909)
2025-05-15 11:16:45 +08:00
profiler.py
linting(python): Enable ruff on more files (wave 1/N) (#5140)
2025-06-14 19:19:34 +08:00
prompt_adapter_manager.py
linting(python): Enable ruff on more files (wave 1/N) (#5140)
2025-06-14 19:19:34 +08:00
python_plugin.py
linting(python): Enable ruff on more files (wave 1/N) (#5140)
2025-06-14 19:19:34 +08:00
sampling_params.py
[TRTLLM-6761][refactor] Replace LogitBiasLogitsProcessor with embedding bias tensor system (#6464)
2025-08-05 07:14:24 -07:00
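sampling_params.py defines the per-request knobs consumed by generate(); the refactor above moves logit biasing onto an embedding-bias tensor rather than a logits processor. A sketch of common fields from the LLM API docs:

```python
from tensorrt_llm import SamplingParams

# Common per-request controls; max_tokens bounds the generated length.
params = SamplingParams(
    max_tokens=64,
    temperature=0.7,
    top_p=0.9,
    top_k=40,
)
```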
scheduling_params.py
[None][feat] Add support of scheduling attention dp request (#6246)
2025-08-01 20:38:01 -04:00
serialization.py
[TRTLLM-4971]: Use safe deserialization in ParallelConfig (#4630)
2025-06-27 09:58:41 +08:00
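serialization.py is where TRTLLM-4971 moved ParallelConfig loading onto safe deserialization. The standard technique is to restrict which globals pickle may reconstruct; a generic sketch of that pattern (not the repo's actual implementation):

```python
import io
import pickle

class RestrictedUnpickler(pickle.Unpickler):
    """Refuse to reconstruct anything outside an explicit allow-list."""

    ALLOWED = {("builtins", "dict"), ("builtins", "list")}  # illustrative

    def find_class(self, module, name):
        if (module, name) not in self.ALLOWED:
            raise pickle.UnpicklingError(f"forbidden global: {module}.{name}")
        return super().find_class(module, name)

def safe_loads(data: bytes):
    """Drop-in replacement for pickle.loads with the allow-list enforced."""
    return RestrictedUnpickler(io.BytesIO(data)).load()
```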
top_model_mixin.py
linting(python): Enable ruff on more files (wave 1/N) (#5140)
2025-06-14 19:19:34 +08:00
version.py
[None][chore] Bump version to 1.0.0 (#6652)
2025-08-07 14:15:34 +08:00