| Name | Last commit | Last commit date |
|------|-------------|------------------|
| _tensorrt_engine | [TRTLLM-5208][BREAKING CHANGE] chore: make pytorch LLM the default (#5312) | 2025-06-20 03:01:10 +08:00 |
| _torch | Cherry-pick https://github.com/NVIDIA/TensorRT-LLM/pull/5947 (#5989) | 2025-07-16 01:33:12 +09:00 |
| auto_parallel | [TRTLLM-4971]: Use safe deserialization in ParallelConfig (#4630) | 2025-06-27 09:58:41 +08:00 |
| bench | feat/add latency support for trtllm bench (#3730) | 2025-07-15 13:13:49 -07:00 |
| commands | [TRTLLM-5208][BREAKING CHANGE] chore: make pytorch LLM the default (#5312) | 2025-06-20 03:01:10 +08:00 |
| evaluate | test: Add LLGuidance test and refine guided decoding (#5348) | 2025-06-25 14:12:56 +08:00 |
| executor | [NvBug 5370718, 5371538] fix: Fix incremental detokenization (#5825) | 2025-07-10 16:30:00 +08:00 |
| inputs | feat(models): Mistral3.1 VLM pytorch backend support (#5529) | 2025-07-09 13:17:40 -07:00 |
| layers | [feat] Support torch compile for attention dp (#5086) | 2025-07-01 13:48:52 -04:00 |
| llmapi | chore: [Breaking Change] Rename cuda_graph_config padding_enabled fie… (#6003) | 2025-07-15 15:50:03 +09:00 |
| models | fix: Unable to load phi4-model with tp_size>1 (#5962) | 2025-07-16 11:39:41 +08:00 |
| plugin | feat: Add support for fp8 rowwise quantization (#4876) | 2025-06-14 06:37:48 -07:00 |
| quantization | [feat] Support torch compile for attention dp (#5086) | 2025-07-01 13:48:52 -04:00 |
| runtime | [nvbugs/5385972][nvbugs/5387423][Fix] Minor fix for llava_next/llava_onevision (#5998) | 2025-07-15 10:01:35 -04:00 |
| scaffolding | feat(scaffolding): add streaming scaffolding_llm.generate_async support (#5345) | 2025-07-08 15:08:40 +09:00 |
| serve | [nvbug 5004744][fix] rewrite completion API to avoid repetitive tokens (#5201) | 2025-07-14 17:17:30 +08:00 |
| tools | [nvbugs/5385972][nvbugs/5387423][Fix] Minor fix for llava_next/llava_onevision (#5998) | 2025-07-15 10:01:35 -04:00 |
| __init__.py | feat: TRTLLM-5941 Upgrade xgrammar to 0.1.18 (#5364) | 2025-07-01 20:12:55 +08:00 |
| _common.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| _dlpack_utils.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| _ipc_utils.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| _mnnvl_utils.py | [NvBug 5378370] fix: Fix alltoall for llama4 (apply_router_weight_on_input=True) (#5902) | 2025-07-12 15:50:31 +09:00 |
| _utils.py | fix: adjust window sizes of VSWA at torch backend (#5880) | 2025-07-15 17:41:54 +08:00 |
| builder.py | fix: build_config in TorchLlmArgs and avoid arbitrary args (#4972) | 2025-06-15 17:51:56 -07:00 |
| disaggregated_params.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| functional.py | Feat/ds r1 min latency opt round3, add router gemm, fused a gemm, PDL (#4560) | 2025-06-14 17:36:22 +08:00 |
| graph_rewriting.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| logger.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| lora_manager.py | [TRTLLM-5921][feat] Prevent serialization of entire LoRA adapters in each request (#5080) | 2025-06-26 08:15:06 +03:00 |
| mapping.py | fix: Mapping rank boundary check bug (#4935) | 2025-06-27 07:27:59 +08:00 |
| math_utils.py | perf: Optimize swizzle_sf, unswizzle_sf, reswizzle_sf (#5318) | 2025-06-26 14:03:56 +08:00 |
| module.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| network.py | chore: remove usernames from comments (#3291) | 2025-04-05 13:44:28 +08:00 |
| parameter.py | fix: https://nvbugs/5234033 enable starcoder trt-flow with transforme… (#3909) | 2025-05-15 11:16:45 +08:00 |
| profiler.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| prompt_adapter_manager.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| python_plugin.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| sampling_params.py | [NvBug 5370718, 5371538] fix: Fix incremental detokenization (#5825) | 2025-07-10 16:30:00 +08:00 |
| serialization.py | [TRTLLM-4971]: Use safe deserialization in ParallelConfig (#4630) | 2025-06-27 09:58:41 +08:00 |
| top_model_mixin.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| version.py | chore: Bump version to 1.0.0rc4 (#6086) | 2025-07-16 13:02:23 +08:00 |