| Name | Last commit | Date |
| --- | --- | --- |
| _torch | fix: Fixing issue with first gen token being returned twice in streaming (#3427) | 2025-04-13 22:45:09 -04:00 |
| auto_parallel | chore: remove usernames from comments (#3291) | 2025-04-05 13:44:28 +08:00 |
| bench | fix: Add nested aliases for Llama 4 (#3381) | 2025-04-10 10:18:53 +08:00 |
| commands | Use llm.tokenizer in OpenAIServer (#3199) | 2025-04-08 14:55:02 +08:00 |
| evaluate | test: Accuracy test improvement (Part 3.2): Move Qwen tests (NvBug 5135332) (#3219) | 2025-04-02 17:29:57 +08:00 |
| executor | fix: Fixing issue with first gen token being returned twice in streaming (#3427) | 2025-04-13 22:45:09 -04:00 |
| inputs | make LLM-API slurm examples executable (#3402) | 2025-04-13 21:42:45 +08:00 |
| layers | feat: Add Gemma3 text-only model support (#3247) | 2025-04-10 12:34:58 +08:00 |
| llmapi | make LLM-API slurm examples executable (#3402) | 2025-04-13 21:42:45 +08:00 |
| models | feat: Add NVFP4 UB pattern optimization pass in torch compile (#3371) | 2025-04-11 21:25:29 +08:00 |
| plugin | Update TensorRT-LLM (#2936) | 2025-03-18 21:25:19 +08:00 |
| quantization | fix: update the default excluded_modules value for fp8rowwise recipe. (#3477) | 2025-04-12 16:00:21 +08:00 |
| runtime | Feat: Variable-Beam-Width-Search (VBWS) part3 (#3338) | 2025-04-08 23:51:27 +08:00 |
| scaffolding | feat: Make scaffolding Controller more generic #3408 (#3416) | 2025-04-12 21:35:38 +08:00 |
| serve | feat: Add support of chat completion in PD (#2985) | 2025-04-11 17:53:28 +08:00 |
| tools | Add Llama 4 (#3302) | 2025-04-09 03:35:21 +08:00 |
| __init__.py | Update (#2978) | 2025-03-23 16:39:35 +08:00 |
| _common.py | Update (#2978) | 2025-03-23 16:39:35 +08:00 |
| _ipc_utils.py | fix: Fix PP for llama. (#3449) | 2025-04-12 17:20:27 +08:00 |
| _utils.py | feat: Support TLLM_OVERRIDE_LAYER_NUM and TLLM_TRACE_MODEL_FORWARD for debugging (#3417) | 2025-04-10 13:18:30 +08:00 |
| builder.py | chore: remove usernames from comments (#3291) | 2025-04-05 13:44:28 +08:00 |
| disaggregated_params.py | Update TensorRT-LLM (#2936) | 2025-03-18 21:25:19 +08:00 |
| functional.py | perf: Add optimizations for deepseek in min latency mode (#3093) | 2025-04-02 09:05:24 +08:00 |
| graph_rewriting.py | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| logger.py | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| lora_manager.py | feat: Support PeftCacheManager in Torch (#3186) | 2025-04-04 12:38:08 +08:00 |
| mapping.py | fix: Fix PP for llama. (#3449) | 2025-04-12 17:20:27 +08:00 |
| module.py | Update (#2978) | 2025-03-23 16:39:35 +08:00 |
| network.py | chore: remove usernames from comments (#3291) | 2025-04-05 13:44:28 +08:00 |
| parameter.py | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| profiler.py | Update TensorRT-LLM (#2936) | 2025-03-18 21:25:19 +08:00 |
| prompt_adapter_manager.py | Update TensorRT-LLM (#2333) | 2024-10-15 15:28:40 +08:00 |
| python_plugin.py | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| sampling_params.py | v1.2 (#3082) | 2025-03-26 23:31:29 +08:00 |
| top_model_mixin.py | Update TensorRT-LLM (#2053) | 2024-07-30 21:25:01 +08:00 |
| version.py | chore: bump version to 0.19.0.dev2025041500 (#3360) | 2025-04-08 20:45:27 +08:00 |