TensorRT-LLM/tensorrt_llm
yuxianq 16c8f39fc5
feat: Support TLLM_OVERRIDE_LAYER_NUM and TLLM_TRACE_MODEL_FORWARD for debugging (#3417)
Signed-off-by: Yuxian Qiu <142763828+yuxianq@users.noreply.github.com>
2025-04-10 13:18:30 +08:00
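The commit above introduces two environment variables for debugging. A minimal usage sketch, assuming they are plain environment switches read by the process before model construction (the values "2" and "1" below are illustrative assumptions, not taken from this listing):

    import os

    # Assumed: cap the model at a small layer count to speed up debugging runs.
    os.environ["TLLM_OVERRIDE_LAYER_NUM"] = "2"
    # Assumed: turn on tracing of each model forward pass.
    os.environ["TLLM_TRACE_MODEL_FORWARD"] = "1"

    # Build and run the model as usual; with the variables set, the debug
    # behavior added in #3417 (per the commit message) should take effect.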
_torch feat: Support TLLM_OVERRIDE_LAYER_NUM and TLLM_TRACE_MODEL_FORWARD for debugging (#3417) 2025-04-10 13:18:30 +08:00
auto_parallel chore: remove usernames from comments (#3291) 2025-04-05 13:44:28 +08:00
bench fix: Add nested aliases for Llama 4 (#3381) 2025-04-10 10:18:53 +08:00
commands Use llm.tokenizer in OpenAIServer (#3199) 2025-04-08 14:55:02 +08:00
evaluate test: Accuracy test improvement (Part 3.2): Move Qwen tests (NvBug 5135332) (#3219) 2025-04-02 17:29:57 +08:00
executor Add thread leak check and fix thread/memory leak issues. (#3270) 2025-04-08 19:03:18 +08:00
inputs Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
layers feat: Add Gemma3 text-only model support (#3247) 2025-04-10 12:34:58 +08:00
llmapi Add thread leak check and fix thread/memory leak issues. (#3270) 2025-04-08 19:03:18 +08:00
models feat: Add Gemma3 text-only model support (#3247) 2025-04-10 12:34:58 +08:00
plugin Update TensorRT-LLM (#2936) 2025-03-18 21:25:19 +08:00
quantization feat: Add Gemma3 text-only model support (#3247) 2025-04-10 12:34:58 +08:00
runtime Feat: Variable-Beam-Width-Search (VBWS) part3 (#3338) 2025-04-08 23:51:27 +08:00
scaffolding feat: Enhance the integrated robustness of scaffolding with __init__.py #3305 (#3312) 2025-04-09 21:13:47 +08:00
serve Use llm.tokenizer in OpenAIServer (#3199) 2025-04-08 14:55:02 +08:00
tools Add Llama 4 (#3302) 2025-04-09 03:35:21 +08:00
__init__.py Update (#2978) 2025-03-23 16:39:35 +08:00
_common.py Update (#2978) 2025-03-23 16:39:35 +08:00
_ipc_utils.py Update TensorRT-LLM (#2783) 2025-02-13 18:40:22 +08:00
_utils.py feat: Support TLLM_OVERRIDE_LAYER_NUM and TLLM_TRACE_MODEL_FORWARD for debugging (#3417) 2025-04-10 13:18:30 +08:00
builder.py chore: remove usernames from comments (#3291) 2025-04-05 13:44:28 +08:00
disaggregated_params.py Update TensorRT-LLM (#2936) 2025-03-18 21:25:19 +08:00
functional.py perf: Add optimizations for deepseek in min latency mode (#3093) 2025-04-02 09:05:24 +08:00
graph_rewriting.py Update TensorRT-LLM (#2755) 2025-02-11 03:01:00 +00:00
logger.py Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
lora_manager.py feat: Support PeftCacheManager in Torch (#3186) 2025-04-04 12:38:08 +08:00
mapping.py fix: fix for cp > kvHeadNum (#3002) 2025-03-26 12:39:02 +08:00
module.py Update (#2978) 2025-03-23 16:39:35 +08:00
network.py chore: remove usernames from comments (#3291) 2025-04-05 13:44:28 +08:00
parameter.py Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
profiler.py Update TensorRT-LLM (#2936) 2025-03-18 21:25:19 +08:00
prompt_adapter_manager.py Update TensorRT-LLM (#2333) 2024-10-15 15:28:40 +08:00
python_plugin.py Update TensorRT-LLM (#2755) 2025-02-11 03:01:00 +00:00
sampling_params.py v1.2 (#3082) 2025-03-26 23:31:29 +08:00
top_model_mixin.py Update TensorRT-LLM (#2053) 2024-07-30 21:25:01 +08:00
version.py chore: bump version to 0.19.0.dev2025041500 (#3360) 2025-04-08 20:45:27 +08:00