TensorRT-LLM/tensorrt_llm
QI JUN (112f716155)
chore: move all distributed related codes into _torch.distributed directory (#3511)
* move all distributed related codes into _torch.distributed directory
* fix ci
* fix ci

Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
2025-04-15 08:39:17 +08:00
Name | Last commit | Last updated
_torch/ | chore: move all distributed related codes into _torch.distributed directory (#3511) | 2025-04-15 08:39:17 +08:00
auto_parallel/ | chore: remove usernames from comments (#3291) | 2025-04-05 13:44:28 +08:00
bench/ | fix: Add nested aliases for Llama 4 (#3381) | 2025-04-10 10:18:53 +08:00
commands/ | Use llm.tokenizer in OpenAIServer (#3199) | 2025-04-08 14:55:02 +08:00
evaluate/ | test: Accuracy test improvement (Part 3.2): Move Qwen tests (NvBug 5135332) (#3219) | 2025-04-02 17:29:57 +08:00
executor/ | fix: Fixing issue with first gen token being returned twice in streaming (#3427) | 2025-04-13 22:45:09 -04:00
inputs/ | make LLM-API slurm examples executable (#3402) | 2025-04-13 21:42:45 +08:00
layers/ | feat: Add FP8 support for SM 120 (#3248) | 2025-04-14 16:05:41 -07:00
llmapi/ | fix: fix max_seq_len in executor_config (#3487) | 2025-04-14 15:13:29 +08:00
models/ | chore: unify pp_layers helpers (#3429) | 2025-04-15 04:49:17 +08:00
plugin/ | Update TensorRT-LLM (#2936) | 2025-03-18 21:25:19 +08:00
quantization/ | feat: Add FP8 support for SM 120 (#3248) | 2025-04-14 16:05:41 -07:00
runtime/ | test: Fix breaking Phi3 multimodal tests (#3544) | 2025-04-15 08:02:34 +08:00
scaffolding/ | feat: Make scaffolding Controller more generic #3408 (#3416) | 2025-04-12 21:35:38 +08:00
serve/ | feat: Add support of chat completion in PD (#2985) | 2025-04-11 17:53:28 +08:00
tools/ | test: Fix breaking Phi3 multimodal tests (#3544) | 2025-04-15 08:02:34 +08:00
__init__.py | Update (#2978) | 2025-03-23 16:39:35 +08:00
_common.py | Update (#2978) | 2025-03-23 16:39:35 +08:00
_ipc_utils.py | fix: Fix PP for llama. (#3449) | 2025-04-12 17:20:27 +08:00
_utils.py | feat: Support TLLM_OVERRIDE_LAYER_NUM and TLLM_TRACE_MODEL_FORWARD for debugging (#3417) | 2025-04-10 13:18:30 +08:00
builder.py | chore: remove usernames from comments (#3291) | 2025-04-05 13:44:28 +08:00
disaggregated_params.py | Update TensorRT-LLM (#2936) | 2025-03-18 21:25:19 +08:00
functional.py | Cache sin cos in model instead of global LRU cache. (#3378) | 2025-04-14 11:19:09 +08:00
graph_rewriting.py | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00
logger.py | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00
lora_manager.py | feat: Support PeftCacheManager in Torch (#3186) | 2025-04-04 12:38:08 +08:00
mapping.py | chore: unify pp_layers helpers (#3429) | 2025-04-15 04:49:17 +08:00
module.py | Update (#2978) | 2025-03-23 16:39:35 +08:00
network.py | chore: remove usernames from comments (#3291) | 2025-04-05 13:44:28 +08:00
parameter.py | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00
profiler.py | Update TensorRT-LLM (#2936) | 2025-03-18 21:25:19 +08:00
prompt_adapter_manager.py | Update TensorRT-LLM (#2333) | 2024-10-15 15:28:40 +08:00
python_plugin.py | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00
sampling_params.py | v1.2 (#3082) | 2025-03-26 23:31:29 +08:00
top_model_mixin.py | Update TensorRT-LLM (#2053) | 2024-07-30 21:25:01 +08:00
version.py | chore: bump version to 0.19.0rc0 (#3535) | 2025-04-14 18:11:20 +08:00