TensorRT-LLM/tensorrt_llm/_torch/modules
QI JUN d167cbd5bb
refactor: remove ParallelConfig in tensorrt_llm._torch.distributed module (#3370)
* remove tensorrt_llm._torch.distributed.ParallelConfig
* fix ci
* fix ci
* clean
* fix embedding test
* fix
* fix comments
* polish
* fix ci
* rebase

---------

Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
Co-authored-by: hlu1 <14827759+hlu1@users.noreply.github.com>
2025-04-11 15:34:20 -07:00
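For context on the commit above: removing ParallelConfig means module constructors no longer take an intermediate config wrapper. The sketch below shows one plausible shape of that change, assuming the tensor-parallel settings move into per-module arguments; the `Mapping`, `TensorParallelMode`, and `Linear` signatures here mirror names from the repo but are illustrative assumptions, not TensorRT-LLM's actual API.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class TensorParallelMode(Enum):
    """Stand-in for the split axes a tensor-parallel linear can use (assumption)."""
    COLUMN = "column"
    ROW = "row"


@dataclass
class Mapping:
    """Minimal stand-in for a rank/size mapping; not the real tensorrt_llm class."""
    tp_size: int = 1
    tp_rank: int = 0


class Linear:
    """Illustrative TP-aware linear: takes a Mapping plus a mode directly,
    instead of an intermediate ParallelConfig wrapper."""

    def __init__(self,
                 in_features: int,
                 out_features: int,
                 mapping: Optional[Mapping] = None,
                 tensor_parallel_mode: Optional[TensorParallelMode] = None):
        self.mapping = mapping or Mapping()
        tp = self.mapping.tp_size
        if tensor_parallel_mode is TensorParallelMode.COLUMN:
            # Column parallel: shard the output dimension across TP ranks.
            assert out_features % tp == 0
            self.local_in_features = in_features
            self.local_out_features = out_features // tp
        elif tensor_parallel_mode is TensorParallelMode.ROW:
            # Row parallel: shard the input dimension across TP ranks.
            assert in_features % tp == 0
            self.local_in_features = in_features // tp
            self.local_out_features = out_features
        else:
            # No tensor parallelism: keep the full shapes.
            self.local_in_features = in_features
            self.local_out_features = out_features


if __name__ == "__main__":
    layer = Linear(4096, 11008,
                   mapping=Mapping(tp_size=2, tp_rank=0),
                   tensor_parallel_mode=TensorParallelMode.COLUMN)
    print(layer.local_in_features, layer.local_out_features)  # -> 4096 5504
```

Passing the mapping and mode directly keeps each module's parallel behavior visible at its call site, which is the usual motivation for dropping a shared config object in a refactor like this one.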
__init__.py Update TensorRT-LLM (#2755) 2025-02-11 03:01:00 +00:00
attention.py refactor: remove ParallelConfig in tensorrt_llm._torch.distributed module (#3370) 2025-04-11 15:34:20 -07:00
decoder_layer.py Update TensorRT-LLM (#2755) 2025-02-11 03:01:00 +00:00
embedding.py refactor: remove ParallelConfig in tensorrt_llm._torch.distributed module (#3370) 2025-04-11 15:34:20 -07:00
fused_moe.py refactor: remove ParallelConfig in tensorrt_llm._torch.distributed module (#3370) 2025-04-11 15:34:20 -07:00
gated_mlp.py refactor: remove ParallelConfig in tensorrt_llm._torch.distributed module (#3370) 2025-04-11 15:34:20 -07:00
linear.py refactor: remove ParallelConfig in tensorrt_llm._torch.distributed module (#3370) 2025-04-11 15:34:20 -07:00
logits_procesor.py Update (#2978) 2025-03-23 16:39:35 +08:00
mamba.py refactor: remove ParallelConfig in tensorrt_llm._torch.distributed module (#3370) 2025-04-11 15:34:20 -07:00
mlp.py refactor: remove ParallelConfig in tensorrt_llm._torch.distributed module (#3370) 2025-04-11 15:34:20 -07:00
rms_norm.py Add Llama 4 (#3302) 2025-04-09 03:35:21 +08:00
rotary_embedding.py feat: support llama4 nope layers; support FP8 checkpoint loading; (#3382) 2025-04-10 10:16:42 -07:00