TensorRT-LLM/tensorrt_llm
Latest commit eaf8bec88b by xiweny: fix: Disaggregate serving with attention DP (#4993)
Signed-off-by: Xiwen Yu <13230610+VALLIS-NERIA@users.noreply.github.com>
Committed 2025-07-08 16:15:03 +08:00
Name	Last commit	Last updated
_tensorrt_engine	[TRTLLM-5208][BREAKING CHANGE] chore: make pytorch LLM the default (#5312)	2025-06-20 03:01:10 +08:00
_torch fix: Disaggregate serving with attention DP (#4993) 2025-07-08 16:15:03 +08:00
auto_parallel [TRTLLM-4971]: Use safe deserialization in ParallelConfig (#4630) 2025-06-27 09:58:41 +08:00
bench Revert "chore: [Breaking Change] Rename cuda_graph_config padding_enabled fie…" (#5818) 2025-07-08 13:15:30 +09:00
commands [TRTLLM-5208][BREAKING CHANGE] chore: make pytorch LLM the default (#5312) 2025-06-20 03:01:10 +08:00
evaluate test: Add LLGuidance test and refine guided decoding (#5348) 2025-06-25 14:12:56 +08:00
executor feat: add MultimodalParams & putting all multimodal params into it and refactor HyperCLOVAX & Qwen2/2.5-VL (#5522) 2025-07-07 18:03:12 -07:00
inputs feat: add MultimodalParams & putting all multimodal params into it and refactor HyperCLOVAX & Qwen2/2.5-VL (#5522) 2025-07-07 18:03:12 -07:00
layers [feat] Support torch compile for attention dp (#5086) 2025-07-01 13:48:52 -04:00
llmapi Revert "chore: [Breaking Change] Rename cuda_graph_config padding_enabled fie…" (#5818) 2025-07-08 13:15:30 +09:00
models [TRTLLM-6291] feat: Add user-provided speculative decoding support (#5204) 2025-07-07 16:30:43 +02:00
plugin feat: Add support for fp8 rowwise quantization (#4876) 2025-06-14 06:37:48 -07:00
quantization [feat] Support torch compile for attention dp (#5086) 2025-07-01 13:48:52 -04:00
runtime [nvbug/5354825] Fix nougat test image url (#5496) 2025-07-01 20:12:55 +08:00
scaffolding feat(scaffolding): add streaming scaffolding_llm.generate_async support (#5345) 2025-07-08 15:08:40 +09:00
serve chore: log stack trace on error in openai server (#5749) 2025-07-07 14:54:36 +08:00
tools chore: Mass integration of release/0.20 (#4898) 2025-06-08 23:26:26 +08:00
__init__.py feat: TRTLLM-5941 Upgrade xgrammar to 0.1.18 (#5364) 2025-07-01 20:12:55 +08:00
_common.py linting(python): Enable ruff on more files (wave 1/N) (#5140) 2025-06-14 19:19:34 +08:00
_dlpack_utils.py linting(python): Enable ruff on more files (wave 1/N) (#5140) 2025-06-14 19:19:34 +08:00
_ipc_utils.py linting(python): Enable ruff on more files (wave 1/N) (#5140) 2025-06-14 19:19:34 +08:00
_mnnvl_utils.py [TRTLLM-5331] perf: Replace allgaher with AllToAllPrepare (#5570) 2025-06-30 13:06:09 +08:00
_utils.py [feat] Add llm args to tune python gc threshold (#5141) 2025-06-16 17:45:22 +08:00
builder.py fix: build_config in TorchLlmArgs and avoid arbitrary args (#4972) 2025-06-15 17:51:56 -07:00
disaggregated_params.py linting(python): Enable ruff on more files (wave 1/N) (#5140) 2025-06-14 19:19:34 +08:00
functional.py Feat/ds r1 min latency opt round3, add router gemm, fused a gemm, PDL (#4560) 2025-06-14 17:36:22 +08:00
graph_rewriting.py linting(python): Enable ruff on more files (wave 1/N) (#5140) 2025-06-14 19:19:34 +08:00
logger.py linting(python): Enable ruff on more files (wave 1/N) (#5140) 2025-06-14 19:19:34 +08:00
lora_manager.py [TRTLLM-5921][feat] Prevent serialization of entire LoRA adapters in each request (#5080) 2025-06-26 08:15:06 +03:00
mapping.py fix: Mapping rank boundary check bug (#4935) 2025-06-27 07:27:59 +08:00
math_utils.py perf: Optimize swizzle_sf, unswizzle_sf, reswizzle_sf (#5318) 2025-06-26 14:03:56 +08:00
module.py linting(python): Enable ruff on more files (wave 1/N) (#5140) 2025-06-14 19:19:34 +08:00
network.py chore: remove usernames from comments (#3291) 2025-04-05 13:44:28 +08:00
parameter.py fix:https://nvbugs/5234033 enable starcoder trt-flow with transforme… (#3909) 2025-05-15 11:16:45 +08:00
profiler.py linting(python): Enable ruff on more files (wave 1/N) (#5140) 2025-06-14 19:19:34 +08:00
prompt_adapter_manager.py linting(python): Enable ruff on more files (wave 1/N) (#5140) 2025-06-14 19:19:34 +08:00
python_plugin.py linting(python): Enable ruff on more files (wave 1/N) (#5140) 2025-06-14 19:19:34 +08:00
sampling_params.py test: Add LLGuidance test and refine guided decoding (#5348) 2025-06-25 14:12:56 +08:00
serialization.py [TRTLLM-4971]: Use safe deserialization in ParallelConfig (#4630) 2025-06-27 09:58:41 +08:00
top_model_mixin.py linting(python): Enable ruff on more files (wave 1/N) (#5140) 2025-06-14 19:19:34 +08:00
version.py chore: bump version to 1.0.0rc3 (#5819) 2025-07-08 16:04:40 +09:00