TensorRT-LLM/tensorrt_llm

Latest commit: 58d22a72f1 by Ziyi Xiong — [TRTLLM-6352][feat] Migrate EAGLE3 and draft/target speculation to Drafter (#6007)
Signed-off-by: ziyixiong-nv <fxiong@nvidia.com>
2025-07-17 21:15:01 +08:00
| Name | Last commit | Date |
|------|-------------|------|
| _tensorrt_engine/ | [TRTLLM-5208][BREAKING CHANGE] chore: make pytorch LLM the default (#5312) | 2025-06-20 03:01:10 +08:00 |
| _torch/ | [TRTLLM-6352][feat] Migrate EAGLE3 and draft/target speculation to Drafter (#6007) | 2025-07-17 21:15:01 +08:00 |
| auto_parallel/ | [TRTLLM-4971]: Use safe deserialization in ParallelConfig (#4630) | 2025-06-27 09:58:41 +08:00 |
| bench/ | [TRTLLM-5530][BREAKING CHANGE] refactor: unify KvCacheConfig in LLM class for pytorch backend (#5752) | 2025-07-16 16:42:59 +08:00 |
| commands/ | chore:[BREAKING CHANGE] use cacheTransceiverConfig as knobs for disagg service (#5234) | 2025-07-17 17:42:07 +08:00 |
| evaluate/ | test: Add LLGuidance test and refine guided decoding (#5348) | 2025-06-25 14:12:56 +08:00 |
| executor/ | chore:[BREAKING CHANGE] use cacheTransceiverConfig as knobs for disagg service (#5234) | 2025-07-17 17:42:07 +08:00 |
| inputs/ | feat: TRTLLM-5574 Add phi-4-multimodal pytorch-backend support (#5644) | 2025-07-17 06:30:58 +08:00 |
| layers/ | [feat] Support torch compile for attention dp (#5086) | 2025-07-01 13:48:52 -04:00 |
| llmapi/ | chore:[BREAKING CHANGE] use cacheTransceiverConfig as knobs for disagg service (#5234) | 2025-07-17 17:42:07 +08:00 |
| models/ | [nvbug/5387226] chore: add propogation for trust_remote_code to AutoConfig (#6001) | 2025-07-16 16:05:38 +08:00 |
| plugin/ | feat: Add support for fp8 rowwise quantization (#4876) | 2025-06-14 06:37:48 -07:00 |
| quantization/ | [feat] Support torch compile for attention dp (#5086) | 2025-07-01 13:48:52 -04:00 |
| runtime/ | [nvbugs/5385972][nvbugs/5387423][Fix] Minor fix for llava_next/llava_onevision (#5998) | 2025-07-15 10:01:35 -04:00 |
| scaffolding/ | feat(scaffolding): add streaming scaffolding_llm.generate_async support (#5345) | 2025-07-08 15:08:40 +09:00 |
| serve/ | feat: TRTLLM-5574 Add phi-4-multimodal pytorch-backend support (#5644) | 2025-07-17 06:30:58 +08:00 |
| tools/ | [nvbugs/5385972][nvbugs/5387423][Fix] Minor fix for llava_next/llava_onevision (#5998) | 2025-07-15 10:01:35 -04:00 |
| __init__.py | feat: TRTLLM-5941 Upgrade xgrammar to 0.1.18 (#5364) | 2025-07-01 20:12:55 +08:00 |
| _common.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| _dlpack_utils.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| _ipc_utils.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| _mnnvl_utils.py | [NvBug 5378370] fix: Fix alltoall for llama4 (apply_router_weight_on_input=True) (#5902) | 2025-07-12 15:50:31 +09:00 |
| _utils.py | fix: adjust window sizes of VSWA at torch backend (#5880) | 2025-07-15 17:41:54 +08:00 |
| builder.py | fix: Update trtllm args issues with extra nested config (#5996) | 2025-07-16 12:41:45 -04:00 |
| disaggregated_params.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| functional.py | feat: TRTLLM-5574 Add phi-4-multimodal pytorch-backend support (#5644) | 2025-07-17 06:30:58 +08:00 |
| graph_rewriting.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| logger.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| lora_manager.py | [TRTLLM-5921][feat] Prevent serialization of entire LoRA adapters in each request (#5080) | 2025-06-26 08:15:06 +03:00 |
| mapping.py | fix: Mapping rank boundary check bug (#4935) | 2025-06-27 07:27:59 +08:00 |
| math_utils.py | perf: Optimize swizzle_sf, unswizzle_sf, reswizzle_sf (#5318) | 2025-06-26 14:03:56 +08:00 |
| module.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| network.py | chore: remove usernames from comments (#3291) | 2025-04-05 13:44:28 +08:00 |
| parameter.py | fix:https://nvbugs/5234033 enable starcoder trt-flow with transforme… (#3909) | 2025-05-15 11:16:45 +08:00 |
| profiler.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| prompt_adapter_manager.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| python_plugin.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| sampling_params.py | [NvBug 5370718, 5371538] fix: Fix incremental detokenization (#5825) | 2025-07-10 16:30:00 +08:00 |
| serialization.py | [TRTLLM-4971]: Use safe deserialization in ParallelConfig (#4630) | 2025-06-27 09:58:41 +08:00 |
| top_model_mixin.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| version.py | chore: Bump version to 1.0.0rc4 (#6086) | 2025-07-16 13:02:23 +08:00 |