TensorRT-LLM/tensorrt_llm/_torch
Latest commit: e6b482ef47  fix: change the seq_lens sync copy to an async one (#3786)
Author: Fanrong Li <23290157+lfr-0531@users.noreply.github.com>
Date:   2025-04-29 23:56:49 +08:00
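
The headline commit swaps a blocking host-to-device copy of the per-request sequence lengths for an asynchronous one. Below is a minimal PyTorch sketch of that pattern, assuming a pinned host staging buffer; the tensor names are illustrative stand-ins, not the actual fields in attention_backend or pyexecutor.

```python
import torch

# Hypothetical buffers standing in for the seq_lens metadata the commit
# touches. Requires a CUDA device.
batch = 64

# A pinned (page-locked) host buffer is what makes non_blocking=True a
# genuinely asynchronous H2D transfer; with ordinary pageable memory the
# flag has no effect and the copy can still stall the CPU.
seq_lens_host = torch.empty(batch, dtype=torch.int32, pin_memory=True)
seq_lens_cuda = torch.empty(batch, dtype=torch.int32, device="cuda")

seq_lens_host.fill_(128)  # per-request sequence lengths for this step

# Before: seq_lens_cuda.copy_(seq_lens_host) blocks the CPU until the
# transfer finishes. After: the copy is enqueued on the current CUDA
# stream and control returns immediately.
seq_lens_cuda.copy_(seq_lens_host, non_blocking=True)
```

Dropping the implicit synchronization lets the CPU keep launching work while the transfer is in flight; stream ordering still guarantees that any kernel enqueued afterward on the same stream sees the updated seq_lens values.
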
Directories:
  attention_backend       fix: change the seq_lens sync copy to an async one (#3786)   2025-04-29 23:56:49 +08:00
  auto_deploy             fix: [AutoDeploy] update hf loading for e_score_correction_bias (#3847)   2025-04-26 02:03:47 +08:00
  compilation             Unify two versions of AllReduce custom op (#3032)   2025-04-22 21:58:42 +08:00
  custom_ops              chore: bump version to 0.19.0 (#3598) (#3841)   2025-04-29 16:57:22 +08:00
  distributed             chore: bump version to 0.19.0 (#3598) (#3841)   2025-04-29 16:57:22 +08:00
  models                  Support NemotronH FP8 Quantization   2025-04-29 18:51:43 +03:00
  modules                 fix: get head_dim from model’s config. (#3916)   2025-04-29 23:04:29 +08:00
  peft                    add passing E2E LoRA flow (#3788)   2025-04-23 18:38:06 +03:00
  pyexecutor              fix: change the seq_lens sync copy to an async one (#3786)   2025-04-29 23:56:49 +08:00
  speculative             fix: change the seq_lens sync copy to an async one (#3786)   2025-04-29 23:56:49 +08:00

Files:
  __init__.py             Update TensorRT-LLM (#2755)   2025-02-11 03:01:00 +00:00
  autotuner.py            feat: Apply the new torch-flow compatible AutoTuner to both Fused MoE and NVFP4 Linear operators. (#3151)   2025-04-08 14:28:36 +08:00
  llm.py                  test: [TRTLLM-4334] Create 1.0 criteria scope from API stability references (#3069)   2025-03-26 18:14:35 +08:00
  metadata.py             feat: no-cache attention in PyTorch workflow (#3085)   2025-04-05 01:54:32 +08:00
  model_config.py         Fix fp8 kvcache (#3877)   2025-04-29 10:31:10 +08:00
  pipeline_interface.py   chore: bump version to 0.19.0 (#3598) (#3841)   2025-04-29 16:57:22 +08:00
  utils.py                refactor: (part1) Add contraints doc for fusedMoe module. (#3882)   2025-04-29 22:23:02 +08:00