TensorRT-LLM/cpp/tensorrt_llm
Daniel Cámpora 64d5eba9c7
Fix: max_num_sequences calculation with overlap scheduling into release/0.20 (#4889)
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
Signed-off-by: Daniel Campora <961215+dcampora@users.noreply.github.com>
Co-authored-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
2025-06-04 22:33:12 +08:00
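The headline commit above adjusts how max_num_sequences is derived when overlap scheduling is enabled. The snippet below is only a hedged illustration of the underlying idea, not the actual batch_manager code from this commit: the function name, parameters, and formula are assumptions made for the example.

```cpp
// Hypothetical sketch: with overlap scheduling the runtime prepares the next
// micro-batch while the current one is still executing, so sequence-slot
// capacity must cover one extra in-flight micro-batch. Names and formula are
// assumptions, not the TensorRT-LLM implementation.
#include <cstdint>

std::int32_t computeMaxNumSequences(
    std::int32_t maxBatchSize, std::int32_t numMicroBatches, bool overlapSchedulingEnabled)
{
    // One additional micro-batch can be in flight when scheduling overlaps execution.
    std::int32_t const inFlightMicroBatches = numMicroBatches + (overlapSchedulingEnabled ? 1 : 0);
    return maxBatchSize * inFlightMicroBatches;
}
```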
Name | Last commit message | Last commit date
batch_manager | Fix: max_num_sequences calculation with overlap scheduling into release/0.20 (#4889) | 2025-06-04 22:33:12 +08:00
common | [Feat] add chunked-attention kernels on Hopper (for llama4) (#4291) | 2025-05-19 09:57:10 -07:00
cutlass_extensions/include/cutlass_extensions | [TRTLLM-3330][feat] Support DeepSeek-R1 W4A8 on Hopper (#4123) | 2025-05-14 15:48:07 +08:00
executor | feat: NIXL interface integration (#3934) | 2025-05-19 18:18:22 +08:00
executor_worker | Update TensorRT-LLM (#2792) | 2025-02-18 21:27:39 +08:00
kernels | fix: [nvbugs/5298600] fix illegal memory access on mrope_position_deltas (#4830) | 2025-06-03 14:56:50 +08:00
layers | Feat: Variable-Beam-Width-Search (VBWS) part4 (#3979) | 2025-05-12 22:32:29 +02:00
plugins | [https://nvbugs/5295389][fix] fix moe fp4 on sm120 (#4624) | 2025-05-29 09:50:47 -07:00
pybind | Fix: max_num_sequences calculation with overlap scheduling into release/0.20 (#4889) | 2025-06-04 22:33:12 +08:00
runtime | fix: [nvbugs/5287097] Align PP layer distribution between pytorch and TRT flow. (#4399) | 2025-05-19 14:25:36 -07:00
testing | refactor: Move ModelSpec to core library (#3980) | 2025-05-04 01:39:09 +08:00
thop | feat: Low Precision Allreduce for PCIe based GPU (#4344) | 2025-05-20 06:53:46 +08:00
CMakeLists.txt | feat: NIXL interface integration (#3934) | 2025-05-19 18:18:22 +08:00