TensorRT-LLM/cpp/tensorrt_llm
Latest commit: c44cf34373 by Erin Ho
fix: update checks that broke medusa tests when use_py_session=True (#4339)

Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com>
2025-05-15 15:47:28 -07:00
Name                                           | Last commit                                                                       | Date
-----------------------------------------------|-----------------------------------------------------------------------------------|--------------------------
batch_manager                                  | refactor: use x is None instead of x == None. (#4244)                             | 2025-05-15 20:00:04 +08:00
common                                         | feat: support kv cache reuse for MLA (#3571)                                      | 2025-05-15 15:22:21 +08:00
cutlass_extensions/include/cutlass_extensions  | [TRTLLM-3330][feat] Support DeepSeek-R1 W4A8 on Hopper (#4123)                    | 2025-05-14 15:48:07 +08:00
executor                                       | [TRTLLM-5171] chore: Remove GptSession/V1 from TRT workflow (#4092)               | 2025-05-14 23:10:04 +02:00
executor_worker                                | Update TensorRT-LLM (#2792)                                                       | 2025-02-18 21:27:39 +08:00
kernels                                        | fix: update checks that broke medusa tests when use_py_session=True (#4339)       | 2025-05-15 15:47:28 -07:00
layers                                         | Feat: Variable-Beam-Width-Search (VBWS) part4 (#3979)                             | 2025-05-12 22:32:29 +02:00
plugins                                        | fix: Eagle decoding in TRT flow (#4229)                                           | 2025-05-14 16:10:49 +02:00
pybind                                         | Revert "feat: Low Precision Allreduce for PCIe based GPU" (#4340)                 | 2025-05-15 09:52:39 +08:00
runtime                                        | [TRTLLM-5171] chore: Remove GptSession/V1 from TRT workflow (#4092)               | 2025-05-14 23:10:04 +02:00
testing                                        | refactor: Move ModelSpec to core library (#3980)                                  | 2025-05-04 01:39:09 +08:00
thop                                           | feat: [nvbugs/5261055][nvbugs/5170160] non-invasive pipeline parallelism (#4034)  | 2025-05-16 04:16:53 +08:00
CMakeLists.txt                                 | Cherry-pick trtllm-gen from feat/llama4 to main (#4086)                           | 2025-05-08 14:13:01 -07:00