TensorRT-LLM / tensorrt_llm
Latest commit 2189a2f3ff by Jin Li, 2025-09-05 10:56:21 +08:00:
[https://nvbugs/5483615][fix] Remove unnecessary assertion to let mai… (#7441)
Signed-off-by: Jin Li <59594262+liji-nv@users.noreply.github.com>
| Name | Last commit | Commit date |
| --- | --- | --- |
| _tensorrt_engine | [TRTLLM-5208][BREAKING CHANGE] chore: make pytorch LLM the default (#5312) | 2025-06-20 03:01:10 +08:00 |
| _torch | [https://nvbugs/5483615][fix] Remove unnecessary assertion to let mai… (#7441) | 2025-09-05 10:56:21 +08:00 |
| auto_parallel | [None][fix] Migrate to new cuda binding package name (#6700) | 2025-08-07 16:29:55 -04:00 |
| bench | [None][fix] Fix data type of KV Cache percentage in bench. (#7230) | 2025-08-26 12:28:09 -04:00 |
| commands | [None][feat] Add logging for OAI disagg server (#7232) | 2025-08-26 21:02:03 -07:00 |
| evaluate | [TRTLLM-6771][feat] Support MMMU for multimodal models (#6828) | 2025-08-21 08:54:12 +08:00 |
| executor | [None][chore] Remove two unused parameters in create_py_executor (#7458) | 2025-09-04 07:31:31 +08:00 |
| inputs | [TRTLLM-7410][feat] Support hashing and KV cache reuse for videos (#7360) | 2025-09-04 14:39:23 -04:00 |
| layers | [TRTLLM-5863][feat] Support MoE INT8 Weight-Only-Quantization in PyTorch Workflow (#6629) | 2025-08-15 17:15:49 -04:00 |
| llmapi | [None][feat] Support NVFP4 KV Cache (#6244) | 2025-09-01 09:24:52 +08:00 |
| metrics | [None][feat] Core Metrics Implementation (#5785) | 2025-08-09 02:48:53 -04:00 |
| models | [None][fix] Refactoring to avoid circular import when importing torch models (#6720) | 2025-08-11 18:00:42 -04:00 |
| plugin | feat: Add support for fp8 rowwise quantization (#4876) | 2025-06-14 06:37:48 -07:00 |
| quantization | [OMNIML-2336][feat] Add NVFP4 x FP8 (#6809) | 2025-09-04 09:03:38 -07:00 |
| runtime | [None][fix] Migrate to new cuda binding package name (#6700) | 2025-08-07 16:29:55 -04:00 |
| scaffolding | [#3325][feat] Add MCTS and TOT tree-based inference controllers to Scaffolding (#7490) | 2025-09-04 19:46:49 -07:00 |
| serve | [https://nvbugs/5369366] [fix] Report failing requests (#7060) | 2025-09-04 12:56:23 -07:00 |
| tools | [None] [feat] nsys profile output kernel classifier (#7020) | 2025-08-23 00:57:37 -04:00 |
| __init__.py | [TRTLLM-7326][feat] Add standalone multimodal encoder (#6743) | 2025-08-19 21:42:50 -07:00 |
| _common.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| _dlpack_utils.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| _ipc_utils.py | [None][fix] Migrate to new cuda binding package name (#6700) | 2025-08-07 16:29:55 -04:00 |
| _mnnvl_utils.py | [https://nvbugs/5477730][fix] Fix the alltoall case when tp_size larger than ep_size (#7331) | 2025-09-04 08:10:03 -04:00 |
| _utils.py | [https://nvbugs/5485102][fix] Correctly set stride for piecewise outp… (#7442) | 2025-09-04 10:48:15 +08:00 |
| builder.py | [None][fix] Refactoring to avoid circular import when importing torch models (#6720) | 2025-08-11 18:00:42 -04:00 |
| disaggregated_params.py | [None][fix] Refactoring to avoid circular import when importing torch models (#6720) | 2025-08-11 18:00:42 -04:00 |
| functional.py | [None] [feat] Add Tencent HunYuanMoEV1 model support (#5521) | 2025-08-15 06:56:44 +08:00 |
| graph_rewriting.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| logger.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| lora_helper.py | [TRTLLM-6825][fix] Update lora for phi4-mm (#6817) | 2025-08-21 22:00:04 -04:00 |
| lora_manager.py | [https://nvbugs/5467232][fix] Fix load_torch_hf_lora to override lora_config.trtllm_modules_to_hf_modules with default only when it has no value (#7132) | 2025-08-24 15:00:24 +03:00 |
| mapping.py | [TRTLLM-5966][feat] Helix: extend mapping to support different CP types (#6816) | 2025-08-14 09:00:02 -07:00 |
| math_utils.py | perf: Optimize swizzle_sf, unswizzle_sf, reswizzle_sf (#5318) | 2025-06-26 14:03:56 +08:00 |
| module.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| network.py | chore: remove usernames from comments (#3291) | 2025-04-05 13:44:28 +08:00 |
| parameter.py | fix:https://nvbugs/5234033 enable starcoder trt-flow with transforme… (#3909) | 2025-05-15 11:16:45 +08:00 |
| profiler.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| prompt_adapter_manager.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| python_plugin.py | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| sampling_params.py | [None] [feat] Add model gpt-oss (#6645) | 2025-08-07 03:04:18 -04:00 |
| scheduling_params.py | [None][feat] Add support of scheduling attention dp request (#6246) | 2025-08-01 20:38:01 -04:00 |
| serialization.py | [None] [feat] Add model gpt-oss (#6645) | 2025-08-07 03:04:18 -04:00 |
| top_model_mixin.py | [None][fix] Refactoring to avoid circular import when importing torch models (#6720) | 2025-08-11 18:00:42 -04:00 |
| version.py | [None][chore] Bump version to 1.1.0rc4 (#7525) | 2025-09-04 16:30:47 +08:00 |
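For orientation, the user-facing entry points of this package sit in `llmapi` and `sampling_params.py`, which are re-exported from the top-level `__init__.py`. A minimal usage sketch, assuming the top-level `LLM` and `SamplingParams` exports and an illustrative Hugging Face model name (not taken from this listing):

```python
from tensorrt_llm import LLM, SamplingParams

# Model name is illustrative; any supported HF checkpoint or local path can be used.
llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

# Sampling controls are defined in tensorrt_llm/sampling_params.py.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

# generate() returns one result per prompt; each holds the generated completions.
outputs = llm.generate(["Hello, my name is"], sampling_params)
for output in outputs:
    print(output.outputs[0].text)
```

With the PyTorch backend now the default (see the `_tensorrt_engine` entry above), this path runs through the `_torch` and `executor` packages rather than a prebuilt TensorRT engine.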