| Name | Last commit | Last commit date |
| --- | --- | --- |
| `_tensorrt_engine` | [TRTLLM-5208][BREAKING CHANGE] chore: make pytorch LLM the default (#5312) | 2025-06-20 03:01:10 +08:00 |
| `_torch` | [https://nvbugs/5517023][fix] Pass allreduce strategy and force NCCL on pre-Blackwell arch (#7768) | 2025-09-22 14:28:38 +08:00 |
| `auto_parallel` | [None][fix] Migrate to new cuda binding package name (#6700) | 2025-08-07 16:29:55 -04:00 |
| `bench` | [None][chore] Fix error when running trtllm-bench without cuda graph. (#7725) | 2025-09-15 20:30:23 -07:00 |
| `commands` | [TRTLLM-6577][feat] Support nano_v2_vlm in pytorch backend (#7207) | 2025-09-18 16:26:20 +08:00 |
| `evaluate` | [TRTLLM-6771][feat] Support MMMU for multimodal models (#6828) | 2025-08-21 08:54:12 +08:00 |
| `executor` | [TRTLLM-8188][chore] refactor GenerationExecutorWorker with WorkerBase for better code reusing (#7840) | 2025-09-20 06:24:22 -07:00 |
| `inputs` | [TRTLLM-6903][feat] Support chunked prefill for multimodal models (#6843) | 2025-09-14 20:10:10 -07:00 |
| `layers` | [TRTLLM-5863][feat] Support MoE INT8 Weight-Only-Quantization in PyTorch Workflow (#6629) | 2025-08-15 17:15:49 -04:00 |
| `llmapi` | [None][doc] Enhance api reference doc by labeling stable APIs (#7751) | 2025-09-22 14:28:38 +08:00 |
| `metrics` | [None][feat] Core Metrics Implementation (#5785) | 2025-08-09 02:48:53 -04:00 |
| `models` | [https://nvbugs/5496960][fix] Fix Gemma model forward. (#7509) | 2025-09-22 14:28:38 +08:00 |
| `plugin` | feat: Add support for fp8 rowwise quantization (#4876) | 2025-06-14 06:37:48 -07:00 |
| `quantization` | [OMNIML-2336][feat] Add NVFP4 x FP8 (#6809) | 2025-09-04 09:03:38 -07:00 |
| `runtime` | [None][fix] Migrate to new cuda binding package name (#6700) | 2025-08-07 16:29:55 -04:00 |
| `scaffolding` | [https://nvbugs/5517260][fix] move scaffolding contrib module's import to subdirectory (#7758) | 2025-09-17 11:36:33 +08:00 |
| `serve` | [None] [chore] cherry pick changes on slurm scripts from release/1.1.0rc2 (#7750) | 2025-09-16 16:07:13 +08:00 |
| `tools` | [None] [feat] nsys profile output kernel classifier (#7020) | 2025-08-23 00:57:37 -04:00 |
| `__init__.py` | [TRTLLM-7326][feat] Add standalone multimodal encoder (#6743) | 2025-08-19 21:42:50 -07:00 |
| `_common.py` | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| `_dlpack_utils.py` | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| `_ipc_utils.py` | [None][fix] Migrate to new cuda binding package name (#6700) | 2025-08-07 16:29:55 -04:00 |
| `_mnnvl_utils.py` | [https://nvbugs/5477730][fix] Fix the alltoall case when tp_size larger than ep_size (#7331) | 2025-09-04 08:10:03 -04:00 |
| `_utils.py` | [TRTLLM-4629] [feat] Add support of CUDA13 and sm103 devices (#7568) | 2025-09-16 09:56:18 +08:00 |
| `builder.py` | [TRTLLM-5930][doc] 1.0 Documentation. (#6696) | 2025-09-09 12:16:03 +08:00 |
| `disaggregated_params.py` | [None][fix] Refactoring to avoid circular import when importing torch models (#6720) | 2025-08-11 18:00:42 -04:00 |
| `functional.py` | [None] [feat] Add Tencent HunYuanMoEV1 model support (#5521) | 2025-08-15 06:56:44 +08:00 |
| `graph_rewriting.py` | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| `logger.py` | [None][chore] Mass integration of release/1.0 - 3rd (#7519) | 2025-09-08 14:03:04 +08:00 |
| `lora_helper.py` | [TRTLLM-6825][fix] Update lora for phi4-mm (#6817) | 2025-08-21 22:00:04 -04:00 |
| `lora_manager.py` | [https://nvbugs/5467232][fix] Fix load_torch_hf_lora to override lora_config.trtllm_modules_to_hf_modules with default only when it has no value (#7132) | 2025-08-24 15:00:24 +03:00 |
| `mapping.py` | [TRTLLM-6741] [feat] enable LM tp for MTP, under attention dp case (cherry-pick #7128) (#7571) | 2025-09-17 09:41:32 +08:00 |
| `math_utils.py` | perf: Optimize swizzle_sf, unswizzle_sf, reswizzle_sf (#5318) | 2025-06-26 14:03:56 +08:00 |
| `module.py` | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| `network.py` | chore: remove usernames from comments (#3291) | 2025-04-05 13:44:28 +08:00 |
| `parameter.py` | fix:https://nvbugs/5234033 enable starcoder trt-flow with transforme… (#3909) | 2025-05-15 11:16:45 +08:00 |
| `profiler.py` | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| `prompt_adapter_manager.py` | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| `python_plugin.py` | linting(python): Enable ruff on more files (wave 1/N) (#5140) | 2025-06-14 19:19:34 +08:00 |
| `sampling_params.py` | [None] [feat] Add model gpt-oss (#6645) | 2025-08-07 03:04:18 -04:00 |
| `scheduling_params.py` | [None][feat] Add support of scheduling attention dp request (#6246) | 2025-08-01 20:38:01 -04:00 |
| `serialization.py` | [None] [feat] Add model gpt-oss (#6645) | 2025-08-07 03:04:18 -04:00 |
| `top_model_mixin.py` | [None][fix] Refactoring to avoid circular import when importing torch models (#6720) | 2025-08-11 18:00:42 -04:00 |
| `version.py` | [None][chore] Version bump for 1.1.0rc6 (#7824) | 2025-09-18 11:13:56 +08:00 |
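For orientation, these entries form the top-level `tensorrt_llm` Python package: `llmapi` and `sampling_params.py` back the high-level LLM API, which per the breaking change in #5312 now defaults to the PyTorch backend. Below is a minimal usage sketch, assuming the documented `tensorrt_llm.LLM` / `SamplingParams` surface; the model id is chosen only for illustration.

```python
from tensorrt_llm import LLM, SamplingParams

# Instantiate the high-level LLM API; with the PyTorch backend as the
# default (#5312), no explicit TensorRT engine build step is needed here.
llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")  # illustrative model id

# SamplingParams (see sampling_params.py above) controls decoding behavior.
params = SamplingParams(max_tokens=32, temperature=0.8)

# generate() takes a batch of prompts and returns one result per prompt.
for output in llm.generate(["Hello, my name is"], params):
    print(output.outputs[0].text)
```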