TensorRT-LLM/tests/unittest/_torch
Last commit: 2025-10-16 11:07:48 +08:00
| Name | Last commit message | Commit date |
| --- | --- | --- |
| attention | [TRTLLM-8536][feat] Add the sparse attention framework and one use case--RocketKV support (#8086) | 2025-10-14 08:23:16 -07:00 |
| auto_deploy | [None][feat] AutoDeploy: VLMs with subgraphs + cudagraph/compile (#8203) | 2025-10-13 17:34:09 -07:00 |
| compilation | [TRTLLM-3105][feat] Add Piecewise CUDA Graph Support (#3804) | 2025-05-09 11:04:01 +08:00 |
| debugger | Fix: fix nvbug 5356427 (#5464) | 2025-06-25 22:24:26 +08:00 |
| executor | [TRTLLM-8477][chore] Replace KvCacheConfigCpp with KvCacheConfig inside PyExecutor (#8259) | 2025-10-13 14:55:36 +08:00 |
| misc | [TRTLLM-4501][feat] Add input tensor pre-hook function API for the tuning process. (#6924) | 2025-10-15 21:18:11 +08:00 |
| modeling | [None][chore] update torch_dtype -> dtype in 'transformers' (#8263) | 2025-10-15 17:09:30 +09:00 |
| models/checkpoints/hf | [None][feat] Skip prefetching consolidated safetensors when appropriate (#7013) | 2025-08-25 23:56:21 -04:00 |
| modules | [https://nvbugs/5378031] [feat] W4A8 AWQ MoE supports Per Expert Pre-quant Scale Factor for PyT backend (#7286) | 2025-10-16 11:07:48 +08:00 |
| multi_gpu | [https://nvbugs/5501820][fix] Add requirements for numba-cuda version to WAR mem corruption (#7992) | 2025-10-10 10:18:27 +08:00 |
| multi_gpu_modeling | [https://nvbugs/5541545][fix] Remove test_llama4 (#8031) | 2025-10-08 15:20:15 -07:00 |
| multimodal | [https://nvbugs/5542867][fix] Fix the non-determinism issue in the mm_encoder test (#8033) | 2025-09-29 09:45:16 -07:00 |
| ray_orchestrator | [TRTLLM-7349][feat] Adding new orchestrator type -- ray (#7520) | 2025-10-04 08:12:24 +08:00 |
| sampler | [TRTLLM-8551][feat] add cache_salt in LLM.generate and refactor test_return_logits.py (#8317) | 2025-10-15 02:53:57 -07:00 |
| speculative | [TRTLLM-7412][feat] Turn off spec decode when the rolling average acceptance length drops below threshold. (#7283) | 2025-10-13 15:51:14 -07:00 |
| thop | [OMNIML-2336][feat] w4a8 nvfp4 fp8 exports scale factor properly (#8180) | 2025-10-15 13:41:27 +08:00 |
| helpers.py | [TRTLLM-7330][feat] Eagle3 cuda graph support for the first draft model inference (#7363) | 2025-09-26 11:28:05 +08:00 |
| pattern_watcher.py | [TRTLLM-3105][feat] Add Piecewise CUDA Graph Support (#3804) | 2025-05-09 11:04:01 +08:00 |
| test_connector.py | [None][feat] KV Cache Connector API (#7228) | 2025-08-28 23:09:27 -04:00 |