Mirror of https://github.com/NVIDIA/TensorRT-LLM.git, synced 2026-01-14 06:27:45 +08:00
Prefetching safetensors files so that they are stored in the system file cache. This significantly speeds up model weight loading for the very first run after entering the docker container. This is beneficial because model weight loading is done layer by layer, which means reading the safetensors files chunk by chunk; assuming these files are stored on a network drive, such small scattered reads cannot utilize the network bandwidth very well. Loading the whole files in bulk instead achieves much higher bandwidth utilization. When running with world_size > 1, all ranks collaboratively prefetch these files. In theory, we should add heuristics to decide whether to prefetch the files or not, but that is beyond the scope of this commit. For example, when CPU memory is small, prefetching may cause file cache thrashing, resulting in slower weight loading. Signed-off-by: Po-Han Huang <pohanh@nvidia.com>
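The idea can be illustrated with a minimal sketch (not the implementation in this directory; the function name, arguments, and chunk size below are assumptions): each rank bulk-reads a disjoint subset of the `*.safetensors` files so the OS page cache is already warm before layer-by-layer weight loading begins.

```python
import glob
import os

# Hypothetical sketch of collaborative prefetching. `rank` and `world_size`
# would come from the distributed runtime; names here are illustrative only.
def prefetch_safetensors(checkpoint_dir: str, rank: int, world_size: int,
                         chunk_size: int = 64 * 1024 * 1024) -> None:
    files = sorted(glob.glob(os.path.join(checkpoint_dir, "*.safetensors")))
    # Each rank reads a disjoint subset of the files so the work is shared.
    for path in files[rank::world_size]:
        with open(path, "rb") as f:
            # Large sequential reads utilize network-drive bandwidth far better
            # than the small scattered reads done during per-layer loading; the
            # data itself is discarded, only the page cache warm-up matters.
            while f.read(chunk_size):
                pass
```

Striding the sorted file list by rank splits the work roughly evenly without requiring any coordination between ranks.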
| File |
|---|
| __init__.py |
| _util.py |
| config.py |
| cuda_graph_runner.py |
| decoder.py |
| guided_decoder.py |
| kv_cache_transceiver.py |
| layerwise_nvtx_marker.py |
| llm_request.py |
| model_engine.py |
| py_executor_creator.py |
| py_executor.py |
| resource_manager.py |
| scheduler.py |
| seq_slot_manager.py |