Fixing "CUDA out of memory" with max_split_size_mb
Published by 嘻嘻 on 2023-06-26
I recently set up stable diffusion webui locally to run image and video generation, but whenever the image size got large it failed with a CUDA out of memory error and the application had to be restarted, which was very inefficient. So I looked into the cause and the fix.
Cause of the problem
It comes from how PyTorch allocates GPU memory: the caching allocator fragments the memory, so an allocation can overflow and fail even though enough total memory is free.
The error message looks like this:
RuntimeError: CUDA out of memory. Tried to allocate 1024.00 MiB (GPU 0; 8.00 GiB total capacity; 6.13 GiB already allocated; 0 bytes free; 6.73 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
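The numbers in the message show the fragmentation directly: PyTorch has reserved noticeably more than it has actually allocated, yet none of that cached memory forms a contiguous 1 GiB block. A quick sanity check on the figures above (plain arithmetic, no PyTorch required):

```python
# Figures taken from the error message above, all in GiB.
total = 8.00        # GPU 0 total capacity
allocated = 6.13    # memory actively used by tensors
reserved = 6.73     # memory held by PyTorch's caching allocator

cached_unused = reserved - allocated   # cached by PyTorch but not in use
unreserved = total - reserved          # not touched by PyTorch at all

# Roughly 0.6 GiB is cached-but-unused and about 1.27 GiB is unreserved,
# yet the 1 GiB request still fails: no single cached block is large
# enough, which is exactly the fragmentation the message warns about.
print(f"cached but unused: {cached_unused:.2f} GiB")
print(f"unreserved:        {unreserved:.2f} GiB")
```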
Configuring the PyTorch GPU memory allocator
On Windows (these lines typically go in webui-user.bat):
set PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.6,max_split_size_mb:512
set COMMANDLINE_ARGS=--precision full --no-half --lowvram --always-batch-cond-uncond --xformers
On Linux (these lines typically go in webui-user.sh):
export PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.6,max_split_size_mb:512
export COMMANDLINE_ARGS="--precision full --no-half --lowvram --always-batch-cond-uncond --xformers"
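If you prefer not to touch the shell environment, the same allocator option can be set from Python, provided it happens before the first CUDA allocation (a minimal sketch; the variable name and value format are exactly those used above):

```python
import os

# PYTORCH_CUDA_ALLOC_CONF is read when PyTorch initializes its CUDA
# caching allocator, so set it at the very top of the script,
# before `import torch`.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = (
    "garbage_collection_threshold:0.6,max_split_size_mb:512"
)
```

Per the PyTorch memory-management docs, max_split_size_mb:512 stops the allocator from splitting blocks larger than 512 MB, which limits fragmentation at the cost of some allocation flexibility, and garbage_collection_threshold:0.6 makes the allocator start reclaiming cached blocks once usage exceeds roughly 60% of capacity.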