I hadn't touched this thing since reinstalling the system. Today, on a whim, I tried to run it and found it wouldn't start. Since all the drive letters had changed, I had to set up the environment from scratch, and even after reinstalling everything it still threw errors.
1. CUDA and PyTorch
UserWarning: CUDA initialization: The NVIDIA driver on your system is too old (found version 10020). Please update your GPU driver by downloading and installing a new version from the URL: http://www.nvidia.com/Download/index.aspx Alternatively, go to: https://pytorch.org to install a PyTorch version that has been compiled with your version of the CUDA driver. (Triggered internally at ..\c10\cuda\CUDAFunctions.cpp:109.)
  return torch._C._cuda_getDeviceCount() > 0
Traceback (most recent call last):
  File "<string>", line 1, in <module>
AssertionError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check
Fix:
1) Install CUDA.
https://developer.nvidia.com/cuda-downloads?target_os=Windows&target_arch=x86_64&target_version=11&target_type=exe_network
The installer version shown at the bottom of the page is:
Installation Instructions:
Double click cuda_12.3.1_windows_network.exe
Follow on-screen prompts
cuda_12.3.1
2) Reinstall PyTorch
https://pytorch.org
Pick the matching version in the selector:
Then run the pip command it gives you:
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
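Once the install finishes, it's worth a quick sanity check before relaunching the WebUI. This is just my own one-liner, not something the WebUI requires; run it with the same Python/venv the WebUI uses:
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
If the reinstall worked, you should see a +cu121 build, CUDA 12.1, and True at the end. A False here means Torch still can't talk to the driver and the original error will come back.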
2. Git error:
venv "e:\Pycharm_Projects\stable-diffusion-webui\venv2\Scripts\Python.exe" Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)] Commit hash: <none> Traceback (most recent call last): File "E:\Pycharm_Projects\stable-diffusion-webui\launch.py", line 355, in <module> prepare_environment() File "E:\Pycharm_Projects\stable-diffusion-webui\launch.py", line 289, in prepare_environment git_clone(taming_transformers_repo, repo_dir('taming-transformers'), "Taming Transformers", taming_transformers_commit_hash) File "E:\Pycharm_Projects\stable-diffusion-webui\launch.py", line 143, in git_clone current_hash = run(f'"{git}" -C "{dir}" rev-parse HEAD', None, f"Couldn't determine {name}'s hash: {commithash}").strip() File "E:\Pycharm_Projects\stable-diffusion-webui\launch.py", line 97, in run raise RuntimeError(message) RuntimeError: Couldn't determine Taming Transformers's hash: 24268930bf1dce879235a7fddd0b2355b84d7ea6. Command: "git" -C "E:\Pycharm_Projects\stable-diffusion-webui\repositories\taming-transformers" rev-parse HEAD Error code: 128 stdout: <empty> stderr: fatal: detected dubious ownership in repository at 'E:/Pycharm_Projects/stable-diffusion-webui/repositories/taming-transformers' 'E:/Pycharm_Projects/stable-diffusion-webui/repositories/taming-transformers' is owned by: 'S-1-5-21-3786237627-226699005-530654045-1001' but the current user is: 'S-1-5-21-802586706-3590648291-1096587674-1001' To add an exception for this directory, call: git config --global --add safe.directory E:/Pycharm_Projects/stable-diffusion-webui/repositories/taming-transformers
Running the suggested command git config --global --add safe.directory E:/Pycharm_Projects/stable-diffusion-webui/repositories/taming-transformers didn't work for me; changing the folder's owner in its Properties dialog fixed it instead:
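If you'd rather stay on the command line instead of clicking through the Properties dialog, there are two rough equivalents. This is my own sketch, not from the WebUI docs: the wildcard tells git to trust every directory on the machine (a security trade-off you have to accept), and takeown needs an elevated command prompt to hand the folder to the current user:
git config --global --add safe.directory "*"
takeown /F "E:\Pycharm_Projects\stable-diffusion-webui\repositories\taming-transformers" /R /D Y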
3. AsyncConnectionPool.__init__() got an unexpected keyword argument 'socket_options'
Run the following commands to fix it:
pip install -U httpcore
pip install -U httpx==0.24.1
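To confirm the right versions actually ended up inside the venv (just a plain pip check, nothing WebUI-specific):
pip show httpx httpcore
httpx should report 0.24.1; if it still shows a newer version, the commands above were probably run outside the venv.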
4. Can't load tokenizer for 'openai/clip-vit-large-patch14'
env "e:\Pycharm_Projects\stable-diffusion-webui\venv2\Scripts\Python.exe" Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)] Commit hash: 22bcc7be428c94e9408f589966c2040187245d81 Installing requirements for Web UI Installing SD-CN-Animation requirement: scikit-image==0.19.2 Launching Web UI with arguments: No module 'xformers'. Proceeding without it. e:\Pycharm_Projects\stable-diffusion-webui\venv2\lib\site-packages\torchvision\transforms\functional_tensor.py:5: UserWarning: The torchvision.transforms.functional_tensor module is deprecated in 0.15 and will be **removed in 0.17**. Please don't rely on it. You probably just need to use APIs in torchvision.transforms.functional or in torchvision.transforms.v2.functional. warnings.warn( ControlNet v1.1.150 ControlNet v1.1.150 Loading weights [5bbaabc045] from E:\Pycharm_Projects\stable-diffusion-webui\models\Stable-diffusion\taiwanDollLikeness_v1.safetensors Creating model from config: E:\Pycharm_Projects\stable-diffusion-webui\configs\v1-inference.yaml LatentDiffusion: Running in eps-prediction mode DiffusionWrapper has 859.52 M params. Failed to create model quickly; will retry using slow method. LatentDiffusion: Running in eps-prediction mode DiffusionWrapper has 859.52 M params. loading stable diffusion model: OSError Traceback (most recent call last): File "E:\Pycharm_Projects\stable-diffusion-webui\webui.py", line 139, in initialize modules.sd_models.load_model() File "E:\Pycharm_Projects\stable-diffusion-webui\modules\sd_models.py", line 438, in load_model sd_model = instantiate_from_config(sd_config.model) File "E:\Pycharm_Projects\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\util.py", line 89, in instantiate_from_config return get_obj_from_str(config["target"])(**config.get("params", dict())) File "E:\Pycharm_Projects\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 563, in __init__ self.instantiate_cond_stage(cond_stage_config) File "E:\Pycharm_Projects\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 630, in instantiate_cond_stage model = instantiate_from_config(config) File "E:\Pycharm_Projects\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\util.py", line 89, in instantiate_from_config return get_obj_from_str(config["target"])(**config.get("params", dict())) File "E:\Pycharm_Projects\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\encoders\modules.py", line 103, in __init__ self.tokenizer = CLIPTokenizer.from_pretrained(version) File "e:\Pycharm_Projects\stable-diffusion-webui\venv2\lib\site-packages\transformers\tokenization_utils_base.py", line 1785, in from_pretrained raise EnvironmentError( OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'openai/clip-vit-large-patch14' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer.
For this one, just use a proxy/VPN so the tokenizer can be downloaded from Hugging Face.
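If you'd rather not route the whole WebUI through the proxy, another option I'd try is to pre-download the tokenizer into the Hugging Face cache from a console that has the proxy set, then restart the WebUI. The 127.0.0.1:7890 address below is only a placeholder for whatever your local proxy actually listens on:
set HTTPS_PROXY=http://127.0.0.1:7890
set HTTP_PROXY=http://127.0.0.1:7890
python -c "from transformers import CLIPTokenizer; CLIPTokenizer.from_pretrained('openai/clip-vit-large-patch14')"
Once the files are cached under your user's Hugging Face cache directory, the CLIPTokenizer.from_pretrained call in the WebUI can load them without going online again.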
37 comments
Where are the bugs? I use SD all the time and haven't noticed anything wrong with it so far
Bugs in the environment, not bugs in SD itself. SD is the one getting eaten here,
To be honest, I was lured in by those two pictures. The selection really suits my taste :-D, with picks like that there's definitely no misjudging things
What style would you call this? It's not quite OL either.
Not bad for AI-generated, right?
Happy New Year! May everything go as you wish!
Happy New Year's Day
Even catching bugs looks so high-end here! Never having used SD makes it seem all the more mysterious! Happy New Year's Day!
Happy New Year's Day
Going this hard right at the start of the new year...
Ha, I didn't go anywhere, just stayed holed up at home
Sis, happy New Year 2024
Happy New Year!
Chasing drama and catching bugs, that really is tiring.
Leftover problems from the system reinstall
Drama-chasing and bug-catching. Happy New Year's Day, blogger, and a happy new year!
Thanks, happy New Year's Day~~
Is it just me who doesn't get it? Is this about web crawlers?
It's an AI engine for images; a large share of the girl pictures floating around online are generated by this thing.
Looks like it's going to be another year of facing off against bugs
Life, after all, is full of bugs.
Playing with bugs is more fun than playing with people.
Happy New Year
Happiness++
I stared at this very seriously, as if I knew what I was looking at, for a good three minutes, and found that apart from the Chinese I couldn't understand a single thing.
There's nothing much in here, it's all just environment fixes. Quack quack~~
So you can generate even more beautiful girls from now on?
I can generate them anytime. In theory, lots of them.
To be honest, as an old programmer my Python level is still stuck at hello world
Everything after hello world is easy~~
Wishing you an early happy New Year
An early happy New Year to you too
I have a one-click all-in-one package here; I can send you a copy if you need it~
Huh? There's something that fancy?
I've finished updating and got everything fixed, hahaha
It can even download models automatically. Right now I just can't find any good models~
Hey, happy new year..
Seeing code gives me a headache.. I can't make sense of it at all
It's fine even if you can't make sense of it. Happy New Year!
Can't under stand it, but it sure seems im pres sive.
Huh, how did your words split apart like that~~