sd-webui-segment-anything's Introduction

Segment Anything for Stable Diffusion WebUI

This extension aims to connect AUTOMATIC1111 Stable Diffusion WebUI and the Mikubill ControlNet extension with Segment Anything and GroundingDINO to enhance Stable Diffusion/ControlNet inpainting, enhance ControlNet semantic segmentation, automate image matting, and create LoRA/LyCORIS training sets.

News

  • 2023/04/10: v1.0.0 SAM extension released! You can click on the image to generate segmentation masks.
  • 2023/04/12: v1.0.1 Mask expansion and API support released by @jordan-barrett-jm! You can expand masks to overcome edge problems of SAM.
  • 2023/04/15: v1.1.0 GroundingDINO support released! You can enter text prompts to generate bounding boxes and segmentation masks.
  • 2023/04/18: v1.2.0 ControlNet V1.1 inpainting support released! You can copy SAM-generated masks to ControlNet to do inpainting. Note that you must update the ControlNet extension to use it. ControlNet inpainting has far better performance than general-purpose models, and you no longer need to download inpainting-specific models.
  • 2023/04/24: v1.3.0 Automatic segmentation support released! Functionalities with * require you to have ControlNet extension installed. This update includes support for
    • *ControlNet V1.1 semantic segmentation
    • EditAnything un-semantic segmentation
    • Image layout generation (single image + batch process)
    • *Image masking with categories (single image + batch process)
    • *Inpaint not masked for ControlNet inpainting on txt2img panel
  • 2023/04/29: v1.4.0 API has been completely refactored. You can access all features for single image process through API. API documentation has been moved to wiki.
  • 2023/05/22: v1.4.1 EditAnything is ready to use! You can generate random segmentation and copy the output to EditAnything ControlNet.
  • 2023/05/29: v1.4.2 You may now do SAM inference on CPU by checking "Use CPU for SAM". This is for some Mac users who are unable to do SAM inference on GPU. I discourage other users from using this feature because it is significantly slower than CUDA.
  • 2023/06/01: v1.5.0 You may now choose to use local GroundingDINO to bypass C++ problem. See FAQ-1 for more detail.
  • 2023/06/04: v1.5.1 Upload Mask to ControlNet Inpainting comes back in response to ControlNet inpaint improvement. You should see a new tab beside AutoSAM after updating the extension. This feature will again be removed once ControlNet extension has its own uploading feature.
  • 2023/06/13: v1.6.0 SAM-HQ supported by @SpenserCai and me. This is an "upgraded" SAM, created by researchers at ETH Zurich & HKUST. However, I cannot say which one is better; make your own choice based on your own experiments. Go to Installation to get the link to the models.
  • 2023/06/29: v1.6.1 MobileSAM supported. This is a tiny version of SAM, created by researchers at Kyung Hee University. Visit here for more information.
  • 2023/08/31: v1.6.2 Support WebUI v1.6.0, Gradio v3.41.2

Note that support for some other variations of SAM, such as Matting-Anything and FastSAM, is still on the way. Supporting these models, unlike MobileSAM, is non-trivial, especially FastSAM, which uses a completely different pipeline, ultralytics/YOLO. Introducing these new works into the current codebase would make an already ugly codebase even uglier. They will be supported once I finish a major refactor of the current codebase.

FAQ

Thanks to suggestions from GitHub issues, Reddit and Bilibili that have made this extension better.

There are already at least two great tutorials on how to use this extension. Check out this video (Chinese) from @ThisisGameAIResearch and this video (Chinese) from @OedoSoldier. You can also check out my demo.

You should know the following before submitting an issue.

  1. Due to the overwhelming complaints about GroundingDINO installation and the lack of a comparable high-performance text-to-bounding-box library, I decided to modify the source code of GroundingDINO and push it to this repository. Starting from v1.5.0, you can choose to use local GroundingDINO by checking Use local groundingdino to bypass C++ problem under Settings/Segment Anything. This change should solve all problems with ninja, pycocotools, _C and any other problems related to C++/CUDA compilation.

    If you did not modify the setting described above, this script will first try to install GroundingDINO and check whether your environment has successfully built the C++ dynamic library (the annoying _C). If so, the script will use the official implementation of GroundingDINO, out of respect for its authors. If the script fails to install GroundingDINO, it will use local GroundingDINO instead.
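    In pseudo-Python, the check behaves roughly like this (a minimal sketch of the logic described above, not the extension's actual install script):

    # Sketch: prefer official GroundingDINO if its compiled C++ library loads,
    # otherwise fall back to the local, compilation-free copy shipped since v1.5.0.
    try:
        import groundingdino._C  # ImportError here means the C++ build is absent or broken
        use_local_groundingdino = False  # official implementation is usable
    except ImportError:
        use_local_groundingdino = True   # bypass the C++/CUDA build entirely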

    If you'd still like to resolve GroundingDINO's install problems, I have observed some common problems for Windows users:

    • pycocotool: here.
    • _C: here. DO NOT skip steps.

    If you are still unable to install GroundingDINO on Windows AND you cannot resolve the problem AFTER searching the issues here, here and here, you may refer to #98 and watch the videos there. Note that I develop on Linux, so I cannot guarantee whether any of those video tutorials will work.

  2. If you find that this extension cannot control ControlNet:

    The problem is most likely caused by some other extension that also changes its position inside the extension list in order to control ControlNet. The easiest solution is here. This change moves the SAM extension before ControlNet in the extension list, bypassing the internal reordering code, and will not prevent you from receiving any updates from me. I am not planning to refactor my code to work around this problem. I did not expect to control ControlNet when I created this extension, but ControlNet has indeed grown much faster than I expected.

  3. This extension has largely moved into a maintenance phase. Although I don't think there will be huge updates in the foreseeable future, the Mikubill ControlNet extension is still developing fast, and I look forward to more opportunities to connect my extension to ControlNet. Despite this, I will continue to deal with issues and to monitor new research works to see whether they are worth supporting. I welcome any community contribution and any feature request.

  4. You must use gradio>=3.23.0 and WebUI>=22bcc7be to use this extension. A1111 WebUI is stable, and some integrated package authors have also updated their packages (for example, if you are using the package from @Akegarasu, i.e. 秋叶整合包, it has already been updated according to this video). Also, supporting different versions of WebUI would be a huge time commitment, during which I could create many more features. Please update your WebUI; it is safe to do so. I am not planning to support old commits of WebUI, such as a9fed7c3.

  5. It is impossible to support the following features, at least at this moment, due to gradio/A1111 limitations. I will closely monitor gradio/A1111 updates to see whether it becomes possible to support them:

    • color inpainting, because gradio weirdly enlarges the input image, which slows down your browser or even freezes your page. I have actually implemented this feature, but made it invisible. Note that the ControlNet v1.1 inpainting model is very strong, so you no longer need to rely on traditional inpainting; however, ControlNet v1.1 does not support color inpainting.
    • edit mask/explicit copy, because the gradio Image component cannot accept image+mask as an output, which is required to explicitly copy a masked image to img2img inpaint/inpaint sketch/ControlNet (i.e. seeing the actual masked image on the panel, instead of a mysterious internal copy). Without this, you cannot edit the mask.
  6. Inpaint-Anything, EditAnything and A LOT of other popular SAM extensions have been supported. For Inpaint-Anything, you may check this issue for how to use it. For EditAnything, please check how to use. I am always open to supporting other interesting applications; submit a feature request if you find one.

  7. If you have a job opportunity and think I am a good fit, please feel free to send me an email.

  8. If you want to sponsor me, please go to sponsor section and scan the corresponding QR code.

Installation

Download this extension to ${sd-webui}/extensions however you like (git clone or install from the UI).

Choose one or more of the models below and put them under ${sd-webui}/models/sam or ${sd-webui-segment-anything}/models/sam (choose one folder, not both; remove the former folder if you choose the latter). Do not change the model name, otherwise this extension may fail due to a bug inside segment anything. A download sketch follows the model list below.

We support several variations of segmentation models:

  1. SAM from Meta AI.

    I have tested vit_h on an NVIDIA 3090 Ti, and it works well. If you run into VRAM problems, switch to a smaller model.

  2. SAM-HQ from SysCV.

  3. MobileSAM from Kyung Hee University.
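
If you prefer scripting the download, a minimal sketch (assuming the official Meta AI checkpoint URL for vit_b; adjust for the model you chose) looks like this:

import urllib.request
from pathlib import Path

# Official Meta AI checkpoint for vit_b; vit_l and vit_h follow the same URL pattern.
url = "https://dl.fbaipublicfiles.com/segment_anything/sam_vit_b_01ec64.pth"
dest = Path("models/sam")  # relative to ${sd-webui}; keep the original file name
dest.mkdir(parents=True, exist_ok=True)
urllib.request.urlretrieve(url, dest / "sam_vit_b_01ec64.pth")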

We plan to support (NOT supported yet) some other variations of segmentation models after a major refactor of the codebase:

  1. Matting-Anything from SHI-Labs. This is a post-processing model for any variation of SAM. Put the model under ${sd-webui-segment-anything}/models/sam

  2. FastSAM from CASIA-IVA-Lab. This is a YOLO variation of SAM.

GroundingDINO packages, GroundingDINO models and ControlNet annotator models will be automatically installed the first time you use them.

If your network does not allow you to access HuggingFace via the terminal, download GroundingDINO models from HuggingFace and put them under ${sd-webui-segment-anything}/models/grounding-dino. Please note that GroundingDINO still needs to access HuggingFace to download BERT vocabularies; there is no alternative at this time. Read here to find a way to resolve this problem. I will try to find an alternative in the near future.
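If you script the manual download, a sketch with huggingface_hub (the repo id and filename are assumptions based on the usual GroundingDINO-SwinT-OGC checkpoint; verify against the links above):

from huggingface_hub import hf_hub_download

# Fetch a GroundingDINO checkpoint into the folder this extension scans.
hf_hub_download(
    repo_id="ShilongLiu/GroundingDINO",
    filename="groundingdino_swint_ogc.pth",
    local_dir="extensions/sd-webui-segment-anything/models/grounding-dino",
)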

GroundingDINO

GroundingDINO has been supported in this extension. It has the following functionalities:

  1. You can use a text prompt to automatically generate bounding boxes, separating different category names with "." (see the example after this list). SAM can convert these bounding boxes to masks.
  2. You can use point prompts with ONE bounding box to generate masks
  3. You can go to the Batch Process tab to do image matting and create a LoRA/LyCORIS training set
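
For example, the text prompt and threshold from functionality 1 might look like this (illustrative values, not the extension's internal API):

text_prompt = "dog . cat . person"  # separate category names with "."
box_threshold = 0.3                  # boxes below this confidence are dropped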

However, there are some existing problems with GroundingDINO:

  1. GroundingDINO will be installed the first time you use its features, not when you launch the WebUI. Make sure your terminal has access to GitHub; otherwise you will have to install GroundingDINO manually. GroundingDINO models are automatically downloaded from HuggingFace. If your terminal cannot reach HuggingFace, please download the model manually and put it under ${sd-webui-segment-anything}/models/grounding-dino.
  2. If you want to use local groundingdino to bypass ALL the painful C++/CUDA/ninja/pycocotools problems, please read FAQ-1. GroundingDINO requires your device to compile C++, which might take a long time and throw tons of exceptions. If you encounter a _C problem, it is most probably because you did not install CUDA Toolkit. Follow the steps described here. DO NOT skip steps. Otherwise, please go to the Grounded-SAM issue page and submit an issue there. Despite this, you can still use this extension for point prompts->segmentation masks even if you cannot install GroundingDINO, so don't worry.
  3. If you want to use point prompts, SAM can accept at most one bounding box. This extension checks whether there are multiple bounding boxes: if there are, all point prompts are discarded; otherwise all point prompts take effect. You can always select the one bounding box you want (see the sketch below).
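
The one-box limitation comes from SAM itself: SamPredictor.predict accepts point prompts together with at most one box. A minimal sketch with the upstream segment_anything package (checkpoint path and coordinates are illustrative):

import numpy as np
from PIL import Image
from segment_anything import SamPredictor, sam_model_registry

sam = sam_model_registry["vit_b"](checkpoint="models/sam/sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)
predictor.set_image(np.array(Image.open("input.png").convert("RGB")))

masks, scores, _ = predictor.predict(
    point_coords=np.array([[320, 240]]),  # one positive point prompt
    point_labels=np.array([1]),           # 1 = positive, 0 = negative
    box=np.array([100, 100, 500, 400]),   # at most ONE box when points are used
    multimask_output=True,                # returns 3 candidate masks
)
best_mask = masks[int(np.argmax(scores))]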

For more detail, check How to Use and Demo.

AutoSAM

Automatic Segmentation has been supported in this extension. It has the following functionalities:

  1. You can use SAM to enhance semantic segmentation and copy the output to control_v11p_sd15_seg
  2. You can generate random segmentation and copy the output to EditAnything ControlNet
  3. You can generate image layout and edit them inside PhotoShop. Both single image and batch process are supported.
  4. You can generate masks according to category IDs. This tends to be more accurate than pure SAM+GroundingDINO segmentation if what you want is a large object (a sketch of the underlying automatic mask generation follows this list).
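
Under the hood, these features rely on SAM's automatic mask generation. A minimal sketch with the upstream segment_anything package (the parameters shown are illustrative defaults, not AutoSAM's exact wiring):

import numpy as np
from PIL import Image
from segment_anything import SamAutomaticMaskGenerator, sam_model_registry

sam = sam_model_registry["vit_b"](checkpoint="models/sam/sam_vit_b_01ec64.pth")
generator = SamAutomaticMaskGenerator(
    sam,
    points_per_side=32,           # density of the sampled point-prompt grid
    pred_iou_thresh=0.88,         # drop masks the model itself scores poorly
    stability_score_thresh=0.95,  # drop masks that change under thresholding
)
masks = generator.generate(np.array(Image.open("input.png").convert("RGB")))
# each entry is a dict with "segmentation", "area", "bbox", "predicted_iou", ...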

However, there are some existing problems with AutoSAM:

  1. You are required to install Mikubill ControlNet Extension to use functionality 1 and 4. Please do not change the directory name (sd-webui-controlnet).
  2. If you are on Windows, you must run WebUI as administrator the first time you use this feature, because Windows does not allow unprivileged users to create symbolic links via Python.
  3. You can observe drastic improvement if you combine seg_ufade20k with SAM, but only slight improvement if you combine one of the Oneformer preprocessors (seg_ofade20k & seg_ofcoco) with SAM. This is because Oneformer is already very strong for semantic segmentation, compared to Uniformer. SAM can only refine the details of a semantic segmentation; it cannot reveal categories the semantic models miss, because SAM is NOT a semantics-aware model.
  4. Image layout generation performs quite badly on anime images, so I discourage using it for anime. I am not sure about its performance on photorealistic images.

How to Use

If you have previously enabled other copies while using this extension, you may want to click Uncheck all copies at the bottom of this extension UI, to prevent other copies from affecting your current page.

Single Image

  1. Upload your image
  2. Optionally add point prompts on the image: left click for a positive point prompt (black dot), right click for a negative point prompt (red dot), and left click any dot again to cancel it. You must add a point prompt if you do not wish to use GroundingDINO.
  3. Optionally check Enable GroundingDINO, select GroundingDINO model you want, write text prompt (separate different categories with .) and pick a box threshold (I highly recommend the default setting. High threshold may result in no bounding box). You must write text prompt if you do not wish to use point prompts.
  4. Optionally enable previewing GroundingDINO bounding box and click Generate bounding box. You must write text prompt to preview bounding box. After you see the boxes with number marked on the top-left corner, uncheck all the boxes you do not want. If you uncheck all boxes, you will have to add point prompts to generate masks.
  5. Click Preview Segmentation button. Due to the limitation of SAM, if there are multiple bounding boxes, your point prompts will not take effect when generating masks.
  6. Choose your favorite segmentation.
  7. Optionally check Expand Mask and specify the amount, then click Update Mask (see the dilation sketch after this list for what expansion does conceptually).
  8. [VERY IMPORTANT] Update your ControlNet and check Allow other script to control this extension (MUST) on your ControlNet settings.
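
Conceptually, Expand Mask in step 7 is a binary dilation. A rough sketch (assumed behavior; the extension's actual dilate_mask may differ):

import numpy as np
from PIL import Image
from scipy.ndimage import binary_dilation

mask = np.array(Image.open("mask.png").convert("1"))  # binary mask from SAM
expanded = binary_dilation(mask, iterations=30)       # larger amount = thicker rim
Image.fromarray(expanded.astype(np.uint8) * 255).save("mask_expanded.png")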

txt2img

  1. You may only copy image and mask to ControlNet inpainting.
  2. Optionally check ControlNet inpaint not masked to invert mask colors and inpaint regions outside of the mask.
  3. Select the correct ControlNet index where you are using inpainting, if you wish to use Multi-ControlNet.
  4. Configure the ControlNet panel: click Enable, choose inpaint_global_harmonious as the preprocessor and control_v11p_sd15_inpaint [ebff9138] as the model. There is no need to upload an image to the ControlNet inpainting panel.
  5. Write your prompts, configure the A1111 panel and click Generate.

img2img

  1. Check Copy to Inpaint Upload & ControlNet Inpainting. There is no need to select ControlNet index.
  2. Configure the ControlNet panel: click Enable, choose inpaint_global_harmonious as the preprocessor and control_v11p_sd15_inpaint [ebff9138] as the model. There is no need to upload an image to the ControlNet inpainting panel.
  3. Click the Switch to Inpaint Upload button. There is no need to upload another image or mask; just leave them blank. Write your prompts, configure the A1111 panel and click Generate.

Batch Process

  1. Choose your SAM model, GroundingDINO model, text prompt, box threshold and mask expansion amount. Enter the source and destination directories of your images.
  2. Choose Output per image to configure the number of masks per bounding box. I highly recommend 3, since some masks might be weird.
  3. Check or uncheck the checkboxes to configure which types of images you want to save. See the demo for what each checkbox represents.
  4. Click Start batch process and wait. If you see "Done" below this button, you are all set.

AutoSAM

  1. Install and update Mikubill ControlNet Extension before using it.
  2. Configure AutoSAM's tunable parameters according to the descriptions here. Use the defaults if you are unsure.

ControlNet

  1. Choose preprocessor.
    • seg_ufade20k, seg_ofade20k and seg_ofcoco are from ControlNet annotators. I highly recommend seg_ofade20k or seg_ofcoco, because their performance is far better than seg_ufade20k. They are all compatible with control_v11p_sd15_seg. Optionally enable pixel-perfect to automatically pick the best preprocessor resolution; configure your target width and height on the txt2img/img2img panel before previewing if you enable it. Otherwise, set a preprocessor resolution manually.
    • random is for EditAnything. There is no need to set a preprocessor resolution for the random preprocessor, since it does not perform semantic segmentation, but you need to pick an image from the AutoSeg output gallery to copy to ControlNet: 1 is a random colorization of the mask regions, reserved for a future ControlNet; 2 is a fixed colorization that can serve as the EditAnything ControlNet control image.
  2. Click preview segmentation image. For semantic segmentation, you will see 4 images: the left 2 are without SAM and the right 2 are with SAM. For the random preprocessor, you will see 3 images: the top-left is the blended image, the top-right is the random colorized masks, and the bottom-left is for EditAnything ControlNet.
  3. Check Copy to ControlNet Segmentation and select the correct ControlNet index where you are using ControlNet segmentation models if you wish to use Multi-ControlNet.
  4. Configure the ControlNet panel: click Enable, choose none as the preprocessor and control_v11p_sd15_seg [e1f51eb9] as the model. There is no need to upload an image to the ControlNet segmentation panel.
  5. Write your prompts, configure the A1111 panel and click Generate.
  6. If you want to use EditAnything, you need to modify some steps above:
    • In step 1: you need to choose random preprocessor.
    • Between step 3 & 4: download
      • SD 1.5 weight to ${a1111-webui}/models/ControlNet or ${sd-webui-controlnet}/models, config to ${sd-webui-controlnet}/models
      • SD 2.1 weight to ${a1111-webui}/models/ControlNet or ${sd-webui-controlnet}/models, config to ${sd-webui-controlnet}/models
    • In step 4: model choose control_v11p_sd15_seg [e1f51eb9]

Image Layout

  1. For single image, simply upload image, enter output path and click generate. You will see a lot of images inside the output directory.
  2. For batch process, simply enter source and destination directories and click generate. You will see a lot of images inside ${destination}/{image_filename} directory.

Mask by Category

  1. Choose preprocessor similar to ControlNet step 1. This is pure semantic segmentation so there is no random preprocessor.
  2. Enter category IDs separated by +. Visit here for ade20k and here for coco to get the category->ID map. Note that coco skips some numbers, so the actual ID is line_number - 21. For example, if you want bed+person, your input should be 7+12 for ade20k and 59+0 for coco (see the arithmetic example after this list).
  3. For single image, upload the image, click preview and configure the copy similarly to here for txt2img and here for img2img.
  4. For batch process, it is similar to Batch process step 2-4.
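
The ID arithmetic in step 2, written out (a tiny illustrative helper, not part of the extension):

def coco_category_id(line_number: int) -> int:
    # coco's list skips some numbers, so the usable ID is the line number minus 21
    return line_number - 21

# bed + person -> "7+12" for ade20k (IDs read directly from the map)
print("+".join(str(i) for i in [7, 12]))                 # -> 7+12
# for coco, assuming bed sits on line 80 and person on line 21 of the linked list:
print(f"{coco_category_id(80)}+{coco_category_id(21)}")  # -> 59+0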

Demo

Point prompts demo (a.k.a. Remove/Fill Anything)

demo.mp4

GroundingDINO demo

demo_dino.mp4

Batch process demo

Configuration Image

Input Image Output Image Output Mask Output Blend

Semantic segmentation demo

video1408033456.mp4

Mask by Category demo (a.k.a. Replace Anything)

video1941660269.mp4

Mask by Category batch demo


Input Image Output Image Output Mask Output Blend

Contribute

Disclaimer: I have not thoroughly tested this extension, so there might be bugs. Bear with me while I'm fixing them :)

If you encounter a bug, please submit an issue. To save time for both of us, please at least provide your WebUI version, your extension version, your browser version, any errors in your browser console log, and any errors in your terminal log.

I welcome any contribution. Please submit a pull request if you want to contribute.

Star History

Star History Chart

Sponsor

You can sponsor me via WeChat, AliPay or PayPal.

WeChat AliPay PayPal

sd-webui-segment-anything's People

Contributors

chace20, continue-revolution, jordan-barrett-jm, kristopolous, light-and-ray, missionfloyd, spensercai, stimeke, storyicon, sxy9699, szriru


sd-webui-segment-anything's Issues

ERROR: can't use the extension

sd version: 226d840
Error loading script: api.py
Traceback (most recent call last):
File "Y:\stable-diffusion-webui_23-03-10\modules\scripts.py", line 229, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "Y:\stable-diffusion-webui_23-03-10\modules\script_loading.py", line 11, in load_module
module_spec.loader.exec_module(module)
File "", line 883, in exec_module
File "", line 241, in _call_with_frames_removed
File "Y:\stable-diffusion-webui_23-03-10\extensions\sd-webui-segment-anything\scripts\api.py", line 9, in
from scripts.sam import init_sam_model, dilate_mask, sam_predict, sam_model_list
File "Y:\stable-diffusion-webui_23-03-10\extensions\sd-webui-segment-anything\scripts\sam.py", line 17, in
from segment_anything import SamPredictor, sam_model_registry
ModuleNotFoundError: No module named 'segment_anything'

Error loading script: sam.py
Traceback (most recent call last):
File "Y:\stable-diffusion-webui_23-03-10\modules\scripts.py", line 229, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "Y:\stable-diffusion-webui_23-03-10\modules\script_loading.py", line 11, in load_module
module_spec.loader.exec_module(module)
File "", line 883, in exec_module
File "", line 241, in _call_with_frames_removed
File "Y:\stable-diffusion-webui_23-03-10\extensions\sd-webui-segment-anything\scripts\sam.py", line 17, in
from segment_anything import SamPredictor, sam_model_registry
ModuleNotFoundError: No module named 'segment_anything'

[Bug]: ControlNet shows controlnet is enabled but no input image is given in Txt2Img

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits of both this extension and the webui

Have you updated WebUI and this extension to the newest version?

  • I have updated WebUI and this extension to the most up-to-date version

Do you understand that you should go to https://github.com/IDEA-Research/Grounded-Segment-Anything/issues if you cannot install GroundingDINO?

  • My problem is not about installing GroundingDINO

What happened?

When following the instructions for ControlNet Inpainting:

ControlNet Inpainting

Check Copy to ControlNet Inpaint and select the ControlNet panel for inpainting if you want to use multi-ControlNet. You can be either at img2img tab or at txt2img tab to use this functionality.
Configurate ControlNet panel. Click Enable, preprocessor choose inpaint_global_harmonious, model choose control_v11p_sd15_inpaint [ebff9138]. There is no need to upload image to the ControlNet inpainting panel, as SAM extension will help you to do that. Write your prompts, configurate A1111 panel and click Generate.

When using Txt2Img, I get a message saying there is no input image:

Error running process: D:\Code\Stable-Diffusion\AUTOMATIC1111\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py
Traceback (most recent call last):
  File "D:\Code\Stable-Diffusion\AUTOMATIC1111\stable-diffusion-webui\modules\scripts.py", line 417, in process
    script.process(p, *script_args)
  File "D:\Code\Stable-Diffusion\AUTOMATIC1111\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 779, in process
    raise ValueError('controlnet is enabled but no input image is given')
ValueError: controlnet is enabled but no input image is given

If I follow the same steps in Img2Img, a different error appears; however, it does seem to actually generate the image correctly.
The error in Img2Img is:

Error running process: D:\Code\Stable-Diffusion\AUTOMATIC1111\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py
Traceback (most recent call last):
  File "D:\Code\Stable-Diffusion\AUTOMATIC1111\stable-diffusion-webui\modules\scripts.py", line 417, in process
    script.process(p, *script_args)
  File "D:\Code\Stable-Diffusion\AUTOMATIC1111\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 808, in process
    detected_map, is_image = preprocessor(input_image, res=unit.processor_res, thr_a=unit.threshold_a, thr_b=unit.threshold_b)
  File "D:\Code\Stable-Diffusion\AUTOMATIC1111\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\processor.py", line 57, in inpaint
    mask = resize_image(img[:, :, 3:4], res)
  File "D:\Code\Stable-Diffusion\AUTOMATIC1111\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\util.py", line 33, in resize_image
    img = cv2.resize(input_image, (W, H), interpolation=cv2.INTER_LANCZOS4 if k > 1 else cv2.INTER_AREA)
cv2.error: Unknown C++ exception from OpenCV code

Steps to reproduce the problem

Included images of my settings:


What should have happened?

The instructions state that I do not need to put an image into the ControlNet area, as the extension will handle that. However, it gives an error saying there is no input image.

Commit where the problem happens

webui: 22bcc7be428c94e9408f589966c2040187245d81
extension: 4ee968b

What browsers do you use to access the UI ?

No response

Command Line Arguments

--opt-sdp-attention --no-half-vae --opt-channelslast

Console logs

Included in the error message above

Additional information

No response

[Bug]: API GroundingDino does not work ?

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits of both this extension and the webui

Have you updated WebUI and this extension to the newest version?

  • I have updated WebUI and this extension to the most up-to-date version

Do you understand that you should go to https://github.com/IDEA-Research/Grounded-Segment-Anything/issues if you cannot install GroundingDINO?

  • My problem is not about installing GroundingDINO

What happened?

I try the GroundingDino API.

Steps to reproduce the problem

Here is the test script :

import base64
import requests
from PIL import Image
from io import BytesIO

url = "http://127.0.0.1:7860/sam-webui/image-mask";

def image_to_base64(img_path: str) -> str:
    with open(img_path, "rb") as img_file:
        img_base64 = base64.b64encode(img_file.read()).decode()
    return img_base64

payload = {
    "image": image_to_base64("out1.png"),
    "prompt": "body",
    "box_threshold": 0.3
}
res = requests.post(url, json=payload)

print(res)

for dct in res.json():
    image_data = base64.b64decode(dct['image'])
    image = Image.open(BytesIO(image_data))
    image.show()

The execution is:

C:\Users\Jyce\Desktop>stable-diffusion-webui\venv\Scripts\python.exe sagd.py
<Response [500]>
Traceback (most recent call last):
  File "C:\Users\Jyce\Desktop\sagd.py", line 23, in <module>
    image_data = base64.b64decode(dct['image'])
TypeError: string indices must be integers

What should have happened?

Generate the mask images ?

Commit where the problem happens

webui: 22bcc7be428c94e9408f589966c2040187245d81
extension: a5c000f

What browsers do you use to access the UI ?

No response

Command Line Arguments

Launching Web UI with arguments: --xformers --api --gradio-img2img-tool color-sketch

Console logs

Start SAM Processing
Running GroundingDINO Inference
Initializing GroundingDINO GroundingDINO_SwinT_OGC (694MB)
final text_encoder_type: bert-base-uncased
C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\transformers\modeling_utils.py:768: FutureWarning: The `device` argument is deprecated and will be removed in v5 of Transformers.
  warnings.warn(
Initializing SAM
Running SAM Inference (512, 512, 3)
SAM inference with 2 boxes, point prompts disgarded
Creating output image
API error: POST: http://127.0.0.1:7860/sam-webui/image-mask {'error': 'AttributeError', 'detail': '', 'body': '', 'errors': "'list' object has no attribute 'save'"}
Traceback (most recent call last):
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\anyio\streams\memory.py", line 94, in receive
    return self.receive_nowait()
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\anyio\streams\memory.py", line 89, in receive_nowait
    raise WouldBlock
anyio.WouldBlock

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 78, in call_next
    message = await recv_stream.receive()
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\anyio\streams\memory.py", line 114, in receive
    raise EndOfStream
anyio.EndOfStream

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\modules\api\api.py", line 145, in exception_handling
    return await call_next(request)
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 84, in call_next
    raise app_exc
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 70, in coro
    await self.app(scope, receive_or_disconnect, send_no_error)
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 108, in __call__
    response = await self.dispatch_func(request, call_next)
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\modules\api\api.py", line 110, in log_and_time
    res: Response = await call_next(req)
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 84, in call_next
    raise app_exc
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 70, in coro
    await self.app(scope, receive_or_disconnect, send_no_error)
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\gzip.py", line 24, in __call__
    await responder(scope, receive, send)
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\gzip.py", line 44, in __call__
    await self.app(scope, receive, self.send_with_gzip)
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\exceptions.py", line 79, in __call__
    raise exc
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 21, in __call__
    raise e
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 18, in __call__
    await self.app(scope, receive, send)
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 718, in __call__    await route.handle(scope, receive, send)
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 276, in handle
    await self.app(scope, receive, send)
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 66, in app
    response = await func(request)
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\fastapi\routing.py", line 237, in app
    raw_response = await run_endpoint_function(
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\fastapi\routing.py", line 163, in run_endpoint_function
    return await dependant.call(**values)
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\extensions\sd-webui-segment-anything\scripts\api.py", line 53, in process_image
    response = [{"image": pil_image_to_base64(mask)} for mask in masks]
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\extensions\sd-webui-segment-anything\scripts\api.py", line 53, in <listcomp>
    response = [{"image": pil_image_to_base64(mask)} for mask in masks]
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\extensions\sd-webui-segment-anything\scripts\api.py", line 28, in pil_image_to_base64
    img.save(buffered, format="JPEG")
AttributeError: 'list' object has no attribute 'save'

Additional information

Extra bonus: is it possible to add an option to the API to select the "expand mask" value?

Expand mask

Hi,

Works great! A small problem I see is that the generated masks leave a very thin edge that is not inpainted. I tried playing with the inpaint options, mainly mask blur, without success in removing the edges.

I think some kind of mask expansion in the normal direction would be nice!

Today, after updating sd-webui-segment-anything, when I use txt2img I get an error in the log and can't get the image

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits of both this extension and the webui

Have you updated WebUI and this extension to the newest version?

  • I have updated WebUI and this extension to the most up-to-date version

Do you understand that you should go to https://github.com/IDEA-Research/Grounded-Segment-Anything/issues if you cannot install GroundingDINO?

  • My problem is not about installing GroundingDINO

Do you know that you should use the newest ControlNet extension and enable external control if you want SAM extension to control ControlNet?

  • I have updated ControlNet extension and enabled "Allow other script to control this extension"

What happened?

Today, after updating sd-webui-segment-anything, when I use txt2img I get an error in the log and can't get the image.

Steps to reproduce the problem

  1. Go to ....
  2. Press ....
  3. ...

What should have happened?

Today, after updating sd-webui-segment-anything, when I use txt2img I get an error in the log and can't get the image.

Commit where the problem happens

webui: can't use to make image
extension: after update

What browsers do you use to access the UI ?

Google Chrome

Command Line Arguments

nohup bash webui.sh >/home/ubuntu/stable-diffusion-webui/nohup.log 2>&1  &

Console logs

Traceback (most recent call last):
  File "/home/ubuntu/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/routes.py", line 394, in run_predict
    output = await app.get_blocks().process_api(
  File "/home/ubuntu/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1073, in process_api
    inputs = self.preprocess_data(fn_index, inputs, state)
  File "/home/ubuntu/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/blocks.py", line 962, in preprocess_data
    processed_input.append(block.preprocess(inputs[i]))
  File "/home/ubuntu/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/components.py", line 1203, in preprocess
    return self.choices.index(x)
ValueError: '0' is not in list

Additional information

No response

[Feature]: API endpoint for points?

Maybe I’m misunderstanding the API, but it seems like it only works with groundingDINO right now.

Would it be possible to add a field to the API endpoint that takes in a series of points that are the equivalent of where you would click in the UI? Or maybe a mask of green and/or red that gets translated into include/exclude points?

What is this error?

Traceback (most recent call last):
File "D:\SD2.2b(简中)+DirectML3.14+CN1.1整合包\python\lib\site-packages\gradio\routes.py", line 394, in run_predict
output = await app.get_blocks().process_api(
File "D:\SD2.2b(简中)+DirectML3.14+CN1.1整合包\python\lib\site-packages\gradio\blocks.py", line 1075, in process_api
result = await self.call_function(
File "D:\SD2.2b(简中)+DirectML3.14+CN1.1整合包\python\lib\site-packages\gradio\blocks.py", line 884, in call_function
prediction = await anyio.to_thread.run_sync(
File "D:\SD2.2b(简中)+DirectML3.14+CN1.1整合包\python\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "D:\SD2.2b(简中)+DirectML3.14+CN1.1整合包\python\lib\site-packages\anyio_backends_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "D:\SD2.2b(简中)+DirectML3.14+CN1.1整合包\python\lib\site-packages\anyio_backends_asyncio.py", line 867, in run
result = context.run(func, *args)
File "D:\SD2.2b(简中)+DirectML3.14+CN1.1整合包\extensions\sd-webui-segment-anything\scripts\sam.py", line 187, in sam_predict
masks, _, _ = predictor.predict(
File "D:\SD2.2b(简中)+DirectML3.14+CN1.1整合包\python\lib\site-packages\segment_anything\predictor.py", line 154, in predict
masks, iou_predictions, low_res_masks = self.predict_torch(
File "D:\SD2.2b(简中)+DirectML3.14+CN1.1整合包\python\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "D:\SD2.2b(简中)+DirectML3.14+CN1.1整合包\python\lib\site-packages\segment_anything\predictor.py", line 222, in predict_torch
sparse_embeddings, dense_embeddings = self.model.prompt_encoder(
File "D:\SD2.2b(简中)+DirectML3.14+CN1.1整合包\python\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "D:\SD2.2b(简中)+DirectML3.14+CN1.1整合包\python\lib\site-packages\segment_anything\modeling\prompt_encoder.py", line 155, in forward
point_embeddings = self._embed_points(coords, labels, pad=(boxes is None))
File "D:\SD2.2b(简中)+DirectML3.14+CN1.1整合包\python\lib\site-packages\segment_anything\modeling\prompt_encoder.py", line 85, in _embed_points
labels = torch.cat([labels, padding_label], dim=1)
RuntimeError

[Bug]: Fails to load if control net is not installed. TypeError: '<' not supported between instances of 'NoneType' and 'int'

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits of both this extension and the webui

Have you updated WebUI and this extension to the newest version?

  • I have updated WebUI and this extension to the most up-to-date version

Do you understand that you should go to https://github.com/IDEA-Research/Grounded-Segment-Anything/issues if you cannot install GroundingDINO?

  • My problem is not about installing GroundingDINO

What happened?

i tried to install this extension but it fails with this error

Traceback (most recent call last):
  File "X:\stable-diffusion-webui\modules\scripts.py", line 270, in wrap_call
    res = func(*args, **kwargs)
  File "X:\stable-diffusion-webui\extensions\sd-webui-segment-anything\scripts\sam.py", line 329, in ui
    priorize_sam_scripts(is_img2img)
  File "X:\stable-diffusion-webui\extensions\sd-webui-segment-anything\scripts\sam.py", line 304, in priorize_sam_scripts
    if cnet_idx < sam_idx:
TypeError: '<' not supported between instances of 'NoneType' and 'int'

When I look at it, it appears to fail on cnet_idx < sam_idx: and I believe that is because I don't have the control net extension installed so it fails because cnet_idx is None.

Steps to reproduce the problem

  1. Don't have control net extension installed.
  2. Install this sd-webui-segment-anything extension
  3. Start the app and view the error: TypeError: '<' not supported between instances of 'NoneType' and 'int'

What should have happened?

The app should load and I should see this extension appear.

Commit where the problem happens

webui: ebd3758129d3dbfc9796273fea2022e0ef4e6daf ( should be latest )
extension: 4ee968b

What browsers do you use to access the UI ?

Mozilla Firefox

Command Line Arguments

no

Console logs

not relevant

Additional information

Maybe you can initialize cnet_idx to a high number like 100000000 instead of None; then it wouldn't fail the comparison.

[Bug]: ModuleNotFoundError: No module named 'segment_anything'

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits of both this extension and the webui

What happened?

Happens after installation, while trying to reproduce the video guide.

Looks like a python prerequisite.

Steps to reproduce the problem

  1. Launch WebUI and see the message

What should have happened?

Normal launch; the extension's section should appear in img2img, but it does not.

Commit where the problem happens

webui: 22bcc7b
controlnet:

What browsers do you use to access the UI ?

Mozilla Firefox

Command Line Arguments

set COMMANDLINE_ARGS=--xformers --medvram

Console logs

Traceback (most recent call last):
  File "E:\Files\SD_auto\stable-diffusion-webui\modules\scripts.py", line 256, in load_scripts
    script_module = script_loading.load_module(scriptfile.path)
  File "E:\Files\SD_auto\stable-diffusion-webui\modules\script_loading.py", line 11, in load_module
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "E:\Files\SD_auto\stable-diffusion-webui\extensions\sd-webui-segment-everything\scripts\sam.py", line 20, in <module>
    from segment_anything import SamPredictor, build_sam
ModuleNotFoundError: No module named 'segment_anything'

Additional information

No response

Is it possible to link this with ControlNet's seg mode?

ControlNet's seg mode is very powerful for controlling composition and image content, but the color-to-semantics mapping of the seg model ControlNet uses seems different from segment-anything's, while segment-anything has much stronger recognition and segmentation ability. Would it be possible to use segment-anything to segment and recognize an image, and output a map that ControlNet's seg mode can use?

Note: this may be a very amateur question, just an intuitive doubt.

Thanks a lot!

[Feature]: Add inpaint not masked to controlnet as mask mode

Given that automatic1111 has an "inpaint not masked" mask mode, ControlNet should have it too. Since Segment Anything has a ControlNet option, there should be a mask mode when sending to ControlNet from SAM. That way I can mask the small part of the image that I do not want disturbed and change the rest of it with ControlNet.

[Feature]: Please put the model download directory under stable diffusion/models

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits of both this extension and the webui

Have you updated WebUI and this extension to the newest version?

  • I have updated WebUI and this extension to the most up-to-date version

Do you understand that you should go to https://github.com/IDEA-Research/Grounded-Segment-Anything/issues if you cannot install GroundingDINO?

  • My problem is not about installing GroundingDINO

Do you know that you should use the newest ControlNet extension and enable external control if you want SAM extension to control ControlNet?

  • I have updated ControlNet extension and enabled "Allow other script to control this extension"

What happened?

Several extensions all use seg and dino models... a single seg model is 2.5 GB, and downloading it several times over is too troublesome.
I currently place them in each extension via symbolic links, but I hope the models can eventually be migrated to the models folder, like ControlNet does.

Steps to reproduce the problem

Directory migration

What should have happened?

Directory migration

Commit where the problem happens

Directory migration

What browsers do you use to access the UI ?

No response

Command Line Arguments

Directory migration

Console logs

Directory migration

Additional information

No response

[Feature] Drop down menu for GroundingDINO?

The segment anything menu now looks fairly cluttered, and there is a lot of unnecessary space taken up by GroundingDINO even when it is not being used. This could be solved by putting GroundingDINO inside of a drop down menu like what is done with the ControlNet canvas sliders.

Two small questions about UI styling

1. Did you customize the bilingual UI style yourself? The bilingual extension in the extension index stacks the two languages vertically, which is not pretty; your style is laid out more sensibly.

2. Your extension's UI looks very tidy. Can the position and order of the extensions be changed?

[Bug]: Mac M2 issue:mlir module expected element type f32 but received si32

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits of both this extension and the webui

Have you updated WebUI and this extension to the newest version?

  • I have updated WebUI and this extension to the most up-to-date version

What happened?

/AppleInternal/Library/BuildRoots/9e200cfa-7d96-11ed-886f-a23c4f261b56/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphExecutable.mm:1309: failed assertion `Incompatible element type for parameter at index 0, mlir module expected element type f32 but received si32'

Steps to reproduce the problem

  1. add a black point
  2. click “preview segmentation”

What should have happened?

none

Commit where the problem happens

webui:
extension:

What browsers do you use to access the UI ?

No response

Command Line Arguments

none

Console logs

/AppleInternal/Library/BuildRoots/9e200cfa-7d96-11ed-886f-a23c4f261b56/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphExecutable.mm:1309: failed assertion `Incompatible element type for parameter at index 0, mlir module expected element type f32 but received si32'

Additional information

No response

Clicking the image does nothing

Clicking the image does nothing.
Deployed with Google Colab.
Hope someone can help take a look, thanks a lot!

[Bug]: Clicking preview does nothing

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits of both this extension and the webui

What happened?

Can upload pictures
Clicking preview does nothing
python version 3.10

In txt2img, left click adds a black dot, but right click also adds a black dot.
In img2img, left click and right click both show nothing.

Steps to reproduce the problem

  1. img2img
  2. upload image
  3. click

What should have happened?

Clicking preview should show something

Commit where the problem happens

webui: yes
controlnet:yes

What browsers do you use to access the UI ?

Google Chrome

Command Line Arguments

webui.sh --listen --share --xformers --enable-insecure-extension-access --disable-nan-check

Console logs

No information is shown about this extension.

stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/blocks.py", line 911, in preprocess_data
    processed_input.append(block.preprocess(inputs[i]))
IndexError: list index out of range

This is the error when uploading an image in the img2img box.

Additional information

none

ERROR: Could not build wheels for groundingdino, pycocotools, which is required to install pyproject.toml-based projects

pycocotools/_mask.c(6): fatal error C1083: Cannot open include file: 'Python.h': No such file or directory

  error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2022\\BuildTools\\VC\\Tools\\MSVC\\14.35.32215\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2

  [end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.

ERROR: Failed building wheel for pycocotools

ERROR: Could not build wheels for groundingdino, pycocotools, which is required to install pyproject.toml-based projects

Error when installing GroundingDINO with the 秋叶整合包 (Akegarasu integrated package)

 D:\tools\Stable diffusion\SD-webui-aki-v3\py310\lib\site-packages\torch\include\c10/util/Optional.h(554): note: see reference to alias template instantiation 'c10::OptionalBase<T>' being compiled

          with

          [

              T=std::vector<at::Tensor,std::allocator<at::Tensor>>

          ]

  D:\tools\Stable diffusion\SD-webui-aki-v3\py310\lib\site-packages\torch\include\torch\csrc\api\include\torch/optim/lbfgs.h(50): note: see reference to class template instantiation 'c10::optional<std::vector<at::Tensor,std::allocator<at::Tensor>>>' being compiled

  D:\tools\Stable diffusion\SD-webui-aki-v3\py310\lib\site-packages\torch\include\c10/util/Optional.h(446): warning C4624: 'c10::trivially_copyable_optimization_optional_base<T>': destructor was implicitly defined as deleted

          with

          [

              T=std::vector<at::Tensor,std::allocator<at::Tensor>>

          ]

  D:\tools\Stable diffusion\SD-webui-aki-v3\py310\lib\site-packages\torch\include\torch/csrc/python_headers.h(12): fatal error C1083: Cannot open include file: 'Python.h': No such file or directory

  error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2022\\BuildTools\\VC\\Tools\\MSVC\\14.35.32215\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2

  [end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.

ERROR: Failed building wheel for groundingdino

error: subprocess-exited-with-error

× Building wheel for pycocotools (pyproject.toml) did not run successfully.

│ exit code: 1

╰─> [23 lines of output]

  running bdist_wheel

  running build

  running build_py

  creating build

  creating build\lib.win-amd64-cpython-310

  creating build\lib.win-amd64-cpython-310\pycocotools

  copying pycocotools\coco.py -> build\lib.win-amd64-cpython-310\pycocotools

  copying pycocotools\cocoeval.py -> build\lib.win-amd64-cpython-310\pycocotools

  copying pycocotools\mask.py -> build\lib.win-amd64-cpython-310\pycocotools

  copying pycocotools\__init__.py -> build\lib.win-amd64-cpython-310\pycocotools

  running build_ext

  building 'pycocotools._mask' extension

  creating build\temp.win-amd64-cpython-310

  creating build\temp.win-amd64-cpython-310\Release

  creating build\temp.win-amd64-cpython-310\Release\common

  creating build\temp.win-amd64-cpython-310\Release\pycocotools

  "C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.35.32215\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD "-ID:\tools\Stable diffusion\SD-webui-aki-v3\py310\lib\site-packages\numpy\core\include" -I./common "-ID:\tools\Stable diffusion\SD-webui-aki-v3\py310\include" "-ID:\tools\Stable diffusion\SD-webui-aki-v3\py310\Include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.35.32215\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.35.32215\ATLMFC\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Auxiliary\VS\include" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22000.0\\um" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22000.0\\shared" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22000.0\\winrt" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22000.0\\cppwinrt" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um" /Tc./common/maskApi.c /Fobuild\temp.win-amd64-cpython-310\Release\./common/maskApi.obj

  maskApi.c

  ./common/maskApi.c(151): warning C4101: 'xp': unreferenced local variable
"C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.35.32215\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD "-ID:\tools\Stable diffusion\SD-webui-aki-v3\py310\lib\site-packages\numpy\core\include" -I./common "-ID:\tools\Stable diffusion\SD-webui-aki-v3\py310\include" "-ID:\tools\Stable diffusion\SD-webui-aki-v3\py310\Include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.35.32215\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.35.32215\ATLMFC\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Auxiliary\VS\include" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\cppwinrt" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um" /Tcpycocotools/_mask.c /Fobuild\temp.win-amd64-cpython-310\Release\pycocotools/_mask.obj
GroundingDINO install failed. Please submit an issue to https://github.com/IDEA-Research/Grounded-Segment-Anything/issues.

  _mask.c

  c1: fatal error C1083: Cannot open source file: 'pycocotools/_mask.c': No such file or directory

  error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2022\\BuildTools\\VC\\Tools\\MSVC\\14.35.32215\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2

  [end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.

ERROR: Failed building wheel for pycocotools

ERROR: Could not build wheels for pycocotools, which is required to install pyproject.toml-based projects

Help, does anyone know where the error is and how to fix it?

[Bug]: Can't add point prompts or pick a box threshold

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits of both this extension and the webui

Have you updated WebUI and this extension to the newest version?

  • I have updated WebUI and this extension to the most up-to-date version

Do you understand that you should go to https://github.com/IDEA-Research/Grounded-Segment-Anything/issues if you cannot install GroundingDINO?

  • My problem is not about installing GroundingDINO

What happened?

After dropping an image onto the Segment Anything tab, I can't add point prompts or pick a box threshold after checking Enable GroundingDINO.

Steps to reproduce the problem

  1. Go to img2img
  2. Open the Segment Anything tab
  3. Select a SAM model
  4. Drop an image under the model selection tab
  5. Try to click on the image with right and left mouse buttons
  6. Check Enable GroundingDINO.
  7. ???
  8. Profit

What should have happened?

Dots should appear on the image after clicking at it.

Commit where the problem happens

webui: latest A1111 as of 14.04.23
extension: sd-webui-segment-anything 16.04.23

What browsers do you use to access the UI ?

No response

Command Line Arguments

@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS= --xformers --no-half-vae
set ATTN_PRECISION=fp16
call webui.bat

Console logs

No errors when clicking on dropped image

This error after entering text in GroundingDINO Detection Prompt:

127.0.0.1/:1 Uncaught (in promise) API Error
Promise.then (async)
(anonymous) @ index.4395ab38.js:76
(anonymous) @ index.4395ab38.js:4
le @ index.4395ab38.js:4
x @ index.4395ab38.js:79
(anonymous) @ index.4395ab38.js:4
(anonymous) @ index.4395ab38.js:4
u @ index.4395ab38.js:78
t.$$.update @ index.4395ab38.js:78
ql @ index.4395ab38.js:4
bt @ index.4395ab38.js:4
Promise.then (async)
yo @ index.4395ab38.js:4
Yl @ index.4395ab38.js:4
(anonymous) @ index.4395ab38.js:4
g @ index.4395ab38.js:34
i @ index.4395ab38.js:34
(anonymous) @ index.4395ab38.js:4
S @ index.4395ab38.js:79
i @ index.4395ab38.js:79
(anonymous) @ index.4395ab38.js:4
k @ index.4395ab38.js:78

Additional information

No response

Update Mask errors out in the new version

Notice: the Python runtime threw an exception. Please check the troubleshooting page.
Dilation Amount: 18
Traceback (most recent call last):
File "C:\novelai-webui-aki\py310\lib\site-packages\gradio\routes.py", line 394, in run_predict
output = await app.get_blocks().process_api(
File "C:\novelai-webui-aki\py310\lib\site-packages\gradio\blocks.py", line 1075, in process_api
result = await self.call_function(
File "C:\novelai-webui-aki\py310\lib\site-packages\gradio\blocks.py", line 884, in call_function
prediction = await anyio.to_thread.run_sync(
File "C:\novelai-webui-aki\py310\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "C:\novelai-webui-aki\py310\lib\site-packages\anyio_backends_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "C:\novelai-webui-aki\py310\lib\site-packages\anyio_backends_asyncio.py", line 867, in run
result = context.run(func, *args)
File "C:\novelai-webui-aki\extensions\sd-webui-segment-anything\scripts\sam.py", line 63, in update_mask
binary_img = np.array(mask_image.convert('1'))
AttributeError: 'list' object has no attribute 'convert'
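The traceback shows `update_mask` receiving a list where it expects a single PIL image, most likely because a Gradio gallery component hands back a list of images. A hedged sketch of a defensive unwrap (the helper name `to_binary_mask` is made up for illustration; only the final `convert('1')` call mirrors the failing line in sam.py):

```python
# Sketch of a defensive fix for the AttributeError above: unwrap a
# gallery-style list before calling PIL's convert(). Hypothetical helper,
# not the extension's actual code.
import numpy as np
from PIL import Image

def to_binary_mask(mask_image):
    if isinstance(mask_image, (list, tuple)):    # Gradio may return a list of images
        mask_image = mask_image[0]
    if not isinstance(mask_image, Image.Image):  # tolerate raw arrays as well
        mask_image = Image.fromarray(np.asarray(mask_image, dtype=np.uint8))
    return np.array(mask_image.convert("1"))     # 1-bit binary mask, as in sam.py
```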

[Bug]: Images cannot be saved after img2img inpainting with copy to inpaint

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits of both this extension and the webui

Have you updated WebUI and this extension to the newest version?

  • I have updated WebUI and this extension to the most up-to-date version

What happened?

After checking copy to inpaint and running the inpaint, saving the generated image raises an error and the image is not saved correctly. img2img without this extension saves fine, but once the error has occurred, saving keeps failing afterwards even when this extension is not used.

Steps to reproduce the problem

  1. Upload an image to img2img
  2. Upload the same image to the Segment Anything extension and generate a mask
  3. Check copy to inpaint
  4. Fill in tags and parameters, then run the inpaint
  5. The image is regenerated correctly
  6. Click the save button; an error occurs and the image is not saved

What should have happened?

The image should have been saved correctly to the specified directory.

Commit where the problem happens

webui: sd-webui
extension: multidiffusion-upscaler-for-automatic1111 (used at the same time as this extension)

What browsers do you use to access the UI ?

Google Chrome

Command Line Arguments

NO

Console logs

Initializing SAM
Running SAM Inference (1024, 512, 3)
Creating output image
Initializing SAM
Running SAM Inference (1024, 512, 3)
Creating output image
Initializing SAM
Running SAM Inference (1024, 512, 3)
Creating output image
[Tiled VAE] VAE is on CPU. Please enable 'Move VAE to GPU' to use Tiled VAE.

Error completing request
Arguments: ('{"prompt": "blue hair,(pixel art:1.4), (retro aesthetics:1.2), nostalgic charm, blocky textures, limited color palette, digital design, 8-bit style", "all_prompts": ["blue hair,(pixel art:1.4), (retro aesthetics:1.2), nostalgic charm, blocky textures, limited color palette, digital design, 8-bit style"], "negative_prompt": "(EasyNegative),(worst quality, low quality:1.4), (bad anatomy), (inaccurate limb:1.2),poorly eyes, extra digit,fewer digits,six fingers,(extra arms,extra legs:1.2),text,cropped,jpegartifacts,(signature), (watermark), username,blurry,more than five fingers in one palm,no thumb,no nails, title, multiple view, Reference sheet, curvy, plump, fat, muscular female, strabismus,", "all_negative_prompts": ["(EasyNegative),(worst quality, low quality:1.4), (bad anatomy), (inaccurate limb:1.2),poorly eyes, extra digit,fewer digits,six fingers,(extra arms,extra legs:1.2),text,cropped,jpegartifacts,(signature), (watermark), username,blurry,more than five fingers in one palm,no thumb,no nails, title, multiple view, Reference sheet, curvy, plump, fat, muscular female, strabismus,"], "seed": 1033529053, "all_seeds": [1033529053], "subseed": 374074384, "all_subseeds": [374074384], "subseed_strength": 0, "width": 512, "height": 1024, "sampler_name": "DPM++ 2M Karras", "cfg_scale": 9, "steps": 70, "batch_size": 1, "restore_faces": false, "face_restoration_model": null, "sd_model_hash": "1d1e459f9f", "seed_resize_from_w": 0, "seed_resize_from_h": 0, "denoising_strength": 0.6, "extra_generation_params": {}, "index_of_first_image": 0, "infotexts": ["blue hair,(pixel art:1.4), (retro aesthetics:1.2), nostalgic charm, blocky textures, limited color palette, digital design, 8-bit style\\nNegative prompt: (EasyNegative),(worst quality, low quality:1.4), (bad anatomy), (inaccurate limb:1.2),poorly eyes, extra digit,fewer digits,six fingers,(extra arms,extra legs:1.2),text,cropped,jpegartifacts,(signature), (watermark), username,blurry,more than five fingers in one palm,no thumb,no nails, title, multiple view, Reference sheet, curvy, plump, fat, muscular female, strabismus,\\nSteps: 70, Sampler: DPM++ 2M Karras, CFG scale: 9, Seed: 1033529053, Size: 512x1024, Model hash: 1d1e459f9f, Denoising strength: 0.6, Clip skip: 2, ENSD: 31337"], "styles": [], "job_timestamp": "20230415105520", "clip_skip": 2, "is_using_inpainting_conditioning": false}', [{'name': 'C:\\Users\\admin\\AppData\\Local\\Temp\\tmps6i6f9i1.png', 'data': 'http://127.0.0.1:7860/file=C:\\Users\\admin\\AppData\\Local\\Temp\\tmps6i6f9i1.png', 'is_file': True}], False, 5) {}
Traceback (most recent call last):
  File "F:\SD_WebUI_launcher\modules\call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "F:\SD_WebUI_launcher\modules\ui_common.py", line 56, in save_files
    images = [images[index]]
IndexError: list index out of range
Notice: the Python runtime threw an exception. Please check the troubleshooting page.

Additional information

My guess is that it may be related to the generated mask image?
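For context, the `IndexError` in `save_files` happens when the gallery index the browser sends no longer matches the number of images the server recorded for that generation. A minimal illustration of the mismatch (plain Python with hypothetical values, not the WebUI's actual code):

```python
# Illustration only: the server kept one image, but the page (whose gallery
# was replaced by an extension) asks to save a later index, so images[index]
# raises exactly like the failing line in ui_common.py.
images = ["final.png"]          # what the server recorded
index = 5                       # what the stale gallery sent

try:
    selected = [images[index]]
except IndexError:
    print(f"list index out of range: index {index}, {len(images)} image(s) recorded")
```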

GroundingDINO install fails with the aki all-in-one package: cannot install groundingdino: Command '['ninja', '-v']' returned non-zero exit status 1.

Note: including file: D:\novelai-webui-aki-v3\py310\lib\site-packages\torch\include\ATen/ops/special_airy_ai.h

      with
      [
          T=c10::SymInt
      ]

D:/novelai-webui-aki-v3/py310/lib/site-packages/torch/include\c10/util/Optional.h(549): note: see reference to alias template instantiation 'c10::OptionalBase<c10::SymInt>' being compiled

D:/novelai-webui-aki-v3/py310/lib/site-packages/torch/include\c10/core/TensorImpl.h(1602): note: see reference to class template instantiation 'c10::optional<c10::SymInt>' being compiled

D:/novelai-webui-aki-v3/py310/lib/site-packages/torch/include\c10/util/Optional.h(446): warning C4624: 'c10::trivially_copyable_optimization_optional_base<T>': destructor was implicitly defined as deleted
      with [ T=c10::SymInt ]

D:/novelai-webui-aki-v3/py310/lib/site-packages/torch/include\c10/util/Optional.h(212): warning C4624: 'c10::constexpr_storage_t<T>': destructor was implicitly defined as deleted
D:/novelai-webui-aki-v3/py310/lib/site-packages/torch/include\c10/util/Optional.h(411): note: see reference to class template instantiation 'c10::constexpr_storage_t<T>' being compiled
D:/novelai-webui-aki-v3/py310/lib/site-packages/torch/include\c10/util/Optional.h(549): note: see reference to class template instantiation 'c10::trivially_copyable_optimization_optional_base<T>' being compiled
D:/novelai-webui-aki-v3/py310/lib/site-packages/torch/include\c10/util/Optional.h(549): note: see reference to alias template instantiation 'c10::OptionalBase<T>' being compiled
D:/novelai-webui-aki-v3/py310/lib/site-packages/torch/include\c10/util/Optional.h(446): warning C4624: 'c10::trivially_copyable_optimization_optional_base<T>': destructor was implicitly defined as deleted

[the identical C4624 warning/note block repeats for each of
      T=std::basic_string<char,std::char_traits<char>,std::allocator<char>> (via ATen/core/jit_type_base.h(452)),
      T=c10::QualifiedName (via ATen/core/jit_type_base.h(700)),
      T=at::TensorBase (via ATen/core/TensorBase.h(933)),
      T=at::Tensor (via ATen/core/TensorBody.h(518)),
      T=at::Generator (via ATen/core/TensorBody.h(597)),
      T=c10::Scalar (via ATen/core/TensorBody.h(628)),
      T=std::shared_ptr<torch::jit::CompilationUnit> (via ATen/core/ivalue.h(1437)),
      T=std::weak_ptr<torch::jit::CompilationUnit> (via ATen/core/ivalue.h(1438)),
      T=std::vector<c10::ShapeSymbol,std::allocator<c10::ShapeSymbol>> (via ATen/core/jit_type.h(484)),
      T=std::vector<bool,std::allocator<bool>> (via ATen/core/jit_type.h(443)),
      T=std::vector<c10::optional<c10::Stride>,std::allocator<c10::optional<c10::Stride>>> (via ATen/core/jit_type.h(569)),
      T=std::vector<c10::optional<__int64>,std::allocator<c10::optional<__int64>>> (via ATen/core/jit_type.h(615)),
      T=std::vector<__int64,std::allocator<__int64>> (via ATen/core/jit_type.h(728)),
      T=c10::impl::InlineDeviceGuard<c10::impl::VirtualGuardImpl> (via c10/core/DeviceGuard.h(178)),
      T=c10::impl::InlineStreamGuard<c10::impl::VirtualGuardImpl> (via c10/core/StreamGuard.h(139)),
      T=c10::impl::VirtualGuardImpl (via c10/core/StreamGuard.h(162)),
      T=std::vector<c10::weak_intrusive_ptr<c10::StorageImpl,c10::detail::intrusive_target_default_null_type<c10::StorageImpl>>,std::allocator<c10::weak_intrusive_ptr<c10::StorageImpl,c10::detail::intrusive_target_default_null_type<c10::StorageImpl>>>> (via ATen/core/ivalue_inl.h(884)),
      and T=c10::SmallVector<__int64,5> (via ATen/TensorIterator.h(918))]

ninja: build stopped: subcommand failed.

Traceback (most recent call last):

File "D:\novelai-webui-aki-v3\py310\lib\site-packages\torch\utils\cpp_extension.py", line 1893, in _run_ninja_build

  subprocess.run(

File "subprocess.py", line 526, in run

subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):

File "<string>", line 2, in <module>

File "<pip-setuptools-caller>", line 34, in <module>

File "C:\Users\80450\AppData\Local\Temp\pip-req-build-rp9ihets\setup.py", line 192, in <module>

  setup(

File "D:\novelai-webui-aki-v3\py310\lib\site-packages\setuptools\__init__.py", line 87, in setup

  return distutils.core.setup(**attrs)

File "D:\novelai-webui-aki-v3\py310\lib\site-packages\setuptools\_distutils\core.py", line 185, in setup

  return run_commands(dist)

File "D:\novelai-webui-aki-v3\py310\lib\site-packages\setuptools\_distutils\core.py", line 201, in run_commands

  dist.run_commands()

File "D:\novelai-webui-aki-v3\py310\lib\site-packages\setuptools\_distutils\dist.py", line 969, in run_commands

  self.run_command(cmd)

File "D:\novelai-webui-aki-v3\py310\lib\site-packages\setuptools\dist.py", line 1217, in run_command

  super().run_command(command)

File "D:\novelai-webui-aki-v3\py310\lib\site-packages\setuptools\_distutils\dist.py", line 988, in run_command

  cmd_obj.run()

File "D:\novelai-webui-aki-v3\py310\lib\site-packages\setuptools\command\install.py", line 68, in run

  return orig.install.run(self)

File "D:\novelai-webui-aki-v3\py310\lib\site-packages\setuptools\_distutils\command\install.py", line 698, in run

  self.run_command('build')

File "D:\novelai-webui-aki-v3\py310\lib\site-packages\setuptools\_distutils\cmd.py", line 318, in run_command

  self.distribution.run_command(command)

File "D:\novelai-webui-aki-v3\py310\lib\site-packages\setuptools\dist.py", line 1217, in run_command

  super().run_command(command)

File "D:\novelai-webui-aki-v3\py310\lib\site-packages\setuptools\_distutils\dist.py", line 988, in run_command

  cmd_obj.run()

File "D:\novelai-webui-aki-v3\py310\lib\site-packages\setuptools\_distutils\command\build.py", line 132, in run

  self.run_command(cmd_name)

File "D:\novelai-webui-aki-v3\py310\lib\site-packages\setuptools\_distutils\cmd.py", line 318, in run_command

  self.distribution.run_command(command)

File "D:\novelai-webui-aki-v3\py310\lib\site-packages\setuptools\dist.py", line 1217, in run_command

  super().run_command(command)

File "D:\novelai-webui-aki-v3\py310\lib\site-packages\setuptools\_distutils\dist.py", line 988, in run_command

  cmd_obj.run()

File "D:\novelai-webui-aki-v3\py310\lib\site-packages\setuptools\command\build_ext.py", line 84, in run

  _build_ext.run(self)

File "D:\novelai-webui-aki-v3\py310\lib\site-packages\Cython\Distutils\old_build_ext.py", line 186, in run

  _build_ext.build_ext.run(self)

File "D:\novelai-webui-aki-v3\py310\lib\site-packages\setuptools\_distutils\command\build_ext.py", line 346, in run

  self.build_extensions()

File "D:\novelai-webui-aki-v3\py310\lib\site-packages\torch\utils\cpp_extension.py", line 843, in build_extensions

  build_ext.build_extensions(self)

File "D:\novelai-webui-aki-v3\py310\lib\site-packages\Cython\Distutils\old_build_ext.py", line 195, in build_extensions

  _build_ext.build_ext.build_extensions(self)

File "D:\novelai-webui-aki-v3\py310\lib\site-packages\setuptools\_distutils\command\build_ext.py", line 468, in build_extensions

  self._build_extensions_serial()

File "D:\novelai-webui-aki-v3\py310\lib\site-packages\setuptools\_distutils\command\build_ext.py", line 494, in _build_extensions_serial

  self.build_extension(ext)

File "D:\novelai-webui-aki-v3\py310\lib\site-packages\setuptools\command\build_ext.py", line 246, in build_extension

  _build_ext.build_extension(self, ext)

File "D:\novelai-webui-aki-v3\py310\lib\site-packages\setuptools\_distutils\command\build_ext.py", line 549, in build_extension

  objects = self.compiler.compile(

File "D:\novelai-webui-aki-v3\py310\lib\site-packages\torch\utils\cpp_extension.py", line 815, in win_wrap_ninja_compile

  _write_ninja_file_and_compile_objects(

File "D:\novelai-webui-aki-v3\py310\lib\site-packages\torch\utils\cpp_extension.py", line 1574, in _write_ninja_file_and_compile_objects

  _run_ninja_build(

File "D:\novelai-webui-aki-v3\py310\lib\site-packages\torch\utils\cpp_extension.py", line 1909, in _run_ninja_build

  raise RuntimeError(message) from e

RuntimeError: Error compiling objects for extension
Notice: the Python runtime threw an exception. Please check the troubleshooting page.

[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.

error: legacy-install-failure

× Encountered error while trying to install package.

╰─> groundingdino

note: This is an issue with the package mentioned above, not pip.

hint: See above for output from the failure.

None
GroundingDINO install failed. Please submit an issue to https://github.com/IDEA-Research/Grounded-Segment-Anything/issues.

I'm using the latest version of the latest aki all-in-one package, gradio = 3.23.0, WebUI = 22bcc7be.
C:\Users\80450>python
Python 3.10.5 (tags/v3.10.5:f377153, Jun 6 2022, 16:14:13) [MSC v.1929 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.

>>> from torch.utils.cpp_extension import CUDA_HOME
>>> print(CUDA_HOME)
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8
So CUDA is the correct version as well, yet installing GroundingDINO now fails every time.
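Since `CUDA_HOME` is set, the remaining suspects are on the C++ side of the toolchain. A quick probe (a sketch; `shutil.which` only reports what is on PATH, it does not validate versions) of everything torch's `cpp_extension` needs to build GroundingDINO's CUDA op on Windows:

```python
# Hedged environment probe: torch.utils.cpp_extension needs CUDA_HOME plus a
# working MSVC cl.exe and ninja on PATH to compile GroundingDINO on Windows.
import shutil

from torch.utils.cpp_extension import CUDA_HOME

print("CUDA_HOME :", CUDA_HOME)
print("cl.exe    :", shutil.which("cl"))     # None => MSVC not on PATH
print("ninja     :", shutil.which("ninja"))  # None => ninja not installed
```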

list index out of range

Traceback (most recent call last):
File "E:\stable-diffusion-webui_23-02-27_onedrive\python\lib\site-packages\gradio\routes.py", line 337, in run_predict
output = await app.get_blocks().process_api(
File "E:\stable-diffusion-webui_23-02-27_onedrive\python\lib\site-packages\gradio\blocks.py", line 1013, in process_api
inputs = self.preprocess_data(fn_index, inputs, state)
File "E:\stable-diffusion-webui_23-02-27_onedrive\python\lib\site-packages\gradio\blocks.py", line 911, in preprocess_data
processed_input.append(block.preprocess(inputs[i]))
IndexError: list index out of range
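This `preprocess_data` failure is Gradio indexing the browser-supplied inputs positionally; when the page was built by a different version of the UI than the running server, fewer inputs arrive than the server expects. A minimal illustration (hypothetical lists, not Gradio's real data structures):

```python
# Illustration of the IndexError above: the server expects three inputs but a
# stale page only sent two, so indexing the third one fails. Reloading the
# page after updating the extension keeps the two sides in sync.
expected_inputs = ["image", "sam_model", "box_threshold"]  # server-side blocks
received_inputs = ["image", "sam_model"]                   # stale browser page

try:
    processed = [received_inputs[i] for i in range(len(expected_inputs))]
except IndexError:
    print("list index out of range: reload the page after updating the extension")
```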

[Bug]: NameError: name '_C' is not defined

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits of both this extension and the webui

Have you updated WebUI and this extension to the newest version?

  • I have updated WebUI and this extension to the most up-to-date version

Do you understand that you should go to https://github.com/IDEA-Research/Grounded-Segment-Anything/issues if you cannot install GroundingDINO?

  • My problem is not about installing GroundingDINO

What happened?

Trying to use the GroundingDINO mode, I get an error when I press "Generate bounding box". The error is:

C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\transformers\modeling_utils.py:768: FutureWarning: The `device` argument is deprecated and will be removed in v5 of Transformers.
  warnings.warn(
Traceback (most recent call last):
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 394, in run_predict
    output = await app.get_blocks().process_api(
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1075, in process_api    result = await self.call_function(
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 884, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\extensions\sd-webui-segment-anything\scripts\sam.py", line 208, in dino_predict
    boxes_filt, install_success = dino_predict_internal(input_image, dino_model_name, text_prompt, box_threshold)
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\extensions\sd-webui-segment-anything\scripts\dino.py", line 138, in dino_predict_internal
    boxes_filt = get_grounding_output(
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\extensions\sd-webui-segment-anything\scripts\dino.py", line 114, in get_grounding_output
    outputs = model(image[None], captions=[caption])
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\groundingdino\models\GroundingDINO\groundingdino.py", line 313, in forward
    hs, reference, hs_enc, ref_enc, init_box_proposal = self.transformer(
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\groundingdino\models\GroundingDINO\transformer.py", line 258, in forward
    memory, memory_text = self.encoder(
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\groundingdino\models\GroundingDINO\transformer.py", line 576, in forward
    output = checkpoint.checkpoint(
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\utils\checkpoint.py", line 249, in checkpoint
    return CheckpointFunction.apply(function, preserve, *args)
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\utils\checkpoint.py", line 107, in forward
    outputs = run_function(*args)
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\groundingdino\models\GroundingDINO\transformer.py", line 785, in forward
    src2 = self.self_attn(
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\groundingdino\models\GroundingDINO\ms_deform_attn.py", line 338, in forward
    output = MultiScaleDeformableAttnFunction.apply(
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\groundingdino\models\GroundingDINO\ms_deform_attn.py", line 53, in forward
    output = _C.ms_deform_attn_forward(
NameError: name '_C' is not defined
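`_C` is GroundingDINO's compiled C++/CUDA extension module; the `NameError` means its import failed at load time (the failure is swallowed), so the name is never bound by the time `ms_deform_attn_forward` runs. A small probe to confirm (a sketch; run it inside the WebUI's venv):

```python
# Hedged probe: if this import fails, GroundingDINO was installed without its
# compiled ops, which is exactly what produces "NameError: name '_C' is not
# defined" in ms_deform_attn.py.
try:
    from groundingdino import _C  # compiled MultiScaleDeformableAttention op
    print("GroundingDINO compiled ops available:", _C.__name__)
except ImportError as err:
    print("compiled ops missing; rebuild GroundingDINO or use local GroundingDINO:", err)
```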

Steps to reproduce the problem

  1. Load an image in Segment Anything
  2. Select a SAM model
  3. Click "Enable GroundingDINO"
  4. Select the GroundingDINO model
  5. Write a GroundingDINO Detection Prompt
  6. Select "I want to preview GroundingDINO detection result and select the boxes I want."
  7. Click "Generate bounding box"

What should have happened?

The bounding box preview images should have been generated.

Commit where the problem happens

webui: 22bcc7be428c94e9408f589966c2040187245d81
extension: 1664834

What browsers do you use to access the UI ?

Google Chrome

Command Line Arguments

Launching Web UI with arguments: --xformers --api --gradio-img2img-tool color-sketch

Console logs

venv "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Commit hash: 22bcc7be428c94e9408f589966c2040187245d81
Installing requirements for Web UI
Installing None
Installing onnxruntime-gpu...
Installing None
Installing opencv-python...
Installing None
Installing Pillow...



Installing sd-webui-controlnet requirement: fvcore
Installing sd-webui-controlnet requirement: pycocotools



Launching Web UI with arguments: --xformers --api --gradio-img2img-tool color-sketch
Loading weights [f93e6a50ac] from C:\Users\Jyce\Desktop\stable-diffusion-webui\models\Stable-diffusion\uberRealisticPornMerge_urpmv13.safetensors
Creating model from config: C:\Users\Jyce\Desktop\stable-diffusion-webui\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Applying xformers cross attention optimization.
Textual inversion embeddings loaded(0):
Textual inversion embeddings skipped(1): nrealfixer
Model loaded in 5.7s (load weights from disk: 0.2s, create model: 0.4s, apply weights to model: 2.8s, apply half(): 0.5s, move model to device: 0.7s, load textual inversion embeddings: 1.0s).
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 15.7s (import torch: 1.5s, import gradio: 0.9s, import ldm: 0.6s, other imports: 1.0s, setup codeformer: 0.1s, load scripts: 3.2s, load SD checkpoint: 6.1s, create ui: 1.9s, gradio launch: 0.2s).
Installing sd-webui-segment-anything requirement: groundingdino
GroundingDINO install success.
Running GroundingDINO Inference
Initializing GroundingDINO GroundingDINO_SwinT_OGC (694MB)
final text_encoder_type: bert-base-uncased
Downloading (…)/main/tokenizer.json: 100%|██████████████████████████████████████████| 466k/466k [00:00<00:00, 6.83MB/s]
Downloading model.safetensors: 100%|████████████████████████████████████████████████| 440M/440M [00:06<00:00, 73.0MB/s]
Downloading: "https://huggingface.co/ShilongLiu/GroundingDINO/resolve/main/groundingdino_swint_ogc.pth" to C:\Users\Jyce\Desktop\stable-diffusion-webui\extensions\sd-webui-segment-anything\models/grounding-dino\groundingdino_swint_ogc.pth
100%|███████████████████████████████████████████████████████████████████████████████| 662M/662M [00:09<00:00, 69.8MB/s]
C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\transformers\modeling_utils.py:768: FutureWarning: The `device` argument is deprecated and will be removed in v5 of Transformers.
  warnings.warn(
Traceback (most recent call last):
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 394, in run_predict
    output = await app.get_blocks().process_api(
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1075, in process_api    result = await self.call_function(
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 884, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\extensions\sd-webui-segment-anything\scripts\sam.py", line 208, in dino_predict
    boxes_filt, install_success = dino_predict_internal(input_image, dino_model_name, text_prompt, box_threshold)
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\extensions\sd-webui-segment-anything\scripts\dino.py", line 138, in dino_predict_internal
    boxes_filt = get_grounding_output(
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\extensions\sd-webui-segment-anything\scripts\dino.py", line 114, in get_grounding_output
    outputs = model(image[None], captions=[caption])
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\groundingdino\models\GroundingDINO\groundingdino.py", line 313, in forward
    hs, reference, hs_enc, ref_enc, init_box_proposal = self.transformer(
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\groundingdino\models\GroundingDINO\transformer.py", line 258, in forward
    memory, memory_text = self.encoder(
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\groundingdino\models\GroundingDINO\transformer.py", line 576, in forward
    output = checkpoint.checkpoint(
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\utils\checkpoint.py", line 249, in checkpoint
    return CheckpointFunction.apply(function, preserve, *args)
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\utils\checkpoint.py", line 107, in forward
    outputs = run_function(*args)
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\groundingdino\models\GroundingDINO\transformer.py", line 785, in forward
    src2 = self.self_attn(
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\groundingdino\models\GroundingDINO\ms_deform_attn.py", line 338, in forward
    output = MultiScaleDeformableAttnFunction.apply(
  File "C:\Users\Jyce\Desktop\stable-diffusion-webui\venv\lib\site-packages\groundingdino\models\GroundingDINO\ms_deform_attn.py", line 53, in forward
    output = _C.ms_deform_attn_forward(
NameError: name '_C' is not defined

Additional information

No response
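
A hedged note on the error above: in GroundingDINO, `_C` is the compiled C++/CUDA extension, and this NameError usually means that extension failed to build or import. Assuming the venv layout shown in the logs, importing it directly surfaces the real build error instead of the opaque NameError at inference time:

# diagnostic sketch: run inside the webui venv's Python
from groundingdino import _C  # raises the underlying ImportError if the C++ extension is broken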

How do I save a truly transparent PNG?

The mask result computed by segment anything looks, to the naked eye, like a PNG with a fully transparent background.
[image]
But if I erase the extra selected content in Photoshop and then put it into img2img for redrawing, the PNG turns out to have become an ordinary "picture" (using the ControlNet lineart here makes it easier to see).
[screenshot: 企业微信截图_16819731489035]
A directly saved, unmodified PNG does not have this problem (see image).
[image]

The segmentation result is not always completely accurate and needs post-editing. Is there a way to solve this problem?
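
A hedged suggestion for checking whether an external editor flattened the alpha channel (file names here are hypothetical):

from PIL import Image

img = Image.open("edited_output.png")   # hypothetical path to the edited PNG
print(img.mode)                         # should print "RGBA" if transparency survived
if img.mode == "RGBA":
    alpha = img.getchannel("A")
    print(alpha.getextrema())           # a minimum of 0 means fully transparent pixels remain

If the mode comes back as "RGB", the editor exported a flattened copy; re-exporting with the alpha channel enabled (for example, PNG with transparency in Photoshop's export dialog) should restore the behavior of the unmodified PNG.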

[Feature]: Threshold, padding and smoothing.

First, kudos and thanks for getting SAM working with the webui.

If one could make a request, it'd be nice to have the ability to control

  1. Threshold (strong or weak) of the selection.
  2. Padding around the selection.
  3. Smoothing of the selection.

There could be sliders for each of these parameters.
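
As a rough sketch of what each slider could map to, assuming standard OpenCV operations on a grayscale mask (file names and values below are hypothetical):

import cv2
import numpy as np

mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)  # hypothetical SAM mask

# 1. threshold: keep only strong (white) responses; raising 127 makes the selection "stronger"
_, mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)

# 2. padding: grow the selection outward by roughly `pad` pixels
pad = 10
mask = cv2.dilate(mask, np.ones((pad, pad), np.uint8), iterations=1)

# 3. smoothing: blur the boundary, then re-binarize for a softer, rounded edge
mask = cv2.GaussianBlur(mask, (21, 21), 0)
_, mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)

cv2.imwrite("mask_processed.png", mask)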

Newbie question: why is this module missing? (AMD user)

Python version 3.10.10 (tags/v3.10.10:aad5f6a, Feb 7 2023, 17:20:36) [MSC v.1929 64 bit (AMD64)]
Commit hash: 3715ece0adce7bf7c5e9c5ab3710b2fdc3848f39

This package is produced by the NovelAI Chinese channel; resale is strictly prohibited.

Launching WebUI...

Web UI launch arguments: --autolaunch --no-half --precision full --opt-sub-quad-attention
Warning: caught exception 'Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx', memory monitor disabled
No module 'xformers'. Proceeding without it.
Civitai Helper: Get Custom Model Folder
Civitai Helper: Load setting from: D:\stable-diffusion-webui\extensions\Stable-Diffusion-Webui-Civitai-Helper-main\setting.json
Error loading script: api.py
Traceback (most recent call last):
  File "D:\stable-diffusion-webui\modules\scripts.py", line 229, in load_scripts
    script_module = script_loading.load_module(scriptfile.path)
  File "D:\stable-diffusion-webui\modules\script_loading.py", line 11, in load_module
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "D:\stable-diffusion-webui\extensions\sd-webui-segment-anything\scripts\api.py", line 9, in <module>
    from scripts.sam import init_sam_model, dilate_mask, sam_predict, sam_model_list
  File "D:\stable-diffusion-webui\extensions\sd-webui-segment-anything\scripts\sam.py", line 17, in <module>
    from segment_anything import SamPredictor, sam_model_registry
ModuleNotFoundError: No module named 'segment_anything'

Error loading script: sam.py
Traceback (most recent call last):
  File "D:\stable-diffusion-webui\modules\scripts.py", line 229, in load_scripts
    script_module = script_loading.load_module(scriptfile.path)
  File "D:\stable-diffusion-webui\modules\script_loading.py", line 11, in load_module
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "D:\stable-diffusion-webui\extensions\sd-webui-segment-anything\scripts\sam.py", line 17, in <module>
    from segment_anything import SamPredictor, sam_model_registry
ModuleNotFoundError: No module named 'segment_anything'

SD-Webui API layer loaded
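
A hedged note on the errors above: the bundled venv appears to be missing the segment_anything package. The official repository documents installing it with pip, which may be worth trying from the package's own Python environment:

pip install git+https://github.com/facebookresearch/segment-anything.git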

[Bug]: GroundingDINO doesn't respect `--device-id` flag

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits of both this extension and the webui

Have you updated WebUI and this extension to the newest version?

  • I have updated WebUI and this extension to the most up-to-date version

Do you understand that you should go to https://github.com/IDEA-Research/Grounded-Segment-Anything/issues if you cannot install GroundingDINO?

  • My problem is not about installing GroundingDINO

Do you know that you should use the newest ControlNet extension and enable external control if you want SAM extension to control ControlNet?

  • I have updated ControlNet extension and enabled "Allow other script to control this extension"

What happened?

GroundingDINO always accesses GPU 0 even if --device-id is set to a non-zero value, and triggers an illegal-memory-access CUDA error when you generate bounding boxes again.

Steps to reproduce the problem

  1. Start WebUI on multi-GPU server with non-zero GPU ID, such as ./webui.sh --device-id 1
  2. Check Enable GroundingDINO
  3. Select model, enter some prompts
  4. Check I want to preview GroundingDINO detection result and select the boxes I want.
  5. Click Generate bounding box
  6. Wait until finished
  7. Click Generate bounding box again
  8. You should notice error logs in the terminal: RuntimeError: CUDA error: an illegal memory access was encountered
  9. Run nvidia-smi in another terminal; you should notice a process named python3 using both GPU 0 and the GPU you specified in step 1.

What should have happened?

GroundingDINO should not access GPU 0 at any moment.
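
A hedged sketch of how an extension could avoid touching GPU 0: load checkpoints straight onto the configured device and pin the default CUDA device, so no context is ever created on cuda:0 (the device index and checkpoint path are placeholders, and `dino` stands for the model built earlier via build_model):

import torch

device = torch.device("cuda:7")  # hypothetical: whatever --device-id resolves to
torch.cuda.set_device(device)    # make it the default so stray .cuda() calls land here

# map_location keeps the weights off GPU 0 during deserialization
state = torch.load("groundingdino_swinb_cogcoor.pth", map_location=device)
dino.load_state_dict(state["model"], strict=False)  # `dino` assumed built beforehand
dino.to(device)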

Commit where the problem happens

webui: 22bcc7be428c94e9408f589966c2040187245d81
extension: 724b4db

What browsers do you use to access the UI ?

Google Chrome

Command Line Arguments

cmdline:

./webui.sh -f --listen --device-id 7

modified webui-user.sh:

install_dir="/mnt"

I'm running WebUI inside a docker container with:

docker run --name stable-diffusion -it --runtime nvidia --gpus all --ipc host -v ${HOME}:/mnt -p 7860:7860 pytorch/pytorch:1.13.1-cuda11.6-cudnn8-devel

Console logs

Launching Web UI with arguments: -f --listen --device-id 3
No module 'xformers'. Proceeding without it.
Loading weights [1a189f0be6] from /mnt/stable-diffusion-webui/models/Stable-diffusion/sdv1-5-pruned.safetensors
Creating model from config: /mnt/stable-diffusion-webui/configs/v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Applying cross attention optimization (Doggettx).
Textual inversion embeddings loaded(0):
Model loaded in 2.1s (load weights from disk: 0.6s, create model: 0.4s, apply weights to model: 0.2s, apply half(): 0.2s, load VAE: 0.2s, move model to device: 0.4s).
Running on local URL:  http://0.0.0.0:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 9.4s (import torch: 1.0s, import gradio: 1.1s, import ldm: 1.4s, other imports: 1.9s, load scripts: 1.1s, load SD checkpoint: 2.2s, create ui: 0.5s, gradio launch: 0.1s).
Start SAM Processing
Running GroundingDINO Inference
Initializing GroundingDINO GroundingDINO_SwinB (938MB)
final text_encoder_type: bert-base-uncased
/opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py:768: FutureWarning: The `device` argument is deprecated and will be removed in v5 of Transformers.
  warnings.warn(
Initializing SAM
Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/gradio/routes.py", line 394, in run_predict
    output = await app.get_blocks().process_api(
  File "/opt/conda/lib/python3.10/site-packages/gradio/blocks.py", line 1075, in process_api
    result = await self.call_function(
  File "/opt/conda/lib/python3.10/site-packages/gradio/blocks.py", line 884, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/opt/conda/lib/python3.10/site-packages/anyio/to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "/opt/conda/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "/opt/conda/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "/mnt/stable-diffusion-webui/extensions/sd-webui-segment-anything/scripts/sam.py", line 161, in sam_predict
    sam = init_sam_model(sam_model_name)
  File "/mnt/stable-diffusion-webui/extensions/sd-webui-segment-anything/scripts/sam.py", line 130, in init_sam_model
    sam_model_cache[sam_model_name] = load_sam_model(sam_model_name)
  File "/mnt/stable-diffusion-webui/extensions/sd-webui-segment-anything/scripts/sam.py", line 56, in load_sam_model
    sam.to(device=device)
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 989, in to
    return self._apply(convert)
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 641, in _apply
    module._apply(fn)
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 641, in _apply
    module._apply(fn)
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 641, in _apply
    module._apply(fn)
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 664, in _apply
    param_applied = fn(param)
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 987, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.

Additional information

Generated by neofetch on host machine:

OS: Ubuntu 20.04.5 LTS x86_64
Host: X660 G45 Whitley
Kernel: 5.4.0-147-generic
Uptime: 6 hours, 5 mins
Packages: 1199 (dpkg), 4 (snap)
Shell: zsh 5.8
Resolution: 1024x768
Terminal: /dev/pts/3
CPU: Intel Xeon Platinum 8369C (128) @ 3.500GHz
GPU: NVIDIA 8e:00.0 NVIDIA Corporation Device 20b2
GPU: NVIDIA 56:00.0 NVIDIA Corporation Device 20b2
GPU: NVIDIA e8:00.0 NVIDIA Corporation Device 20b2
GPU: NVIDIA 8a:00.0 NVIDIA Corporation Device 20b2
GPU: NVIDIA eb:00.0 NVIDIA Corporation Device 20b2
GPU: NVIDIA 6b:00.0 NVIDIA Corporation Device 20b2
GPU: NVIDIA 71:00.0 NVIDIA Corporation Device 20b2
GPU: NVIDIA 51:00.0 NVIDIA Corporation Device 20b2
Memory: 26134MiB / 1031335MiB

[Bug]: No module named groundingdino

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits of both this extension and the webui

Have you updated WebUI and this extension to the newest version?

  • I have updated WebUI and this extension to the most up-to-date version

What happened?

Starting the webui produces this error:

ModuleNotFoundError: No module named 'groundingdino'

The README doesn't say anything about needing to have it installed. It says groundingdino is optional. If it's required, it should be installed automatically via requirements.txt or some other way.

Steps to reproduce the problem

start webui

What should have happened?

no error

Commit where the problem happens

webui: 22bcc7be428c94e9408f589966c2040187245d81
extension: b8f3c09

What browsers do you use to access the UI ?

Mozilla Firefox

Command Line Arguments

--api --disable-safe-unpickle --no-half-vae

Console logs

No module 'xformers'. Proceeding without it.
Error loading script: dino.py
Traceback (most recent call last):
  File "K:\AI\stable-diffusion-webui\modules\scripts.py", line 256, in load_scripts
    script_module = script_loading.load_module(scriptfile.path)
  File "K:\AI\stable-diffusion-webui\modules\script_loading.py", line 11, in load_module
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "K:\AI\stable-diffusion-webui\extensions\sd-webui-segment-anything\scripts\dino.py", line 12, in <module>
    import groundingdino.datasets.transforms as T
ModuleNotFoundError: No module named 'groundingdino'

Error loading script: sam.py
Traceback (most recent call last):
  File "K:\AI\stable-diffusion-webui\modules\scripts.py", line 256, in load_scripts
    script_module = script_loading.load_module(scriptfile.path)
  File "K:\AI\stable-diffusion-webui\modules\script_loading.py", line 11, in load_module
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "K:\AI\stable-diffusion-webui\extensions\sd-webui-segment-anything\scripts\sam.py", line 18, in <module>
    from scripts.dino import dino_model_list, dino_predict_internal, show_boxes, clear_dino_cache
  File "K:\AI\stable-diffusion-webui\extensions\sd-webui-segment-anything\scripts\dino.py", line 12, in <module>
    import groundingdino.datasets.transforms as T
ModuleNotFoundError: No module named 'groundingdino'

Additional information

Windows 10, not on WSL

[Bug]: User interface hangs when opening gradio accordion

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits of both this extension and the webui

What happened?

After a fresh install of the extension, I am unable to open the gradio accordion. The whole interface freezes when I click to open it.

I tested both a soft UI reload and a hard restart; same result in both cases.

Steps to reproduce the problem

  1. Go to the txt2img or img2img tab; both show the same behavior
  2. Click on the accordion
  3. Nothing happens; the cursor stays in "pointy" mode, and the UI is no longer responsive.

What should have happened?

The interface should not freeze when interacting with it.

Commit where the problem happens

webui: 22bcc7be
sam: e93d178

What browsers do you use to access the UI ?

Mozilla Firefox

Command Line Arguments

--api --api-log --allow-code

Console logs

none

Additional information

The UI does not freeze in Chrome; if anyone else has the same issue, try changing browsers. I have no browser extension other than Adblock (which is disabled on localhost in my case).

[Bug]: Cannot View preview and cannot use it in img2img

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits of both this extension and the webui

What happened?

I cannot add points with this extension in img2img, but I can add points in txt2img.
Then, when I click preview, no image is shown and an error is reported.

Steps to reproduce the problem

  1. Start webui on a server in the local network
  2. Access webui from a laptop in the local network
  3. The bug occurs

What should have happened?

The segmentation map should have been previewed successfully.

Commit where the problem happens

e93d178

What browsers do you use to access the UI ?

No response

Command Line Arguments

No

Console logs

Traceback (most recent call last):
  File "xxx/stablediffusion/stable-diffusion-webui/venv/lib/python3.9/site-packages/gradio/routes.py", line 337, in run_predict
    output = await app.get_blocks().process_api(
  File "xxx/stablediffusion/stable-diffusion-webui/venv/lib/python3.9/site-packages/gradio/blocks.py", line 1013, in process_api
    inputs = self.preprocess_data(fn_index, inputs, state)
  File "xxx/stablediffusion/stable-diffusion-webui/venv/lib/python3.9/site-packages/gradio/blocks.py", line 911, in preprocess_data
    processed_input.append(block.preprocess(inputs[i]))
IndexError: list index out of range

Additional information

No response

[Bug]: MacOS M1 , "There appear to be 1 leaked semaphore objects to clean up at shutdown"

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits of both this extension and the webui

Have you updated WebUI and this extension to the newest version?

  • I have updated WebUI and this extension to the most up-to-date version

What happened?

When I use this extension on a Mac (Apple M1), this error occurs even with ViT-B, the SAM model with the fewest parameters. Is this problem caused by insufficient video memory? If so, is it because there is no support on Mac for running inference on the CPU?

Steps to reproduce the problem

  1. Go to img2img generation
  2. Press the Segment Anything tab
  3. Upload an image and set black and red points
  4. Choose a SAM model
  5. Press "Preview Segmentation"

What should have happened?

Properly segmented images

Commit where the problem happens

webui: 22bcc7be428c94e9408f589966c2040187245d81
extension: c9340671 (Sat Mar 11 01:01:43 2023)

What browsers do you use to access the UI ?

Google Chrome

Command Line Arguments

In webui-user.sh
export COMMANDLINE_ARGS="--skip-version-check --upcast-sampling --no-half-vae --skip-torch-cuda-test --no-half  --no-half-controlnet --use-cpu interrogate --api"

Console logs

Initializing SAM
Running SAM Inference (638, 918, 3)
/AppleInternal/Library/BuildRoots/97f6331a-ba75-11ed-a4bc-863efbbaf80d/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphExecutable.mm:1377: failed assertion `Incompatible element type for parameter at index 0, mlir module expected element type f32 but received si32'
zsh: abort      ./webui.sh
buliuguyy@luyinyudeMacBook-Pro stable-diffusion-webui % /opt/homebrew/Cellar/[email protected]/3.10.10_1/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d '

Additional information

No response
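
A hedged workaround sketch: running SAM entirely on the CPU with the official segment_anything API sidesteps the MPS/Metal path that triggers the assertion (paths and click coordinates below are hypothetical, and CPU inference will be noticeably slower):

import numpy as np
from PIL import Image
from segment_anything import SamPredictor, sam_model_registry

image = np.array(Image.open("input.png").convert("RGB"))          # hypothetical input
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
sam.to("cpu")                                                     # avoid MPS entirely

predictor = SamPredictor(sam)
predictor.set_image(image)
masks, scores, _ = predictor.predict(
    point_coords=np.array([[450, 300]]),                          # hypothetical positive point
    point_labels=np.array([1]),                                   # 1 = keep, 0 = exclude
)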

Give indication that Switch to Inpaint Upload worked

I'm using SAM to mask the face + inpaint upload with loopback to fix faces, as people suggested. I'm assuming that Switch to Inpaint Upload is working, because it only changes the part that was masked (like the pics below).
There's no indicator of whether the inpaint is using your mask or not, though. Can you update the mask instead of leaving it blank, or failing that, give some other form of indication?
Thanks for the hard work you've put in.
[screenshots: github-issue-1 through github-issue-5]
(I clicked Switch to Inpaint Upload multiple times)

Error when running webui

Error loading script: sam.py
Traceback (most recent call last):
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/scripts.py", line 248, in load_scripts
    script_module = script_loading.load_module(scriptfile.path)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/script_loading.py", line 11, in load_module
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 850, in exec_module
  File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd-webui-segment-anything/scripts/sam.py", line 15, in <module>
    from segment_anything import SamPredictor, sam_model_registry
ModuleNotFoundError: No module named 'segment_anything'

The error message says there is no model named 'segment_anything', but isn't that model called sam_vit_h_4b8939.pth? Why does this error appear? After entering the webui there is no Segment Anything section.

[Tutorial]: Guidelines to apply Inpaint Anything to Stable Diffusion WebUI

#58 mentions Inpaint Anything, but Inpaint Anything has actually been supported since the earliest version of this extension. Given that the ControlNet inpainting model has already been connected to this extension, you should expect far better performance if you use this extension + the ControlNet extension + a good base model, without needing to download a huge, annoying, general-purpose inpainting model.

Remove Anything and Fill Anything are just mask + inpainting. Go to img2img, use point prompts and/or text prompts to get your mask, check copy to inpaint and copy to ControlNet inpaint, select the appropriate index of the ControlNet panel associated with inpainting, write your prompt and click Generate.

Replace Anything is just mask + inpaint not masked. The only additional thing you need to do is check inpaint not masked in the img2img panel; everything else stays the same.
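
To illustrate the equivalence, a minimal sketch: checking inpaint not masked with a given mask is the same as inpainting with the inverted mask, which you could also produce yourself (file names are hypothetical):

from PIL import Image, ImageOps

mask = Image.open("mask.png").convert("L")       # hypothetical SAM mask, white = selection
ImageOps.invert(mask).save("mask_inverted.png")  # white = everything except the selection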

That's it! Simple and easy, not mysterious at all, despite how fancy their names sound!

My plan is to support all interesting applications that connect SAM to Stable Diffusion. When #57 is merged to the master branch, you should be able to try almost all of the interesting applications that combine SAM and Stable Diffusion. If you find another interesting application but have no idea how to use it in Stable Diffusion, submit an issue like #58, and I will see whether I should update once more or write a tutorial like this one to guide you through it.

[Feature] Expose this extension through an API

I'd like to write a feature to expose this model through the WebUI API. This would be pretty straightforward to accomplish, but ideally the endpoint would accept text as the prompt rather than dots. When do you think the integration with GroundingDINO will be completed? It's something I've worked on before, so if you haven't started yet, I could probably knock it out today.
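
Since the WebUI serves its API through FastAPI, such an endpoint might look roughly like the sketch below; the route name, payload fields, and run_segmentation helper are all hypothetical, not the extension's actual API:

from fastapi import FastAPI
from pydantic import BaseModel

class SegmentRequest(BaseModel):
    image: str                  # base64-encoded input image
    text_prompt: str            # e.g. "dog . bench" instead of click coordinates
    box_threshold: float = 0.3

app = FastAPI()

@app.post("/sam/dino-predict")  # hypothetical route
def dino_predict(req: SegmentRequest):
    # decode the image, run GroundingDINO + SAM, return base64-encoded masks
    masks = run_segmentation(req.image, req.text_prompt, req.box_threshold)  # hypothetical helper
    return {"masks": masks}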

[Bug]: ModuleNotFoundError: No module named 'modules.paths_internal'

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits of both this extension and the webui

What happened?

Error loading script: sam.py
Traceback (most recent call last):
  File "/home/zetaphor/stable-diffusion-webui/modules/scripts.py", line 248, in load_scripts
    script_module = script_loading.load_module(scriptfile.path)
  File "/home/zetaphor/stable-diffusion-webui/modules/script_loading.py", line 11, in load_module
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 850, in exec_module
  File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
  File "/home/zetaphor/stable-diffusion-webui/extensions/sd-webui-segment-anything/scripts/sam.py", line 10, in <module>
    from modules.paths_internal import extensions_dir
ModuleNotFoundError: No module named 'modules.paths_internal'

Steps to reproduce the problem

  1. Launch the webui
  2. The script fails to initialize

What should have happened?

The script should initialize

Commit where the problem happens

webui: a9eab236d7e8afa4d6205127904a385b2c43bb24
controlnet: 187ae88038af6f4daa91d5dc941564d9a4df90ef

What browsers do you use to access the UI ?

No response

Command Line Arguments

--api --cors-allow-origins=* --opt-split-attention --upcast-sampling --precision autocast --opt-sdp-attention

Console logs

################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye)
################################################################

################################################################
Running on zetaphor user
################################################################

################################################################
Repo already cloned, using it as install directory
################################################################

################################################################
Create and activate python venv
################################################################

################################################################
Launching launch.py...
################################################################
Python 3.9.16 (main, Mar  6 2023, 19:01:01) 
[GCC 12.2.1 20221121 (Red Hat 12.2.1-4)]
Commit hash: a9eab236d7e8afa4d6205127904a385b2c43bb24
Installing requirements for Web UI

Installing sd-dynamic-prompts requirements.txt




Launching Web UI with arguments: --api --cors-allow-origins=* --opt-split-attention --upcast-sampling --precision autocast --opt-sdp-attention
/home/zetaphor/stable-diffusion-webui/venv/lib/python3.9/site-packages/torchvision/transforms/functional_tensor.py:5: UserWarning: The torchvision.transforms.functional_tensor module is deprecated in 0.15 and will be **removed in 0.17**. Please don't rely on it. You probably just need to use APIs in torchvision.transforms.functional or in torchvision.transforms.v2.functional.
  warnings.warn(
No module 'xformers'. Proceeding without it.
Civitai Helper: Get Custom Model Folder
Civitai Helper: Load setting from: /home/zetaphor/stable-diffusion-webui/extensions/Stable-Diffusion-Webui-Civitai-Helper/setting.json
Civitai Helper: No setting file, use default
Additional Network extension not installed, Only hijack built-in lora
LoCon Extension hijack built-in lora successfully
Error loading script: sam.py
Traceback (most recent call last):
  File "/home/zetaphor/stable-diffusion-webui/modules/scripts.py", line 248, in load_scripts
    script_module = script_loading.load_module(scriptfile.path)
  File "/home/zetaphor/stable-diffusion-webui/modules/script_loading.py", line 11, in load_module
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 850, in exec_module
  File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
  File "/home/zetaphor/stable-diffusion-webui/extensions/sd-webui-segment-anything/scripts/sam.py", line 10, in <module>
    from modules.paths_internal import extensions_dir
ModuleNotFoundError: No module named 'modules.paths_internal'

Loading weights [26fc13daff] from /home/zetaphor/stable-diffusion-webui/models/Stable-diffusion/People/mishen-protogen34-5k-astria.ckpt
Creating model from config: /home/zetaphor/stable-diffusion-webui/configs/v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Loading VAE weights specified in settings: /home/zetaphor/stable-diffusion-webui/models/VAE/vae-ft-mse-840000-ema-pruned.safetensors
Applying scaled dot product cross attention optimization.
Model loaded in 4.6s (load weights from disk: 1.1s, create model: 0.4s, apply weights to model: 0.8s, apply half(): 0.5s, load VAE: 0.9s, move model to device: 0.5s, load textual inversion embeddings: 0.4s).
remote: Enumerating objects: 4, done.
remote: Counting objects: 100% (4/4), done.
remote: Compressing objects: 100% (2/2), done.
remote: Total 4 (delta 2), reused 3 (delta 2), pack-reused 0
Unpacking objects: 100% (4/4), 906 bytes | 906.00 KiB/s, done.
From https://github.com/zero01101/openOutpaint
   64bc673..899c2cb  main       -> origin/main
Submodule path 'app': checked out '899c2cb59262c278314e87717ed01c566a4dd769'
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 14.5s (import gradio: 2.2s, import ldm: 0.6s, other imports: 0.9s, list extensions: 1.1s, load scripts: 1.8s, load SD checkpoint: 4.6s, create ui: 2.7s, gradio launch: 0.5s).

Additional information

No response

Error loading script: api.py

segment_anything already installed from pip

I get an error when launching webui:

Error loading script: api.py
Traceback (most recent call last):
  File "D:\AI\StableDiffusion\stable-diffusion-webui\modules\scripts.py", line 229, in load_scripts
    script_module = script_loading.load_module(scriptfile.path)
  File "D:\AI\StableDiffusion\stable-diffusion-webui\modules\script_loading.py", line 11, in load_module
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "D:\AI\StableDiffusion\stable-diffusion-webui\extensions\sd-webui-segment-anything\scripts\api.py", line 9, in <module>
    from scripts.sam import init_sam_model, dilate_mask, sam_predict, sam_model_list
  File "D:\AI\StableDiffusion\stable-diffusion-webui\extensions\sd-webui-segment-anything\scripts\sam.py", line 20, in <module>
    from scripts.auto import clear_sem_sam_cache, register_auto_sam, semantic_segmentation, sem_sam_garbage_collect, image_layer_internal, categorical_mask_image
  File "D:\AI\StableDiffusion\stable-diffusion-webui\extensions\sd-webui-segment-anything\scripts\auto.py", line 11, in <module>
    from modules.paths import extensions_dir
ImportError: cannot import name 'extensions_dir' from 'modules.paths' (D:\AI\StableDiffusion\stable-diffusion-webui\modules\paths.py)

Error loading script: auto.py
Traceback (most recent call last):
  File "D:\AI\StableDiffusion\stable-diffusion-webui\modules\scripts.py", line 229, in load_scripts
    script_module = script_loading.load_module(scriptfile.path)
  File "D:\AI\StableDiffusion\stable-diffusion-webui\modules\script_loading.py", line 11, in load_module
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "D:\AI\StableDiffusion\stable-diffusion-webui\extensions\sd-webui-segment-anything\scripts\auto.py", line 11, in <module>
    from modules.paths import extensions_dir
ImportError: cannot import name 'extensions_dir' from 'modules.paths' (D:\AI\StableDiffusion\stable-diffusion-webui\modules\paths.py)

Error loading script: sam.py
Traceback (most recent call last):
  File "D:\AI\StableDiffusion\stable-diffusion-webui\modules\scripts.py", line 229, in load_scripts
    script_module = script_loading.load_module(scriptfile.path)
  File "D:\AI\StableDiffusion\stable-diffusion-webui\modules\script_loading.py", line 11, in load_module
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "D:\AI\StableDiffusion\stable-diffusion-webui\extensions\sd-webui-segment-anything\scripts\sam.py", line 20, in <module>
    from scripts.auto import clear_sem_sam_cache, register_auto_sam, semantic_segmentation, sem_sam_garbage_collect, image_layer_internal, categorical_mask_image
  File "D:\AI\StableDiffusion\stable-diffusion-webui\extensions\sd-webui-segment-anything\scripts\auto.py", line 11, in <module>
    from modules.paths import extensions_dir
ImportError: cannot import name 'extensions_dir' from 'modules.paths' (D:\AI\StableDiffusion\stable-diffusion-webui\modules\paths.py)

[Help] apply_boxes_torch: AttributeError: 'tuple' object has no attribute 'reshape'

Hi, I want to build something based on your project: read an image file, cut out the subject, and output the matted file. I adapted the code with reference to sam.py, but ran into problems:

  1. The code does not run; it reports the following error:
    transformed_boxes = predictor.transform.apply_boxes_torch(boxes_filt, image_np.shape[:2])
  File "/home/jerry/go/src/github.com/facebookresearch/segment-anything/segment_anything/utils/transforms.py", line 90, in apply_boxes_torch
    boxes = self.apply_coords_torch(boxes.reshape(-1, 2, 2), original_size)
AttributeError: 'tuple' object has no attribute 'reshape'
  2. My GPU has 6 GB of memory; loading both models blows up VRAM. Can the code logic be optimized so that I can use CUDA?

The problematic code follows:

#!/usr/bin/env python3
# coding=utf-8
import argparse
import copy
import os

import cv2
import numpy as np
import torch
from PIL import Image
from scipy.ndimage import binary_dilation  # needed by dilate_mask below; missing in the original
from segment_anything import SamPredictor, sam_model_registry
import groundingdino.datasets.transforms as T
from groundingdino.models import build_model
from groundingdino.util.slconfig import SLConfig
from groundingdino.util.utils import clean_state_dict
#from modules.devices import device, torch_gc, cpu
#from modules.safe import unsafe_torch_load, load

model_dir = "/home/jerry/workbench/download"
dino_batch_dest_dir="/home/jerry/go/src/github.com/JerryZhou343/AILab/demo/base/" 
input_image_path = "/home/jerry/go/src/github.com/JerryZhou343/AILab/demo/base/20230415145253.jpg"
device = "cpu"
dino_batch_save_mask = True
dino_batch_save_image_with_mask=True
batch_dilation_amt= 10
dino_batch_output_per_image = 1

def dilate_mask(mask, dilation_amt):
    # Create a dilation kernel
    x, y = np.meshgrid(np.arange(dilation_amt), np.arange(dilation_amt))
    center = dilation_amt // 2
    dilation_kernel = ((x - center)**2 + (y - center)**2 <= center**2).astype(np.uint8)

    # Dilate the image
    dilated_binary_img = binary_dilation(mask, dilation_kernel)

    # Convert the dilated binary numpy array back to a PIL image
    dilated_mask = Image.fromarray(dilated_binary_img.astype(np.uint8) * 255)

    return dilated_mask, dilated_binary_img

def show_boxes(image_np, boxes, color=(255, 0, 0, 255), thickness=2, show_index=False):
    if boxes is None:
        return image_np

    image = copy.deepcopy(image_np)
    for idx, box in enumerate(boxes):
        x, y, w, h = box
        cv2.rectangle(image, (x, y), (w, h), color, thickness)
        if show_index:
            font = cv2.FONT_HERSHEY_SIMPLEX
            text = str(idx)
            textsize = cv2.getTextSize(text, font, 1, 2)[0]
            cv2.putText(image, text, (x, y+textsize[1]), font, 1, color, thickness)

    return image

def show_masks(image_np, masks: np.ndarray, alpha=0.5):
    image = copy.deepcopy(image_np)
    np.random.seed(0)
    for mask in masks:
        color = np.concatenate([np.random.random(3), np.array([0.6])], axis=0)
        image[mask] = image[mask] * (1 - alpha) + 255 * color.reshape(1, 1, -1) * alpha
    return image.astype(np.uint8)

def load_dino_image(image_pil):
    import groundingdino.datasets.transforms as T
    transform = T.Compose(
        [
            T.RandomResize([800], max_size=1333),
            T.ToTensor(),
            T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
        ]
    )
    image, _ = transform(image_pil, None)  # 3, h, w
    return image


def load_dino_model(dino_checkpoint):
    args = SLConfig.fromfile("grd.cfg.py")
    args.device = device 
    dino = build_model(args)
    checkpoint =  torch.load(os.path.join(model_dir,dino_checkpoint), map_location="cpu")
    dino.load_state_dict(clean_state_dict(
        checkpoint['model']), strict=False)
    dino.to(device=device)
    dino.eval()
    return dino


def load_sam_model(sam_checkpoint):
    model_type = '_'.join(sam_checkpoint.split('_')[1:-1])
    sam_checkpoint = os.path.join(model_dir, sam_checkpoint)
    #torch.load = unsafe_torch_load
    sam = sam_model_registry[model_type](checkpoint=sam_checkpoint)
    sam.to(device=device)
    sam.eval()
    #torch.load = load
    return sam


def get_grounding_output(model, image, caption, box_threshold):
    caption = caption.lower()
    caption = caption.strip()
    if not caption.endswith("."):
        caption = caption + "."

    image = image.to(device)

    with torch.no_grad():
        outputs = model(image[None], captions=[caption])
    
    logits = outputs["pred_logits"].sigmoid()[0]  # (nq, 256)
    boxes = outputs["pred_boxes"][0]  # (nq, 4)

    # filter output
    logits_filt = logits.clone()
    boxes_filt = boxes.clone()
    filt_mask = logits_filt.max(dim=1)[0] > box_threshold
    logits_filt = logits_filt[filt_mask]  # num_filt, 256
    boxes_filt = boxes_filt[filt_mask]  # num_filt, 4

    return boxes_filt.cpu()

def dino_predict_internal(input_image, dino_model, text_prompt, box_threshold):
    dino_image = load_dino_image(input_image.convert("RGB"))

    boxes_filt = get_grounding_output(
        dino_model, dino_image, text_prompt, box_threshold
    )

    H, W = input_image.size[1], input_image.size[0]
    for i in range(boxes_filt.size(0)):
        boxes_filt[i] = boxes_filt[i] * torch.Tensor([W, H, W, H])
        boxes_filt[i][:2] -= boxes_filt[i][2:] / 2
        boxes_filt[i][2:] += boxes_filt[i][:2]
    #gc.collect()
    #torch_gc()
    return boxes_filt,  # NOTE: the trailing comma makes this return a 1-tuple, not a tensor

if __name__ == "__main__":
    parser = argparse.ArgumentParser("example", add_help=True)
    #parser.add_argument("--input_image", type=str, required=True, help="path to image file")
    #parser.add_argument("--text_prompt", type=str, required=True, help="text prompt")
    sam = load_sam_model("sam_vit_h_4b8939.pth")
    predictor = SamPredictor(sam)


    dino_model = load_dino_model("groundingdino_swinb_cogcoor.pth")


    args = parser.parse_args()

    input_image =  Image.open(input_image_path).convert("RGBA")
    image_np = np.array(input_image)    
    image_np_rgb = image_np[...,:3]

    boxes_filt = dino_predict_internal(input_image,dino_model,"head",0.3)

    predictor.set_image(image_np_rgb)
    transformed_boxes = predictor.transform.apply_boxes_torch(boxes_filt, image_np.shape[:2])
    masks, _, _ = predictor.predict_torch(
        point_coords=None,
        point_labels=None,
        boxes=transformed_boxes.to(device),
        multimask_output=(dino_batch_output_per_image == 1),
    )
    
    masks = masks.permute(1, 0, 2, 3).cpu().numpy()
    boxes_filt = boxes_filt.cpu().numpy().astype(int)

    filename, ext = os.path.splitext(os.path.basename(input_image_path))

    for idx, mask in enumerate(masks):
        blended_image = show_masks(show_boxes(image_np, boxes_filt), mask)
        merged_mask = np.any(mask, axis=0)
        if batch_dilation_amt:
            _, merged_mask = dilate_mask(merged_mask, batch_dilation_amt)
        image_np_copy = copy.deepcopy(image_np)
        image_np_copy[~merged_mask] = np.array([0, 0, 0, 0])
        output_image = Image.fromarray(image_np_copy)
        output_image.save(os.path.join(dino_batch_dest_dir, f"{filename}_{idx}_output{ext}"))
        if dino_batch_save_mask:
            output_mask = Image.fromarray(merged_mask)
            output_mask.save(os.path.join(dino_batch_dest_dir, f"{filename}_{idx}_mask{ext}"))
        if dino_batch_save_image_with_mask:
            output_blend = Image.fromarray(blended_image)
            output_blend.save(os.path.join(dino_batch_dest_dir, f"{filename}_{idx}_blend{ext}"))
    
    #if shared.cmd_opts.lowvram:
    #    sam.to("cpu")
    #gc.collect()
    #torch_gc()
    
    #return "Done"
                #cropped_image.save(f"path/to/your/output_{i}.jpg") 
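
A hedged diagnosis of the reported AttributeError: dino_predict_internal ends with return boxes_filt, and the trailing comma makes the function return a 1-tuple, so boxes_filt at the call site is a tuple rather than a tensor, and apply_boxes_torch fails on .reshape. Either drop the comma or unpack:

boxes_filt, = dino_predict_internal(input_image, dino_model, "head", 0.3)  # unpack the 1-tuple
# ...or change the function's last line to:  return boxes_filt

For the 6 GB VRAM question, the usual approach is to load GroundingDINO and SAM one at a time, moving each to the CPU after use (model.to("cpu") followed by torch.cuda.empty_cache()), so that only one model occupies CUDA memory at any moment.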

CV newbie: how do I run this? I just want the matting feature!

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits of both this extension and the webui

Have you updated WebUI and this extension to the newest version?

  • I have updated WebUI and this extension to the most up-to-date version

Do you understand that you should go to https://github.com/IDEA-Research/Grounded-Segment-Anything/issues if you cannot install GroundingDINO?

  • My problem is not about installing GroundingDINO

What happened?

How do I run it?

Steps to reproduce the problem

What should have happened?

Commit where the problem happens

webui:
extension:

What browsers do you use to access the UI ?

No response

Command Line Arguments

Console logs

Additional information

Copy to inpaint upload doesn't work

Hello! Amazing extension, but when I tick the box "copy to inpaint upload" and then press "switch to Inpaint Upload", the mask does not transfer, and I have to do it manually.

[Feature]: Could you add a function to colorize the mask?

This extension works great, but when changing colors, especially when a specific color is required, writing tags is not very effective. This reminds me of the old color-mask approach: could a feature be added to change the color of the mask, achieving an effect similar to the previous color masks? I don't know much about programming, so I'm not sure whether this is easy to implement. Finally, thanks for the extension.
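
As a hedged sketch of what mask colorization could do outside the UI, filling the masked region with a fixed color via numpy (file names and the color are hypothetical):

import numpy as np
from PIL import Image

img = np.array(Image.open("input.png").convert("RGB"))
mask = np.array(Image.open("mask.png").convert("L")) > 127  # hypothetical SAM mask

img[mask] = np.array([255, 0, 0], dtype=np.uint8)  # paint the selection solid red
Image.fromarray(img).save("color_masked.png")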
