
Semantic Segment Anything
Jiaqi Chen, Zeyu Yang, and Li Zhang
Zhang Vision Group, Fudan University

SAM is a powerful model for arbitrary object segmentation, and SA-1B is the largest segmentation dataset to date. However, SAM lacks the ability to predict semantic categories for each mask. (I) To address this limitation, we propose a pipeline on top of SAM that predicts a semantic category for each mask, called Semantic Segment Anything (SSA). (II) Moreover, SSA can serve as an automated dense open-vocabulary annotation engine, the Semantic Segment Anything labeling engine (SSA-engine), providing rich semantic category annotations for SA-1B or any other dataset. This engine significantly reduces the need for manual annotation and its associated costs.

Web demo and API

  • Try the Web Demo and API here: Replicate

🤔 Why do we need SSA?

  • SAM is a highly generalizable object segmentation algorithm that provides precise masks, and SA-1B is the largest image segmentation dataset to date, providing fine mask annotations. However, neither SAM nor SA-1B provides category predictions or annotations for each mask. This makes it difficult for researchers to use the powerful SAM algorithm to directly solve semantic segmentation tasks, or to use SA-1B to train their own models.
  • Advanced close-set segmenters like OneFormer, open-set segmenters like CLIPSeg, and image captioning models like BLIP can provide rich semantic annotations. However, their mask predictions may not be as comprehensive and accurate as those generated by SAM, which produces highly precise and detailed boundaries.
  • Therefore, by combining the fine segmentation masks from SAM and SA-1B with the rich semantic annotations provided by these advanced models, we can build semantic segmentation models with stronger generalization ability, as well as a large-scale, densely categorized image segmentation dataset.

๐Ÿ‘ What SSA can do?

  • SSA: The first open framework that utilizes SAM for semantic segmentation tasks. It lets users seamlessly integrate their existing semantic segmenters with SAM, without retraining or fine-tuning SAM's weights, to achieve better generalization and more precise mask boundaries.
  • SSA-engine: Provides dense open-vocabulary category annotations for the large-scale SA-1B dataset. After manual review and refinement, these annotations can be used to train segmentation models or fine-grained CLIP models.

โœˆ๏ธ SSA: Semantic segment anything

SSA is a semantic segmentation model based on SAM, and an open framework that lets users integrate any advanced semantic segmentation model. The mask boundaries predicted by that segmentor do not need to be highly accurate, since its role is to provide category predictions. Therefore, if you have already trained an older model on your dataset, you do not need to discard it and retrain a new SAM-based model; you can keep using it as the Semantic Branch, and SAM's powerful generalization and image segmentation ability will give it a boost.

In addition, SSA is an out-of-the-box architecture that does not require additional fine-tuning of SAM's weights. It can perform inference directly and complete semantic segmentation tasks with ease.

SSA consists of two branches, Mask Branch and Semantic Branch, as well as a voting module that determines the category for each mask.

  • (I) Mask branch (blue). SAM serves as the Mask branch and provides a set of masks with clear boundaries.

  • (II) Semantic branch (purple). This branch provides a category for each pixel. It is implemented by a semantic segmentor that users can customize, both in architecture and in the categories of interest. The segmentor does not need to produce highly detailed boundaries, but it should classify each region as accurately as possible.

  • (III) Semantic Voting module (red). This module crops out the per-pixel categories that fall inside each mask's region. The top-1 category among these pixels is taken as the classification result for that mask (a minimal sketch follows).
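
The voting step is straightforward. Below is a minimal NumPy sketch of the idea (our illustration, not the repository's implementation; the function and variable names are ours):

import numpy as np

def vote_mask_category(semantic_map, mask):
    # semantic_map: (H, W) integer array of per-pixel category ids
    #               produced by the Semantic branch.
    # mask:         (H, W) boolean array produced by the SAM Mask branch.
    pixel_categories = semantic_map[mask]   # category votes inside the mask
    votes = np.bincount(pixel_categories)   # tally votes per category id
    return int(np.argmax(votes))            # top-1 category wins

Each mask is thus labeled by a majority vote over the semantic branch's per-pixel predictions, so a few misclassified pixels inside a mask are simply outvoted.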

🚄 SSA-engine: Semantic segment anything labeling engine

SSA-engine is an automated annotation engine that provides the initial semantic labeling for the SA-1B dataset, although human review and refinement may be required for more accurate labels. Thanks to its combined architecture of close-set and open-vocabulary segmentation, the SSA-engine produces satisfactory labels for most samples and can provide more detailed annotations via an image captioning model.

This tool fills the gap in SA-1B's limited fine-grained semantic labeling, while also significantly reducing the need for manual annotation and associated costs. It has the potential to serve as a foundation for training large-scale visual perception models and more fine-grained CLIP models.

The SSA-engine consists of three components:

  • (I) Close-set semantic segmentor (green). Two close-set semantic segmentation models, trained on the COCO and ADE20K datasets respectively, segment the image to obtain rough category information. The predicted categories include only simple, basic classes, ensuring that each mask receives a relevant label.
  • (II) Open-vocabulary classifier (blue). An image captioning model describes the cropped image patch corresponding to each mask; nouns and phrases are then extracted from the caption as candidate open-vocabulary categories, providing more diverse labels.
  • (III) Final decision module (orange). The SSA-engine uses a Class proposal filter (i.e., CLIP) to keep the top-k most plausible predictions from the mixed class list. Finally, the Open-vocabulary Segmentor predicts the most suitable category within the mask region based on the top-k classes and the image patch, as sketched below.
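
To make steps (II)–(III) concrete, here is a hedged sketch using off-the-shelf Hugging Face BLIP and CLIP checkpoints plus spaCy for noun extraction. The checkpoint names and the function are our assumptions for illustration; the repository's actual models and thresholds may differ.

# Sketch of the open-vocabulary path: caption a patch (II), extract noun
# phrases as candidates, then rank the mixed candidate list with CLIP (III).
# Model checkpoints below are our assumptions, not the repository's choices.
import spacy
import torch
from transformers import (BlipForConditionalGeneration, BlipProcessor,
                          CLIPModel, CLIPProcessor)

nlp = spacy.load("en_core_web_sm")  # installed during setup
blip_proc = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
blip = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")
clip_proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")

def class_proposals(patch, close_set_labels, k=3):
    """patch: PIL image cropped around one mask; close_set_labels: labels
    from the close-set segmentors. Returns the top-k candidate classes."""
    # (II) Caption the patch and extract noun phrases as open-vocab candidates.
    out = blip.generate(**blip_proc(patch, return_tensors="pt"))
    caption = blip_proc.decode(out[0], skip_special_tokens=True)
    candidates = sorted({c.text for c in nlp(caption).noun_chunks} | set(close_set_labels))
    # (III) Score every candidate against the patch with CLIP; keep the top-k.
    inputs = clip_proc(text=candidates, images=patch, return_tensors="pt", padding=True)
    with torch.no_grad():
        scores = clip(**inputs).logits_per_image[0]
    top = torch.topk(scores, k=min(k, len(candidates)))
    return [candidates[i] for i in top.indices.tolist()]

In the actual engine, the resulting top-k proposals are then passed, together with the patch, to the Open-vocabulary Segmentor for the final per-mask decision.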

📖 News

🔥 2023/04/14: SSA benchmarks semantic segmentation on ADE20K and Cityscapes.
🔥 2023/04/10: Semantic Segment Anything (SSA and SSA-engine) is released.
🔥 2023/04/05: SAM and SA-1B are released.

Results

1. Inference time

Dataset | Model                        | Inference time per image (s) | Inference time per mask (s)
SA-1B   | SSA (close-set)              | 1.149                        | 0.012
SA-1B   | SSA-engine (open-vocabulary) | 33.333                       | 0.334

This performance was tested on a single NVIDIA A6000 GPU.

2. Close-set semantic segmentation on the ADE20K and Cityscapes datasets

Dataset    | Model | mIoU
ADE20K     | SSA   | 54.08
Cityscapes | SSA   | 79.94

Examples

Open-vocabulary prediction on SA-1B

  • Additional examples of open-vocabulary annotations

Close-set semantic segmentation on Cityscapes

Close-set semantic segmentation on ADE20K

Close-set semantic segmentation on SA-1B

💻 Requirements

  • Python 3.7+
  • CUDA 11.1+

๐Ÿ› ๏ธ Installation

git clone git@github.com:fudan-zvg/Semantic-Segment-Anything.git
cd Semantic-Segment-Anything
conda env create -f environment.yaml
conda activate ssa
python -m spacy download en_core_web_sm
# install segment-anything
cd ..
git clone git@github.com:facebookresearch/segment-anything.git
cd segment-anything; pip install -e .; cd ../Semantic-Segment-Anything

🚀 Quick Start

1. SSA

1.1 Preparation

Download the ADE20K or Cityscapes dataset and unzip it to the data folder.

Folder structure:

├── Semantic-Segment-Anything
├── data
│   ├── ade20k
│   │   ├── ADEChallengeData2016
│   │   │   ├── images
│   │   │   │   ├── training
│   │   │   │   ├── validation
│   │   │   │   │   ├── ADE_val_00002000.jpg
│   │   │   │   │   ├── ...
│   │   │   │   ├── test
│   │   │   ├── annotations
│   │   │   │   ├── training
│   │   │   │   ├── validation
│   │   │   │   │   ├── ADE_val_00002000.png
│   │   │   │   │   ├── ...
│   ├── cityscapes
│   │   ├── leftImg8bit
│   │   │   ├── train
│   │   │   ├── val
│   │   │   │   ├── frankfurt
│   │   │   │   ├── lindau
│   │   │   │   ├── munster
│   │   │   │   │   ├── munster_000173_000019_leftImg8bit.png
│   │   ├── gtFine
│   │   │   ├── train
│   │   │   ├── val
│   │   │   │   ├── frankfurt
│   │   │   │   ├── lindau
│   │   │   │   ├── munster
│   │   │   │   │   ├── munster_000173_000019_gtFine_labelTrainIds.png
│   │   ├── ...

Download the SAM checkpoint and put it in the ckp folder.

mkdir ckp && cd ckp
wget https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth
cd ..
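
As an optional sanity check (our sketch, not a script shipped with the repository, and the image path is hypothetical), you can verify that the checkpoint loads and produces masks with the segment-anything API installed above:

import cv2
from segment_anything import SamAutomaticMaskGenerator, sam_model_registry

# Build the Mask branch from the ViT-H checkpoint downloaded into ckp/.
sam = sam_model_registry["vit_h"](checkpoint="ckp/sam_vit_h_4b8939.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

# segment-anything expects an RGB uint8 array of shape (H, W, 3).
image = cv2.cvtColor(cv2.imread("data/examples/example.jpg"),  # hypothetical image
                     cv2.COLOR_BGR2RGB)
masks = mask_generator.generate(image)  # list of dicts: 'segmentation', 'bbox', ...
print(f"generated {len(masks)} masks")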

1.2 SSA inference

Run our SSA on ADE20K with 8 GPUs:

python scripts/main_ssa.py --ckpt_path ./ckp/sam_vit_h_4b8939.pth --save_img --world_size 8 --dataset ade20k --data_dir data/ade20k/ADEChallengeData2016/images/validation/ --gt_path data/ade20k/ADEChallengeData2016/annotations/validation/ --out_dir output_ade20k

Run our SSA on Cityscapes with 8 GPUs:

python scripts/main_ssa.py --ckpt_path ./ckp/sam_vit_h_4b8939.pth --save_img --world_size 8 --dataset cityscapes --data_dir data/cityscapes/leftImg8bit/val/ --gt_path data/cityscapes/gtFine/val/ --out_dir output_cityscapes

1.3 SSA evaluation (after inference)

Get the evaluation results on ADE20K:

python evaluation.py --gt_path data/ade20k/ADEChallengeData2016/annotations/validation --result_path output_ade20k/ --dataset ade20k

Get the evaluation results on Cityscapes:

python evaluation.py --gt_path data/cityscapes/gtFine/val/ --result_path output_cityscapes/ --dataset cityscapes

2. SSA-engine

Download the SA-1B dataset and unzip it to the data/sa_1b folder, or use your own dataset.

Folder structure:

├── Semantic-Segment-Anything
├── data
│   ├── sa_1b
│   │   ├── sa_223775.jpg
│   │   ├── sa_223775.json
│   │   ├── ...

Run our SSA-engine with 8 GPUs:

python scripts/main_ssa_engine.py --data_dir=data/examples --out_dir=output --world_size=8 --save_img

For each mask, we add two new fields, e.g. 'class_name': 'face' and 'class_proposals': ['face', 'person', 'sun glasses']. The class name is the most likely category for the mask, and the class proposals are the top-k most likely categories from the Class proposal filter; k is set to 3 by default. A sketch of consuming this output follows the example record below.

{
    'bbox': [81, 21, 434, 666],
    'area': 128047,
    'segmentation': {
        'size': [1500, 2250],
        'counts': 'kYg38l[18oeN8mY14aeN5\\Z1>'
    }, 
    'predicted_iou': 0.9704002737998962,
    'point_coords': [[474.71875, 597.3125]],
    'crop_box': [0, 0, 1381, 1006],
    'id': 1229599471,
    'stability_score': 0.9598413705825806,
    'class_name': 'face',
    'class_proposals': ['face', 'person', 'sun glasses']
}
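
To consume these augmented annotations downstream, one option is the following sketch (ours; it assumes the standard SA-1B record layout with an 'annotations' list, uses pycocotools to decode the RLE masks, and the output path is hypothetical):

import json
from collections import defaultdict
from pycocotools import mask as mask_utils

with open("output/sa_223775.json") as f:  # hypothetical output file
    record = json.load(f)

# Group decoded binary masks by their predicted class name.
masks_by_class = defaultdict(list)
for ann in record["annotations"]:
    rle = dict(ann["segmentation"])
    if isinstance(rle["counts"], str):
        rle["counts"] = rle["counts"].encode()  # older pycocotools expects bytes
    masks_by_class[ann["class_name"]].append(mask_utils.decode(rle))  # (H, W) uint8

print({name: len(ms) for name, ms in masks_by_class.items()})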

📈 Future work

We hope that researchers in the community will come up with new improvements and ideas and build further work on SSA. Some of our own ideas follow:

  • (I) The masks in SA-1B often exist at three levels: whole, part, and subpart, and SSA often cannot give accurate descriptions for very small part or subpart regions, falling back on broad categories instead. For example, SSA may predict "person" for body parts like a neck or a hand. An architecture for more detailed semantic prediction is therefore needed.
  • (II) SSA is an ensemble of multiple models, which makes its inference slower than that of end-to-end models. We look forward to more efficient designs in the future.

😄 Acknowledgement

📜 Citation

If you find this work useful for your research, please cite our GitHub repo:

@misc{chen2023semantic,
    title = {Semantic Segment Anything},
    author = {Chen, Jiaqi and Yang, Zeyu and Zhang, Li},
    howpublished = {\url{https://github.com/fudan-zvg/Semantic-Segment-Anything}},
    year = {2023}
}
