
abhishekrs4 / htsm_oil_spill_segmentation


HTSM Masterwork

License: MIT License

oil-spill segmentation-based-detection segmentation-models semantic-segmentation deep-learning encoder-decoder-model supervised-learning computer-vision docker-deployment streamlit-application


Oil Spill Segmentation using Deep Encoder-Decoder models

Paper available on ArXiv

  • The results of this research are available in the arXiv paper.

Info about the project

  • This repo contains the project work done as part of the HTSM Masterwork at the University of Groningen.
  • The research was carried out as part of the HTSM Masterwork to train a deep learning CNN model for the segmentation task of detecting oil spills in satellite Synthetic Aperture Radar (SAR) data.
  • This research is an application of AI for good, particularly towards the conservation of nature.
  • This project was done under the supervision of Mr. Maruf A. Dhali.

Dataset information

  • The details about the dataset used in this Masterwork project can be found here - dataset details.
  • This dataset contains labels for 5 classes: sea surface, oil spill, oil spill look-alike, ship, and land.
  • This dataset is relatively small compared to other popular benchmark datasets for the segmentation task.

Required dependencies for training

  • To install PyTorch, use the following command (a quick install check follows this list)
pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu113
  • The other required dependencies for training are available in requirements.txt.
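  • A minimal check that the install worked (a sketch, assuming the pip command above completed in the active environment):
    import torch
    import torchvision

    # the versions should match the ones requested in the pip command above
    print(torch.__version__)          # expected: 1.12.1+cu113
    print(torchvision.__version__)    # expected: 0.13.1+cu113
    # True only if a CUDA 11.3 compatible GPU and driver are visible to PyTorch
    print(torch.cuda.is_available())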

Instructions to run the code for training

  • For any Python script, use the following command to list all of its command-line options (an illustrative sketch follows the command)
python3 script_name.py --help
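  • The available options differ per script. As an illustration only, such a script typically exposes its options through argparse roughly as sketched below; the flag names here are hypothetical and not the repo's actual options.
    # Illustrative sketch only; the flags below are hypothetical, not taken from the repo.
    import argparse

    def main():
        parser = argparse.ArgumentParser(description="Train an oil spill segmentation model")
        parser.add_argument("--num_epochs", type=int, default=100, help="number of training epochs")
        parser.add_argument("--learning_rate", type=float, default=1e-4, help="optimizer learning rate")
        args = parser.parse_args()
        print(args)

    if __name__ == "__main__":
        main()  # invoking the script with --help prints the options defined above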

Docker deployment instructions

  • A Streamlit app has been developed for deployment.
  • The Python package requirements for the Streamlit app can be found in src/requirements_deployment.txt.
  • To build the container, run the following command inside the src directory
docker build -t app_oil_spill .
  • To run the container, run the following command inside the src directory (a quick reachability check follows the command)
docker run -p 8000:8000 -t app_oil_spill
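  • Once the container is running, the app can be checked from the host (a sketch, assuming the port mapping above and that the requests package is installed):
    # Sketch: confirm the Streamlit app inside the container answers on the mapped port.
    import requests

    response = requests.get("http://localhost:8000", timeout=10)
    print(response.status_code)  # 200 indicates the app is serving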

Huggingface deployment

  • A Streamlit application with the best-performing model has been deployed to Huggingface.

Qualitative results - sample test set predictions

[Figures: sample predicted masks 1 to 5, available as images in the repo]

  • Since the dataset is not publicly available, the original test set images are not uploaded; only their predictions are included in the repo.

Quantitative results

  • The best model's class-wise and mean IoU performance is presented below (a quick sanity check of the mean follows the table).
class                 IoU (%)
sea surface           96.422
oil spill             61.549
oil spill look-alike  40.773
ship                  33.378
land                  92.218
mean IoU              64.868
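  • As a quick sanity check, the reported mean IoU is the unweighted average of the five class-wise values:
    # Unweighted mean of the class-wise IoU values reported above.
    class_iou = {
        "sea surface": 96.422,
        "oil spill": 61.549,
        "oil spill look-alike": 40.773,
        "ship": 33.378,
        "land": 92.218,
    }
    mean_iou = sum(class_iou.values()) / len(class_iou)
    print(round(mean_iou, 3))  # 64.868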

Sphinx docstring generation

  • The following are the steps to generate documentation from the docstrings using Sphinx (a sample docstring sketch is shown after the steps)
  • Create a directory named docs and go to that directory
mkdir docs && cd docs
  • Run all of the following commands inside the docs directory
  • Run the following command with appropriate options
sphinx-quickstart
  • Open the file index.rst and add modules to it
  • In the file conf.py, make the following changes
    • set html_theme = 'sphinx_rtd_theme'
    • set extensions = ["sphinx.ext.todo", "sphinx.ext.viewcode", "sphinx.ext.autodoc"]
    • add the following to the beginning of the conf.py file
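    # make the project root importable so that Sphinx autodoc can find the source modules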
    import os
    import sys
    
    sys.path.insert(0, os.path.abspath(".."))
    
  • Run the following command
sphinx-apidoc -o . ..
  • Create html files with documentation
make html
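  • For autodoc to produce useful pages, the source modules need docstrings. A minimal sketch of the reStructuredText field-list style that Sphinx renders is shown below; the function is hypothetical and only illustrates the format.
    # Hypothetical function, shown only to illustrate a Sphinx-friendly docstring.
    def compute_iou(prediction, target, num_classes):
        """Compute per-class intersection-over-union.

        :param prediction: predicted label mask
        :param target: ground-truth label mask
        :param num_classes: number of segmentation classes
        :return: list of per-class IoU scores
        """
        ...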



htsm_oil_spill_segmentation's Issues

The original images padded with patches

Hello sir. In the data preprocessing step, the original images, with dimensions 1250 × 650, were padded with patches on all 4 sides of the image to produce resulting images of 1280 × 672. The patch padding was done in such a way as to select a patch of pixels with the sea surface. May I ask what is the benefit of padding the original images with patches?
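For illustration, padding an image from 1250 × 650 to 1280 × 672 as described above could look roughly like the sketch below; this is not the repo's actual preprocessing code, and the border here is simply filled with a constant instead of sea-surface patches.
    # Sketch of padding a 1250 x 650 image to 1280 x 672 (not the repo's actual code).
    import numpy as np

    image = np.zeros((650, 1250), dtype=np.float32)  # placeholder SAR image, shape (height, width)
    pad_h, pad_w = 672 - 650, 1280 - 1250            # 22 and 30 pixels in total
    padded = np.pad(
        image,
        ((pad_h // 2, pad_h - pad_h // 2), (pad_w // 2, pad_w - pad_w // 2)),
        mode="constant",
    )
    print(padded.shape)  # (672, 1280), i.e. 1280 x 672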
