
s1prepro's Introduction

Pre-processing of Sentinel 1

This project aims to connect Sentinel 1 with the Open Data Cube.

It takes Synthetic Aperture Radar data (specifically GRD scenes from the Sentinel 1 platform) and prepares it for ingestion into an opendatacube instance (such as Digital Earth Australia), using the Sentinel Toolbox (SNAP) software.

NOTE: work in progress; use at your own risk.

Processing steps

Prerequisites:

  • Sentinel 1 GRD (ground-range, detected amplitude) scenes.
  • Precise orbital ephemeris metadata. (Possibly also calibration?)
  • A digital model for the elevations of the scattering surface (DSM/DEM).
  • gpt (graph processing tool) from the Sentinel Toolbox software.
  • Access to a configured ODC instance.

Stages:

  1. Update metadata (i.e. orbit vectors)
  2. Trim border noise (an artifact of the S1 GRD products)
  3. Calibrate (radiometric, outputting beta-nought)
  4. Flatten (radiometric terrain correction)
  5. Range-Doppler (geometric terrain correction)
  6. Format for the AGDC (e.g. export metadata, tile and index)

Implementation

Initially this will use auto-downloaded auxiliary data (ephemeris, DEM). Later, the intent is to use GDAL tools to subset a DSM, or to test the efficiency of chunked file formats for the DSM raster.
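
As a rough illustration of that subsetting step only, GDAL's Python bindings could crop the DSM to a scene footprint before gpt reads it; the filenames and bounding box below are assumptions, not part of this repository:

    # Sketch: crop a large DSM to the scene footprint so that gpt need not
    # read the whole raster. Names and bounds are illustrative only.
    from osgeo import gdal

    gdal.UseExceptions()

    # Scene bounding box as (ulx, uly, lrx, lry) in the DSM's CRS.
    bounds = [146.0, -34.0, 148.0, -36.0]

    gdal.Translate("dsm_subset.tif", "national_dsm.tif", projWin=bounds)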

Steps 1-4 will be combined in a single gpt XML graph.

Step 5 will be a separate gpt command-line invocation. (Some operators chain together inefficiently, at least in previous gpt versions.)

Step 6 will be a python prep script.
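
As a rough sketch only (not the actual prep.py), such a script might assemble a minimal dataset document along these lines, assuming the classic eo metadata format; all field values and the band path are placeholders:

    # Hypothetical sketch of a prep script; a real one must also extract
    # CRS, extent and acquisition time from the BEAM-DIMAP (.dim) product.
    import uuid
    import yaml  # PyYAML

    def make_metadata(platform, measurements):
        """Build a minimal dataset document for datacube indexing."""
        return {
            "id": str(uuid.uuid4()),
            "product_type": "gamma0",        # placeholder
            "platform": {"code": platform},
            "instrument": {"name": "SAR"},
            "format": {"name": "ENVI"},
            "image": {"bands": measurements},
            # grid_spatial, extent and creation_dt omitted for brevity;
            # they are mandatory in practice.
        }

    doc = make_metadata("SENTINEL_1A",
                        {"vv": {"path": "output1.data/Gamma0_VV.img"}})
    with open("output1.yaml", "w") as f:
        yaml.safe_dump(doc, f)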

The overall orchestration will initially be a shell script. (Other options would be a Makefile or a python cluster scheduling script.)
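
For illustration, the python alternative might look roughly like the following; the graph filename, operator invocation and output naming are assumptions, not taken from this repository:

    # Rough per-scene driver; the repository itself uses a shell script.
    import pathlib
    import subprocess

    def process_scene(scene_zip):
        stem = pathlib.Path(scene_zip).stem
        rtc = f"{stem}_rtc.dim"
        # Steps 1-4: one gpt graph (orbit, border noise, calibrate, flatten).
        subprocess.run(["gpt", "graph.xml",
                        f"-Pinput={scene_zip}", f"-Poutput={rtc}"], check=True)
        # Step 5: geometric terrain correction as a separate invocation.
        subprocess.run(["gpt", "Terrain-Correction",
                        "-t", f"{stem}_tc", rtc], check=True)
        # Step 6: generate datacube metadata.
        subprocess.run(["python", "prep.py", f"{stem}_tc.dim"], check=True)

    for line in open("example_list.txt"):
        process_scene(line.strip())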

A Jupyter notebook will demonstrate the result (using the opendatacube API).
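
For example, the notebook might begin with a load call along these lines (the product name and query window are assumptions):

    # Minimal datacube API usage; see the "default CRS" issue below for
    # why output_crs/resolution are passed explicitly.
    import datacube

    dc = datacube.Datacube()
    data = dc.load(product="s1_gamma0",          # hypothetical product name
                   latitude=(-35.4, -35.2),
                   longitude=(149.0, 149.3),
                   output_crs="EPSG:3577",
                   resolution=(-25, 25))
    print(data)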

Known flaws

  • Ocean is masked out. (This is due to the nodata value used for the DEM by the terrain correction steps.)
  • Border noise is not entirely eliminated (some perimeter pixels remain).
  • A more efficient unified radiometric/geometric terrain-correction operator could be conceived (to reduce file I/O on the DSM).
  • Further comparison with GAMMA software output is necessary.
  • Signal intensity units are unspecified.
  • Currently auto-downloading ancillary data (e.g. using 3-arc-second SRTM, which is suboptimal).
  • Output format is ENVI raster (approximately 10x larger than input zip) rather than cloud optimised GeoTIFF.

Instructions

Process imagery

  1. Ensure the graph processing tool is available (run "ln -s ../snap6/bin/gpt gpt" after installing SNAP)
  2. Batch process some scenes (run "./bulk.sh example_list.txt" after confirming example input)

(Takes 10-15 min/scene, using 4 cores and 10-15 GB of memory, on VDI@NCI.)

Insert into Open Data Cube

  1. Ensure the environment has been prepared (run "datacube system check")
  2. Define the products (run "datacube product add productdef.yaml")
  3. For each newly preprocessed scene, run a preparation script (e.g. "python prep.py output1.dim") to generate metadata (yaml) in an appropriate format for datacube indexing.
  4. For each of those prepared scenes, index into the datacube (e.g. "datacube dataset add output*.yaml --auto-match")
  5. Verify the data using the datacube API (e.g. a python notebook).
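
For step 5, a minimal check might look like this (the product name is an assumption):

    # Confirm the product and datasets are visible to the datacube API.
    import datacube

    dc = datacube.Datacube()
    print(dc.list_products())                    # the new product should appear
    print(len(dc.find_datasets(product="s1_gamma0")), "datasets indexed")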

s1prepro's People

Contributors

benjimin

s1prepro's Issues

Sentinel-1 dataset Documents (yaml)

Hello,
I'm unable to run prep.py to create YAML dataset documents for indexing the data.
Can you show me an example of a generated YAML file?

Also, is it possible to create YAML dataset documents from the ENVI header (.hdr) alone?

Thank You

default CRS

productdef.yaml needs a default CRS; otherwise output_crs/resolution must always be specified when calling datacube.load.
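
One possible fix (a hypothetical fragment; values are illustrative) is to add load defaults to the product definition:

    # Hypothetical addition to productdef.yaml; a storage section supplies
    # default CRS/resolution so datacube.load need not be told each time.
    storage:
      crs: EPSG:3577
      resolution:
        x: 25
        y: -25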

Sentinel 1 ingestion

Hello!
I am a newcomer to datacube and, in general, to geospatial data.
I am trying to ingest Sentinel 1 data into Open Data Cube.
I read the documentation, but I still have problems...

I downloaded S1 data from the Copernicus portal; the data consists of folders containing measurements (TIFF format), metadata, and other information.

The ODC documentation I read says that for ingestion I need several files:

  1. product definition
  2. a YAML file containing metadata (which, as I understand, can be produced by running a python script)
  3. the ingestion file

I am currently stuck on 2), i.e., the YAML metadata file.
Do any examples exist of a python script or metadata YAML file for Sentinel 1 data? (If you don't have one for S1, Sentinel 2 documentation would also be fine for me...)

I would really appreciate it if you could help me!! :D

Thanks

Custom DEM

Do not wish to auto-download the default (3 arc second) DEM.

Instead, we wish to use a better local DSM (for a higher-quality product), with more appropriate no-data specifications (so that ocean is not masked out).

There may be complications depending on file format (e.g. the DSM might need subsetting for efficient gpt performance).
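
As a sketch of the no-data adjustment (filenames and value are assumptions), GDAL could rewrite the DSM's nodata declaration before it is handed to the terrain-correction steps:

    # Sketch: declare a nodata value that does not collide with ocean
    # heights, so the ocean is not masked out. Illustrative names/values.
    from osgeo import gdal

    gdal.UseExceptions()
    gdal.Translate("dsm_fixed.tif", "local_dsm.tif", noData=-32768)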
