planetlabs / notebooks
Interactive notebooks from Planet Engineering
Home Page: https://developers.planet.com/
License: Apache License 2.0
Demonstrate temporal analysis using Planet imagery, potentially taking advantage of the cloud-optimized nature of all Planet GeoTIFFs.
I am trying to clone the git repository by running this command:
git clone git@github.com:planetlabs/notebooks.git
I am getting the error below. Is there anything I am missing?
**git@github.com: Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.**
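That error usually means GitHub has no SSH key registered for your account. One workaround (a sketch, assuming you only need read access to this public repo) is to clone over HTTPS instead, which requires no SSH key; the snippet below derives the HTTPS URL from the SSH one:

```shell
# Derive the HTTPS clone URL from the SSH form; read-only access to a
# public repository needs no credentials at all.
ssh_url="git@github.com:planetlabs/notebooks.git"
https_url=$(echo "$ssh_url" | sed 's#^git@\([^:]*\):#https://\1/#')
echo "$https_url"
# git clone "$https_url"
```

If you do need SSH (e.g. for pushing), add your public key to your GitHub account instead.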
Users may want to run notebooks without having to use Docker. Add instructions to the README. For an example, see the installation instructions offered in the geonotebook README: https://github.com/OpenGeoscience/geonotebook
It would be beneficial to casual notebook readers if a static representation of the notebooks was available. Test generating static representations of notebooks in this repo.
Thoughts:
This bug is related to the bug I posted here: kscottz/PythonFromSpace#1
I am trying to gather assets for my items, but my queries for assets always return an empty object. Any suggestions on how I can gather these assets?
For example, I can look up an item with the following query: https://api.planet.com/data/v1/item-types/PSScene3Band/items/20170520_181725_1044/ but I am getting an empty object when I try to get the assets for that item, here:
https://api.planet.com/data/v1/item-types/PSScene3Band/items/20170520_181725_1044/assets/
Here is a more thorough example:
# Import helper modules:
import os
import json
import requests
# Setup the API Key from the `PL_API_KEY` environment variable
PLANET_API_KEY = os.getenv('PL_API_KEY')
print("PLANET_API_KEY:", PLANET_API_KEY)
# Helper function to print formatted JSON using the json module
def p(data):
    print(json.dumps(data, indent=2))
# Our First Request
# Setup Planet Data API base URL
# url from the example:
# URL = "https://api.planet.com/data/v1"
# url for the 'items' api returns the item in the response:
# URL = "https://api.planet.com/data/v1/item-types/PSScene3Band/items/20170520_181725_1044"
# TODO: why does querying for that item's asset give an empty response?
URL = "https://api.planet.com/data/v1/item-types/PSScene3Band/items/20170520_181725_1044/assets/"
# Setup the session
session = requests.Session()
# Authenticate
session.auth = (PLANET_API_KEY, "")
# Make a GET request to the Planet Data API
res = session.get(URL)
print("res.status_code", res.status_code)
print("res.text:", res.text)
print(res.json())
p(res.json())
Perhaps this is a bug in the API, or maybe I am missing something? Any tips would be helpful.
We have an environment.yml file and requirements.txt file in this repo.
This situation can be a little confusing with users wondering which file to use to set up their environment and also having two places to maintain dependencies.
@sarasafavi and @digitaltopo I am curious: what need is the environment.yml file filling? Is there any way to reduce it to the actual high-level dependencies and remove version pinning? Not as convenient, but easier to maintain. Thanks!
Some notebooks fail in the docker image. Update notebooks and docker image so that they run successfully.
In the latest notebook image, attempting to import opencv results in an error.
$ docker run -it --rm planet-notebooks python -c "import cv2"
Traceback (most recent call last):
File "<string>", line 1, in <module>
ImportError: libGL.so.1: cannot open shared object file: No such file or directory
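A common cause is that the opencv-python wheel links against libGL, which slim base images omit. Two possible fixes, both sketches and assumptions about this image rather than confirmed changes:

```dockerfile
# Option 1: provide the missing shared library (Debian/Ubuntu base assumed)
RUN apt-get update && \
    apt-get install -y --no-install-recommends libgl1-mesa-glx && \
    rm -rf /var/lib/apt/lists/*

# Option 2: avoid the GUI dependency entirely with the headless wheel
RUN pip uninstall -y opencv-python && pip install opencv-python-headless
```

Option 2 is usually preferable for notebook servers, since no display is ever attached.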
The docker build is failing on the conda install:
Sending build context to Docker daemon 5.12kB
Step 1/12 : FROM jupyter/minimal-notebook:2c80cf3537ca
---> db464e6587fb
Step 2/12 : RUN conda install -y -c conda-forge gdal=2.4.0
---> Running in 2a389bdcb711
Fetching package metadata ...An unexpected error has occurred.
Please consider posting the following information to the
conda GitHub issue tracker at:
https://github.com/conda/conda/issues
Current conda install:
platform : linux-64
conda version : 4.3.29
conda is private : False
conda-env version : 4.3.29
conda-build version : not installed
python version : 3.6.3.final.0
requests version : 2.18.4
root environment : /opt/conda (writable)
default environment : /opt/conda
envs directories : /opt/conda/envs
/home/jovyan/.conda/envs
package cache : /opt/conda/pkgs
/home/jovyan/.conda/pkgs
channel URLs : https://conda.anaconda.org/conda-forge/linux-64
https://conda.anaconda.org/conda-forge/noarch
https://repo.continuum.io/pkgs/main/linux-64
https://repo.continuum.io/pkgs/main/noarch
https://repo.continuum.io/pkgs/free/linux-64
https://repo.continuum.io/pkgs/free/noarch
https://repo.continuum.io/pkgs/r/linux-64
https://repo.continuum.io/pkgs/r/noarch
https://repo.continuum.io/pkgs/pro/linux-64
https://repo.continuum.io/pkgs/pro/noarch
config file : /opt/conda/.condarc
netrc file : None
offline mode : False
user-agent : conda/4.3.29 requests/2.18.4 CPython/3.6.3 Linux/4.9.125-linuxkit debian/stretch/sid glibc/2.23
UID:GID : 1000:100
`$ /opt/conda/bin/conda install -y -c conda-forge gdal=2.4.0`
Traceback (most recent call last):
File "/opt/conda/lib/python3.6/site-packages/conda/exceptions.py", line 640, in conda_exception_handler
return_value = func(*args, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/conda/cli/main.py", line 140, in _main
exit_code = args.func(args, p)
File "/opt/conda/lib/python3.6/site-packages/conda/cli/main_install.py", line 80, in execute
install(args, parser, 'install')
File "/opt/conda/lib/python3.6/site-packages/conda/cli/install.py", line 231, in install
unknown=index_args['unknown'], prefix=prefix)
File "/opt/conda/lib/python3.6/site-packages/conda/core/index.py", line 101, in get_index
index = fetch_index(channel_priority_map, use_cache=use_cache)
File "/opt/conda/lib/python3.6/site-packages/conda/core/index.py", line 120, in fetch_index
repodatas = collect_all_repodata(use_cache, tasks)
File "/opt/conda/lib/python3.6/site-packages/conda/core/repodata.py", line 75, in collect_all_repodata
repodatas = _collect_repodatas_serial(use_cache, tasks)
File "/opt/conda/lib/python3.6/site-packages/conda/core/repodata.py", line 485, in _collect_repodatas_serial
for url, schan, pri in tasks]
File "/opt/conda/lib/python3.6/site-packages/conda/core/repodata.py", line 485, in <listcomp>
for url, schan, pri in tasks]
File "/opt/conda/lib/python3.6/site-packages/conda/core/repodata.py", line 115, in func
res = f(*args, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/conda/core/repodata.py", line 473, in fetch_repodata
with open(cache_path, 'w') as fo:
FileNotFoundError: [Errno 2] No such file or directory: '/opt/conda/pkgs/cache/497deca9.json'
The command '/bin/sh -c conda install -y -c conda-forge gdal=2.4.0' returned a non-zero code: 1
This looks closely related to the recent gdal version issue mentioned in #71, which inspired the upgrade of gdal in #72.
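The traceback fails while writing to /opt/conda/pkgs/cache, so one workaround sometimes used for this class of conda error (an assumption, not a verified fix for this image) is to ensure the cache directory exists and the index metadata is fresh before the install step:

```dockerfile
# Hypothetical workaround: create conda's package cache directory and
# clear stale index metadata before installing.
RUN mkdir -p /opt/conda/pkgs/cache && \
    conda clean --index-cache --yes && \
    conda install -y -c conda-forge gdal=2.4.0
```

Updating conda itself in the base image (it is a rather old 4.3.29 here) may also resolve it.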
When the docker image is built, the following error occurs:
Step 2/12 : RUN conda install -y -c conda-forge gdal=2.3.1
---> Running in fc89ad22750d
Fetching package metadata .............
PackageNotFoundError: Packages missing in current channels:
- gdal 2.3.1*
It looks like the gdal version needs to be updated
To minimize Docker image build time, move python requirement installation to the end of the Dockerfile
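The idea is to order Dockerfile layers from least to most frequently changed, so that edits to the Python requirements only invalidate the final layers. A minimal sketch (the step order and file names are illustrative, not the repo's actual Dockerfile):

```dockerfile
FROM jupyter/minimal-notebook:2c80cf3537ca

# Heavy, rarely-changing system layers first: cached across most rebuilds.
RUN conda install -y -c conda-forge gdal

# Python requirements last: editing requirements.txt only rebuilds from here.
COPY requirements.txt /tmp/requirements.txt
RUN pip install --no-cache-dir -r /tmp/requirements.txt
```

Copying only requirements.txt (not the whole build context) before the pip step keeps the cache valid when notebooks change.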
Create documentation to guide contributors.
I'm seeing a new failure when trying to build the notebook image:
docker build --rm -t planet-notebooks .
Sending build context to Docker daemon 5.12kB
Step 1/12 : FROM jupyter/minimal-notebook:2c80cf3537ca
2c80cf3537ca: Pulling from jupyter/minimal-notebook
e0a742c2abfd: Pull complete
486cb8339a27: Pull complete
dc6f0d824617: Pull complete
4f7a5649a30e: Pull complete
672363445ad2: Pull complete
ecdd51c923e7: Pull complete
42885501cf6c: Pull complete
a91169574a99: Pull complete
4d0f6517ea26: Pull complete
95394e9265ac: Pull complete
8227c59e3779: Pull complete
074b7bf56d53: Pull complete
7acd5e85ad59: Pull complete
7f12c3d0ff9e: Pull complete
c6c3afa6f981: Pull complete
84c4870ea598: Pull complete
9f71a0e80d07: Pull complete
501394cd98d6: Pull complete
206ef30745dc: Pull complete
Digest: sha256:5fa4d62f2cf2ea7e17790ab9d5628d75fda4151b18d5dc47545cb34b0b07c2a2
Status: Downloaded newer image for jupyter/minimal-notebook:2c80cf3537ca
---> db464e6587fb
Step 2/12 : RUN conda install -y -c conda-forge gdal=2.4.0
---> Running in 28b4d36e8130
Fetching package metadata .............
PackageNotFoundError: Packages missing in current channels:
- gdal 2.4.0*
We have searched for the packages in the following channels:
- https://conda.anaconda.org/conda-forge/linux-64
- https://conda.anaconda.org/conda-forge/noarch
- https://repo.continuum.io/pkgs/main/linux-64
- https://repo.continuum.io/pkgs/main/noarch
- https://repo.continuum.io/pkgs/free/linux-64
- https://repo.continuum.io/pkgs/free/noarch
- https://repo.continuum.io/pkgs/r/linux-64
- https://repo.continuum.io/pkgs/r/noarch
- https://repo.continuum.io/pkgs/pro/linux-64
- https://repo.continuum.io/pkgs/pro/noarch
I've tried the suggestion of removing the base jupyter/minimal-notebook image mentioned in #73 to no avail.
In datasets-identify.ipynb, we use fiona to read the shapefile coordinate reference system. The version of fiona associated with the version of rasterio we are using, 1.8.0, has a bug where it is unable to open the EPSG support file gcs.csv. This issue is documented here. This bug will be fixed in 1.8.1. Wait for 1.8.1 to go live and pin the installed fiona to that version.
I am running Docker for Mac, which no longer provides docker-machine, the tool I was using to set up my machine and build the docker images. When I rebuilt the docker image using Docker for Mac's default virtualization, I ran into the following error on rasterio import:
$ docker run -it planet-notebooks python -c "import rasterio"
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/opt/conda/lib/python3.7/site-packages/rasterio/__init__.py", line 22, in <module>
from rasterio._base import gdal_version
ImportError: libpoppler.so.76: cannot open shared object file: No such file or directory
In the crop-temporal notebook, in cell 23, gdalwarp vsicurl is called to download a portion of a geotiff. In the current docker image, this process fails with the following message:
ERROR 1: PROJ: proj_create_from_wkt: Cannot find proj.db
ERROR 1: PROJ: proj_create_from_wkt: Cannot find proj.db
ERROR 1: PROJ: pj_obj_create: Cannot find proj.db
ERROR 1: PROJ: createGeodeticReferenceFrame: Cannot find proj.db
ERROR 1: PROJ: proj_as_wkt: Cannot find proj.db
ERROR 1: PROJ: createGeodeticReferenceFrame: Cannot find proj.db
ERROR 1: PROJ: pj_obj_create: Cannot find proj.db
ERROR 1: PROJ: proj_as_wkt: Cannot find proj.db
ERROR 1: PROJ: proj_create_from_wkt: Cannot find proj.db
ERROR 1: PROJ: proj_create_from_wkt: Cannot find proj.db
ERROR 1: PROJ: pj_obj_create: Cannot find proj.db
ERROR 1: PROJ: proj_as_wkt: Cannot find proj.db
ERROR 1: PROJ: proj_as_wkt: Cannot find proj.db
ERROR 1: PROJ: proj_create_from_wkt: Cannot find proj.db
ERROR 1: PROJ: proj_create_from_database: Cannot find proj.db
ERROR 1: Cannot compute bounding box of cutline. Cannot find source SRS
The current installed version of gdal is 3.0.1. In the past, the version was pinned to 2.4.0.
This may be related to the switch to gdal 3. (ref and possible solutions: PDAL/PDAL#2544)
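One commonly suggested workaround for "Cannot find proj.db" (an assumption about this image's layout, not a confirmed fix) is to point the PROJ_LIB environment variable at the directory that actually contains proj.db:

```shell
# Locate proj.db under the conda prefix and export its directory.
# The /opt/conda default matches the jupyter base images used here.
prefix="${CONDA_PREFIX:-/opt/conda}"
proj_db=$(find "$prefix" -name proj.db 2>/dev/null | head -n 1)
if [ -n "$proj_db" ]; then
    export PROJ_LIB=$(dirname "$proj_db")
    echo "PROJ_LIB=$PROJ_LIB"
fi
```

Setting PROJ_LIB in the Dockerfile (ENV PROJ_LIB=...) would make the fix apply to every notebook session.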
Create tutorials that will help hackers get started during the Stanford Big Earth Data Hackathon
To allow for running notebooks outside of a docker image, move requirements out of the Dockerfile and into requirements.txt files.
Add:
jupyter nbextension enable --py --sys-prefix widgetsnbextension
to the build process.
In order to avoid upstream issues like #73, we should push our own base image to dockerhub.
A common workflow for analysis is comparing or combining information from multiple sources. Create a tutorial that demonstrates pixel-by-pixel comparison of Landsat and PlanetScope imagery.
I ran into difficulty running cell In [23]:
ERROR 11: HTTP response code: 400
This step is crucial since I need to download data using the API (my token works in other cases).
Any ideas?
import subprocess

def _gdalwarp(input_filename, output_filename, options, verbose=False):
    commands = ['gdalwarp'] + options + [
        '-overwrite',
        input_filename,
        output_filename]
    if verbose:
        print(' '.join(commands))
    subprocess.check_call(commands)

def download_scene_aoi(download_url, output_filename, geojson_filename, verbose=False):
    vsicurl_url = '/vsicurl/' + download_url
    options = [
        '-cutline', geojson_filename,
        '-crop_to_cutline',
    ]
    _gdalwarp(vsicurl_url, output_filename, options, verbose=verbose)

%time download_scene_aoi(download_url, output_file, geojson_filename, verbose=True)
I also tried running the gdal command line in bash; same error:
$ gdalwarp -cutline data/87/aoi.geojson -crop_to_cutline -overwrite /vsicurl/https://api.planet.com/data/v1/download?token=eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJKTEgzM1VsU2h4aGJXMUw5VWxwZUlHOGFWNXoyOEtmZVdTUE04Zk4wTStaWmsrRTNITFlRb3BJaTk2SklUT05EMTY1ME51ajl4RkhKM3FzbXZSbUZ1Zz09IiwiaXRlbV90eXBlX2lkIjoiUFNTY2VuZTRCYW5kIiwidG9rZW5fdHlwZSI6InR5cGVkLWl0ZW0iLCJleHAiOjE1MzAwNTQyMzMsIml0ZW1faWQiOiIyMDE3MDgxOF8xOTA2MjlfMTA1MCIsImFzc2V0X3R5cGUiOiJhbmFseXRpY19zciJ9.FkxnWYQgyXfnmkQlgacssXuVvVuUe9RQuo1xkxYNcXkE64TGy4P7Z6OqWGeARp1mXHK8ENyEODGwEJtXFLSaHw data/87/20170818_190629_1050.tif
Some large notebooks have multiple utilities that would be useful in their own notebooks:
Break these out into their own notebooks and then reference in the source notebooks.
To allow users to get up and running with notebooks as fast as possible, remove the requirement to build their own images by hosting a docker image on Dockerhub.
I've successfully run the Docker container but discovered that the example notebooks are not in the environment by default, so I am uploading them manually. It would be good if these were available by default.
Demonstrate utilizing PlanetScope imagery for a forest monitoring use case.
Create a tutorial for visualizing and working with the PlanetScope UDM
Jen, I was looking at the clip-api-demo notebook with a colleague here and we were able to get it working. However, there is a short section of code which I ended up surrounding in a try-except block since I was getting a JSONDecode error which was causing Python to crash:
# If clipping process succeeded, we are done
try:
    if check_state_request.json()['state'] == 'succeeded':
        clip_download_url = check_state_request.json()['_links']['results'][0]
        clip_succeeded = True
        print("Clip of scene succeeded and is ready to download")
    # Still activating. Wait 1 second and check again.
    else:
        print("...Still waiting for clipping to complete...")
        time.sleep(1)
except Exception as e:
    print('Exception! {}'.format(e))
    print("...Still waiting for clipping to complete...")
    time.sleep(1)
Just curious if you had seen that before or maybe I'm just lucky!
All steps in the installation process until and including 'Build the docker image' were successful. However I'm now running into this issue at the 'run the container' step.
Ran the following command in command prompt
docker run -it --rm -v "//c/Users/Craig D/Code/planet/notebooks/jupyter-notebooks:/home/jovyan/work" -e PL_API_KEY='[MY-API-KEY]' planet-notebooks
I did not include the -p 8888:8888 flag mentioned in the README because it was already set when I ran the docker run command originally, and running the command again while specifying -p gives the error 'port already allocated'.
Once I open the localhost URL, the page simply says 'site can't be reached - localhost refused to connect' instead of the Jupyter Lab notebooks I expect to see.
My system is Windows 10 and my Docker version is Docker Toolbox, Docker version 18.03.0-ce, build 0520e24302.
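With Docker Toolbox the container runs inside a VirtualBox VM, so the notebook server is generally not reachable on localhost; you browse to the VM's IP instead. A sketch (the machine name "default" and the fallback IP are assumptions about a typical Toolbox install):

```shell
# Ask docker-machine for the VM's IP; fall back to Toolbox's usual default.
vm_ip=$(docker-machine ip default 2>/dev/null || echo "192.168.99.100")
echo "Open http://${vm_ip}:8888 in your browser"
```

On Docker Desktop (rather than Toolbox), localhost:8888 works as the README describes.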
We're outputting pixel coordinates, not lat/lng:
{
  "ship_count": 4,
  "ships": [
    {"lat": 557725.0, "lng": 4176230.0, "id": 1},
    {"lat": 558590.0, "lng": 4176110.0, "id": 2},
    {"lat": 559280.0, "lng": 4176025.0, "id": 3},
    {"lat": 558870.0, "lng": 4174930.0, "id": 4}
  ]
}
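If those values are pixel (or projected easting/northing) coordinates, converting them to georeferenced coordinates means applying the raster's geotransform. A minimal, dependency-free sketch — the origin and pixel-size numbers below are made up for illustration, and a real notebook would read them from the GeoTIFF (e.g. via rasterio):

```python
def pixel_to_geo(col, row, origin_x, origin_y, pixel_w, pixel_h):
    """Apply a north-up GDAL-style geotransform (no rotation terms)."""
    x = origin_x + col * pixel_w
    y = origin_y + row * pixel_h  # pixel_h is negative for north-up rasters
    return x, y

# hypothetical geotransform values
x, y = pixel_to_geo(100, 50, origin_x=500000.0, origin_y=4180000.0,
                    pixel_w=3.0, pixel_h=-3.0)
print(x, y)  # 500300.0 4179850.0
```

The resulting easting/northing would still need a projection library (e.g. pyproj) to become true lat/lng.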
Running the Docker container, the notebook ndvi_planetscope.ipynb requests my API key at the line:
!planet data download --item-type PSScene4Band --dest data --asset-type analytic,analytic_xml --string-in id 20160831_180302_0e26
It appears that the environment variable PL_API_KEY is not being set correctly. I tried setting the key within the notebook using %env PLANET_API_KEY='my_key', but this also results in the error:
Error: InvalidAPIKey: {"message": "Please enter your API key.", "errors": []}
The following was successful:
!planet --api-key my_key data download --item-type PSScene4Band --dest data --asset-type analytic,analytic_xml --string-in id 20160831_180302_0e26
Note you also don't set %matplotlib inline in this notebook.
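Two things seem worth double-checking here: the variable name is PL_API_KEY (which the repo's own examples read), not PLANET_API_KEY, and IPython's %env magic keeps quotes as part of the value, so `%env PL_API_KEY='my_key'` would store a key wrapped in literal quotes (this quoting behavior is my assumption about the failure, not confirmed). Setting the variable from Python avoids both pitfalls:

```python
import os

# Set the variable the planet CLI reads (PL_API_KEY, not PLANET_API_KEY),
# with no stray quotes in the value. "my_key" is a placeholder.
os.environ["PL_API_KEY"] = "my_key"
print(os.environ["PL_API_KEY"])  # my_key
```

Environment variables set this way are visible to subsequent ! shell commands in the same notebook.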
Instead of creating a ton of code for parsing Landsat QA bands, use rio-l8qa.
Both the crop-classification/classify-cart-l8-ps.ipynb and crop-classification/classify-cart.ipynb throw an error when trying to download the planet data in the 4th cell running planet data download
paging: False pending: 0
activating: 0 complete: 0 elapsed: 1
paging: False pending: 0
Error: BadQuery: {"field": {}, "general": [{"message": "invalid_shape_exception: Invalid shape: Hole is not within polygon"}]}
I'm running the jupyter notebook directly from the docker container
Hi Everyone:
I had an issue recently trying to follow the step-by-step procedure from the API Introduction (tutorials).
I can't get past the Quick Search section, because my API_KEY is apparently not working.
I got this answer:
{
"message": "Please enter your API key, or email and password.",
"errors": []
}
Everything was good for the previous sections, but from here (the most important, I consider) the code rejects me with that message.
Sorry if my question is too basic, but I tried looking for the answer on other blogs, trying their solutions without success.
Thanks in advance for your reply!
Mishel.
Using the docker container, in the 01_ship_detector.ipynb
notebook I receive the following error:
ImportError Traceback (most recent call last)
<ipython-input-4-5c0ac2810ed5> in <module>()
1 import json
----> 2 from osgeo import gdal, osr
3 import numpy
4 from skimage.segmentation import felzenszwalb
5 from skimage.segmentation import mark_boundaries
/opt/conda/envs/python2/lib/python2.7/site-packages/osgeo/__init__.py in <module>()
19 fp.close()
20 return _mod
---> 21 _gdal = swig_import_helper()
22 del swig_import_helper
23 else:
/opt/conda/envs/python2/lib/python2.7/site-packages/osgeo/__init__.py in swig_import_helper()
15 if fp is not None:
16 try:
---> 17 _mod = imp.load_module('_gdal', fp, pathname, description)
18 finally:
19 fp.close()
ImportError: libjson-c.so.2: cannot open shared object file: No such file or directory
I've just followed the steps in your script to calculate top of atmosphere reflection (toar_planetscope.ipynb). It's really easy to follow. Thank you for providing this!
I think I've found a typo and figured I'd make you aware of it.
In [6] is the section where you set the characteristics of the output file to uint16. Then you rescale the reflectance bands from float so that they can be saved as uint16.
However, in the call to save the raster file, you define the reflectance file and not the rescaled file as the output file.
If I understand this correctly, this would try to save the float values as uint16, resulting in an output file that has pretty much only 0 values.
I believe the section
with rasterio.open('data/reflectance.tif', 'w', **kwargs) as dst:
    dst.write_band(1, band_blue_reflectance.astype(rasterio.uint16))
    dst.write_band(2, band_green_reflectance.astype(rasterio.uint16))
    dst.write_band(3, band_red_reflectance.astype(rasterio.uint16))
    dst.write_band(4, band_nir_reflectance.astype(rasterio.uint16))
would have to be changed into:
with rasterio.open('data/reflectance.tif', 'w', **kwargs) as dst:
    dst.write_band(1, blue_ref_scaled.astype(rasterio.uint16))
    dst.write_band(2, green_ref_scaled.astype(rasterio.uint16))
    dst.write_band(3, red_ref_scaled.astype(rasterio.uint16))
    dst.write_band(4, nir_ref_scaled.astype(rasterio.uint16))
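To see why the rescaling matters: reflectance floats in [0, 1] cast directly to an integer type truncate to 0 almost everywhere, whereas scaling by a factor first (10000 is a common convention, assumed here) preserves the signal. A small illustration using plain Python lists instead of numpy arrays:

```python
def scale_reflectance(values, factor=10000):
    """Scale 0-1 float reflectance into integer range before a uint16 cast."""
    return [int(round(v * factor)) for v in values]

reflectance = [0.0, 0.1234, 0.56, 1.0]
print([int(v) for v in reflectance])   # direct cast: [0, 0, 0, 1]
print(scale_reflectance(reflectance))  # scaled:      [0, 1234, 5600, 10000]
```

The scale factor must be recorded (or standardized) so downstream users can recover the original reflectance values.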
Many notebooks in the crop-classification folder run slowly due to classification training. Cache trained models so these notebooks can be run quickly.
create a test notebook that imports all the libraries used in this repo that can be run every time the docker image is changed to reduce the number of uncaught import errors (e.g. #110)
Update README to point to recently added notebooks.
Implement suggestions in the apt-get section of the Docker Dockerfile best practices to minimize image size.
Show how to create labeled data from Planet imagery for use in machine learning algorithms.
Changes made to the docker image may cause some notebooks to fail. To aid in Dockerfile development, create tools to auto-run notebooks.
Not sure this is the right place, but I have a query regarding one of the notebooks, 'ndvi_planetscope'.
I ran the notebook for a scene I was interested in:
Scene id: '20181002_045628_0f1a'
item_type = 'PSScene4Band'
asset_type = 'analytic_sr'
Everything runs OK; however, the resulting NDVI I get is completely unexpected, largely in the range of 0.6-0.8. A Sentinel 2 image (TOA, not SR) for the same date shows much smaller NDVI values. Any idea what the issue might be, or if there is one?