opendatacube / datacube-explorer
Web-based exploration of Open Data Cube collections
License: Apache License 2.0
Datacube allows older metadata styles where the full CRS is present in the metadata instead of the AUTHORITY:NUMERIC-CODE style Explorer expects. Use a backwards lookup against the PyPROJ database for these. Specific case / linked to #30
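A minimal sketch of the backwards lookup, assuming pyproj 2+; the function name is illustrative:

from typing import Optional

from pyproj import CRS

def authority_code_from_wkt(wkt: str) -> Optional[str]:
    # Parse the full CRS definition found in the metadata document.
    crs = CRS.from_wkt(wkt)
    # to_epsg() searches the PROJ database for a matching definition;
    # min_confidence tunes how close the match must be.
    code = crs.to_epsg(min_confidence=70)
    return "EPSG:{}".format(code) if code is not None else None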
We've been using the DEA deployment of the dashboard to keep track of recently processed data. There's a bug where some pages are stuck displaying out-of-date data.
https://data.dea.gadevs.ga/ls8_fc_albers/2018/10
vs
https://data.dea.gadevs.ga/ls8_fc_albers/2018/10/5
This persists across multiple browsers/refreshes/internet connections/days, so it is clearly something inside the dashboard.
Explorer uses lru_cache() on some of its frequently-used models, such as loading an ODC product. These caches are useful because the data can be accessed many times during a single page load.
(See datacube-explorer/cubedash/summary/_stores.py, lines 295 to 296 in 00a5668.)
But lru_cache has no expiry at all, which is an issue if we want newly-added products to appear without restarting Explorer.
A minimal timeout (five seconds?) or scope (one page render?) for the caching would be enough. These are cheap queries; they're just not cheap enough to run dozens of times in a single page load.
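A rough sketch of the timeout option: wrap functools.lru_cache so entries expire after a few seconds. The decorator name and parameters are illustrative, not Explorer's actual API:

import functools
import time

def ttl_cache(seconds=5.0, maxsize=128):
    def decorator(func):
        @functools.lru_cache(maxsize=maxsize)
        def _cached(_ttl_bucket, *args, **kwargs):
            return func(*args, **kwargs)

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # Each `seconds`-long interval gets a new bucket value, which
            # changes the cache key and so forces a fresh call.
            return _cached(int(time.time() // seconds), *args, **kwargs)

        wrapper.cache_clear = _cached.cache_clear
        return wrapper

    return decorator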
Review and fix _prepare_from_string for the following warning:
/usr/local/lib/python3.6/dist-packages/pyproj/crs.py:77: FutureWarning: '+init=<authority>:<code>' syntax is deprecated. '<authority>:<code>' is the preferred initialization method.
return _prepare_from_string(" ".join(pjargs))
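The fix on the calling side is to pass the bare authority string rather than the init form. A small sketch, assuming pyproj 2+:

from pyproj import CRS

# Deprecated form: builds '+init=epsg:4326' and triggers the FutureWarning.
crs_old = CRS(init="epsg:4326")

# Preferred form: pass the 'AUTHORITY:CODE' string directly.
crs_new = CRS("EPSG:4326")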
TopologicalError: The operation 'GEOSIntersection_r' could not be performed. Likely cause is invalidity of the geometry <shapely.geometry.multipolygon.MultiPolygon object at 0x7fea6df25e80>
File "flask/app.py", line 1982, in wsgi_app
response = self.full_dispatch_request()
File "flask/app.py", line 1614, in full_dispatch_request
rv = self.handle_user_exception(e)
File "flask/app.py", line 1517, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "flask/_compat.py", line 33, in reraise
raise value
File "flask/app.py", line 1612, in full_dispatch_request
rv = self.dispatch_request()
File "flask/app.py", line 1598, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "cubedash/_pages.py", line 57, in overview_page
regions_geojson=_model.get_regions_geojson(product_name, year, month, day),
File "flask_caching/__init__.py", line 674, in decorated_function
rv = f(*args, **kwargs)
File "cubedash/_model.py", line 166, in get_regions_geojson
region_info,
File "cubedash/_model.py", line 223, in _get_regions_geojson
} for region_code in region_counts
File "cubedash/_model.py", line 223, in <listcomp>
} for region_code in region_counts
File "cubedash/_model.py", line 245, in region_geometry_cut
return footprint.intersection(shapely_extent)
File "shapely/geometry/base.py", line 620, in intersection
return geom_factory(self.impl['intersection'](self, other))
File "shapely/topology.py", line 70, in __call__
self._check_topology(err, this, other)
File "shapely/topology.py", line 38, in _check_topology
self.fn.__name__, repr(geom)))
Full error report on Sentry: https://sentry.io/jeremy-hooke/cubedash/issues/688903184/
Affects June 2018 specifically.
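A possible mitigation, sketched below: repair invalid geometry with shapely's buffer(0) trick before intersecting. The function name is illustrative, mirroring region_geometry_cut from the traceback:

from shapely.geometry.base import BaseGeometry

def safe_intersection(footprint: BaseGeometry, extent: BaseGeometry) -> BaseGeometry:
    # buffer(0) rebuilds the geometry, which usually removes
    # self-intersections and other invalidities.
    if not footprint.is_valid:
        footprint = footprint.buffer(0)
    if not extent.is_valid:
        extent = extent.buffer(0)
    return footprint.intersection(extent)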
We should rename this repository to remove 'dea' from the name.
The obvious and lazy name would be 'datacube-dashboard'.
But I don't think 'dashboard' is the most helpful name. Something like 'datacube-explorer' might be better.
It appears some of the geometry changes break Explorer: https://github.com/opendatacube/datacube-explorer/runs/733017236
We're touching an internal _geom field because it was previously the only way to do some operations.
Find a home for this DB enhancement snippet
echo "Clustering $(date)"
psql "${dbname}" -X -c 'cluster cubedash.dataset_spatial using "dataset_spatial_dataset_type_ref_center_time_idx";'
psql "${dbname}" -X -c 'create index tix_region_center ON cubedash.dataset_spatial (dataset_type_ref, region_code text_pattern_ops, center_time);'
echo "Done $(date)"
To encourage other groups to use "Cubedash", we should externalize our DEA branding configuration or at least make it optional.
This can be based on the existing code for translating ODC YAML documents into static STAC Items and Catalogs. See the reference STAC Specification for more details.
Development Seed has a documented example implementation of the STAC API.
cubedash-gen writes passwords to the log; mask these.
Alex to check on the details of what needs to be finished for the 1.8 release.
This is an example of a link that's timing out: https://explorer.dea.ga.gov.au/dataset/a066a2ab-42f7-4e72-bc6d-a47a558b8172
Its name is dsm1sv1_0_Clean_tiff, so it's probably timing out because it is used to produce so many wofs products.
This test is the only failing test when building with Docker. I'm going to skip it, but wanted to note that we should fix it.
Error log is below.
================================================================================================= FAILURES =================================================================================================
____________________________________________________________________________________________ test_with_timings _____________________________________________________________________________________________
client = <FlaskClient <Flask 'cubedash'>>
def test_with_timings(client: FlaskClient):
> _monitoring.init_app_monitoring()
integration_tests/test_page_loads.py:420:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cubedash/_monitoring.py:26: in init_app_monitoring
@_model.app.before_request
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <Flask 'cubedash'>, args = (<function init_app_monitoring.<locals>.time_start at 0x7fe80c08a950>,), kwargs = {}
def wrapper_func(self, *args, **kwargs):
if self.debug and self._got_first_request:
raise AssertionError(
> "A setup function was called after the "
"first request was handled. This usually indicates a bug "
"in the application where a module was not imported "
"and decorators or other functionality was called too late.\n"
"To fix this make sure to import all your view modules, "
"database models and everything related at a central place "
"before the application starts serving requests."
)
E AssertionError: A setup function was called after the first request was handled. This usually indicates a bug in the application where a module was not imported and decorators or other functionality was called too late.
E To fix this make sure to import all your view modules, database models and everything related at a central place before the application starts serving requests.
/usr/local/lib/python3.6/dist-packages/flask/app.py:90: AssertionError
It takes 21 minutes on GitHub Actions to do a Docker build and run the tests.
The Docker build needs caching, which should happen soon. But the tests are very slow... so we should fix the slow tests!
eo3 documents already work in Explorer thanks to ODC 1.8's automatic conversion (which appends the legacy fields to the document on index), but they don't display very well.
Things we can improve:
- The geometry shown in Explorer is the square bounding box. We could use the actual footprint available in eo3's native geometry. #162
- Support the region_code field for displaying grid counts. #162
- Support eo3 fields in the document renderer. #167
- Expand the STAC API to make use of extra eo3 properties: grids, etc.
Can be replicated using the following steps:
The product selection menu at the top of the screen currently opens on mouse hover.
But there are enough products in DEA now that the menu is taller than some people's screens. Users who scroll the screen down using the scrollbars on the side end up closing the menu accidentally.
The WAI (web accessibility) guidelines suggest either adding a closing delay on de-hover, or making it open/close based on click instead of hover.
When you visit the / page, the default product it attempts to load is ls7_nbar_scene.
This page should either load the first of the available products, or perhaps show a list of available products to choose from.
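A rough sketch of the first option, assuming access to an ODC index; the route body is illustrative, not Explorer's actual code:

from datacube import Datacube
from flask import Flask, abort, redirect

app = Flask(__name__)
index = Datacube().index  # ODC index handle; connection config assumed

@app.route("/")
def default_page():
    products = sorted(p.name for p in index.products.get_all())
    if not products:
        abort(404, "No products are indexed")
    # Send the user to the first available product's overview page.
    return redirect("/{}".format(products[0]))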
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedTable) relation "cubedash.product" does not exist
LINE 2: FROM cubedash.product
^
[SQL: SELECT cubedash.product.dataset_count, cubedash.product.time_earliest, cubedash.product.time_latest, now() - cubedash.product.last_refresh AS last_refresh_age, cubedash.product.id AS id_, cubedash.product.
Error is:
Generating product summaries...
{"event":"generate.product.error","exception":"Traceback (most recent call last):\n File "/env/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1284, in execute_context\n cursor, statement, parameters, context\n File "/env/lib/python3.6/site-packages/sqlalchemy/engine/default.py", line 590, in do_execute\n cursor.execute(statement, parameters)\npsycopg2.errors.InternalError: GEOSUnaryUnion: TopologyException: Input geom 1 is invalid: Self-intersection at or near point 234823.89145728183 5518112.4330590833 at 234823.89145728183 5518112.4330590833\n\n\nThe above exception was the direct cause of the following exception:\n\nTraceback (most recent call last):\n File "/env/lib/python3.6/site-packages/cubedash/generate.py", line 50, in generate_report\n updated = store.get_or_update(product.name, None, None, None, force_refresh)\n File "/env/lib/python3.6/site-packages/cubedash/summary/_stores.py", line 569, in get_or_update\n force_refresh=True,\n File "/env/lib/python3.6/site-packages/cubedash/summary/stores.py", line 611, in update\n for year in years\n File "/env/lib/python3.6/site-packages/cubedash/summary/_model.py", line 62, in add_periods\n periods = [p for p in periods if p is not None and p.dataset_count > 0]\n File "/env/lib/python3.6/site-packages/cubedash/summary/_model.py", line 62, in \n periods = [p for p in periods if p is not None and p.dataset_count > 0]\n File "/env/lib/python3.6/site-packages/cubedash/summary/stores.py", line 611, in \n for year in years\n File "/env/lib/python3.6/site-packages/cubedash/summary/_stores.py", line 569, in get_or_update\n force_refresh=True,\n File "/env/lib/python3.6/site-packages/cubedash/summary/stores.py", line 602, in update\n for month in range(1, 13)\n File "/env/lib/python3.6/site-packages/cubedash/summary/_model.py", line 62, in add_periods\n periods = [p for p in periods if p is not None and p.dataset_count > 0]\n File "/env/lib/python3.6/site-packages/cubedash/summary/_model.py", line 62, in \n periods = [p for p in periods if p is not None and p.dataset_count > 0]\n File "/env/lib/python3.6/site-packages/cubedash/summary/stores.py", line 602, in \n for month in range(1, 13)\n File "/env/lib/python3.6/site-packages/cubedash/summary/_stores.py", line 569, in get_or_update\n force_refresh=True,\n File "/env/lib/python3.6/site-packages/cubedash/summary/_stores.py", line 597, in update\n product_name, _utils.as_time_range(year, month)\n File "/env/lib/python3.6/site-packages/cubedash/summary/_summarise.py", line 78, in calculate_summary\n func.now().label("summary_gen_time"),\n File "/env/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 2244, in execute\n return connection.execute(statement, multiparams, **params)\n File "/env/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1020, in execute\n return meth(self, multiparams, params)\n File "/env/lib/python3.6/site-packages/sqlalchemy/sql/elements.py", line 298, in _execute_on_connection\n return connection._execute_clauseelement(self, multiparams, params)\n File "/env/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1139, in _execute_clauseelement\n distilled_params,\n File "/env/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1324, in execute_context\n e, statement, parameters, cursor, context\n File "/env/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1518, in handle_dbapi_exception\n sqlalchemy_exception, with_traceback=exc_info[2], from=e\n File "/env/lib/python3.6/site-packages/sqlalchemy/util/compat.py", line 
178, in raise\n raise exception\n File "/env/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1284, in execute_context\n cursor, statement, parameters, context\n File "/env/lib/python3.6/site-packages/sqlalchemy/engine/default.py", line 590, in do_execute\n cursor.execute(statement, parameters)\nsqlalchemy.exc.InternalError: (psycopg2.errors.InternalError) GEOSUnaryUnion: TopologyException: Input geom 1 is invalid: Self-intersection at or near point 234823.89145728183 5518112.4330590833 at 234823.89145728183 5518112.4330590833\n\n[SQL: SELECT sum(srid_summaries.dataset_count) AS dataset_count, array_agg(srid_summaries.srid) AS srids, sum(srid_summaries.size_bytes) AS size_bytes, ST_AsEWKB(ST_Union(srid_summaries.footprint_geometry)) AS footprint_geometry, max(srid_summaries.newest_dataset_creation_time) AS newest_dataset_creation_time, now() AS summary_gen_time \nFROM (SELECT ST_SRID(cubedash.dataset_spatial.footprint) AS srid, count() AS dataset_count, ST_Transform(ST_Union(cubedash.dataset_spatial.footprint), %(ST_Transform_1)s) AS footprint_geometry, sum(cubedash.dataset_spatial.size_bytes) AS size_bytes, max(cubedash.dataset_spatial.creation_time) AS newest_dataset_creation_time \nFROM cubedash.dataset_spatial \nWHERE (tstzrange(%(tstzrange_1)s, %(tstzrange_2)s, %(tstzrange_3)s) @> cubedash.dataset_spatial.center_time) AND cubedash.dataset_spatial.dataset_type_ref = (SELECT agdc.dataset_type.id \nFROM agdc.dataset_type \nWHERE agdc.dataset_type.name = %(name_1)s) GROUP BY srid) AS srid_summaries]\n[parameters: {'ST_Transform_1': 3577, 'tstzrange_1': datetime.datetime(2020, 6, 1, 0, 0, tzinfo=tzfile('/usr/share/zoneinfo/Australia/Darwin')), 'tstzrange_2': datetime.datetime(2020, 7, 1, 0, 0, tzinfo=tzfile('/usr/share/zoneinfo/Australia/Darwin')), 'tstzrange_3': '[]', 'name_1': 's2_l2a'}]\n(Background on this error at: http://sqlalche.me/e/2j85)","level":"exception","product":"s2_l2a","timestamp":"2020-06-23T02:18:26.815106Z"}
s2_l2a error (see log)
done. 0/1 generated, 1 failures
Part of the problem is that it's hard to track down which dataset is causing the failure.
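A hedged way to track them down: ask PostGIS which stored footprints are invalid. The table and footprint column follow the SQL in the error; the id column and connection string are assumptions:

import psycopg2

conn = psycopg2.connect("dbname=datacube")
with conn.cursor() as cur:
    cur.execute(
        "SELECT id, ST_IsValidReason(footprint) "
        "FROM cubedash.dataset_spatial "
        "WHERE NOT ST_IsValid(footprint)"
    )
    for dataset_id, reason in cur.fetchall():
        print(dataset_id, reason)
conn.close()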
Currently there's no easy way to update cubedash's product summaries once they are created.
There is a max_age when the spatial table is updated, but not when generating the summaries themselves, so they are skipped if they exist at all (in the get_or_update() function).
We should extend the generate command to allow updating summaries older than a certain age.
(This was never needed previously in the NCI instance, as it has a freshly-cloned database every day.)
Reported by @dunkgray, as this affected DEA Africa.
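A sketch of the proposed extension, assuming the generate command keeps using click as it does today; the option name is illustrative:

import datetime

import click

@click.command()
@click.option(
    "--refresh-older-than-hours",
    type=int,
    default=None,
    help="Regenerate summaries generated more than this many hours ago.",
)
def generate(refresh_older_than_hours):
    cutoff = None
    if refresh_older_than_hours is not None:
        cutoff = datetime.datetime.utcnow() - datetime.timedelta(
            hours=refresh_older_than_hours
        )
    # get_or_update() would then regenerate any summary whose
    # generation time predates the cutoff, instead of skipping it.
    click.echo("Refreshing summaries generated before {}".format(cutoff))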
cubedash-gen fails in the Geoscience Australia dev deployment, and presumably in production. The failure first occurred about a month ago (showing the delayed NRT products in the link above); recent improvements to logging have now surfaced it. The NRT streams are functional and OWS can see them.
The failure mode is a lot of messages like the one below, per product:
NOTICE: identifier "dashgen-fc-percentile-albers-seasonal agdc-1.8.1.dev22+gc2c25d3d" will be truncated to "dashgen-fc-percentile-albers-seasonal agdc-1.8.1.dev22+gc2c25d3"
2020-06-19T07:07:29.650249Z [exception] generate.product.error product=ga_s2a_ard_nbar_granule
Traceback (most recent call last):
File "/env/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1284, in _execute_context
cursor, statement, parameters, context
File "/env/lib/python3.6/site-packages/sqlalchemy/engine/default.py", line 590, in do_execute
cursor.execute(statement, parameters)
psycopg2.errors.SequenceGeneratorLimitExceeded: nextval: reached maximum value of sequence "product_id_seq" (32767)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/env/lib/python3.6/site-packages/cubedash/generate.py", line 46, in generate_report
store.refresh_product(product, refresh_older_than=refresh_time)
File "/env/lib/python3.6/site-packages/cubedash/summary/_stores.py", line 181, in refresh_product
derived_products=derived_products,
File "/env/lib/python3.6/site-packages/cubedash/summary/_stores.py", line 388, in _set_product_extent
.values(name=product.name, **fields)
File "/env/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 2244, in execute
return connection.execute(statement, *multiparams, **params)
File "/env/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1020, in execute
return meth(self, multiparams, params)
File "/env/lib/python3.6/site-packages/sqlalchemy/sql/elements.py", line 298, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
File "/env/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1139, in _execute_clauseelement
distilled_params,
File "/env/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1324, in _execute_context
e, statement, parameters, cursor, context
File "/env/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1518, in _handle_dbapi_exception
sqlalchemy_exception, with_traceback=exc_info[2], from_=e
File "/env/lib/python3.6/site-packages/sqlalchemy/util/compat.py", line 178, in raise_
raise exception
File "/env/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1284, in _execute_context
cursor, statement, parameters, context
File "/env/lib/python3.6/site-packages/sqlalchemy/engine/default.py", line 590, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.DataError: (psycopg2.errors.SequenceGeneratorLimitExceeded) nextval: reached maximum value of sequence "product_id_seq" (32767)
[SQL: INSERT INTO cubedash.product (name, dataset_count, last_refresh, source_product_refs, derived_product_refs, time_earliest, time_latest) VALUES (%(name)s, %(dataset_count)s, now(), %(source_product_refs)s, %(derived_product_refs)s, %(time_earliest)s, %(time_latest)s) ON CONFLICT (name) DO UPDATE SET dataset_count = %(param_1)s, last_refresh = now(), source_product_refs = %(param_2)s, derived_product_refs = %(param_3)s, time_earliest = %(param_4)s, time_latest = %(param_5)s RETURNING cubedash.product.id]
[parameters: {'name': 'ga_s2a_ard_nbar_granule', 'dataset_count': 173779, 'source_product_refs': [], 'derived_product_refs': [], 'time_earliest': datetime.datetime(2017, 7, 2, 0, 57, 11, 26000, tzinfo=psycopg2.tz.FixedOffsetTimezone(offset=0, name=None)), 'time_latest': datetime.datetime(2019, 9, 16, 1, 17, 21, 24000, tzinfo=psycopg2.tz.FixedOffsetTimezone(offset=0, name=None)), 'param_1': 173779, 'param_2': [], 'param_3': [], 'param_4': datetime.datetime(2017, 7, 2, 0, 57, 11, 26000, tzinfo=psycopg2.tz.FixedOffsetTimezone(offset=0, name=None)), 'param_5': datetime.datetime(2019, 9, 16, 1, 17, 21, 24000, tzinfo=psycopg2.tz.FixedOffsetTimezone(offset=0, name=None))}]
(Background on this error at: http://sqlalche.me/e/9h9h)
This is the command the cronjob runs:
- "cubedash-gen"
- "--no-init-database"
- "--refresh-stats"
- "--force-refresh"
- "--all"
In 2.0.9
Steps to reproduce:
This appears to be related to:
datacube-explorer/cubedash/static/overview.js, line 217 in ee25003
datacube-explorer/cubedash/static/overview.ts, line 224 in ee25003
where the URL generated for onclick does not use Flask to get the URL for the page, and will ignore SCRIPT_NAME if set.
Files like templates and static assets are not installed when installing with pip.
If we want a really clean production environment, we should ensure the project runs fine after a pip install, with no project folder present.
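A minimal sketch of the conventional fix: declare the templates and static files as package data so pip installs them alongside the code. The globs are assumptions based on the repository layout:

from setuptools import find_packages, setup

setup(
    name="datacube-explorer",
    packages=find_packages(),
    # Ship the non-Python files that live inside the cubedash package.
    package_data={
        "cubedash": ["templates/*.html", "static/*"],
    },
)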
https://explorer.digitalearth.africa/product-audit/
"Internal error"
@santoshamohan provided the following log from the server:
[2020-01-21 05:03:24,771] ERROR in app: Exception on /product-audit/ [GET]
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 2446, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 1951, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 1820, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/lib/python3.6/dist-packages/flask/_compat.py", line 39, in reraise
raise value
File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 1949, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python3.6/dist-packages/flask/app.py", line 1935, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/code/cubedash/_audit.py", line 81, in product_audit_page
spatial_quality_stats=list(store.get_quality_stats()),
File "/code/cubedash/summary/_stores.py", line 341, in get_quality_stats
d["product"] = self._dataset_type_by_id(row["dataset_type_ref"])
File "/code/cubedash/summary/_stores.py", line 301, in _dataset_type_by_id
raise KeyError("Unknown dataset type id %r" % id_)
KeyError: 'Unknown dataset type id 18'
I had a meeting with a stakeholder yesterday who really wanted to know how many clear observations are available within each scene (not just which scenes are available).
It would be a useful feature enhancement to add this information to this dashboard.
I've just had a go at getting this running, and got it starting, but there's a crash on startup; see below.
It's using the Docker image built with the latest code from develop.
[2018-04-04 03:51:14 +0000] [10] [ERROR] Exception in worker process
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/gunicorn/arbiter.py", line 578, in spawn_worker
worker.init_process()
File "/usr/local/lib/python3.5/dist-packages/gunicorn/workers/base.py", line 126, in init_process
self.load_wsgi()
File "/usr/local/lib/python3.5/dist-packages/gunicorn/workers/base.py", line 135, in load_wsgi
self.wsgi = self.app.wsgi()
File "/usr/local/lib/python3.5/dist-packages/gunicorn/app/base.py", line 67, in wsgi
self.callable = self.load()
File "/usr/local/lib/python3.5/dist-packages/gunicorn/app/wsgiapp.py", line 65, in load
return self.load_wsgiapp()
File "/usr/local/lib/python3.5/dist-packages/gunicorn/app/wsgiapp.py", line 52, in load_wsgiapp
return util.import_app(self.app_uri)
File "/usr/local/lib/python3.5/dist-packages/gunicorn/util.py", line 352, in import_app
__import__(module)
File "/code/cubedash/__init__.py", line 3, in <module>
from . _pages import app
File "/code/cubedash/_pages.py", line 12, in <module>
from cubedash import _utils as utils
File "/code/cubedash/_utils.py", line 16, in <module>
from datacube.index.postgres._fields import RangeDocField, PgDocField
ImportError: No module named 'datacube.index.postgres'
Search for and update all references to PostgreSQL 10, changing them to 11, in the test and Docker builds.
./run.sh starts a server that is only accessible on the device running it (it binds to the 127.0.0.1 IP).
It should be accessible from other devices.
Removing versioneer support broke the Sentry integration, since it imports _version.py.
In the spatial_reference field, cubedash currently supports either EPSG codes or a custom format used by NEMO's old scenes.
The new Sentinel products define their spatial references as WKT strings.
This may be non-trivial, as all CRS handling is currently done within PostGIS, and it doesn't appear to have any methods for translating between CRS formats (other than an exact string match on an existing registered CRS, which won't match any Sentinel datasets).
We might need to add an extra step during summary generation to scan all datasets, find the unique CRSes, and translate them within Python, registering the result with PostGIS. ... and then do an exact string match.
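A sketch of that extra step, assuming pyproj handles the Python-side translation; the custom-SRID range and registration details are assumptions:

import psycopg2
from pyproj import CRS

CUSTOM_SRID_START = 900001  # assumed-free range for custom SRIDs

def register_crs(conn, spatial_reference: str) -> int:
    """Return an SRID for the CRS, registering it with PostGIS if needed."""
    crs = CRS.from_user_input(spatial_reference)
    epsg = crs.to_epsg(min_confidence=70)
    if epsg is not None:
        return epsg
    wkt = crs.to_wkt()
    with conn.cursor() as cur:
        # Exact string match against already-registered custom entries.
        cur.execute("SELECT srid FROM spatial_ref_sys WHERE srtext = %s", (wkt,))
        row = cur.fetchone()
        if row:
            return row[0]
        cur.execute(
            "SELECT coalesce(max(srid), %s) + 1 FROM spatial_ref_sys "
            "WHERE srid >= %s",
            (CUSTOM_SRID_START - 1, CUSTOM_SRID_START),
        )
        (srid,) = cur.fetchone()
        cur.execute(
            "INSERT INTO spatial_ref_sys "
            "(srid, auth_name, auth_srid, srtext, proj4text) "
            "VALUES (%s, 'CUSTOM', %s, %s, %s)",
            (srid, srid, wkt, crs.to_proj4()),
        )
    return srid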
Hi, I've hit the error below when running docker-compose up.
How can I fix it? Thanks.
Step 13/20 : RUN apt-get update && sed 's/#.*//' /tmp/requirements-apt.txt | xargs apt-get install -y && rm -rf /var/lib/apt/lists/*
---> Running in 752fd64e4991
Get:1 http://archive.ubuntu.com/ubuntu bionic InRelease [242 kB]
Get:2 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
Get:3 http://apt.postgresql.org/pub/repos/apt bionic-pgdg InRelease [84.6 kB]
Get:4 http://security.ubuntu.com/ubuntu bionic-security/main amd64 Packages [933 kB]
Get:5 http://apt.postgresql.org/pub/repos/apt bionic-pgdg/main amd64 Packages [316 kB]
Get:6 http://archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]
Get:7 http://archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB]
Get:8 http://archive.ubuntu.com/ubuntu bionic/restricted amd64 Packages [13.5 kB]
Get:9 http://archive.ubuntu.com/ubuntu bionic/universe amd64 Packages [11.3 MB]
Get:10 http://security.ubuntu.com/ubuntu bionic-security/universe amd64 Packages [856 kB]
Get:11 http://security.ubuntu.com/ubuntu bionic-security/restricted amd64 Packages [59.3 kB]
Get:12 http://security.ubuntu.com/ubuntu bionic-security/multiverse amd64 Packages [8815 B]
Get:13 http://archive.ubuntu.com/ubuntu bionic/multiverse amd64 Packages [186 kB]
Get:14 http://archive.ubuntu.com/ubuntu bionic/main amd64 Packages [1344 kB]
Get:15 http://archive.ubuntu.com/ubuntu bionic-updates/restricted amd64 Packages [73.6 kB]
Get:16 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages [1230 kB]
Get:17 http://archive.ubuntu.com/ubuntu bionic-updates/multiverse amd64 Packages [20.1 kB]
Get:18 http://archive.ubuntu.com/ubuntu bionic-updates/universe amd64 Packages [1387 kB]
Get:19 http://archive.ubuntu.com/ubuntu bionic-backports/main amd64 Packages [8286 B]
Get:20 http://archive.ubuntu.com/ubuntu bionic-backports/universe amd64 Packages [8158 B]
Fetched 18.4 MB in 60s (304 kB/s)
Reading package lists...
Reading package lists...
Building dependency tree...
Reading state information...
E: Unable to locate package postgresql-11
E: Unable to locate package git
E: Unable to locate package build-essential
ERROR: Service 'explorer' failed to build: The command '/bin/sh -c apt-get update && sed 's/#.*//' /tmp/requirements-apt.txt | xargs apt-get install -y && rm -rf /var/lib/apt/lists/*' returned a non-zero code: 123
ls7_nbart_albers
2018-02-13 10:19.12 summary.calc.done dataset_count=5407 footprints_missing=0 product=ls7_nbart_albers time=Range(begin=datetime.datetime(2015, 12, 1, 0, 0), end=datetime.datetime(2016, 1, 1, 0, 0))
TopologyException: found non-noded intersection between LINESTRING (147.926 -36.7029, 147.989 -36.6964) and LINESTRING (148.005 -36.6947, 147.926 -36.7029) at 147.97041918908911 -36.698311660120893
2018-02-13 10:19.13 report.generate product=ls7_nbart_albers
Traceback (most recent call last):
File "dea-dashboard/cubedash/generate.py", line 32, in generate_reports
tmp_dir
File "dea-dashboard/cubedash/_model.py", line 211, in write_product_summary
s = _write_year_summary(product, year, year_folder)
File "dea-dashboard/cubedash/_model.py", line 233, in _write_year_summary
summary = TimePeriodOverview.add_periods(summaries)
File "dea-dashboard/cubedash/_model.py", line 111, in add_periods
) if with_valid_geometries else None,
File "/opt/conda/lib/python3.6/site-packages/shapely/ops.py", line 149, in unary_union
return geom_factory(lgeos.methods['unary_union'](collection))
File "/opt/conda/lib/python3.6/site-packages/shapely/geometry/base.py", line 76, in geom_factory
raise ValueError("No Shapely geometry can be created from null value")
ValueError: No Shapely geometry can be created from null value
error
ls5_pq_albers
2018-02-13 08:24.11 summary.calc.done dataset_count=0 footprints_missing=0 product=ls5_pq_albers time=Range(begin=datetime.datetime(2011, 12, 1, 0, 0), end=datetime.datetime(2012, 1, 1, 0, 0))
TopologyException: found non-noded intersection between LINESTRING (150.487 -27.3808, 150.487 -27.3808) and LINESTRING (150.487 -27.3808, 150.487 -27.3808) at 150.48689298499346 -27.380790145386491
2018-02-13 08:24.12 report.generate product=ls5_pq_albers
Traceback (most recent call last):
File "dea-dashboard/cubedash/generate.py", line 32, in generate_reports
tmp_dir
File "dea-dashboard/cubedash/_model.py", line 211, in write_product_summary
s = _write_year_summary(product, year, year_folder)
File "dea-dashboard/cubedash/_model.py", line 233, in _write_year_summary
summary = TimePeriodOverview.add_periods(summaries)
File "dea-dashboard/cubedash/_model.py", line 111, in add_periods
) if with_valid_geometries else None,
File "/opt/conda/lib/python3.6/site-packages/shapely/ops.py", line 149, in unary_union
return geom_factory(lgeos.methods['unary_union'](collection))
File "/opt/conda/lib/python3.6/site-packages/shapely/geometry/base.py", line 76, in geom_factory
raise ValueError("No Shapely geometry can be created from null value")
ValueError: No Shapely geometry can be created from null value
error
ls7_nbar_albers
2018-02-13 09:49.31 summary.calc.done dataset_count=5407 footprints_missing=0 product=ls7_nbar_albers time=Range(begin=datetime.datetime(2015, 12, 1, 0, 0), end=datetime.datetime(2016, 1, 1, 0, 0))
TopologyException: found non-noded intersection between LINESTRING (145.391 -14.3784, 145.577 -14.359) and LINESTRING (145.577 -14.359, 145.212 -14.3971) at 145.44220027752203 -14.373039579968463
2018-02-13 09:49.32 report.generate product=ls7_nbar_albers
Traceback (most recent call last):
File "dea-dashboard/cubedash/generate.py", line 32, in generate_reports
tmp_dir
File "dea-dashboard/cubedash/_model.py", line 211, in write_product_summary
s = _write_year_summary(product, year, year_folder)
File "dea-dashboard/cubedash/_model.py", line 233, in _write_year_summary
summary = TimePeriodOverview.add_periods(summaries)
File "dea-dashboard/cubedash/_model.py", line 111, in add_periods
) if with_valid_geometries else None,
File "/opt/conda/lib/python3.6/site-packages/shapely/ops.py", line 149, in unary_union
return geom_factory(lgeos.methods['unary_union'](collection))
File "/opt/conda/lib/python3.6/site-packages/shapely/geometry/base.py", line 76, in geom_factory
raise ValueError("No Shapely geometry can be created from null value")
ValueError: No Shapely geometry can be created from null value
error
The Sentinel-1 monthly means over Ghana, which are available at s3://deafrica-data/esa/s1/ghana-s1/, don't correctly display full extents, but do show when you limit the time to a specific month or dataset.
Steps to index are:
datacube product add https://raw.githubusercontent.com/digitalearthafrica/config/master/products/sentinel1_ghana_monthly.yaml
s3-find "s3://deafrica-data/esa/s1/ghana-s1/**/**/**/**/**/*.yaml" | s3-to-tar | dc-index-from-tar
Increment the pinned datacube dependencies in the Explorer Docker images to 1.8.0 and tag a new release of Explorer.
Summary generation throws an exception if datasets are missing a creation time (by default, 'creation_dt' in the YAML).
This has confused a few people, as some datacube guides don't include the field in metadata.
We need to either make it clear why it's failing (a friendly error message?), or else make the field optional in cubedash.
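A tiny sketch of the friendly-error option; the exception class and lookup are illustrative:

class SummaryGenerationError(Exception):
    """Raised with a readable message during summary generation."""

def dataset_creation_time(metadata_doc: dict):
    value = metadata_doc.get("creation_dt")
    if value is None:
        raise SummaryGenerationError(
            "Dataset metadata has no 'creation_dt' field, which summary "
            "generation needs. Add it to the metadata document, or make "
            "the field optional in cubedash."
        )
    return value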
Werkzeug version has fallen out of sync.
The polygon equality check is too rigid and does not account for floating-point noise.
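For the polygon check, a hedged sketch of a tolerance-based comparison using shapely; the threshold is an arbitrary assumption:

from shapely.geometry import Polygon

def polygons_roughly_equal(a: Polygon, b: Polygon, tol: float = 1e-8) -> bool:
    # The symmetric difference has (near-)zero area when the shapes
    # match up to floating-point noise.
    return a.symmetric_difference(b).area < tol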
On VDI, the dashboard doesn't load because the following files do not exist:
/g/data/v10/public/agdc-v2-dashboard/product-summaries/*/timeline.json for:
All other products have timeline.json files.
This creates an error irrespective of which product is requested in the dashboard. For example when requesting http://127.0.0.1:8080/ls8_nbart_albers:
[2018-09-06 13:22:11,197] ERROR in app: Exception on /ls8_nbart_albers [GET]
Traceback (most recent call last):
....
FileNotFoundError: [Errno 2] No such file or directory: '/g/data/v10/public/agdc-v2-dashboard/product-summaries/ls5_pq_albers/timeline.json'
127.0.0.1 - - [06/Sep/2018 13:22:11] "GET /ls8_nbart_albers HTTP/1.1" 500 -
Propose that the timeline.json files be refreshed (if required) and the three missing ones added, please.
When viewing a specific region, eg: https://data.dea.gadevs.ga/region/ls8_nbar_albers/20_-34
The product drop down menu lets you switch to any other product, even though most products don't have that region defined.
The resulting page currently throws an exception, such as with CUBEDASH-1G
We should:
Not sure what is causing it, but accessing the root context of the web application doesn't consistently redirect to the default product.
Would it be better to just have a simple landing page that is the product list?
Continue work on this branch: https://github.com/opendatacube/datacube-explorer/tree/jez/gha
Use more granular steps:
Assorted UI tweaks:
- "eo" should be "EO"
- "overflow: scroll" where the scroll bar is splitting the page
- "_" from the page title and dataset name
- recenter on the map
- NCI/AWS/Sandbox information to the title
- a "technical" section in the menu area
There are a couple of minor issues when a user directly inputs a path/row into the URL.
Example 1: Leading zeroes:
{{hostname}}/region/ls8_nbar_scene/096_071
When leading zeroes are included in the path/row reference, the area is correctly identified on the map provided, but no datasets are returned on the page; this may occur if a user copies and pastes path/row values from a set with leading zeroes.
Example 2: Invalid Path Rows
{{hostname}}/region/ls8_nbar_scene/10096_10071
If a user inputs invalid path/row values for a given dataset, cubedash responds with an internal server error; this may crop up if a user is copy-pasting into the URL without paying attention, or where multiple collections with different spatial models are served from one instance.
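A hedged sketch handling both cases: normalise the region code (assuming codes are stored without leading zeroes) and reject clearly invalid ones so the page can 404 rather than 500. The valid ranges used are Landsat WRS-2's:

from typing import Optional

def normalise_path_row(code: str) -> Optional[str]:
    try:
        path, row = (int(part) for part in code.split("_"))
    except ValueError:
        return None
    # Landsat WRS-2 paths run 1-233 and rows 1-248.
    if not (1 <= path <= 233 and 1 <= row <= 248):
        return None
    return "{}_{}".format(path, row)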
After pushing the image to Docker Hub, pull it back down and run the Clair scanner, using:
https://github.com/usr42/clair-container-scan or similar.
The latest revision of the cubedash app fails to serve any http/https requests coming from the ALB load balancer.
The last known working Docker image is v2.1.0.