Vanity package for geopython projects
pip install geopython
>>> import geopython
pygeofilter is a pure Python parser implementation of OGC filtering standards
License: MIT License
Correct me if I'm completely misusing this, but I'm trying to perform a simple "meets" expression as cql-json:
For simplicity my db contains the following datetimes:
1: 2019-06-24T07:41:10Z
2: 2019-06-24T07:41:13Z
3: 2019-06-24T07:41:17Z
4: 2019-06-24T07:41:21Z
And I want to perform a "meets" request with time interval 2019-06-24T07:41:13 - 2019-06-24T07:41:18
as input.
This should result in all rows from the db where the datetime coincides with the beginning of the interval, which in my db is only the second row: 2019-06-24T07:41:13Z
Running the query, the expression seems to parse correctly:
from pygeofilter.parsers.cql_json import parse as parse_json
from pygeofilter.backends.sqlalchemy import to_filter
import sqlalchemy as sa
cql_json = { "meets": [{ "property": "datetime" }, ["2019-06-24T07:41:13Z", "2019-06-24T07:41:18Z"]] }
ast = parse_json(cql_json)
TimeMeets(lhs=ATTRIBUTE datetime, rhs=Interval(start=datetime.datetime(2019, 6, 24, 7, 41, 13, tzinfo=<StaticTzInfo 'Z'>), end=datetime.datetime(2019, 6, 24, 7, 41, 18, tzinfo=<StaticTzInfo 'Z'>)))
but when the AST is evaluated to a SQLAlchemy expression and the db is queried, it returns both rows 2 and 3: 2019-06-24T07:41:13Z and 2019-06-24T07:41:17Z.
Inspecting the temporal method of filters.py in the sqlalchemy backend, it seems to handle special cases for before, after, and tequals operations; other temporal operations end up being evaluated as either between, >=, or <=. So my query ends up being evaluated as a between(datetime, "2019-06-24T07:41:13Z", "2019-06-24T07:41:18Z"), explaining why row 3 is also returned.
On a side note: it seems that you have to specify an interval as input for anything besides before and after, otherwise this crashes with TypeError: cannot unpack non-iterable datetime.datetime object.
From the OAF specification:
CQL supports date and timestamps as time instants, but even the smallest "instant" has a duration and can also be evaluated as an interval.
Hence, to my understanding, you should be able to run "meets", "metby", "begins", etc. on exact datetimes, treating them as intervals, without having to explicitly specify an interval (despite it being nonsensical to do in practice).
Am I completely missing something here or is the temporal method in fact not handling all temporal cases?
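For reference, here is a minimal sketch of the dispatch one would expect for instants compared against an interval. This is plain illustrative Python, not pygeofilter's actual backend code; the function name and SQL-string output are hypothetical:

```python
def temporal_to_sql(column: str, op: str, start: str, end: str) -> str:
    # Allen's interval relations for an instant-valued column against
    # [start, end]: "meets" is equality with the interval start,
    # "metby" equality with the end; only "during" is a real BETWEEN.
    if op == "meets":
        return f"{column} = '{start}'"
    if op == "metby":
        return f"{column} = '{end}'"
    if op == "during":
        return f"{column} BETWEEN '{start}' AND '{end}'"
    raise NotImplementedError(op)

# The query above would then match only row 2:
temporal_to_sql("datetime", "meets",
                "2019-06-24T07:41:13Z", "2019-06-24T07:41:18Z")
# "datetime = '2019-06-24T07:41:13Z'"
```

Under this reading, compiling "meets" to a between() is the bug being described.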
I see that there is a branch with a base implementation for cql-text.
When I try to use it, I get import errors.
What is the road map for this branch? Do you have a timeline? Is it actively being developed?
I am wondering what the reasoning is behind not including the <> (ast.NotEqual, "ne") comparison in the cql-json parser? It is missing from pygeofilter.parsers.cql_json.parser.COMPARISON_MAP.
I cannot find any description of how pygeofilter is supposed to behave when it encounters invalid filters. This issue is written assuming that pygeofilter should fail when parsing invalid filters.
Example 1, where pygeofilter silently ignores an invalid and expression:
>>> import pygeofilter.ast
>>> from pygeofilter.parsers.cql_json import parse as parse_json
>>>
>>> cql_json = { "intersects": [ { "property": "geometry" }, { "type": "Point", "coordinates": [ 10.4064, 55.3951 ] } ], "and": [ {"eq": [ { "property": "direction" }, "east" ] } ] }
>>> print(pygeofilter.ast.get_repr(parse_json(cql_json)))
INTERSECTS(ATTRIBUTE geometry, Geometry(geometry={'type': 'Point', 'coordinates': [10.4064, 55.3951]}))
Example 2, where pygeofilter parses an invalid and into a simple comparison:
>>> import pygeofilter.ast
>>> from pygeofilter.parsers.cql_json import parse as parse_json
>>>
>>> cql_json = { "and": [ {"eq": [ { "property": "direction" }, "east" ] } ] }
>>> print(pygeofilter.ast.get_repr(parse_json(cql_json)))
ATTRIBUTE direction = 'east'
Needless to say, it would be a better user experience if both these examples failed with a meaningful message.
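A validation pass of the kind being asked for could look like the following sketch (validate_cql_json is a hypothetical helper, not part of pygeofilter):

```python
def validate_cql_json(node):
    # Hypothetical pre-parse validation: a CQL-JSON filter object must
    # contain exactly one predicate key, and "and"/"or" combinators
    # must carry at least two sub-filters.
    if not isinstance(node, dict) or "property" in node:
        return
    if len(node) != 1:
        raise ValueError(f"expected exactly one operator key, got {sorted(node)}")
    (op, args), = node.items()
    if op in ("and", "or"):
        if not isinstance(args, list) or len(args) < 2:
            raise ValueError(f"'{op}' requires at least two sub-filters")
        for sub in args:
            validate_cql_json(sub)
```

Both examples above would then raise a ValueError instead of being silently reinterpreted.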
For OGC API Records work in Testbed 18, we need to leverage the VendorSpecificParameters option, as available from the pyCSW implementation ().
The pyCSW implementation uses the pygeofilter for request parsing:
from pygeofilter.parsers.ecql import parse as parse_ecql
from pygeofilter.parsers.cql2_json import parse as parse_cql2_json
When using the option, the CQL parser throws an exception:
{
"code": "InvalidParameterValue",
"description": "CQL parsing error: No terminal matches '[' in the current parser context, at line 1 col 4\n\ndcs[key_challenge] = \"secret\"\n ^\nExpected one of: \n\t* LESSTHAN\n\t* COMMA\n\t* \"<>\"\n\t* EQUAL\n\t* IS\n\t* IN\n\t* \"DOES-NOT-EXIST\"\n\t* SLASH\n\t* AFTER\n\t* EXISTS\n\t* BEFORE\n\t* BETWEEN\n\t* AND\n\t* NOT\n\t* ILIKE\n\t* \"<=\"\n\t* \">=\"\n\t* RPAR\n\t* DURING\n\t* LPAR\n\t* MINUS\n\t* PLUS\n\t* STAR\n\t* LIKE\n\t* MORETHAN\n\t* OR\n\nPrevious tokens: Token('NAME', 'dcs')\n"
}
You can reproduce the error with the following CURL request:
curl -X GET "https://ogc.demo.secure-dimensions.de/pycsw/collections/metadata%3Amain/items?limit=10&f=json&dcs[key_challenge]=secret" -H "accept: application/geo+json"
According to the cql2 spec, the NULL predicate should look like this in cql2-json:
{
"op": "isNull",
"args": [ { "property": "geometry" } ]
}
However, when using to_cql2 to generate the JSON, args is not a list:
>>> from pygeofilter.parsers.ecql import parse as parse_ecql
>>> from pygeofilter.backends.cql2_json import to_cql2
>>> to_cql2(parse_ecql("geometry IS NULL"))
'{"op": "isNull", "args": {"property": "geometry"}}'
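Until the backend is fixed, the output can be repaired with a small post-processing walk. This is a hypothetical workaround, not a pygeofilter API:

```python
def fix_isnull_args(node):
    # Recursively wrap a bare "args" object in a list for isNull nodes,
    # so the JSON matches the CQL2 schema. Modifies the tree in place.
    if isinstance(node, dict):
        if node.get("op") == "isNull" and isinstance(node.get("args"), dict):
            node["args"] = [node["args"]]
        for value in node.values():
            fix_isnull_args(value)
    elif isinstance(node, list):
        for value in node:
            fix_isnull_args(value)
    return node

fix_isnull_args({"op": "isNull", "args": {"property": "geometry"}})
# {'op': 'isNull', 'args': [{'property': 'geometry'}]}
```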
Initially raised in stac-utils/stac-fastapi#473, it appears as though the NOT operator is under- or unsupported for CQL2-TEXT, e.g.
collection NOT LIKE 'not-a-collection-id'
#22 appears to be a similar issue that asks about general CQL2-TEXT support, but I thought a specific issue for this problem might be helpful. Please feel free to close if you feel as though this is a duplicate of #22 (or another issue). Thanks!
geoalchemy removed the management argument from the initialization of a geoalchemy2.types._GISType earlier this year:
As a result of this change, subsequent geoalchemy releases are not compatible with pygeofilter's sqlalchemy tests, which still use the management keyword argument:
pygeofilter/tests/backends/sqlalchemy/test_evaluate.py
Lines 26 to 36 in a08c5a5
This means the tests are currently failing.
Possible solutions: remove the management argument from the tests, or pin geoalchemy to an older release. I guess the first option is better, since the fix is really small - I'll prepare a PR with the change.
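If the tests ever need to run against both old and new geoalchemy2 releases, the keyword can be feature-detected instead of hard-coded. A minimal sketch (supports_management is a hypothetical helper, demonstrated with plain classes rather than geoalchemy2 types):

```python
import inspect

def supports_management(cls) -> bool:
    # Check whether a type's __init__ still accepts the removed
    # "management" keyword, so test fixtures can adapt at runtime.
    return "management" in inspect.signature(cls.__init__).parameters
```

That said, simply dropping the keyword from the test fixtures remains the smaller fix.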
Python Version: 3.10
OS: macOS Ventura 13.6.6
pygeofilter Version: 0.2.1 (main branch)
According to the OGC CQL2 documentation, a value of TRUE (case-insensitive) should be interpreted as a booleanLiteral value (see CQL2-Text example 14). So, for example, in the following query we should interpret attr as an attribute and TRUE as a booleanLiteral that we are comparing attr against:
attr = TRUE
However, the current CQL2 Text parser interprets the value of TRUE as an attribute. This leads to errors when we try to translate the AST to a backend like SQLAlchemy.
Steps to Reproduce:
from pygeofilter import ast
from pygeofilter.parsers.cql2_text import parse
result = parse("attr = TRUE")
assert result == ast.Equal(
ast.Attribute("attr"),
True,
)
# Assertion fails. The actual object is
# ast.Equal(
# ast.Attribute("attr"),
# ast.Attribute("TRUE")
# )
I will put up a PR shortly with a possible solution that uses Lark's terminal priority to ensure that values of TRUE/true/FALSE/false are parsed as booleanLiteral values.
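The intended behaviour can be sketched without the grammar itself: boolean keywords must win over the attribute fallback, case-insensitively (classify_token is a hypothetical helper, not the actual Lark fix):

```python
def classify_token(token: str):
    # Check for boolean literals before falling back to treating a
    # bare name as an attribute, mirroring the terminal-priority fix.
    if token.upper() in ("TRUE", "FALSE"):
        return ("booleanLiteral", token.upper() == "TRUE")
    return ("attribute", token)

classify_token("TRUE")   # ('booleanLiteral', True)
classify_token("attr")   # ('attribute', 'attr')
```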
It seems that the current grammar is not compatible with Lark 1.0 (https://github.com/lark-parser/lark/releases/tag/1.0.0). For now the version is pinned to lark<1.0. The goal is to stay up to date with the latest lark.
pygeofilter.backends.sqlalchemy.filters incorrectly maps the INTERSECTS spatial operator to ST_Contains. It should instead map to ST_Intersects.
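The corrected operator table would look roughly like this (illustrative names only; the real mapping lives in pygeofilter/backends/sqlalchemy/filters.py and may be shaped differently):

```python
# Hypothetical sketch of the corrected spatial operator table.
SPATIAL_OP_TO_POSTGIS = {
    "INTERSECTS": "ST_Intersects",  # previously (and wrongly) ST_Contains
    "CONTAINS": "ST_Contains",
    "WITHIN": "ST_Within",
    "DISJOINT": "ST_Disjoint",
    "TOUCHES": "ST_Touches",
}
```

ST_Contains is strictly narrower than ST_Intersects, so the bug silently drops results for geometries that merely overlap the filter geometry.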
SQLAlchemy 2.0.0 was released a couple of days ago, which introduces breaking changes. I re-ran the last CI to confirm.
Options:
Hi,
I am trying to filter on an attribute with a colon in its name, in my case 's5p:orbit'. After conversion to Django filters, this name is unchanged, so Django cannot find the matching field. So I now manually change the AST so colons become '_'. Of course, while typing this, I see that you can pass a field mapping to to_filter. So now I'm wondering if it would make sense to have a mode where ':' is automatically converted to '_', for cases where the Django model matches this?
The cql2_json parser fails with a ValueError if I try to pass it an operator in uppercase, e.g. "op": "or", "args": [...] is fine, but "op": "OR", "args": [...] fails. And indeed, in cql2_json/parser.py we find an explicit test if op in ("and", "or"): (ditto for all the other operators, of course). What is the reasoning behind this? cql2_text is case-insensitive w.r.t. operators (on account of presumably deliberate "OR"i etc. specifications in the Lark grammar), but if I consult the CQL2 JSON standard spec I just see things like orExpression = booleanExpression "OR" booleanExpression; which to me would indicate either case insensitivity (I haven't been able to quickly find out what the default is for BNF literals) or in fact mandatory uppercase...
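If case-insensitive operators are the desired behaviour, a pre-processing pass could normalize the input before parsing. A minimal sketch (lowercase_ops is a hypothetical shim, not a pygeofilter API):

```python
def lowercase_ops(node):
    # Recursively lower-case every "op" value so "OR" and "or"
    # are treated alike by the cql2_json parser.
    if isinstance(node, dict):
        return {
            key: (value.lower() if key == "op" and isinstance(value, str)
                  else lowercase_ops(value))
            for key, value in node.items()
        }
    if isinstance(node, list):
        return [lowercase_ops(item) for item in node]
    return node

lowercase_ops({"op": "OR", "args": [{"op": "=", "args": [{"property": "a"}, 1]}]})
# {'op': 'or', 'args': [{'op': '=', 'args': [{'property': 'a'}, 1]}]}
```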
As discovered in #57 breaking changes in dependencies can cause issues with pygeofilter.
Suggest to amend requirements*.txt files to pin specific versions or version ranges for all dependencies.
pygeoif released v 1.0.0 on Sept 22 2022, which is the first release since 0.7 in 2017 (!).
pygeoif is unpinned in pygeofilter, so the latest version 1.0.0 is pulled, even though it's not compatible.
stac-fastapi gets this failure when trying to import the cql2 converters:
File "/app/stac_fastapi/pgstac/stac_fastapi/pgstac/core.py", line 13, in
from pygeofilter.backends.cql2_json import to_cql2
File "/usr/local/lib/python3.8/site-packages/pygeofilter/backends/cql2_json/__init__.py", line 1, in
from .evaluate import to_cql2
File "/usr/local/lib/python3.8/site-packages/pygeofilter/backends/cql2_json/evaluate.py", line 32, in
from ..evaluator import Evaluator, handle
File "/usr/local/lib/python3.8/site-packages/pygeofilter/backends/evaluator.py", line 31, in
from .. import ast
File "/usr/local/lib/python3.8/site-packages/pygeofilter/ast.py", line 32, in
from . import values
File "/usr/local/lib/python3.8/site-packages/pygeofilter/values.py", line 33, in
from pygeoif.geometry import as_shape
ImportError: cannot import name 'as_shape' from 'pygeoif.geometry' (/usr/local/lib/python3.8/site-packages/pygeoif/geometry.py)
The fix in stac-fastapi was to pin it to 0.7 in setup.py:
"pygeoif==0.7",
Related issues:
>>> from pygeofilter.parsers.cql_json import parse as parse_json
>>> parse_json({ "eq": [{ "property": "title" }, "Lorem ipsum"]}) # sanity check/works
Equal(lhs=ATTRIBUTE title, rhs='Lorem ipsum')
>>> parse_json({ "like": [{ "property": "title" }, "Lorem%"]})
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/tomkralidis/Dev/pycsw/lib/python3.7/site-packages/pygeofilter/parsers/cql_json/parser.py", line 228, in parse
return walk_cql_json(cql)
File "/Users/tomkralidis/Dev/pycsw/lib/python3.7/site-packages/pygeofilter/parsers/cql_json/parser.py", line 167, in walk_cql_json
walk_cql_json(value['like'][0]),
TypeError: list indices must be integers or slices, not str
cc @kalxas
The function input is defined using arguments, while in the OGC specification it uses args.
ref: https://docs.ogc.org/DRAFTS/21-065.html#functions
Example:
{
"op": "s_within",
"args": [
{"property": "road"},
{
"function": {
"name":"Buffer",
"args": [
{"property": "geometry"},
10,
"m"
]
}
}
]
}
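Until the parser is aligned with the spec, spec-compliant input can be adapted with a small shim that renames the key inside "function" objects (spec_to_pygeofilter is a hypothetical workaround, not a pygeofilter API):

```python
def spec_to_pygeofilter(node):
    # Rename "args" to "arguments" inside "function" objects, so
    # spec-compliant CQL2 JSON matches the key the parser expects.
    if isinstance(node, dict):
        out = {}
        for key, value in node.items():
            if key == "function" and isinstance(value, dict) and "args" in value:
                value = {**value, "arguments": value["args"]}
                del value["args"]
            out[key] = spec_to_pygeofilter(value)
        return out
    if isinstance(node, list):
        return [spec_to_pygeofilter(item) for item in node]
    return node
```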
Local environment configuration for test execution can be tricky. A containerized execution environment should be provided for at least one of the supported Python versions to make it easier to execute tests.
There is an incompatibility with the pygeoif dependency in the setup.py and requirements-test.txt files:
Line 60 in a08c5a5
pygeofilter/requirements-test.txt
Line 10 in a08c5a5
Naturally, pip is not able to resolve pygeoif >= 1.0.0 and pygeoif == 0.7.
pygeofilter/backends/sqlalchemy/filters.py handles BEFORE and AFTER operations but not TEQUALS. Support should be added for an equality check using ==.
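The requested extension amounts to one more entry in the temporal dispatch. A minimal sketch (the dict is illustrative, not the backend's actual structure):

```python
import operator
from datetime import datetime

# TEQUALS is plain equality, alongside the BEFORE/AFTER
# special cases the backend already handles.
TEMPORAL_OPS = {
    "BEFORE": operator.lt,
    "AFTER": operator.gt,
    "TEQUALS": operator.eq,
}

t = datetime(2019, 6, 24, 7, 41, 13)
TEMPORAL_OPS["TEQUALS"](t, datetime(2019, 6, 24, 7, 41, 13))  # True
```

In the SQLAlchemy backend this would translate to a column == instant comparison.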
Hi! I've been working on pygeoapi for the past couple weeks, and a recent PR #964 added the possibility to use CQL to query features inside the pygeoapi PostgreSQL provider! :)
The implementation of this new feature uses pygeofilter, as it references the to_filter function to translate an ECQL AST to a Django Query expression. The call to to_filter from pygeoapi can be found here.
While using this new pygeoapi CQL feature, I have come across an issue that could be coming from pygeofilter? I'm creating this issue more as a discussion topic, because I'd like to know more about the way pygeofilter is meant to be used with geometries using various SRIDs. I'm interested to know in order to see if I can work around my issue, or if it's something that might bring an idea to someone working in this repository.
To provide context, here's an example of my usage in pygeoapi and the steps to reproduce my issue:
In pygeoapi, via a service request handler, I'm receiving a CQL written as such: INTERSECTS(geom, POLYGON((-103.7402 55.4461, -103.7402 55.8272, -103.2597 55.8272, -103.2597 55.4461, -103.7402 55.4461))).
pygeoapi starts by using the parse function to parse the CQL string. To the best of my understanding, this uses Lark to parse the string and generate a result (ref: Lark's parse). The call to pygeofilter's parse from pygeoapi can be found here.
Then, in pygeoapi, this result is sent to the aforementioned to_filter function in order to finally send the CQL filter results to an sqlalchemy query, like so: session.query(self.table_model).filter(cql_filters). This call from pygeoapi can be found here.
Everything seems to be great, and I'm pretty convinced that if the CQL had a geometry with its coordinates in SRID=4326, or maybe the same SRID as the data, everything would work fine. That being said, I believe I've stumbled on this particular issue because our data is stored in SRID=3978, not in SRID=4326 (!). So, when I execute this CQL on our point layer, in 3978, sqlalchemy understandably fails and gives this error: sqlalchemy.exc.InternalError: (psycopg2.errors.InternalError_) ST_Intersects: Operation on mixed SRID geometries (Point, 3978) != (Polygon, 4326) and [SQL: SELECT {....} FROM {....} WHERE ST_Intersects({geom_field}, ST_GeomFromEWKT(%(ST_GeomFromEWKT_1)s)).
Therefore, the first thing I tried (even if that's not what we want in pygeoapi) was to send a CQL with a POLYGON using coordinates in SRID=3978 (the same SRID as the data), just to attempt to get results from the query. I used this CQL: INTERSECTS(geom, POLYGON((-574624 735257, -574624 870115, -417433 870115, -417433 735257, -574624 735257))). It failed with the same error message. This told me something was probably forcing SRID=4326.
I then tried to send a CQL in which I explicitly wrote the SRID of the POLYGON using the PostGIS EWKT format, as such: INTERSECTS(geom, SRID=3978;POLYGON((-574624 735257, -574624 870115, -417433 870115, -417433 735257, -574624 735257))), to attempt to force the geometry into 3978. It still gave me the same error about a 4326 incompatibility.
At this point, I thought either (1) pygeofilter is not meant to be used with an SRID other than 4326, or (2) it does not support the EWKT format. I was thinking I had to abandon using to_filter altogether and try to find a different way to make CQL work inside pygeoapi. However, before doing so, I jumped into GitHub to try to understand the implementation of to_filter, and something caught my attention. Is it possible that, with all the formatting that happens (impressive work btw!), the function parse_geometry is called at some point?
If so, this might explain the issue I'm experiencing. This parse_geometry function starts by calling as_shape** from the pygeoif library and then calls to_wkt**, again from the pygeoif library. Afterwards, it uses a regular expression to search for an SRID={some_number}; in the geometry WKT string to determine what SRID should be used in the returned ST_GeomFromEWKT. Even though, inspecting the rhs variable on the geometry, I can validate that SRID 3978 is defined (Geometry(geometry={'type': 'Polygon', 'coordinates': (((-574624.0, 735257.0), (-574624.0, 870115.0), (-417433.0, 870115.0), (-417433.0, 735257.0), (-574624.0, 735257.0)),), 'crs': {'type': 'name', 'properties': {'name': 'urn:ogc:def:crs:EPSG::3978'}}})); after the object goes through as_shape and to_wkt, it loses its SRID. Actually, my understanding is that a WKT never has an SRID - when a WKT has an SRID it's called an EWKT - so I think it's normal that an SRID can never be read? In my particular case, it seems the SRID={number}; is never found and the default SRID=4326; is used, all the time, in the response, possibly causing my issue.
My question, finally! :) Is it possible to use pygeofilter CQL on geometries that are in other SRIDs? Maybe the parse_geometry function should have an additional parameter, like the similar parse_bbox function?
Thank you for taking the time to read!
Is it possible that the parse_geometry function always responds with SRID=4326 when returning the ST_GeomFromEWKT, making pygeofilter exclusively(?) work with geometries in the SRID=4326 projection? Is there a way to work with other SRID values for a geometry in pygeofilter CQL? Maybe the parse_geometry function should have an additional parameter, like the similar parse_bbox function?
** pygeofilter (and by extension pygeoapi) references an old version of pygeoif (version==0.7, which dates back to somewhere around 2012). This is on purpose, as it seems the latest version of pygeoif has caused issues recently, and this commit addresses that. That is why the references above point to other places in pygeoif. Indeed, the as_shape function no longer exists in pygeoif's Geometry class; it's now in pygeoif's Factories class. Also, the to_wkt function in pygeoif's Geometry class has been renamed to wkt.
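The "additional parameter" idea can be sketched concretely: instead of regex-matching the (E)WKT string, the SRID could be read from the GeoJSON-style crs member that the Geometry repr above shows is preserved in the AST. srid_from_geometry is a hypothetical helper, not pygeofilter's actual parse_geometry:

```python
import re

def srid_from_geometry(geometry: dict, default: int = 4326) -> int:
    # Pull the EPSG code out of a GeoJSON "crs" member, if present,
    # instead of always falling back to 4326.
    name = geometry.get("crs", {}).get("properties", {}).get("name", "")
    match = re.search(r"EPSG::?(\d+)", name)
    return int(match.group(1)) if match else default

srid_from_geometry({
    "type": "Polygon",
    "coordinates": [],
    "crs": {"type": "name",
            "properties": {"name": "urn:ogc:def:crs:EPSG::3978"}},
})
# 3978
```

The extracted SRID could then be fed to ST_GeomFromEWKT (or an ST_Transform wrapper) rather than the hard-coded default.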
If a FES filter such as:
<ogc:Filter xmlns:ogc="http://www.opengis.net/ogc">
<ogc:And>
<ogc:PropertyIsEqualTo>
<ogc:PropertyName>Type</ogc:PropertyName>
<ogc:Literal>dataset</ogc:Literal>
</ogc:PropertyIsEqualTo>
<ogc:PropertyIsLike wildCard="*" singleChar="_" escapeChar="/">
<ogc:PropertyName>AnyText</ogc:PropertyName>
<ogc:Literal>*potentielle Evapotranspiration*</ogc:Literal>
</ogc:PropertyIsLike>
</ogc:And>
</ogc:Filter>
is converted to a Django filter, the like query results in an exact lookup for *potentielle Evapotranspiration*. This is because the current like function does not take the wildCard character into account; it always uses the hard-coded % character as the wildcard.
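A translation that honours the filter's declared wildCard, singleChar, and escapeChar could look like this sketch (fes_like_to_sql is a hypothetical helper, not the current backend function):

```python
def fes_like_to_sql(pattern: str, wild_card: str = "*",
                    single_char: str = "_", escape_char: str = "/") -> str:
    # Map the FES PropertyIsLike pattern onto SQL LIKE syntax:
    # the declared wildcard becomes %, the declared single-char
    # becomes _, escaped characters stay literal, and literal
    # % / _ in the input get SQL-escaped.
    out = []
    i = 0
    while i < len(pattern):
        ch = pattern[i]
        if ch == escape_char and i + 1 < len(pattern):
            nxt = pattern[i + 1]
            out.append("\\" + nxt if nxt in "%_" else nxt)
            i += 2
            continue
        if ch == wild_card:
            out.append("%")
        elif ch == single_char:
            out.append("_")
        elif ch in "%_":
            out.append("\\" + ch)
        else:
            out.append(ch)
        i += 1
    return "".join(out)

fes_like_to_sql("*potentielle Evapotranspiration*")
# "%potentielle Evapotranspiration%"
```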
The cql2 specs mention that instants used with temporal operators can be a date or a timestamp (https://docs.ogc.org/DRAFTS/21-065.html#_temporal_data_types_and_instances), but the following example (taken from https://github.com/opengeospatial/ogcapi-features/blob/master/cql2/standard/schema/examples/json/example20.json) fails with pygeofilter:
{
"op": "t_before",
"args": [
{ "property": "built" },
{ "date": "2015-01-01" }
]
}
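Supporting both instant forms amounts to a small normalization step. A sketch of the expected behaviour (normalize_instant is a hypothetical helper; the spec's {"date": ...} and {"timestamp": ...} wrappers are what need unwrapping):

```python
from datetime import date, datetime

def normalize_instant(value):
    # Unwrap the spec's temporal instant objects into Python
    # date/datetime values; pass anything else through unchanged.
    if isinstance(value, dict):
        if "date" in value:
            return date.fromisoformat(value["date"])
        if "timestamp" in value:
            return datetime.fromisoformat(value["timestamp"].replace("Z", "+00:00"))
    return value

normalize_instant({"date": "2015-01-01"})  # datetime.date(2015, 1, 1)
```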
The cql-json LIKE format is:
{
"like": [
{ "property": "name" },
"Smith."
],
"singleChar": ".",
"nocase": true
}
https://portal.ogc.org/files/96288#req_simple-cql_like-predicate
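Honouring the predicate-level singleChar (and wildCard) options when compiling to SQL LIKE could look like this sketch (cql_like_to_sql is a hypothetical helper, not a pygeofilter function):

```python
def cql_like_to_sql(pattern: str, single_char: str = "_",
                    wildcard: str = "%") -> str:
    # Map the declared singleChar to SQL's _ and the declared wildcard
    # to SQL's %, escaping any literal % / _ left in the pattern.
    return "".join(
        "_" if ch == single_char
        else "%" if ch == wildcard
        else "\\" + ch if ch in "%_"
        else ch
        for ch in pattern
    )

cql_like_to_sql("Smith.", single_char=".")  # "Smith_"
```

The example's "nocase": true would additionally need a case-insensitive comparison (e.g. ILIKE or LOWER() on both sides).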
Could you please clarify the exact meaning of "OGC filtering standards" and "OGC CQL standard" in the first section of the README? I suspect you're not referring to this 2014 document, however I'm not aware of any other OGC filtering standard. Is this an implementation of the current CQL requirements in the OGC Features API Part 3 draft specification?
The parser implementation for fes v1.1.0 filters is currently incorrect. It allows ValueReference, the name to which the newer fes v2.x.x filters renamed PropertyName.
It shall be
<ogc:PropertyIsLessThan>
<ogc:PropertyName>attr</ogc:PropertyName>
...
Hi,
I am currently trying to figure out a way to use pygeofilter for converting CQL2 to a DuckDB query using the spatial extension. This extension is quite prototypical, and I've stumbled across a couple of caveats. Here's my current approach:
import duckdb
duckdb.install_extension('spatial')
duckdb.load_extension('spatial')
Define the CQL2 filter
cql2_filter = {
"op": "and",
"args": [
{
"op": ">=",
"args": [
{
"property": "end_datetime"
},
'2020-03-28T20:05:46+02'
]
},
{
"op": "<=",
"args": [
{
"property": "start_datetime"
},
'2020-03-28T22:06:15+02'
]
},
{
"op": "=",
"args": [
{
"property": "sar:instrument_mode"
},
'IW'
]
}
]
}
Optionally add spatial filtering
# ext = None
ext = {'xmin': -4, 'xmax': -2, 'ymin': 6, 'ymax': 8}
if ext is not None:
arg = {
'op': 's_intersects',
'args': [
{
'property': 'geometry'
},
{
'type': 'Polygon',
'coordinates': [[[ext['xmin'], ext['ymin']],
[ext['xmin'], ext['ymax']],
[ext['xmax'], ext['ymax']],
[ext['xmax'], ext['ymin']],
[ext['xmin'], ext['ymin']]]]
}
]
}
cql2_filter['args'].append(arg)
Convert CQL2 filter to SQL where clause
from pygeofilter.parsers.cql2_json import parse as json_parse
filter = json_parse(cql2_filter)
from pygeofilter.backends.sql.evaluate import to_sql_where
sql_where = to_sql_where(filter, {
's1:datatake': 's1:datatake',
'datetime': 'datetime',
'sar:instrument_mode': 'sar:instrument_mode',
'end_datetime': 'end_datetime',
'start_datetime': 'start_datetime',
'geometry': 'geometry'
})
Create DuckDB query for a geoparquet file:
Here it gets ugly, because (1) the WKB column needs to be converted to the GEOMETRY type, and (2) the DuckDB-spatial implementation of ST_GeomFromWKB cannot read the WKB hex representation returned by to_sql_where.
import re
sql_query = "SELECT * EXCLUDE geometry, ST_GeomFromWKB(geometry) AS geometry FROM '20200301_20200401.parquet' WHERE %s" % sql_where
if ext is not None:
# convert WKB blob to GEOMETRY
sql_query = sql_query.replace('ST_Intersects("geometry"', 'ST_Intersects(ST_GeomFromWKB(geometry)')
# duckdb_spatial apparently cannot yet read wkb_hex representation -> convert it back to text
spatial = ("ST_GeomFromText('POLYGON(({xmin} {ymin}, {xmin} {ymax}, "
"{xmax} {ymax}, {xmax} {ymin}, {xmin} {ymin}))')")
sql_query = re.sub(r'ST_GeomFromWKB\(x\'.*\'\)', spatial.format(**ext), sql_query)
sql_query
Execute the query:
df = duckdb.query(sql_query)
I wonder, how would you do this? Do you think there is anything that could/should be modified on the pygeofilter end, or is it entirely up to the DuckDB-spatial package? I'd appreciate any help.
pygeofilter/backends/sqlalchemy/filters.py's bbox method accepts a crs parameter but does not pass it in its call to parse_bbox, meaning that the caller's CRS information is lost and the filter is only capable of operating in srid=4326. The bbox method should pass the parameter as provided.
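The intended behaviour, reduced to a sketch: the caller's crs must reach the envelope constructor instead of being silently dropped (bbox_condition and its string output are hypothetical, not the backend's actual SQLAlchemy expression):

```python
def bbox_condition(column: str, minx: float, miny: float,
                   maxx: float, maxy: float, crs: int = 4326) -> str:
    # Thread the caller's crs through to the envelope constructor.
    return (f"{column} && ST_MakeEnvelope("
            f"{minx}, {miny}, {maxx}, {maxy}, {crs})")

bbox_condition("geom", -4, 6, -2, 8, crs=3978)
# "geom && ST_MakeEnvelope(-4, 6, -2, 8, 3978)"
```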
I see in the specs that the between operator accepts 3 args: property, lower limit, and upper limit. But the pygeofilter cql2_json parser expects two args: property and a list of two values indicating the lower and upper limits.
Here's the example from OGC Specs (https://docs.ogc.org/DRAFTS/21-065.html#advanced-comparison-operators), which fails with pygeofilter:
{
"op": "between",
"args": [
{ "property": "depth" },
100.0,
150.0
]
}
pygeofilter expects it to be like this:
{
"op": "between",
"args": [
{ "property": "depth" },
[100.0, 150.0]
]
}
Is this due to changes in specs?
It seems like a straightforward fix here: https://github.com/geopython/pygeofilter/blob/main/pygeofilter/parsers/cql2_json/parser.py#L131.
If it's an actual bug, I can submit a PR.
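In the meantime, spec-shaped input can be adapted with a small shim that flattens the 3-arg form into the shape the current parser expects (normalize_between is a hypothetical workaround, not a pygeofilter API):

```python
def normalize_between(node):
    # Convert the spec's flat 3-arg "between" into the
    # [property, [low, high]] shape the current parser expects.
    if isinstance(node, dict):
        node = {key: normalize_between(value) for key, value in node.items()}
        if node.get("op") == "between" and len(node.get("args", [])) == 3:
            prop, low, high = node["args"]
            node["args"] = [prop, [low, high]]
        return node
    if isinstance(node, list):
        return [normalize_between(item) for item in node]
    return node

normalize_between({"op": "between",
                   "args": [{"property": "depth"}, 100.0, 150.0]})
# {'op': 'between', 'args': [{'property': 'depth'}, [100.0, 150.0]]}
```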