elastic / elasticsearch-hadoop
:elephant: Elasticsearch real-time search and analytics natively integrated with Hadoop
Home Page: https://www.elastic.co/products/hadoop
License: Apache License 2.0
Currently, we don't use the Bulk API to add data to ES. This clearly needs to be changed once the design settles down.
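For reference, a minimal sketch of what a bulk call looks like against a local node via the _bulk endpoint; the host, index/type names and documents are illustrative only, not anything from this project.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

// Minimal sketch: send two documents in one request through the ES _bulk endpoint.
// Each action line is followed by its source line; every line is newline-terminated.
public class BulkSketch {
    public static void main(String[] args) throws Exception {
        String payload =
            "{\"index\":{\"_index\":\"radio\",\"_type\":\"artists\"}}\n" +
            "{\"name\":\"Paradise Lost\"}\n" +
            "{\"index\":{\"_index\":\"radio\",\"_type\":\"artists\"}}\n" +
            "{\"name\":\"Lacuna Coil\"}\n";

        HttpURLConnection conn = (HttpURLConnection) new URL("http://localhost:9200/_bulk").openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        OutputStream out = conn.getOutputStream();
        out.write(payload.getBytes("UTF-8"));
        out.close();
        System.out.println("HTTP " + conn.getResponseCode()); // 200 means the bulk request was accepted
        conn.disconnect();
    }
}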
Thanks for this repo. I've tried the Cascading code and it seems to work. The input example gives an error at the end though, see https://github.com/jeroenvandijk/elasticsearch-hadoop-trial/blob/master/src/trial/input.clj
I also tried to get the ES Cascading code to work with Cascalog. The input code worked perfectly, but the output code failed at the Cascading level, see https://github.com/jeroenvandijk/elasticsearch-hadoop-trial/blob/master/src/trial/cascalog/output.clj. I don't know why this happens. Cascalog does depend on an older version of Cascading (2.0.8), but I'm not sure if that's the issue.
The complete trial repo can be found here: https://github.com/jeroenvandijk/elasticsearch-hadoop-trial
I hope you don't mind the Clojure code. I hope it translates easily to the original Java code.
The ArrayWritable to List transformation does not return the result list but falls back to the byte array:
else if (writable instanceof ArrayWritable) {
    Writable[] writables = ((ArrayWritable) writable).get();
    List<Object> list = new ArrayList<Object>(writables.length);
    for (Writable wrt : writables) {
        list.add(fromWritable(wrt));
    }
    return list;    // <-- this return is currently missing
}
// ...
// fall-back to byte array
return org.apache.hadoop.io.WritableUtils.toByteArray(writable);
}
I'm trying to export some of my data to ES for Kibana, which now requires an index per day. However, AFAIK this is currently not easy to do with the ES Cascading tap, since you have to choose an index upfront.
I think the ES tap needs a way to define a pattern for the index so it is possible to write to an index dynamically. I guess I need something similar to the GlobHfs tap.
What do you think?
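To illustrate the idea only, here is a tiny sketch of resolving a per-day index name before handing it to the tap; the logstash-style naming scheme and the helper class are assumptions, not existing configuration.

import java.text.SimpleDateFormat;
import java.util.Date;

// Sketch only: resolve a daily index name such as "logstash-2013.07.08".
// The naming pattern is an assumption; the tap would need to accept something like this.
public class DailyIndexName {
    public static String resolve(Date day) {
        return "logstash-" + new SimpleDateFormat("yyyy.MM.dd").format(day);
    }

    public static void main(String[] args) {
        System.out.println(resolve(new Date())); // e.g. logstash-2013.11.02
    }
}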
Jackson is now in the com.fasterxml.jackson namespace; elasticsearch uses it.
Some field names are not supported by Pig and cannot be translated to ES (e.g. '@timestamp'). Additionally, some complex types (bags and tuples) cannot be serialized safely, as they both map to a JSON list.
By allowing a user mapping, such details can be tweaked to give the user better control over the data (in/out) mapping.
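As a purely hypothetical illustration of what such a user mapping could drive, a small alias table mapping Pig-legal field names to the desired ES names; neither the class nor any property format exists in the project.

import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: rename Pig-legal field names to the ES names the user wants.
// Nothing here exists in elasticsearch-hadoop; it only illustrates the idea.
public class FieldAliases {
    private final Map<String, String> aliases = new HashMap<String, String>();

    public FieldAliases() {
        aliases.put("timestamp", "@timestamp"); // '@' is not legal in a Pig field name
    }

    public String resolve(String pigField) {
        String esField = aliases.get(pigField);
        return esField != null ? esField : pigField;
    }
}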
Hello,
I am not sure if this is the right place to ask this question but I couldn’t find a mailing list or contact info.
I have been playing with this library and it looks really good, it just works. We are planning to use Elasticsearch to index about 45M documents, so I will use this with a Pig script to push data from Cassandra to Elasticsearch.
So, when can we expect a stable release, and what is the current status of this project for production use?
I have a type in ES and in Hive I added an EXTERNAL TABLE pointing to ES. Something like this:
CREATE EXTERNAL TABLE user (userId INT, userRoles ARRAY<STRUCT<roleId:INT, name:STRING>>) STORED BY 'org.elasticsearch.hadoop.hive.ESStorageHandler' TBLPROPERTIES('es.resource'='samples/user/_search?q=*');
My mapping in ES looks like this:
{
  "user" : {
    "properties" : {
      "userId" : {
        "type" : "int"
      },
      "userRoles" : {
        "properties" : {
          "roleId" : {
            "type" : "int"
          },
          "name" : {
            "type" : "string"
          }
        }
      }
    }
  }
}
So when I execute a SELECT * FROM user I get:
2013-07-08 09:09:17,648 ERROR CliDriver (SessionState.java:printError(386)) - Failed with exception java.io.IOException:java.lang.ClassCastException: org.apache.hadoop.io.VIntWritable cannot be cast to java.lang.Integer
java.io.IOException: java.lang.ClassCastException: org.apache.hadoop.io.VIntWritable cannot be cast to java.lang.Integer
at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:150)
at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1412)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:271)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:216)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:756)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:614)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
Caused by: java.lang.ClassCastException: org.apache.hadoop.io.VIntWritable cannot be cast to java.lang.Integer
at org.apache.hadoop.hive.serde2.objectinspector.primitive.JavaIntObjectInspector.get(JavaIntObjectInspector.java:39)
at org.apache.hadoop.hive.serde2.lazy.LazyUtils.writePrimitiveUTF8(LazyUtils.java:201)
at org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe.serialize(LazySimpleSerDe.java:428)
at org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe.serializeField(LazySimpleSerDe.java:382)
at org.apache.hadoop.hive.serde2.DelimitedJSONSerDe.serializeField(DelimitedJSONSerDe.java:71)
at org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe.serialize(LazySimpleSerDe.java:366)
at org.apache.hadoop.hive.ql.exec.ListSinkOperator.processOp(ListSinkOperator.java:91)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:502)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:832)
at org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:84)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:502)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:832)
at org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:90)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:502)
at org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:490)
at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:136)
... 11 more
When I only use strings, it works fine. I tried the master branch and the releases available in the Maven repositories, but none of them work.
Currently, to perform efficient writes, the data is buffered before being passed to ES. As Hadoop (and various libraries) perform object pooling, each entry needs to be copied, otherwise the data is lost.
This causes significant memory overhead, which can be alleviated by serializing early (#3).
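A small sketch of why the copy is needed in the first place: Hadoop reuses the same Writable instance across calls, so buffering a reference without cloning leaves every entry pointing at the last value. The buffering list below is illustrative, not the project's actual buffer.

import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.WritableUtils;

// Illustrative sketch: Hadoop pools/reuses Writable instances, so buffering
// references without copying loses data; cloning (or serializing early) avoids it.
public class WritableBuffering {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        Text reused = new Text();                        // same instance reused, as Hadoop does
        List<Text> buffered = new ArrayList<Text>();

        for (String value : new String[] { "a", "b", "c" }) {
            reused.set(value);
            // buffered.add(reused);                     // wrong: every entry would read "c"
            buffered.add(WritableUtils.clone(reused, conf)); // copy before buffering
        }
        System.out.println(buffered);                    // [a, b, c]
    }
}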
Currently the Hive reading support requires the ES query to be specified as a table property. It would be better (and consistent with writing) to specify just the index and then create the query through Hive directly.
After writing data to ES, the data needs to be flushed, otherwise subsequent reads will not see it.
In some cases this might not be needed (hence the need to offer a configuration option), but in most cases it's probably what the user expects.
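What makes freshly written documents visible to searches is an index refresh, which is a single REST call; a minimal sketch against a local node follows, where the index name and address are assumptions.

import java.net.HttpURLConnection;
import java.net.URL;

// Sketch: force a refresh so documents just written become visible to searches.
// The "radio" index name and localhost address are assumptions.
public class RefreshIndex {
    public static void main(String[] args) throws Exception {
        HttpURLConnection conn =
            (HttpURLConnection) new URL("http://localhost:9200/radio/_refresh").openConnection();
        conn.setRequestMethod("POST");
        System.out.println("HTTP " + conn.getResponseCode()); // 200 when the refresh completed
        conn.disconnect();
    }
}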
Adding support for Cascading Tap/Scheme in line with what we provide for MR/Pig/Hive.
The current approach of using pagination is not very effective when dealing with lots of data and should be switched to the Scroll API for better performance:
http://www.elasticsearch.org/guide/reference/api/search/search-type/
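A rough sketch of the scan/scroll flow against a local node; the host, index/type and the crude regex-based _scroll_id extraction are illustrative only.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of the scan/scroll flow: open a scroll, then page through it batch by batch.
public class ScrollSketch {
    static String get(String url) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream(), "UTF-8"));
        StringBuilder body = new StringBuilder();
        for (String line; (line = in.readLine()) != null; ) {
            body.append(line);
        }
        in.close();
        return body.toString();
    }

    public static void main(String[] args) throws Exception {
        // 1. open the scroll: no hits are returned yet, only a _scroll_id
        String init = get("http://localhost:9200/radio/artists/_search?search_type=scan&scroll=10m&size=50");
        Matcher m = Pattern.compile("\"_scroll_id\":\"([^\"]+)\"").matcher(init);
        if (!m.find()) {
            throw new IllegalStateException("no scroll id in " + init);
        }
        // 2. fetch the next batch; repeat with the returned _scroll_id until hits come back empty
        String batch = get("http://localhost:9200/_search/scroll?scroll=10m&scroll_id=" + m.group(1));
        System.out.println(batch);
    }
}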
Helpful for doing integration testing
There are cases where the raw JSON might be used as input or sent as output (without transforming it into an object).
This should be supported, either by trying to identify the type automatically or by looking at a flag.
When the JSON in the result set contains an array, Pig 0.10 fails during internal serialization with an exception trace similar to the following. It appears that the array is being serialized as a byte[] at some level, and Pig cannot handle that.
java.lang.RuntimeException: Unexpected data type [B found in stream. Note only standard Pig type is supported when you output from UDF/LoadFunc
at org.apache.pig.data.BinInterSedes.writeDatum(BinInterSedes.java:559)
at org.apache.pig.data.BinInterSedes.writeDatum(BinInterSedes.java:435)
at org.apache.pig.data.BinInterSedes.writeMap(BinInterSedes.java:581)
at org.apache.pig.data.BinInterSedes.writeDatum(BinInterSedes.java:451)
at org.apache.pig.data.BinInterSedes.writeDatum(BinInterSedes.java:435)
at org.apache.pig.data.BinInterSedes.writeMap(BinInterSedes.java:581)
at org.apache.pig.data.BinInterSedes.writeDatum(BinInterSedes.java:451)
at org.apache.pig.data.BinInterSedes.writeDatum(BinInterSedes.java:435)
at org.apache.pig.data.utils.SedesHelper.writeGenericTuple(SedesHelper.java:135)
at org.apache.pig.data.BinInterSedes.writeTuple(BinInterSedes.java:613)
at org.apache.pig.data.BinInterSedes.writeDatum(BinInterSedes.java:443)
at org.apache.pig.data.BinInterSedes.writeDatum(BinInterSedes.java:435)
at org.apache.pig.impl.io.InterRecordWriter.write(InterRecordWriter.java:73)
at org.apache.pig.impl.io.InterStorage.putNext(InterStorage.java:87)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:139)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:98)
at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:559)
at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:85)
at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:106)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapOnly$Map.collect(PigMapOnly.java:48)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:264)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:64)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:140)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:673)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:331)
at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
at org.apache.hadoop.mapred.Child.main(Child.java:262)
For example, if I have a map-type column (rdata) in Hive and I try to push the data to Elasticsearch, I only see the last mapid/value pair in Elasticsearch.
Log for the Writable:
INFO org.elasticsearch.hadoop.rest.BufferedRestClient: Writable{rid=1, rdata={value=5, mapid=4}, rdate=1234, mapids=[7, 8, 9]}
Any workaround for this? Is this a bug?
Thanks
Looks like line 26 is missing a not operator; detectHostPortAddress seems fond of returning null.
Hello everyone
I am right now playing around with this great piece of software.
I have looked at the code but not found a clear answer.
Say I want to push a date field from Hive into an Elasticsearch index; how would I go about writing it?
CREATE EXTERNAL TABLE es_write (
clientId String,
mydate timestamp||string)
STORED BY 'org.elasticsearch.hadoop.hive.ESStorageHandler'
TBLPROPERTIES('es.resource' = 'test/test/')
Since Hive only knows timestamps, would I use those, or put in a string and let Elasticsearch do the trick if I format it correctly?
Thanks for your help and software.
It seems that commons-io (marked as optional) is not considered part of the classpath when importing elasticsearch-hadoop into IDEA.
To speed things up, a Jackson serializer/deserializer could be added to avoid the intermediary step of creating generic objects that are then transformed into Writables.
It remains to be seen though whether there is enough information to do this translation directly into the serializer.
ES-Hadoop uses Jackson's ObjectMapper for converting to/from JSON. By default the ObjectMapper registers a default and an annotation-based introspector.
To facilitate the integration of various Hadoop libraries, one needs greater control over the JSON binding functionality. For example, to use an Avro schema for JSON binding (as with jackson-dataformat-avro) one would have to provide a custom ObjectMapper configuration.
Therefore the ObjectMapper has to be exposed at the ESInputFormat and ESOutputFormat level.
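As a rough sketch (not an existing hook in the project), this is the kind of Jackson 2 ObjectMapper customization a caller might want to hand over once such an entry point exists; the class name and method are made up for illustration.

import com.fasterxml.jackson.annotation.JsonInclude;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.introspect.JacksonAnnotationIntrospector;

// Sketch: a caller-configured ObjectMapper that ESInputFormat/ESOutputFormat
// could accept; the accepting hook is exactly what this issue requests.
public class CustomMapper {
    public static ObjectMapper build() {
        ObjectMapper mapper = new ObjectMapper();
        mapper.setSerializationInclusion(JsonInclude.Include.NON_NULL);      // skip null fields
        mapper.setAnnotationIntrospector(new JacksonAnnotationIntrospector()); // annotation-driven binding
        return mapper;
    }
}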
Will you be releasing a version of elasticsearch-hadoop that compiles against Hadoop 2.x, or could someone explain how to make it work with that version?
Thanks,
Venkat
ERROR 2998: Unhandled internal error. Found interface org.apache.hadoop.mapreduce.JobContext, but class was expected
java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.JobContext, but class was expected
at org.elasticsearch.hadoop.mr.ESOutputFormat.checkOutputSpecs(ESOutputFormat.java:104)
at org.apache.pig.newplan.logical.rules.InputOutputFileValidator$InputOutputFileVisitor.visit(InputOutputFileValidator.java:80)
at org.apache.pig.newplan.logical.relational.LOStore.accept(LOStore.java:77)
at org.apache.pig.newplan.DepthFirstWalker.depthFirst(DepthFirstWalker.java:64)
Because the Configuration allows only one es.resource setting, it is impossible to query one index and create a different one in the same job (see #18).
Ideally, where possible, we should try to convert the Hive operations to ES queries, to minimize IO but also the work done on the Hive side.
I am using Hive with ES, and when a field is named with sensitive characters in Hive, these fields don't work.
ES:
{"user": {"name": "Joseph", "countryId":"US"}}
Hive:
select * from user;
Returns:
Joseph NULL
To execute proper splitting, one needs information about the shards used by the target indices, which is supported in 0.90.beta2 through the shard admin API: elastic/elasticsearch#2726
The properties available in the configuration file mention a specific Elasticsearch server. Why set a specific server (host+port) instead of the cluster name? I mean, isn't it better to use the cluster infrastructure to prevent failures and overhead?
Hi, I want to use Elasticsearch with Hadoop MapReduce, reading from Elasticsearch and writing to Elasticsearch in the same MapReduce job. The sample only shows one direction (either read or write, since there is only one es.resource). What I want is something like the following:
Read from:
/radio/artists/_search?q=me*
Write to :
/radio/statistics
Best regards.
Apache Crunch (http://crunch.apache.org/) is a framework for writing, testing, and running MapReduce pipelines composed of many user-defined functions.
When Hadoop executes a job through the plugin with a query like the following:
samples/test/_search?q=*
the map stage reports 136188 input records. But when I execute the following query in Elasticsearch:
samples/test/_count?q=*
The result is:
{"count": 68094, "_shards":{"total":5, "successful":5, "failed":0}}
My environment is:
Linux, Hadoop 1.2.0 (3 nodes), ES 0.90.1 (3 nodes).
Currently the REST interaction uses the Jersey client as a temporary solution. This should be replaced with Commons HttpClient 3.0.x, which is already found in Hadoop, Hive and Pig.
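A minimal sketch of the equivalent call with Commons HttpClient 3.x; the target URL is just an example.

import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.methods.GetMethod;

// Sketch: the same style of REST call, issued through Commons HttpClient 3.x
// instead of the Jersey client.
public class CommonsHttpSketch {
    public static void main(String[] args) throws Exception {
        HttpClient client = new HttpClient();
        GetMethod get = new GetMethod("http://localhost:9200/radio/artists/_search?q=*");
        try {
            int status = client.executeMethod(get);              // returns the HTTP status code
            System.out.println(status + ": " + get.getResponseBodyAsString());
        } finally {
            get.releaseConnection();                              // always return the connection
        }
    }
}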
ESStorage has a constructor ESStorage(String, int) that is intended to set the host and port for the storage. Pig requires that the constructor only take strings. I changed my local code as follows:
--- a/src/main/java/org/elasticsearch/hadoop/pig/ESStorage.java
+++ b/src/main/java/org/elasticsearch/hadoop/pig/ESStorage.java
@@ -83,12 +83,12 @@ public class ESStorage extends LoadFunc implements StoreFuncInterface, StoreMeta
private RecordWriter<Object, Object> writer;
public ESStorage() {
- this(null, 0);
+ this(null, "0");
}
- public ESStorage(String host, int port) {
+ public ESStorage(String host, String port) {
this.host = host;
- this.port = port;
+ this.port = Integer.parseInt(port);
}
@Override
To reproduce:
clone the es-hadoop/master repository and run:
curl -XDELETE 'http://localhost:9200/radio'
./gradlew -x clean test build
Both the CascadingHadoopTest and the CascadingLocalTest fail.
Am I doing something wrong or missing some configuration?
The current parsing of es.resource needs to be improved, as extra characters (like ?) can easily break it. Additionally, support for JSON documents, for complicated queries, needs to be added.
Currently when trying to load data from a non-existing item, Hive (for example) returns an obscure exception instead of returning an empty set or something human readable.
Current ObjectMapper configuration doesn't serialise null fields. The test data (artists.dat) contains empty fields that cause CascadingHadoopTest.testWriteToES() to fail with:
org.codehaus.jackson.map.JsonMappingException: No serializer found for class org.apache.hadoop.io.NullWritable
To make it work with the existing OM configuration the test needs to filter out all null-field entries before writing the index: fa62b72
pipe = new Each(pipe, new Fields("id", "name", "url", "picture"), new FilterNull());
Lets say we have a billion-line log file on Hadoop which we want to index. We could eliminate the need to make lots of requests to a centralized ES server and greatly improve the overall indexing performance if we create the indexes on Map/Reduce jobs locally and then mount them to a running ES cluster after the map reduce job. Maybe Elasticsearch-Hadoop project could provide an ES compatible index/shard creation library which could be ready to mount to ES. What do you think?
I get the following when I try to run a mapred job through Hive.
Environment: Mac OS or Linux, Hive 0.11, Hadoop 1.2 or 0.23, ES 0.90.2.
My "test" entity has just three fields (userid, name and type) and just one record in ES.
hive> select count(userid) from test;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Starting Job = job_201307081651_0001, Tracking URL = http://localhost:50030/jobdetails.jsp?jobid=job_201307081651_0001
Kill Command = /Users/developergvt/servers/hadoop-1.2.0/libexec/../bin/hadoop job -kill job_201307081651_0001
Hadoop job information for Stage-1: number of mappers: 5; number of reducers: 1
2013-07-08 16:57:39,056 Stage-1 map = 0%, reduce = 0%
2013-07-08 16:58:01,154 Stage-1 map = 100%, reduce = 100%
Ended Job = job_201307081651_0001 with errors
Error during job, obtaining debugging information...
Job Tracking URL: http://localhost:50030/jobdetails.jsp?jobid=job_201307081651_0001
Examining task ID: task_201307081651_0001_m_000006 (and more) from job job_201307081651_0001
Task ID:
task_201307081651_0001_m_000000
URL:
Diagnostic Messages for this Task:
java.lang.IllegalArgumentException: Can not create a Path from an empty string
at org.apache.hadoop.fs.Path.checkPathArg(Path.java:82)
at org.apache.hadoop.fs.Path.<init>(Path.java:90)
at org.apache.hadoop.hive.ql.io.HiveInputFormat$HiveInputSplit.getPath(HiveInputFormat.java:106)
at org.apache.hadoop.mapred.MapTask.updateJobWithSplit(MapTask.java:451)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:409)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:366)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
MapReduce Jobs Launched:
Job 0: Map: 5 Reduce: 1 HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec
The current Cascading implementation is meant for the local environment but needs to be extended to Hadoop as well.
Hello
I'm writing an ES plugin for cascading.lingual and had to hack things in a few places.
Could you please fix this in the SNAPSHOT?
Regards,
Oleg Sinitsin
[email protected]
CascadingLocalTest and CascadingHadoopTest require a particular execution order for the tests to work: testWriteToES() must run first to create the index, and testReadFromES() then reads that index.
JUnit assumes that all tests can be run in an arbitrary order, and it often runs testReadFromES() before testWriteToES().
Note: to reproduce the issue when an external ES is used, make sure to delete the old indices first:
curl -XDELETE 'http://localhost:9200/billboard'
curl -XDELETE 'http://localhost:9200/top'
./gradlew -x clean test build
Solutions:
Ideally, where possible, we should try to convert Pig predicates to ES queries - to minimize IO but also the work done on the Pig side.
Hi everyone
thanks for a great piece of software.
I am running Cloudera CDH 4.3 with Hive 0.10 on a three-node cluster.
I have created Hive "reading" and Hive "writing" tables pointing to my elasticsearch server.
I can read fine; however, when I try to push data as per your example:
INSERT OVERWRITE TABLE es_tasks_write select "false","titi","tata" from sample_table limit 10;
I get an error message:
2013-07-12 16:31:33,709 FATAL ExecReducer: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row (tag=0) {"key":{},"value":{"_col0":"false","_col1":"titi","_col2":"tata"},"alias":0}
at org.apache.hadoop.hive.ql.exec.ExecReducer.reduce(ExecReducer.java:258)
at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:506)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:447)
at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
at org.apache.hadoop.mapred.Child.main(Child.java:262)
Caused by: java.lang.ClassCastException: org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryStruct cannot be cast to [Ljava.lang.Object;
at org.elasticsearch.hadoop.hive.HiveValueWriter.write(HiveValueWriter.java:155)
at org.elasticsearch.hadoop.hive.HiveValueWriter.write(HiveValueWriter.java:56)
at org.elasticsearch.hadoop.hive.HiveValueWriter.write(HiveValueWriter.java:40)
at org.elasticsearch.hadoop.serialization.ContentBuilder.value(ContentBuilder.java:242)
The issue is explained here:
Caused by: java.lang.ArrayStoreException
at java.lang.System.arraycopy(Native Method)
at java.util.ArrayList.toArray(ArrayList.java:306)
at org.elasticsearch.hadoop.hive.ESSerDe.hiveToWritable(ESSerDe.java:136)
at org.elasticsearch.hadoop.hive.ESSerDe.hiveToWritable(ESSerDe.java:197)
at org.elasticsearch.hadoop.hive.ESSerDe.serialize(ESSerDe.java:109)
at org.apache.hadoop.hive.ql.exec.FileSinkOperator.processOp(FileSinkOperator.java:586)
https://groups.google.com/forum/?fromgroups=#!topic/elasticsearch/BAaoqF6SkiY
The data appears to be returned duplicated when an index has a replication factor of 1. I'm assuming the search request is being sent to both the primary and the replica shard and the results are combined.
For example, if I run the following command from the console:
curl localhost:9200/twitter/profile/_search?q=screen_name:twitter
I get one result returned
However, if I start Hive with the elasticsearch-hadoop jar added and create a table like the following
create external table profile ( id string, screen_name string ) stored by 'org.elasticsearch.hadoop.hive.ESStorageHandler' TBLPROPERTIES('es.resource'='twitter/profile/_search?q=screen_name:twitter');
and then do a
select * from profile;
two identical rows are returned.
783214 twitter
783214 twitter
The HttpClient 3.1 interface has long since been abandoned in favor of the new HttpComponents interface. The Gradle script already downloads the appropriate jar files; it's simply a matter of using the newer API.
Each library has its own way of configuring the various settings and we need to be consistent about it especially as spreading this across multiple nodes is required (see #12 ).