Each archive file contains a (small) fragment of Wikipedia history pages, structured as a PROV provenance graph. The extractor pulls the content of history pages from the live Wikipedia API, starting from a user-defined start page, and stores them into a Neo4J graph database. The resulting PROV structure, as well as the configuration options that control the extractor's crawling, are described in a separate document.
The traces in this folder are Neo4J DB files. Each .tar.gz archive contains a single folder with extension .db, which holds the entire content of a Neo4J data store. Thus, "loading" one of these traces simply amounts to copying the .db folder into the appropriate data folder of the Neo4J installation.
Here are the detailed instructions.
1. Download Neo4J from here.
2. There is no installation: simply unzip and move the folder wherever you like.
3. Untar the trace file and move the .db folder under the data/ folder of Neo4J.
   3.1 The simplest way to get running is to rename the .db folder to "graph.db", which is what Neo4J expects unless you tell it otherwise using the server configuration instructions found here.
4. Start and manage the server as described in the README file.
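The steps above can be sketched as follows. This is a minimal illustration, assuming Neo4J was unzipped into a folder with a data/ subdirectory and using the Newcastle-r20-u1-d1 trace as the example archive; here the archive layout is simulated in a temporary directory so the commands can be checked end to end. Adjust the paths to your own setup.

```shell
# Assumed layout: $NEO4J_HOME is the unzipped Neo4J folder (step 2).
WORK=$(mktemp -d)
NEO4J_HOME="$WORK/neo4j"
mkdir -p "$NEO4J_HOME/data"

# Simulate a downloaded trace archive containing a single .db folder,
# as described above (in practice you download the .tar.gz instead).
mkdir -p "$WORK/Newcastle-r20-u1-d1.db"
touch "$WORK/Newcastle-r20-u1-d1.db/neostore"
tar czf "$WORK/Newcastle-r20-u1-d1.db.tar.gz" -C "$WORK" Newcastle-r20-u1-d1.db

# Step 3: untar the trace and place the .db folder under data/.
tar xzf "$WORK/Newcastle-r20-u1-d1.db.tar.gz" -C "$NEO4J_HOME/data"

# Step 3.1: rename it to graph.db, the default database Neo4J looks for.
mv "$NEO4J_HOME/data/Newcastle-r20-u1-d1.db" "$NEO4J_HOME/data/graph.db"

# The server (step 4) will now pick up data/graph.db on startup.
ls "$NEO4J_HOME/data"
```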
The file cypher-queries.txt contains a number of sample queries in Neo4J's Cypher query language. A nice client interface for trying them out and visualizing fragments of the traces is NeoClipse, which you can get from here. Try its Cypher query editor.
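As a flavour of what such queries look like, here is a hedged sketch: the node labels and relationship type below (Entity, Activity, wasGeneratedBy) are assumptions based on the standard PROV model, since the actual schema is described in the separate document, not here. Consult cypher-queries.txt for queries that match the real trace structure.

```cypher
// Hypothetical example: find entities together with the activity
// that generated them, capped at 25 result rows for easy browsing.
MATCH (e:Entity)-[:wasGeneratedBy]->(a:Activity)
RETURN e, a
LIMIT 25
```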
| Filename | File size | Max revisions | Max users | Max depth | Nodes | Relationships |
|---|---|---|---|---|---|---|
| Newcastle-r10-u3-d5.db.tar.gz | 3.6M | 10 | 3 | 5 | 2758 | 4218 |
| Newcastle-r20-u1-d1.db.tar.gz | 80K | 20 | 1 | 1 | 58 | 80 |
| Newcastle-r20-u5-d2.db.tar.gz | 203K | 20 | 5 | 2 | 178 | 200 |
| Newcastle-r100-u25-d25.db.tar.gz | 6.1M | 100 | 25 | 25 | 4044 | 6623 |
| Earthship-r100-u3-d5.db.tar.gz | 3.6M | 100 | 3 | 5 | 2994 | 4876 |
| Eartships-r200-u25-d25.db.tar.gz | 1.2M | 200 | 25 | 25 | 991 | 1600 |
--Paolo Missier