dccspeed / fractal
License: Apache License 2.0
Hi, I have a problem with graphs that have edge labels. The problem is that some patterns which are isomorphic are not grouped together. I include an example with a small graph. I would like to know whether there is an operation I could not find that would resolve this.
The two patterns are isomorphic; is it possible to get the same result, but with only one pattern, for this example?
Even though each test case should clean up its context, it does not, so filtering rules persist across tests.
NOTE: I don't know whether this is the expected behavior, but it doesn't feel like it should be.
// BasicTestSuite.scala
import org.apache.spark.{SparkConf, SparkContext}
import org.scalatest.{BeforeAndAfterAll, FunSuite, Tag}
// fractal imports assumed under the br.ufmg.cs.systems.fractal package
import br.ufmg.cs.systems.fractal.{FractalContext, FractalGraph}

class BasicTestSuite extends FunSuite with BeforeAndAfterAll {
  private val numPartitions: Int = 8
  private val appName: String = "fractal-test"
  private val logLevel: String = "error"

  private var master: String = _
  private var sc: SparkContext = _
  private var fc: FractalContext = _
  private var fgraph: FractalGraph = _
  private var fgraphEdgeLabel: FractalGraph = _

  /** set up spark context */
  override def beforeAll(): Unit = {
    master = s"local[${numPartitions}]"
    // spark conf and context
    val conf = new SparkConf()
      .setMaster(master)
      .setAppName(appName)
    sc = new SparkContext(conf)
    sc.setLogLevel(logLevel)
    fc = new FractalContext(sc, logLevel)
    fgraph = fc.textFile("../data/cube.graph")
    fgraphEdgeLabel = fc.textFile("../data/cube-edge-label.graph")
  }

  /** stop spark context */
  override def afterAll(): Unit = {
    if (sc != null) {
      sc.stop()
      fc.stop()
    }
  }

  test("[cube,vfilter]", Tag("cube.vfilter")) {
    val numSubgraph = List(3)
    for (k <- 0 until numSubgraph.size) {
      val frac = fgraph.vfractoidAndExpand
        .vfilter[String](v => v.getVertexLabel() == 1)
        .set("num_partitions", numPartitions)
      val subgraphs = frac.subgraphs
      assert(subgraphs.count == numSubgraph(k))
    }
  }

  test("[cube,cliques]", Tag("cube.cliques")) {
    val numSubgraph = List(8, 12, 0)
    for (k <- 0 until numSubgraph.size) {
      val cliqueRes = fgraph.cliques
        .set("num_partitions", numPartitions)
        .explore(k)
      val subgraphs = cliqueRes.subgraphs
      assert(subgraphs.count == numSubgraph(k))
    }
  }
}
export FRACTAL_HOME=`pwd` && ./gradlew test
- [cube,vfilter]
- [cube,cliques] *** FAILED ***
7 did not equal 12 (BasicTestSuite.scala:148)
Run completed in 7 seconds, 366 milliseconds.
Total number of tests run: 2
Suites: completed 2, aborted 0
Tests: succeeded 1, failed 1, canceled 0, ignored 0, pending 0
*** 1 TEST FAILED ***
- [cube,vfilter]
- [cube,cliques]
Run completed in 4 seconds, 207 milliseconds.
Total number of tests run: 2
Suites: completed 2, aborted 0
Tests: succeeded 2, failed 0, canceled 0, ignored 0, pending 0
All tests passed.
install_fractal_rmurphy.log
I had some problems installing; specifically, running gradlew assemble
on a Linux machine. I have attached the log file.
I was advised that the build assumes Java 8, whereas the machine was using Java 11. Exporting JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
allowed the build to complete. Please let me know if you need more details.
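The fix described above can be made explicit in the shell before building. This is a sketch; the OpenJDK 8 path below is the Debian/Ubuntu default and is an assumption, so adjust it for your distribution:

```shell
# Point the build at Java 8 (path assumed for Debian/Ubuntu OpenJDK 8).
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export PATH="$JAVA_HOME/bin:$PATH"
./gradlew assemble
```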
Hi fractal,
Thank you for your contribution to the graph mining community. However, it may be safe to say that not everybody in the community is familiar with Java, Spark, etc. I am one of them. I have read your readme, but I am not sure how to use your application.
In step 3, you have code to build an Object. Where should I put that code? Is it a standalone file that I compile? What do I use to compile it?
Regarding step 4, I simply don't know what that means.
Would it be possible to add to the documentation/readme/tutorial to help a broader audience?
Thanks so much. Again, I will be indebted to your contribution once I know how to use it :)
Best,
-Ryan Murphy
Dear authors,
Thanks for the great work!
I found that when I try to mine 4-cliques or other patterns with more than 2 exploration steps, the CPUs on each machine are not fully occupied; in fact, they are almost 100% idle.
My Spark setup is the standalone version. I am not sure what's wrong. Should I just wait for the low-CPU-utilization experiment to finish? Or is my Spark 2.2.0 setup perhaps wrong?
We are using 4 physical machines, each with 20 cores, 40 threads.
Could you give me some suggestions? Thanks a lot!
Dear Fractal,
I am trying to run your example of putting a custom app inside fractal_apps. I am getting an error and I think it may have to do with pointing gradlew
to the correct spark master. I am attaching a log and a screenshot of my project structure and code. Are you able to find what's going wrong? Thanks so much.
Dear Fractal,
I tried to run your batch example using cube.graph and ran into a problem.
I guess the jar does not exist in the fractal build? Thanks.
steps=2 inputgraph=$FRACTAL_HOME/data/cube.graph alg=cliques ./bin/fractal.sh
I got this output
FRACTAL_HOME is set to /home/murph213/Downloads/Installers/fractal
SPARK_HOME is set to /usr/local/spark
alg is set to 'cliques'
inputgraph is set to '/home/murph213/Downloads/Installers/fractal/data/cube.graph'
steps is set to '2'
spark-submit --master local[1] --deploy-mode client \
  --driver-memory 2g \
  --num-executors 1 \
  --executor-cores 1 \
  --executor-memory 2g \
  --class br.ufmg.cs.systems.fractal.FractalSparkRunner \
  --jars /home/murph213/Downloads/Installers/fractal/build/libs/fractal-SPARK-2.2.0-all.jar \
  /home/murph213/Downloads/Installers/fractal/build/libs/fractal-SPARK-2.2.0-all.jar \
  al /home/murph213/Downloads/Installers/fractal/data/cube.graph cliques scratch 1 2 info
19/09/13 12:53:06 WARN Utils: Your hostname, RM-Satellite resolves to a loopback address: 127.0.1.1; using 192.168.2.3 instead (on interface eth0)
19/09/13 12:53:06 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
19/09/13 12:53:06 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
19/09/13 12:53:07 WARN DependencyUtils: Local jar /home/murph213/Downloads/Installers/fractal/build/libs/fractal-SPARK-2.2.0-all.jar does not exist, skipping.
19/09/13 12:53:07 WARN DependencyUtils: Local jar /home/murph213/Downloads/Installers/fractal/build/libs/fractal-SPARK-2.2.0-all.jar does not exist, skipping.
19/09/13 12:53:07 WARN SparkSubmit$$anon$2: Failed to load br.ufmg.cs.systems.fractal.FractalSparkRunner.
java.lang.ClassNotFoundException: br.ufmg.cs.systems.fractal.FractalSparkRunner
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.spark.util.Utils$.classForName(Utils.scala:238)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:806)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:161)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:184)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:920)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:929)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
19/09/13 12:53:07 INFO ShutdownHookManager: Shutdown hook called
19/09/13 12:53:07 INFO ShutdownHookManager: Deleting directory /tmp/spark-b89a1b4f-b7a3-4ade-a289-7b78d2d92c22
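The "does not exist, skipping" warnings in the log above show that the fatjar fractal-SPARK-2.2.0-all.jar was never built, which is why FractalSparkRunner cannot be found. Building the project first should produce it; the steps below are a sketch assuming the assemble task from the repo's build:

```shell
cd "$FRACTAL_HOME"
./gradlew assemble
ls build/libs/    # the fractal-SPARK-*-all.jar should now exist
# then re-run: steps=2 inputgraph=$FRACTAL_HOME/data/cube.graph alg=cliques ./bin/fractal.sh
```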
// MyMotifsApp.scala
import org.apache.spark.{SparkConf, SparkContext}
// fractal import paths assumed under the br.ufmg.cs.systems.fractal package
import br.ufmg.cs.systems.fractal.FractalContext
import br.ufmg.cs.systems.fractal.util.Logging

object MyMotifsApp extends Logging {
  def main(args: Array[String]): Unit = {
    // environment setup
    val conf = new SparkConf().setAppName("MyMotifsApp")
    val sc = new SparkContext(conf)
    val fc = new FractalContext(sc)
    val fgraph = fc.textFile("data/cube.graph")

    // enumerate vertex-induced subgraphs
    for (path <- fgraph.vfractoid.expand(2).subgraphs) {
      logInfo(s"path=${path}")
    }
    // enumerate edge-induced subgraphs
    for (path <- fgraph.efractoid.expand(2).subgraphs) {
      logInfo(s"path=${path}")
    }

    // environment cleaning
    fc.stop()
    sc.stop()
  }
}
./gradlew assemble && app_class=br.ufmg.cs.systems.fractal.apps.MyMotifsApp ./bin/fractal-custom-app.sh
# fgraph.vfractoid.expand(2)
20/04/03 15:24:36 INFO MyMotifsApp$: vertex_path=VSubgraph(1)
20/04/03 15:24:36 INFO MyMotifsApp$: vertex_path=VSubgraph(3)
20/04/03 15:24:36 INFO MyMotifsApp$: vertex_path=VSubgraph(2)
20/04/03 15:24:36 INFO MyMotifsApp$: vertex_path=VSubgraph(7)
20/04/03 15:24:36 INFO MyMotifsApp$: vertex_path=VSubgraph(6)
20/04/03 15:24:36 INFO MyMotifsApp$: vertex_path=VSubgraph(4)
20/04/03 15:24:36 INFO MyMotifsApp$: vertex_path=VSubgraph(5)
20/04/03 15:24:36 INFO MyMotifsApp$: vertex_path=VSubgraph(0)
# fgraph.efractoid.expand(2)
20/04/03 15:24:38 INFO MyMotifsApp$: egde_path=ESubgraph((0,4))
20/04/03 15:24:38 INFO MyMotifsApp$: egde_path=ESubgraph((2,7))
20/04/03 15:24:38 INFO MyMotifsApp$: egde_path=ESubgraph((1,5))
20/04/03 15:24:38 INFO MyMotifsApp$: egde_path=ESubgraph((4,5))
20/04/03 15:24:38 INFO MyMotifsApp$: egde_path=ESubgraph((4,6))
20/04/03 15:24:38 INFO MyMotifsApp$: egde_path=ESubgraph((6,7))
20/04/03 15:24:38 INFO MyMotifsApp$: egde_path=ESubgraph((5,7))
20/04/03 15:24:38 INFO MyMotifsApp$: egde_path=ESubgraph((0,3))
20/04/03 15:24:38 INFO MyMotifsApp$: egde_path=ESubgraph((1,2))
20/04/03 15:24:38 INFO MyMotifsApp$: egde_path=ESubgraph((3,6))
20/04/03 15:24:38 INFO MyMotifsApp$: egde_path=ESubgraph((2,3))
20/04/03 15:24:38 INFO MyMotifsApp$: egde_path=ESubgraph((0,1))
# fgraph.vfractoid.expand(2)
20/04/03 15:24:36 INFO MyMotifsApp$: vertex_path=VSubgraph(1,2)
20/04/03 15:24:36 INFO MyMotifsApp$: vertex_path=VSubgraph(3,7)
...
# fgraph.efractoid.expand(2)
20/04/03 15:24:38 INFO MyMotifsApp$: egde_path=ESubgraph((0,4), (0,3))
20/04/03 15:24:38 INFO MyMotifsApp$: egde_path=ESubgraph((2,7), (2,1))
...
Hi Fractal,
I have successfully built using gradlew. I am using Ubuntu 16.04, where I have a Spark Cluster running. I have attached the Spark monitor view.
I ran
steps=2 inputgraph=$FRACTAL_HOME/data/citeseer-single-label.graph alg=cliques ./bin/fractal.sh
and got the error
Error message: No computation is set
The full log is attached
first_batch_test.log
Please advise, and thanks so much!
--Ryan Murphy
Hi Fractal,
I have a question about the input graph. I wanted to know whether it is possible to supply several graphs as input, separated by a stop mark, so that the vertices of each graph are numbered from 0. From what I see, the numbering of the first vertex of the second graph begins by incrementing the last vertex id of the first graph. Is there a way to provide input like this?
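The behavior described above, where the second graph's vertex IDs continue from the first graph's last ID, can at least be inverted after the fact: given each input graph's vertex count, a global ID maps back to a (graph index, local ID starting from 0) pair. A minimal sketch in plain Scala; the vertex-count list and the VertexIdMapper helper are illustrative assumptions, not part of Fractal's API:

```scala
// Map a global vertex ID from a concatenated multi-graph input back to
// (graph index, local ID), given the vertex count of each input graph.
object VertexIdMapper {
  def toLocal(globalId: Int, vertexCounts: Seq[Int]): (Int, Int) = {
    var offset = 0
    for ((count, graphIdx) <- vertexCounts.zipWithIndex) {
      // globalId falls inside this graph's ID range [offset, offset + count)
      if (globalId < offset + count) return (graphIdx, globalId - offset)
      offset += count
    }
    throw new IllegalArgumentException(s"global id $globalId out of range")
  }
}
```

For two cube graphs of 8 vertices each, global ID 8 maps to vertex 0 of the second graph.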
Dear Fractal,
I am also having problems trying your code remotely. In particular, I ran
steps=2 inputgraph=$FRACTAL_HOME/data/citeseer-single-label.graph alg=cliques ./bin/fractal.sh
on a remote machine with
Spark version 2.4.3
Using Scala version 2.11.12, OpenJDK 64-Bit Server VM, 1.8.0_222
I am attaching the log
Thank you!
Hi Vinicius,
We corresponded briefly via email, but Github seems more convenient.
I am having difficulty running fractal in a multicore single machine environment. Specifying worker_cores > 1 leads to a deadlock.
From examining the logs, it seems that Fractal is not starting multiple actors. One slave is created, but then I'm guessing the Master is waiting to find another? This is the tail end of the logs when the deadlock happens. My CPU usage drops to 0 and nothing happens afterwards. Any help would be appreciated!
2019-09-12 15:56:55,337 INFO ActorMessageSystem$: Started akka-sys: akka://fractal-msgsys - executor - waiting for messages
2019-09-12 15:56:55,338 INFO SlaveActor: Actor Actor[akka://fractal-msgsys/user/slave-actor-11-0-0-0#740124414] started
2019-09-12 15:56:55,340 INFO SlaveActor: Actor[akka://fractal-msgsys/user/slave-actor-11-0-0-0#740124414] sending identification to master
2019-09-12 15:56:55,454 INFO SlaveActor: Actor[akka://fractal-msgsys/user/slave-actor-11-0-0-0#740124414] knows master: Actor[akka.tcp://[email protected]:2552/user/master-actor-11-0#1302576852]
[WARN] [SECURITY][09/12/2019 15:56:55.455] [fractal-msgsys-akka.actor.default-dispatcher-3] [akka.serialization.Serialization(akka://fractal-msgsys)] Using the default Java serializer for class [br.ufmg.cs.systems.fractal.computation.HelloMaster] which is not recommended because of performance implications. Use another serializer or disable this warning using the setting 'akka.actor.warn-about-java-serializer-usage'
[WARN] [SECURITY][09/12/2019 15:56:55.460] [fractal-msgsys-akka.actor.default-dispatcher-3] [akka.serialization.Serialization(akka://fractal-msgsys)] Using the default Java serializer for class [br.ufmg.cs.systems.fractal.computation.Log] which is not recommended because of performance implications. Use another serializer or disable this warning using the setting 'akka.actor.warn-about-java-serializer-usage'
2019-09-12 15:56:55,464 INFO MasterActor: Actor[akka://fractal-msgsys/user/master-actor-11-0#1302576852] knows 1 slaves.
2019-09-12 15:56:55,464 INFO MasterActor: StatsReport{step=0,partitionId=0,canonical_subgraphs_1:0,neighborhood_lookups_0:0,valid_subgraphs_1:0,subgraphs_output:0,canonical_subgraphs_4:0,valid_subgraphs_0:0,valid_subgraphs_3:0,canonical_subgraphs_3:0,neighborhood_lookups_2:0,neighborhood_lookups_5:0,canonical_subgraphs_0:0,valid_subgraphs_5:0,valid_subgraphs_2:0,neighborhood_lookups_1:0,canonical_subgraphs_2:0,neighborhood_lookups_4:0,canonical_subgraphs_5:0,neighborhood_lookups_3:0,valid_subgraphs_4:0,maxMemory=1.77783203125,totalMemory=0.43798828125,freeMemory=0.3339014947414398,usedMemory=0.10408678650856018}
Hello, we are very interested in Fractal and are trying to run the examples in a distributed setting. However, we encountered a problem when running the example, and it just hangs:
java.lang.UnsupportedOperationException: Accumulator must be registered before send to executor
Would you mind helping us figure out the reason? Thanks a lot! The whole log is here:
test.log
Hello! I am interested in getting the frequent graph patterns and all matches (embeddings/instances) of each pattern at the same time. It seems that the fsm algorithm can produce the graph patterns but not the complete set of matches of each pattern, while gquery can produce all matches of a single pattern. Can I get all frequent graph patterns and their matches at the same time?
Hello,
When I build fractal I get this error; would you please help figure out why, or what went wrong?
"
Starting a Gradle Daemon (subsequent builds will be faster)
Task :fractal-core:compileScala FAILED
error: scala.reflect.internal.MissingRequirementError: object java.lang.Object in compiler mirror not found.
at scala.reflect.internal.MissingRequirementError$.signal(MissingRequirementError.scala:17)
at scala.reflect.internal.MissingRequirementError$.notFound(MissingRequirementError.scala:18)
at scala.reflect.internal.Mirrors$RootsBase.getModuleOrClass(Mirrors.scala:53)
at scala.reflect.internal.Mirrors$RootsBase.getModuleOrClass(Mirrors.scala:45)
at scala.reflect.internal.Mirrors$RootsBase.getModuleOrClass(Mirrors.scala:45)
at scala.reflect.internal.Mirrors$RootsBase.getModuleOrClass(Mirrors.scala:66)
at scala.reflect.internal.Mirrors$RootsBase.getClassByName(Mirrors.scala:102)
at scala.reflect.internal.Mirrors$RootsBase.getRequiredClass(Mirrors.scala:105)
at scala.reflect.internal.Definitions$DefinitionsClass.ObjectClass$lzycompute(Definitions.scala:257)
at scala.reflect.internal.Definitions$DefinitionsClass.ObjectClass(Definitions.scala:257)
at scala.reflect.internal.Definitions$DefinitionsClass.init(Definitions.scala:1390)
at scala.tools.nsc.Global$Run.<init>(Global.scala:1242)
at scala.tools.nsc.Driver.doCompile(Driver.scala:31)
at scala.tools.nsc.MainClass.doCompile(Main.scala:23)
at scala.tools.nsc.Driver.process(Driver.scala:51)
at scala.tools.nsc.Main.process(Main.scala)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at sbt.compiler.RawCompiler.apply(RawCompiler.scala:33)
at sbt.compiler.AnalyzingCompiler$$anonfun$compileSources$1$$anonfun$apply$2.apply(AnalyzingCompiler.scala:159)
at sbt.compiler.AnalyzingCompiler$$anonfun$compileSources$1$$anonfun$apply$2.apply(AnalyzingCompiler.scala:155)
at sbt.IO$.withTemporaryDirectory(IO.scala:358)
at sbt.compiler.AnalyzingCompiler$$anonfun$compileSources$1.apply(AnalyzingCompiler.scala:155)
at sbt.compiler.AnalyzingCompiler$$anonfun$compileSources$1.apply(AnalyzingCompiler.scala:152)
at sbt.IO$.withTemporaryDirectory(IO.scala:358)
at sbt.compiler.AnalyzingCompiler$.compileSources(AnalyzingCompiler.scala:152)
at sbt.compiler.IC$.compileInterfaceJar(IncrementalCompiler.scala:58)
at sbt.compiler.IC.compileInterfaceJar(IncrementalCompiler.scala)
at org.gradle.api.internal.tasks.scala.ZincScalaCompilerFactory.getCompilerInterface(ZincScalaCompilerFactory.java:119)
at org.gradle.api.internal.tasks.scala.ZincScalaCompilerFactory.access$200(ZincScalaCompilerFactory.java:47)
at org.gradle.api.internal.tasks.scala.ZincScalaCompilerFactory$2.apply(ZincScalaCompilerFactory.java:90)
at org.gradle.api.internal.tasks.scala.ZincScalaCompilerFactory$2.apply(ZincScalaCompilerFactory.java:86)
at com.typesafe.zinc.Cache.get(Cache.scala:41)
at org.gradle.api.internal.tasks.scala.ZincScalaCompilerFactory.createCompiler(ZincScalaCompilerFactory.java:86)
at org.gradle.api.internal.tasks.scala.ZincScalaCompilerFactory.access$100(ZincScalaCompilerFactory.java:47)
at org.gradle.api.internal.tasks.scala.ZincScalaCompilerFactory$1.create(ZincScalaCompilerFactory.java:75)
at org.gradle.api.internal.tasks.scala.ZincScalaCompilerFactory$1.create(ZincScalaCompilerFactory.java:71)
at org.gradle.internal.SystemProperties.withSystemProperty(SystemProperties.java:126)
at org.gradle.api.internal.tasks.scala.ZincScalaCompilerFactory.createParallelSafeCompiler(ZincScalaCompilerFactory.java:71)
at org.gradle.api.internal.tasks.scala.ZincScalaCompiler$Compiler.execute(ZincScalaCompiler.java:69)
at org.gradle.api.internal.tasks.scala.ZincScalaCompiler.execute(ZincScalaCompiler.java:57)
at org.gradle.api.internal.tasks.scala.ZincScalaCompiler.execute(ZincScalaCompiler.java:40)
at org.gradle.api.internal.tasks.compile.daemon.AbstractDaemonCompiler$CompilerWorkAction.execute(AbstractDaemonCompiler.java:113)
at org.gradle.workers.internal.DefaultWorkerServer.execute(DefaultWorkerServer.java:47)
at org.gradle.workers.internal.AbstractClassLoaderWorker$1.create(AbstractClassLoaderWorker.java:46)
at org.gradle.workers.internal.AbstractClassLoaderWorker$1.create(AbstractClassLoaderWorker.java:36)
at org.gradle.internal.classloader.ClassLoaderUtils.executeInClassloader(ClassLoaderUtils.java:98)
at org.gradle.workers.internal.AbstractClassLoaderWorker.executeInClassLoader(AbstractClassLoaderWorker.java:36)
at org.gradle.workers.internal.IsolatedClassloaderWorker.execute(IsolatedClassloaderWorker.java:54)
at org.gradle.workers.internal.WorkerDaemonServer.execute(WorkerDaemonServer.java:56)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.gradle.process.internal.worker.request.WorkerAction.run(WorkerAction.java:118)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:182)
at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:164)
at org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:412)
at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64)
at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:48)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:56)
at java.base/java.lang.Thread.run(Thread.java:829)
FAILURE: Build failed with an exception.
org.gradle.internal.serialize.PlaceholderException (no error message)
Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.
Get more help at https://help.gradle.org
BUILD FAILED in 7s
2 actionable tasks: 1 executed, 1 up-to-date
"
Thank you
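For reference, the java.base/jdk.internal.reflect frames in the trace indicate the compiler daemon is running on Java 9 or later (java.base is a JDK 9+ module), while this Scala/Zinc toolchain expects Java 8; that mismatch is the usual cause of "MissingRequirementError: object java.lang.Object in compiler mirror not found". Two quick checks, as illustrative commands rather than anything from the original report:

```shell
./gradlew --version   # reports the JVM that Gradle itself runs on
java -version         # reports the JVM on the PATH
# If either shows 9+, point JAVA_HOME at a Java 8 install and rebuild.
```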