fractal's People

Contributors

alexjf, marcoserafini, tcriscuolo, viniciusvdias

fractal's Issues

Isomorphism problem with edge labels

Hi, I have a problem with graphs that have edge labels: some patterns that are isomorphic are not grouped together. I include an example with a small graph. I would like to know whether there is an operation I may have missed that resolves this.

For this graph:

0 0 1 2 2 4
1 1 2 3
2 0
3 0 4 3 5 4
4 1 5 2
5 0

I get the following result:

[(0,0),(4)-(1,0)],[(1,0),(3)-(2,1)],[(0,0),(2)-(2,1)] count 1
[(0,0),(4)-(1,0)],[(1,0),(2)-(2,1)],[(0,0),(3)-(2,1)] count 1

The two patterns are isomorphic; is it possible to get the same counts, but with only one pattern, for this example?
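
If Fractal does not merge these for you, one workaround is to canonicalize the patterns yourself and merge counts by canonical key. Below is a brute-force normalization sketch in plain Scala for tiny patterns, assuming the bracketed output above encodes vertices as (position, label) pairs and edge labels in parentheses; this is a workaround idea, not Fractal's own API.

import scala.math.Ordering.Implicits._

// Brute-force canonical form for small patterns: try every vertex relabeling
// and keep the lexicographically smallest (labels, edges) pair. Isomorphic
// patterns then share one key, so their counts can be summed.
case class Edge(u: Int, v: Int, label: Int)
case class Pattern(vertexLabels: Vector[Int], edges: Set[Edge])

def canonical(p: Pattern): (Vector[Int], List[(Int, Int, Int)]) =
  (0 until p.vertexLabels.length).permutations.map { perm =>
    // perm maps old vertex id -> new vertex id
    val labels = Vector.tabulate(p.vertexLabels.length)(i => p.vertexLabels(perm.indexOf(i)))
    val edges = p.edges.toList.map { e =>
      val (a, b) = (perm(e.u), perm(e.v))
      (math.min(a, b), math.max(a, b), e.label)
    }.sorted
    (labels, edges)
  }.min

// The two patterns reported above, under the assumed encoding: both collapse
// to a single canonical key, so their counts (1 + 1) could be merged.
val p1 = Pattern(Vector(0, 0, 1), Set(Edge(0, 1, 4), Edge(1, 2, 3), Edge(0, 2, 2)))
val p2 = Pattern(Vector(0, 0, 1), Set(Edge(0, 1, 4), Edge(1, 2, 2), Edge(0, 2, 3)))
assert(canonical(p1) == canonical(p2))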

VFilter or EFilter filtering results kept across experiments

Problem

Even though each test case should clean up its context, it does not, so filtering rules persist from one test to the next.
NOTE: I don't know if this is the expected behavior, but it doesn't feel like it should be.

Environment

Steps to reproduce

  1. Change br.ufmg.cs.systems.fractal.BasicTestSuite.scala
// BasicTestSuite.scala
// imports reconstructed for completeness; the fractal class locations are
// assumed from the package name (br.ufmg.cs.systems.fractal) used in this repo
import org.apache.spark.{SparkConf, SparkContext}
import org.scalatest.{BeforeAndAfterAll, FunSuite, Tag}
import br.ufmg.cs.systems.fractal.{FractalContext, FractalGraph}

class BasicTestSuite extends FunSuite with BeforeAndAfterAll {
  private val numPartitions: Int = 8
  private val appName: String = "fractal-test"
  private val logLevel: String = "error"

  private var master: String = _
  private var sc: SparkContext = _
  private var fc: FractalContext = _
  private var fgraph: FractalGraph = _
  private var fgraphEdgeLabel: FractalGraph = _

  /** set up spark context */
  override def beforeAll: Unit = {
    master = s"local[${numPartitions}]"
    // spark conf and context
    val conf = new SparkConf().
      setMaster(master).
      setAppName(appName)

    sc = new SparkContext(conf)
    sc.setLogLevel(logLevel)
    fc = new FractalContext(sc, logLevel)

    fgraph = fc.textFile("../data/cube.graph")

    fgraphEdgeLabel = fc.textFile("../data/cube-edge-label.graph")
  }

  /** stop spark context */
  override def afterAll: Unit = {
    if (sc != null) {
      sc.stop()
      fc.stop()
    }
  }

  test("[cube,vfilter]", Tag("cube.vfilter")) {
    val numSubgraph = List(3)
    for (k <- 0 to (numSubgraph.size - 1)) {
      val frac = fgraph.vfractoidAndExpand.
        vfilter[String](v => v.getVertexLabel() == 1).
        set("num_partitions", numPartitions)
      val subgraphs = frac.subgraphs
      assert(subgraphs.count == numSubgraph(k))
    }
  }

  test("[cube,cliques]", Tag("cube.cliques")) {
    val numSubgraph = List(8, 12, 0)

    for (k <- 0 to (numSubgraph.size - 1)) {
      val cliqueRes = fgraph.cliques.
        set("num_partitions", numPartitions).
        explore(k)

      val subgraphs = cliqueRes.subgraphs
      assert(subgraphs.count == numSubgraph(k))
    }
  }

}
  2. Run tests
export FRACTAL_HOME=`pwd` && ./gradlew test

Output

- [cube,vfilter]
- [cube,cliques] *** FAILED ***
  7 did not equal 12 (BasicTestSuite.scala:148)
Run completed in 7 seconds, 366 milliseconds.
Total number of tests run: 2
Suites: completed 2, aborted 0
Tests: succeeded 1, failed 1, canceled 0, ignored 0, pending 0
*** 1 TEST FAILED ***

Expected Output

- [cube,vfilter]
- [cube,cliques]
Run completed in 4 seconds, 207 milliseconds.
Total number of tests run: 2
Suites: completed 2, aborted 0
Tests: succeeded 2, failed 0, canceled 0, ignored 0, pending 0
All tests passed.
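
A possible isolation workaround while the root cause is unknown: mix org.scalatest.BeforeAndAfterEach into the suite and rebuild the graphs before every test. This is only a sketch, and it assumes fc.textFile returns a fresh FractalGraph, with no previously installed filters, on each call.

import org.scalatest.BeforeAndAfterEach

// Hypothetical workaround, not a confirmed fix. The suite declaration would
// become:
//   class BasicTestSuite extends FunSuite
//     with BeforeAndAfterAll with BeforeAndAfterEach
override def beforeEach(): Unit = {
  // rebuild the graphs so that vfilter/efilter rules from a previous test
  // cannot leak into this one (assumes textFile re-reads from scratch)
  fgraph = fc.textFile("../data/cube.graph")
  fgraphEdgeLabel = fc.textFile("../data/cube-edge-label.graph")
}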

Gradle Assembly Errors

install_fractal_rmurphy.log
I had some problems installing; specifically, running ./gradlew assemble failed on a Linux machine. I have attached the log file.

I was advised that the installation assumes a Java 8 build, whereas the machine was using Java 11. Exporting JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 allowed the build to complete. Please let me know if you need more details.

Readme and Walkthrough

Hi fractal,

Thank you for your contribution to the graph mining community. However, it may be safe to say that not everybody in the community is familiar with Java, Spark, etc. I am one of them. I have read your readme, but I am not sure how to use your application.

In step 3, you have code to build an Object. Where should I put that code? Is it a standalone file that I compile? What do I use to compile it?

Regarding step 4, I simply don't know what that means.

Would it be possible to add to the documentation/readme/tutorial to help a broader audience?

Thanks so much. Again, I will be indebted to your contribution once I know how to use it :)

Best,
-Ryan Murphy

Very low CPU utilization when running with steps >= 3

Dear authors,

Thanks for the great work!

I found that when I try to mine 4-cliques or other patterns with more than 2 exploration steps, the CPUs on each machine are not fully occupied; in fact they are almost 100% idle.

My Spark setup is the standalone version. I am not sure what's wrong. Should I simply wait for the low-CPU-utilization run to finish, or is something wrong with my Spark 2.2.0 setup?

We are using 4 physical machines, each with 20 cores / 40 threads.

Could you give me some suggestions? Thanks a lot!
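
For reference, the parallelism knob that appears elsewhere in this document's snippets is num_partitions. One speculative thing to try, assuming too few partitions leave workers idle (a guess, not a confirmed diagnosis):

// Speculative tuning sketch; fgraph is a FractalGraph loaded as in the other
// snippets here. 4 machines x 40 hardware threads = 160 partitions.
val frac = fgraph.cliques.
  set("num_partitions", 160).
  explore(3) // the exploration depth the report above associates with 4-cliques
println(frac.subgraphs.count)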

Building Example App

Dear Fractal,

I am trying to run your example of putting a custom app inside fractal_apps. I am getting an error, and I think it may have to do with pointing gradlew to the correct Spark master. I am attaching a log and a screenshot of my project structure and code. Are you able to tell what's going wrong? Thanks so much.

Building_App_First.log
(screenshot of the project structure and code attached)

ClassNotFoundException, Jars do not exist, batch example

Dear Fractal,

I tried to run your batch example using cube.graph and ran into a problem.

I guess the jar does not exist in the fractal build? Thanks.

steps=2 inputgraph=$FRACTAL_HOME/data/cube.graph alg=cliques ./bin/fractal.sh
I got this output:

FRACTAL_HOME is set to /home/murph213/Downloads/Installers/fractal

SPARK_HOME is set to /usr/local/spark
alg is set to 'cliques'
inputgraph is set to '/home/murph213/Downloads/Installers/fractal/data/cube.graph'
steps is set to '2'
spark-submit --master local[1] --deploy-mode client \ --driver-memory 2g \ --num-executors 1 \ --executor-cores 1 \ --executor-memory 2g \ --class br.ufmg.cs.systems.fractal.FractalSparkRunner \ --jars /home/murph213/Downloads/Installers/fractal/build/libs/fractal-SPARK-2.2.0-all.jar \ /home/murph213/Downloads/Installers/fractal/build/libs/fractal-SPARK-2.2.0-all.jar \ al /home/murph213/Downloads/Installers/fractal/data/cube.graph cliques scratch 1 2 info
19/09/13 12:53:06 WARN Utils: Your hostname, RM-Satellite resolves to a loopback address: 127.0.1.1; using 192.168.2.3 instead (on interface eth0)
19/09/13 12:53:06 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
19/09/13 12:53:06 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
19/09/13 12:53:07 WARN DependencyUtils: Local jar /home/murph213/Downloads/Installers/fractal/build/libs/fractal-SPARK-2.2.0-all.jar does not exist, skipping.
19/09/13 12:53:07 WARN DependencyUtils: Local jar /home/murph213/Downloads/Installers/fractal/build/libs/fractal-SPARK-2.2.0-all.jar does not exist, skipping.
19/09/13 12:53:07 WARN SparkSubmit$$anon$2: Failed to load br.ufmg.cs.systems.fractal.FractalSparkRunner.
java.lang.ClassNotFoundException: br.ufmg.cs.systems.fractal.FractalSparkRunner
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.spark.util.Utils$.classForName(Utils.scala:238)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:806)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:161)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:184)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:920)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:929)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
19/09/13 12:53:07 INFO ShutdownHookManager: Shutdown hook called
19/09/13 12:53:07 INFO ShutdownHookManager: Deleting directory /tmp/spark-b89a1b4f-b7a3-4ade-a289-7b78d2d92c22

Expand not generating K extensions

Environment

Steps to reproduce

  1. Change MyMotifsApp.scala
// MyMotifsApp.scala
// imports reconstructed for completeness; the Logging trait's location is assumed
import org.apache.spark.{SparkConf, SparkContext}
import br.ufmg.cs.systems.fractal.FractalContext
import br.ufmg.cs.systems.fractal.util.Logging

object MyMotifsApp extends Logging {
  def main(args: Array[String]): Unit = {
    // environment setup
    val conf = new SparkConf().setAppName("MyMotifsApp")
    val sc = new SparkContext(conf)
    val fc = new FractalContext(sc)
    val fgraph = fc.textFile("data/cube.graph")

    // vertex-based fractoid: expand(2) should yield 2-vertex subgraphs
    for (path <- fgraph.vfractoid.expand(2).subgraphs) {
      logInfo(s"vertex_path=${path}")
    }

    // edge-based fractoid: expand(2) should yield 2-edge subgraphs
    for (path <- fgraph.efractoid.expand(2).subgraphs) {
      logInfo(s"edge_path=${path}")
    }

    // environment cleaning
    fc.stop()
    sc.stop()
  }
}
  2. Run app
./gradlew assemble && app_class=br.ufmg.cs.systems.fractal.apps.MyMotifsApp ./bin/fractal-custom-app.sh

Output

# fgraph.vfractoid.expand(2)
20/04/03 15:24:36 INFO MyMotifsApp$: vertex_path=VSubgraph(1)
20/04/03 15:24:36 INFO MyMotifsApp$: vertex_path=VSubgraph(3)
20/04/03 15:24:36 INFO MyMotifsApp$: vertex_path=VSubgraph(2)
20/04/03 15:24:36 INFO MyMotifsApp$: vertex_path=VSubgraph(7)
20/04/03 15:24:36 INFO MyMotifsApp$: vertex_path=VSubgraph(6)
20/04/03 15:24:36 INFO MyMotifsApp$: vertex_path=VSubgraph(4)
20/04/03 15:24:36 INFO MyMotifsApp$: vertex_path=VSubgraph(5)
20/04/03 15:24:36 INFO MyMotifsApp$: vertex_path=VSubgraph(0)

# fgraph.efractoid.expand(2)
20/04/03 15:24:38 INFO MyMotifsApp$: edge_path=ESubgraph((0,4))
20/04/03 15:24:38 INFO MyMotifsApp$: edge_path=ESubgraph((2,7))
20/04/03 15:24:38 INFO MyMotifsApp$: edge_path=ESubgraph((1,5))
20/04/03 15:24:38 INFO MyMotifsApp$: edge_path=ESubgraph((4,5))
20/04/03 15:24:38 INFO MyMotifsApp$: edge_path=ESubgraph((4,6))
20/04/03 15:24:38 INFO MyMotifsApp$: edge_path=ESubgraph((6,7))
20/04/03 15:24:38 INFO MyMotifsApp$: edge_path=ESubgraph((5,7))
20/04/03 15:24:38 INFO MyMotifsApp$: edge_path=ESubgraph((0,3))
20/04/03 15:24:38 INFO MyMotifsApp$: edge_path=ESubgraph((1,2))
20/04/03 15:24:38 INFO MyMotifsApp$: edge_path=ESubgraph((3,6))
20/04/03 15:24:38 INFO MyMotifsApp$: edge_path=ESubgraph((2,3))
20/04/03 15:24:38 INFO MyMotifsApp$: edge_path=ESubgraph((0,1))

Expected Output

# fgraph.vfractoid.expand(2)
20/04/03 15:24:36 INFO MyMotifsApp$: vertex_path=VSubgraph(1,2)
20/04/03 15:24:36 INFO MyMotifsApp$: vertex_path=VSubgraph(3,7)
...

# fgraph.efractoid.expand(2)
20/04/03 15:24:38 INFO MyMotifsApp$: edge_path=ESubgraph((0,4), (0,3))
20/04/03 15:24:38 INFO MyMotifsApp$: edge_path=ESubgraph((2,7), (2,1))
...

Another possible input format for several graphs

Hi Fractal,

I have a question about the input graph. I wanted to know whether it is possible to supply several graphs in one input, separated by a stop mark, so that the vertices of each graph are numbered from 0. From what I can see, the numbering of the first vertex of the second graph continues by incrementing the last vertex id of the first graph. Is there a way to provide input like this?
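
As a stopgap, the renumbering can be done as a preprocessing step before handing Fractal a single combined file. Below is a minimal sketch, assuming the line format from the edge-label example above ("vertexId vertexLabel [neighborId edgeLabel]*", one vertex per line); this is a user-side workaround, not a Fractal feature.

import scala.io.Source

// Merge several graph files into one input by shifting every vertex id in
// later graphs past the ids already used, so each source file can keep its
// own 0-based numbering.
def mergeGraphs(paths: Seq[String]): Seq[String] = {
  var offset = 0
  paths.flatMap { path =>
    val lines = Source.fromFile(path).getLines().toVector
    val shifted = lines.map { line =>
      val tok = line.trim.split("\\s+").map(_.toInt)
      val head = Seq(tok(0) + offset, tok(1)) // shifted vertex id, same label
      // neighbor ids are shifted too; edge labels are kept as-is
      val rest = tok.drop(2).grouped(2).toSeq.flatMap(p => Seq(p(0) + offset, p(1)))
      (head ++ rest).mkString(" ")
    }
    offset += lines.size
    shifted
  }
}
// e.g. mergeGraphs(Seq("g1.graph", "g2.graph")) yields the lines of one combined file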

Problem running sample batch on a remote machine

Dear Fractal,

I am also having problems trying your code remotely. In particular, I ran

steps=2 inputgraph=$FRACTAL_HOME/data/citeseer-single-label.graph alg=cliques ./bin/fractal.sh
on a remote machine with

Spark version 2.4.3
Using Scala version 2.11.12, OpenJDK 64-Bit Server VM, 1.8.0_222

I am attaching the log

remotelog.log

Thank you!

Single Machine Multicore

Hi Vinicius,

We corresponded briefly via email, but Github seems more convenient.

I am having difficulty running fractal in a multicore single machine environment. Specifying worker_cores > 1 leads to a deadlock.

From examining the logs, it seems that Fractal is not starting multiple actors. One slave is created, but then I'm guessing the Master is waiting to find another? This is the tail end of the logs when the deadlock happens. My CPU usage drops to 0 and nothing happens afterwards. Any help would be appreciated!

2019-09-12 15:56:55,337 INFO ActorMessageSystem$: Started akka-sys: akka://fractal-msgsys - executor - waiting for messages
2019-09-12 15:56:55,338 INFO SlaveActor: Actor Actor[akka://fractal-msgsys/user/slave-actor-11-0-0-0#740124414] started
2019-09-12 15:56:55,340 INFO SlaveActor: Actor[akka://fractal-msgsys/user/slave-actor-11-0-0-0#740124414] sending identification to master
2019-09-12 15:56:55,454 INFO SlaveActor: Actor[akka://fractal-msgsys/user/slave-actor-11-0-0-0#740124414] knows master: Actor[akka.tcp://fractal-msgsys@<host>:2552/user/master-actor-11-0#1302576852]
[WARN] [SECURITY][09/12/2019 15:56:55.455] [fractal-msgsys-akka.actor.default-dispatcher-3] [akka.serialization.Serialization(akka://fractal-msgsys)] Using the default Java serializer for class [br.ufmg.cs.systems.fractal.computation.HelloMaster] which is not recommended because of performance implications. Use another serializer or disable this warning using the setting 'akka.actor.warn-about-java-serializer-usage'
[WARN] [SECURITY][09/12/2019 15:56:55.460] [fractal-msgsys-akka.actor.default-dispatcher-3] [akka.serialization.Serialization(akka://fractal-msgsys)] Using the default Java serializer for class [br.ufmg.cs.systems.fractal.computation.Log] which is not recommended because of performance implications. Use another serializer or disable this warning using the setting 'akka.actor.warn-about-java-serializer-usage'
2019-09-12 15:56:55,464 INFO MasterActor: Actor[akka://fractal-msgsys/user/master-actor-11-0#1302576852] knows 1 slaves.
2019-09-12 15:56:55,464 INFO MasterActor: StatsReport{step=0,partitionId=0,canonical_subgraphs_1:0,neighborhood_lookups_0:0,valid_subgraphs_1:0,subgraphs_output:0,canonical_subgraphs_4:0,valid_subgraphs_0:0,valid_subgraphs_3:0,canonical_subgraphs_3:0,neighborhood_lookups_2:0,neighborhood_lookups_5:0,canonical_subgraphs_0:0,valid_subgraphs_5:0,valid_subgraphs_2:0,neighborhood_lookups_1:0,canonical_subgraphs_2:0,neighborhood_lookups_4:0,canonical_subgraphs_5:0,neighborhood_lookups_3:0,valid_subgraphs_4:0,maxMemory=1.77783203125,totalMemory=0.43798828125,freeMemory=0.3339014947414398,usedMemory=0.10408678650856018}

Problem running examples in Spark mode

Hello, we are very interested in "Fractal" and are trying to run the examples in the distributed setting, but we encountered a problem when running an example; it hangs with:

            java.lang.UnsupportedOperationException: Accumulator must be registered before send to executor

Would you mind helping us figure out the reason? Thanks a lot! The whole log is here:
test.log

Can I get a graph pattern and all instances of the pattern at the same time?

Hello! I am interested in getting frequent graph patterns and all matches (embeddings/instances) of each pattern at the same time. It seems that the fsm algorithm can produce the graph patterns but not the complete set of matches for each pattern, while gquery can produce all matches of a single given pattern. Can I get all frequent graph patterns and the matches of each pattern at the same time?

Build issue

Hello,

When I build fractal, I get the error below. Would you please help me figure out what went wrong?

"
Starting a Gradle Daemon (subsequent builds will be faster)

Task :fractal-core:compileScala FAILED
error: scala.reflect.internal.MissingRequirementError: object java.lang.Object in compiler mirror not found.
at scala.reflect.internal.MissingRequirementError$.signal(MissingRequirementError.scala:17)
at scala.reflect.internal.MissingRequirementError$.notFound(MissingRequirementError.scala:18)
at scala.reflect.internal.Mirrors$RootsBase.getModuleOrClass(Mirrors.scala:53)
at scala.reflect.internal.Mirrors$RootsBase.getModuleOrClass(Mirrors.scala:45)
at scala.reflect.internal.Mirrors$RootsBase.getModuleOrClass(Mirrors.scala:45)
at scala.reflect.internal.Mirrors$RootsBase.getModuleOrClass(Mirrors.scala:66)
at scala.reflect.internal.Mirrors$RootsBase.getClassByName(Mirrors.scala:102)
at scala.reflect.internal.Mirrors$RootsBase.getRequiredClass(Mirrors.scala:105)
at scala.reflect.internal.Definitions$DefinitionsClass.ObjectClass$lzycompute(Definitions.scala:257)
at scala.reflect.internal.Definitions$DefinitionsClass.ObjectClass(Definitions.scala:257)
at scala.reflect.internal.Definitions$DefinitionsClass.init(Definitions.scala:1390)
at scala.tools.nsc.Global$Run.<init>(Global.scala:1242)
at scala.tools.nsc.Driver.doCompile(Driver.scala:31)
at scala.tools.nsc.MainClass.doCompile(Main.scala:23)
at scala.tools.nsc.Driver.process(Driver.scala:51)
at scala.tools.nsc.Main.process(Main.scala)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at sbt.compiler.RawCompiler.apply(RawCompiler.scala:33)
at sbt.compiler.AnalyzingCompiler$$anonfun$compileSources$1$$anonfun$apply$2.apply(AnalyzingCompiler.scala:159)
at sbt.compiler.AnalyzingCompiler$$anonfun$compileSources$1$$anonfun$apply$2.apply(AnalyzingCompiler.scala:155)
at sbt.IO$.withTemporaryDirectory(IO.scala:358)
at sbt.compiler.AnalyzingCompiler$$anonfun$compileSources$1.apply(AnalyzingCompiler.scala:155)
at sbt.compiler.AnalyzingCompiler$$anonfun$compileSources$1.apply(AnalyzingCompiler.scala:152)
at sbt.IO$.withTemporaryDirectory(IO.scala:358)
at sbt.compiler.AnalyzingCompiler$.compileSources(AnalyzingCompiler.scala:152)
at sbt.compiler.IC$.compileInterfaceJar(IncrementalCompiler.scala:58)
at sbt.compiler.IC.compileInterfaceJar(IncrementalCompiler.scala)
at org.gradle.api.internal.tasks.scala.ZincScalaCompilerFactory.getCompilerInterface(ZincScalaCompilerFactory.java:119)
at org.gradle.api.internal.tasks.scala.ZincScalaCompilerFactory.access$200(ZincScalaCompilerFactory.java:47)
at org.gradle.api.internal.tasks.scala.ZincScalaCompilerFactory$2.apply(ZincScalaCompilerFactory.java:90)
at org.gradle.api.internal.tasks.scala.ZincScalaCompilerFactory$2.apply(ZincScalaCompilerFactory.java:86)
at com.typesafe.zinc.Cache.get(Cache.scala:41)
at org.gradle.api.internal.tasks.scala.ZincScalaCompilerFactory.createCompiler(ZincScalaCompilerFactory.java:86)
at org.gradle.api.internal.tasks.scala.ZincScalaCompilerFactory.access$100(ZincScalaCompilerFactory.java:47)
at org.gradle.api.internal.tasks.scala.ZincScalaCompilerFactory$1.create(ZincScalaCompilerFactory.java:75)
at org.gradle.api.internal.tasks.scala.ZincScalaCompilerFactory$1.create(ZincScalaCompilerFactory.java:71)
at org.gradle.internal.SystemProperties.withSystemProperty(SystemProperties.java:126)
at org.gradle.api.internal.tasks.scala.ZincScalaCompilerFactory.createParallelSafeCompiler(ZincScalaCompilerFactory.java:71)
at org.gradle.api.internal.tasks.scala.ZincScalaCompiler$Compiler.execute(ZincScalaCompiler.java:69)
at org.gradle.api.internal.tasks.scala.ZincScalaCompiler.execute(ZincScalaCompiler.java:57)
at org.gradle.api.internal.tasks.scala.ZincScalaCompiler.execute(ZincScalaCompiler.java:40)
at org.gradle.api.internal.tasks.compile.daemon.AbstractDaemonCompiler$CompilerWorkAction.execute(AbstractDaemonCompiler.java:113)
at org.gradle.workers.internal.DefaultWorkerServer.execute(DefaultWorkerServer.java:47)
at org.gradle.workers.internal.AbstractClassLoaderWorker$1.create(AbstractClassLoaderWorker.java:46)
at org.gradle.workers.internal.AbstractClassLoaderWorker$1.create(AbstractClassLoaderWorker.java:36)
at org.gradle.internal.classloader.ClassLoaderUtils.executeInClassloader(ClassLoaderUtils.java:98)
at org.gradle.workers.internal.AbstractClassLoaderWorker.executeInClassLoader(AbstractClassLoaderWorker.java:36)
at org.gradle.workers.internal.IsolatedClassloaderWorker.execute(IsolatedClassloaderWorker.java:54)
at org.gradle.workers.internal.WorkerDaemonServer.execute(WorkerDaemonServer.java:56)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.gradle.process.internal.worker.request.WorkerAction.run(WorkerAction.java:118)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:182)
at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:164)
at org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:412)
at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64)
at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:48)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:56)
at java.base/java.lang.Thread.run(Thread.java:829)

FAILURE: Build failed with an exception.

  • What went wrong:
    Execution failed for task ':fractal-core:compileScala'.

org.gradle.internal.serialize.PlaceholderException (no error message)

  • Try:
    Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.

  • Get more help at https://help.gradle.org

BUILD FAILED in 7s
2 actionable tasks: 1 executed, 1 up-to-date
"

Thank you
