thatdot / quine
Quine • a streaming graph • https://quine.io • Discord: https://discord.gg/GMhd8TE4MR
Home Page: https://quine.io
License: Other
The labels property, an implementation detail of the Cypher interpreter, is visible in Cypher results (including query UI)
Assuming a Quine instance serving on localhost:8080:
http://localhost:8080/#CALL%20recentNodes(1)%20YIELD%20node%20RETURN%20node
This query should return an in-memory node. Note that the returned node represents its labels both as labels (correct) and as a property, __LABEL by default (incorrect -- this implementation detail should not be observable from Cypher).
By comparison, querying for a node by ID masks the internal label property, exposing the stored labels only as a Cypher label:
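The contrast can be reproduced with two queries in the query UI (the literal ID argument below is purely illustrative):

```cypher
// 1. Returns a recent in-memory node; its labels show up both as
//    labels and, incorrectly, as a __LABEL property:
CALL recentNodes(1) YIELD node RETURN node

// 2. Looking a node up by ID masks the internal property; labels are
//    exposed only via the label mechanism:
MATCH (n) WHERE id(n) = idFrom('example') RETURN n, labels(n)
```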
Currently, only the payload of a Kafka record is parsed, from what I can tell. It would be nice to be able to extract information from headers and keys as well.
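For illustration only: if the ingest context object (`$that`) were extended to carry the key and headers alongside the parsed value, a Kafka ingest definition might look like the sketch below. The `key`/`headers`/`value` fields are hypothetical; only the overall KafkaIngest shape mirrors the existing API.

```json
{
  "type": "KafkaIngest",
  "topics": ["events"],
  "bootstrapServers": "localhost:9092",
  "format": {
    "type": "CypherJson",
    "query": "MATCH (n) WHERE id(n) = idFrom($that.key) SET n += $that.value, n.traceId = $that.headers.traceId"
  }
}
```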
Is your feature request related to a problem? Please describe.
In addition to offering Quine on Docker Hub (https://hub.docker.com/r/thatdot/quine),
it would be good to add Quine to https://artifacthub.io to make it even more visible and, through that, more widely used.
Artifact Hub is a web-based application that enables finding, installing, and publishing Kubernetes packages
It's a sandbox project of the CNCF https://www.cncf.io/sandbox-projects/
Describe the solution you'd like
For how to do this, see https://artifacthub.io/docs/topics/repositories/
Describe alternatives you've considered
None.
Additional context
This would not only help with visibility,
but also make it easy for anybody interested in Quine to get a quick view of its security posture
(the platform generates automated security reports using Trivy; see https://artifacthub.io/docs/topics/security_report/).
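As far as I understand the Artifact Hub docs, claiming ownership of a repository is done with an artifacthub-repo.yml metadata file; a minimal sketch (the ID and email below are placeholders) looks like:

```yaml
# artifacthub-repo.yml -- placed at the root of the published repository
repositoryID: 00000000-0000-0000-0000-000000000000  # assigned by Artifact Hub
owners:
  - name: thatDot
    email: maintainer@example.com  # placeholder
```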
Found a bug? Have a suggestion? Or a question? Feel free to leave it here as a Github issue or find us directly on the Quine community Discord server: https://discord.gg/GMhd8TE4MR
Pravega is an alternative streaming storage engine from the CNCF.
Besides the well-known and already connected Kafka,
Apache Pulsar (https://github.com/apache/pulsar#readme)
really looks like another rewarding candidate for a connector (reading from and writing to topics).
It has interesting features out of the box, such as easy scaling, persistence, geo-replication, and lambda-like functions, and it already connects to several other systems as source/sink; see https://pulsar.apache.org/docs/2.10.x/io-connectors/
It also seems to be growing quite fast:
https://pulsar.apache.org/blog/2022/05/11/apache-pulsar-community-welcomes-500th-contributor/
As of July 2022 it reportedly also had the highest number of pull requests among messaging/streaming projects, even more than Kafka:
https://www.youtube.com/watch?v=ckfw68-cn2o
It would be really exciting to see what one could build, and which new user groups could be attracted, by making it easy to combine Quine and Pulsar :-D
Describe the bug
Hi, my Cassandra DB (Helm bitnami/cassandra) requires authentication with a username and password. How can I configure this in quine.conf? Thanks.
Authentication error on node cassandra/<unresolved>:9042: Node cassandra/<unresolved>:9042 requires authentication (org.apache.cassandra.auth.PasswordAuthenticator), but no authenticator configured
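I don't know of a dedicated Quine setting for this, but since Quine uses the DataStax Java driver, driver-level authentication can be sketched like this in quine.conf. The `quine.store` block follows the documented config shape; whether Quine actually picks up the `datastax-java-driver` section from this file needs confirmation from the maintainers:

```hocon
quine.store {
  type = cassandra
  endpoints = ["cassandra:9042"]
}

# Standard DataStax Java driver 4.x reference config; this only applies
# if Quine builds its session through the driver's default config loader.
datastax-java-driver.advanced.auth-provider {
  class = PlainTextAuthProvider
  username = "cassandra"
  password = "changeme"
}
```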
I think this project is very interesting, but I never managed to read all the code :)
Will Quine migrate to Pekko or stay with Akka 2.6.21?
The default port of 8080 is already in use on my machine, so Quine was unable to bind to it. Running Quine with -h displays a -p, --port command-line option to override the default port. This was validated by running a recipe as shown below:
% java -jar quine-1.1.0.jar -r apache_log --recipe-value in_file=/var/log/apache2/access.log.1 -p 8081
16:10:48,465 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback-test.xml]
16:10:48,465 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback.xml]
16:10:48,647 |-INFO in ch.qos.logback.classic.AsyncAppender[asyncTimestampedNoDrop] - Attaching appender named [consoleTimestamped] to AsyncAppender.
16:10:48,648 |-INFO in ch.qos.logback.classic.AsyncAppender[asyncTimestampedNoDrop] - Setting discardingThreshold to 0
16:10:48,650 |-INFO in ch.qos.logback.classic.AsyncAppender[asyncConsole] - Attaching appender named [console] to AsyncAppender.
16:10:48,650 |-INFO in ch.qos.logback.classic.AsyncAppender[asyncConsole] - Setting discardingThreshold to 51
16:10:48,651 |-INFO in ch.qos.logback.classic.AsyncAppender[asyncTimestamped] - Attaching appender named [consoleTimestamped] to AsyncAppender.
16:10:48,651 |-INFO in ch.qos.logback.classic.AsyncAppender[asyncTimestamped] - Setting discardingThreshold to 0
16:10:48,663 |-WARN in org.gnieh.logback.config.ConfigConfigurator@6a4d7f76 - No configuration files to watch, so no file scanning is possible
Graph is ready!
Application state loaded.
Running Recipe Apache Log Analytics
Using 1 sample queries
Running Standing Query STANDING-1
2022-03-15 16:10:51,827 WARN [NotFromActor] [graph-service-akka.quine.graph-shard-dispatcher-16] com.thatdot.quine.app.importers.package$ - Could not verify that the provided ingest query is idempotent. If timeouts occur, query
execution may be retried and duplicate data may be created.
Running Ingest Stream INGEST-1
Status query URL is
http://0.0.0.0:8081#MATCH%20%28l%29%2D%5Brel%3Averb%5D%2D%3E%28v%29%20WHERE%20l%2Etype%20%3D%20%27log%27%20AND%20v%2Etype%20%3D%20%27verb%27%20AND%20v%2Everb%20%3D%20%27GET%27%20RETURN%20count%28rel%29%20AS%20get%5Fcount
Quine app web server available at http://0.0.0.0:8081
INGEST-1 status is completed and ingested 12
---[ Status Q
get_count | 2G-1 count 12
However, when Quine is started with the -p option set to 8081 but without a recipe, it ignores the -p argument and defaults to 8080. On my machine this failed to bind, as 8080 is in use:
% java -jar quine-1.1.0.jar -p 8081
16:09:29,796 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback-test.xml]
16:09:29,797 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback.xml]
16:09:30,012 |-INFO in ch.qos.logback.classic.AsyncAppender[asyncTimestampedNoDrop] - Attaching appender named [consoleTimestamped] to AsyncAppender.
16:09:30,012 |-INFO in ch.qos.logback.classic.AsyncAppender[asyncTimestampedNoDrop] - Setting discardingThreshold to 0
16:09:30,015 |-INFO in ch.qos.logback.classic.AsyncAppender[asyncConsole] - Attaching appender named [console] to AsyncAppender.
16:09:30,015 |-INFO in ch.qos.logback.classic.AsyncAppender[asyncConsole] - Setting discardingThreshold to 51
16:09:30,015 |-INFO in ch.qos.logback.classic.AsyncAppender[asyncTimestamped] - Attaching appender named [consoleTimestamped] to AsyncAppender.
16:09:30,015 |-INFO in ch.qos.logback.classic.AsyncAppender[asyncTimestamped] - Setting discardingThreshold to 0
16:09:30,028 |-WARN in org.gnieh.logback.config.ConfigConfigurator@72bc6553 - No configuration files to watch, so no file scanning is possible
Graph is ready!
Application state loaded.
2022-03-15 16:09:34,099 ERROR [akka://graph-service/system/IO-TCP/selectors/$a/0] [graph-service-akka.actor.default-dispatcher-4] akka.io.TcpListener - Bind failed for TCP channel on endpoint [/0.0.0.0:8080]
java.net.BindException: [/0.0.0.0:8080] Address already in use
at java.base/sun.nio.ch.Net.bind0(Native Method)
at java.base/sun.nio.ch.Net.bind(Net.java:455)
at java.base/sun.nio.ch.Net.bind(Net.java:447)
at java.base/sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:227)
at java.base/sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:80)
at akka.io.TcpListener.liftedTree1$1(TcpListener.scala:60)
at akka.io.TcpListener.<init>(TcpListener.scala:57)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
at akka.util.Reflect$.instantiate(Reflect.scala:73)
at akka.actor.ArgsReflectConstructor.produce(IndirectActorProducer.scala:101)
at akka.actor.Props.newActor(Props.scala:226)
at akka.actor.ActorCell.newActor(ActorCell.scala:616)
at akka.actor.ActorCell.create(ActorCell.scala:643)
at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:514)
at akka.actor.ActorCell.systemInvoke(ActorCell.scala:536)
at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:295)
at akka.dispatch.Mailbox.run(Mailbox.scala:230)
at akka.dispatch.Mailbox.exec(Mailbox.scala:243)
at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290)
at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020)
at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656)
at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594)
at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:183)
Maybe the README just wasn't updated after 91ad7a7?
When trying to test the 1.2 JAR, I got failures in previously working standing queries that UNWIND over an expected map. When retested against the 1.1 open-source JAR, the same queries were accepted and worked.
ERROR
MATCH (n) WHERE id(n) = $sqMatch.data.id WITH n.tags AS tags, n UNWIND keys(tags) AS key MATCH (m) WHERE id(m) = idFrom(n.cid, 'tag', key, n.tags[key]) CREATE (n)-[:tag]->(m) SET m.key = key, m.value = n.tags[key], m:tag
NOT Registered OK for tagquery on http://10.10.10.10:8080: ["Type mismatch: expected Map, Node or Relationship but was Boolean, Float, Integer, Number, Point, String, Duration, Date, Time, LocalTime, LocalDateTime, DateTime, List<Boolean>, List<Float>, List<Integer>, List<Number>, List<Point>, List<String>, List<Duration>, List<Date>, List<Time>, List<LocalTime>, List<LocalDateTime> or List<DateTime>"]
Queries
standing_match_tags = """MATCH (n) WHERE exists(n.tags) RETURN id(n)"""
standing_action_tags = (
    """MATCH (n) WHERE id(n) = $sqMatch.data.id """
    """WITH n.tags AS tags, n """
    """UNWIND keys(tags) AS key """
    """MATCH (m) WHERE id(m) = idFrom(n.foobar, 'tag', key, n.tags[key]) """
    """CREATE (n)-[:tag]->(m) """
    """SET m.key = key, m.value = n.tags[key], m:tag"""
)
We currently support files, server-sent events, websockets, Kafka, Kinesis, stdin, and named pipes as ingest sources, but a plain HTTP source sounds desirable (e.g. to consume Twitter's streaming API). It should also have the option to pass an authorization header (e.g. a bearer token for Twitter's API).
I imagine this would be analogous to our file ingest source, just reading from HTTP instead of a file.
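The core of such a source is the same line framing the file ingest does, except that record boundaries can fall anywhere relative to HTTP chunk boundaries. A minimal, standalone sketch of that framing (names are mine, not Quine's; newline-delimited records assumed):

```python
from typing import Iterable, Iterator

def iter_lines(chunks: Iterable[bytes]) -> Iterator[bytes]:
    """Reassemble complete newline-delimited records from a stream of
    arbitrarily-split HTTP chunks, the way a file ingest iterates lines."""
    buf = b""
    for chunk in chunks:
        buf += chunk
        # A single chunk may contain zero, one, or many complete records.
        while (i := buf.find(b"\n")) != -1:
            yield buf[:i]
            buf = buf[i + 1:]
    if buf:  # final record without a trailing newline
        yield buf
```

The authorization header would simply be set on the request that produces the chunks, e.g. `urllib.request.Request(url, headers={"Authorization": "Bearer <token>"})`.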