lagom / lagom-samples
Home Page: https://developer.lightbend.com/start/?group=lagom
License: Creative Commons Zero v1.0 Universal
REQUIRED_CONTACT_POINT_NR is set to "3" in the deployment YAML file. We should specify that it must be less than or equal to the replica count; otherwise the relationship between the two values is unclear.
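As a sketch of that relationship (field layout assumed; the real file is in the sample's deployment descriptors), the contact-point requirement should never exceed the replica count:

```yaml
# deployment.yaml (sketch): REQUIRED_CONTACT_POINT_NR must be <= spec.replicas,
# or Akka Cluster Bootstrap can never observe enough contact points to form a cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: shopping-cart
spec:
  replicas: 3                # must be >= REQUIRED_CONTACT_POINT_NR below
  template:
    spec:
      containers:
        - name: shopping-cart
          env:
            - name: REQUIRED_CONTACT_POINT_NR
              value: "3"     # <= replicas above
```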
Projects use "recipes" in artifact names and packages. We should change this to "samples".
Need help with a weird exception that occurred while testing gRPC methods.
Testing lagom-java-grpc-example: ran ssl-lagom, then the runAll command.
HTTP GET requests via curl succeeded.
Then I tried to invoke a gRPC method using the grpcc console (from the njpatel/grpcc GitHub repo).
Running grpcc as described in the README produced an error: I can connect to the service, but invoking client.sayHello("Alice", printReply) fails with the same exception.
Connecting:
grpcc --proto hello-impl/src/main/protobuf/helloworld.proto --insecure --address 127.0.0.1:11000
Connected:
Connecting to helloworld.GreeterService on 127.0.0.1:11000. Available globals:
client - the client connection to GreeterService
sayHello (HelloRequest, callback) returns HelloReply
printReply - function to easily print a unary call reply (alias: pr)
streamReply - function to easily print stream call replies (alias: sr)
createMetadata - convert JS objects into grpc metadata instances (alias: cm)
printMetadata - function to easily print a unary call's metadata (alias: pm)
Invoking the method throws an exception:
Error: { Error: 14 UNAVAILABLE: TCP Read failed
at Object.exports.createStatusError (C:\Users\abcdef\AppData\Roaming\npm\node_modules\grpcc\node_modules\grpc\src\common.js:91:15)
at Object.onReceiveStatus (C:\Users\abcdef\AppData\Roaming\npm\node_modules\grpcc\node_modules\grpc\src\client_interceptors.js:1204:28)
at InterceptingListener._callNext (C:\Users\abcdef\AppData\Roaming\npm\node_modules\grpcc\node_modules\grpc\src\client_interceptors.js:568:42)
at InterceptingListener.onReceiveStatus (C:\Users\abcdef\AppData\Roaming\npm\node_modules\grpcc\node_modules\grpc\src\client_interceptors.js:618:8)
at callback (C:\Users\abcdef\AppData\Roaming\npm\node_modules\grpcc\node_modules\grpc\src\client_interceptors.js:845:24) code: 14, metadata: {}, details: 'TCP Read failed' }
I also built my project with Maven, mostly copied from the sample (Maven 3.6.0, Java 1.8, x64, Windows 7), and got the same error.
Maybe some additional configuration is needed? Or a different gRPC CLI?
We currently don't run the Lagom samples on AdoptOpenJDK 11. We need to add it so that we can use the samples to validate Java 11 support.
Starting with Kubernetes 1.16, a new startup probe is introduced. The shopping-cart sample should use it to allow relaxed settings during startup and then use stricter failureThreshold values in the livenessProbe.
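A sketch of what that could look like (the probe path, port name, and threshold values here are illustrative, not taken from the sample):

```yaml
# Kubernetes >= 1.16 (sketch): the startupProbe tolerates a slow boot;
# once it succeeds, the stricter livenessProbe takes over.
startupProbe:
  httpGet:
    path: /alive           # assumed Akka Management health endpoint
    port: management
  failureThreshold: 30     # relaxed: allow up to 30 * 10s for startup
  periodSeconds: 10
livenessProbe:
  httpGet:
    path: /alive
    port: management
  failureThreshold: 3      # strict once the app is up
  periodSeconds: 10
```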
Currently, the settings are picked up from application.conf and require manually setting the Akka discovery class name.
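For reference, the kind of entry that currently has to be set by hand could look like this (the method value shown is the usual Akka Discovery setting for Kubernetes, stated here as an assumption about what the sample needs):

```hocon
# application.conf (HOCON sketch, value assumed)
akka.discovery.method = kubernetes-api
```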
#6 introduces withGrpcClient, which could be extracted so it can become part of an akka-grpc-lagom-testkit (tools to write tests for Lagom services that use gRPC).
Right now, the Shopping Cart Kubernetes Service definitions use the LoadBalancer service type, but since we use an OpenShift Route to expose the service outside the cluster, the LoadBalancer service type adds unnecessary complexity. Since this example is likely to be copied by users without understanding the implications, we should use the simpler and more secure ClusterIP type in our examples.
See akka/akka-management#574 for more details.
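A sketch of the suggested Service definition (names and port numbers are illustrative, not the sample's actual values):

```yaml
# Service sketch: ClusterIP keeps the service internal to the cluster;
# the OpenShift Route handles external exposure.
apiVersion: v1
kind: Service
metadata:
  name: shopping-cart
spec:
  type: ClusterIP
  selector:
    app: shopping-cart
  ports:
    - name: http
      port: 80
      targetPort: 9000   # illustrative; use the sample's actual container port
```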
I noticed that the deployment scripts for shopping cart were moved up a level, outside of the Java/Scala specific projects.
Should we do the same for the "schemas" directory?
https://github.com/lagom/lagom-samples/blob/1.5.x/shopping-cart/shopping-cart-java/schemas/shopping-cart.sql
https://github.com/lagom/lagom-samples/blob/1.5.x/shopping-cart/shopping-cart-scala/schemas/shopping-cart.sql
(They're identical)
As pointed out by @bjaglin in lagom/lagom#280 (comment), the .proto file locations need a review. I think we may need to put them in hello-api and hello-stream-impl: the location of the .proto file may depend on whether it's used to create a server or a client... I don't know. Opinions?
For the past few days (weeks?) the deployment jobs run on CRON have been failing with the message:
You have access to the following projects and can switch between them with 'oc project <projectname>':
* console-danny
lagom-scala-openshift-smoketests
lagom-shopping-cart-java-maven-travis-1-5-x
reactive-bbq-danny
Using project "console-danny".
No resources found.
Error from server (Forbidden): projects.project.openshift.io "lagom-shopping-cart-java-sbt-travis-1-5-x" is forbidden: User "play-team" cannot get projects.project.openshift.io in the namespace "lagom-shopping-cart-java-sbt-travis-1-5-x": no RBAC policy matched
What I think is happening is either: (1) the namespace hasn't been completely created on the k8s cluster, causing the User "play-team" cannot get projects.project.openshift.io in the namespace error, or (2) there are new RBAC requirements to complete the operation.
The error is quite consistent across all three jobs and both branches (1.5.x and 1.6.x), but since this operation is unrelated to the programming language and hasn't changed in a while, that consistency was expected.
related to #116
Do the samples currently demonstrate best practices around where .proto files should live?
The examples currently don't share a schema definition between the client and the implementing service. Why is that? Isn't this something we would expect to see in the api package, and then used by both the service making the client call and the service providing the handler?
https://github.com/lagom/akka-grpc-lagom-quickstart-java/issues/7 discusses introducing a withClient for the testkit. This idea could also be ported to scaladsl.
When running mvn lagom:runAll instead of sbt runAll (after following the other necessary instructions in the README), the project starts successfully but the curl commands do not work; running any of them returns the following error:
<!DOCTYPE html>
<html lang="en">
<head>
...
</head>
<body>
<h1>Action Not Found</h1>
<p id="detail">
For request 'GET /shoppingcart/123'
</p>
<h2>
These routes have been tried, in this order:
</h2>
<div>
</div>
</body>
</html>
Are these going to be merged into this repo and the functionality moved to the shopping cart apps, @lagom/core?
Is there any easier way to specify the latest patch build of each release? Otherwise this will be a constant chore.
There certainly is. We should use the same spell we use on other repos.
Originally posted by @ignasi35 in #107 (comment)
thanks @TimMoore
After #79, the Couchbase and gRPC examples will be missing.
Currently, the gRPC example projects don't run in dev mode due to lagom/lagom#1857.
It would be nice if we could catch this in CI. I'm opening this issue to start a discussion with the team about how we might be able to do this, and whether it would be worthwhile. I guess we'd need to add scripted tests, which could inflate the build times a lot. Maybe as a nightly job?
#93 introduces a workaround for lagom/lagom#2192 in the couchbase-scala-example. The correct fix should be part of Lagom 1.6.0-M6, so when bumping to that version we must roll back a30882f.
Cover:
We are currently using adoptopenjdk/openjdk8.
lagom-samples-lagom-java-shopping-cart-example.zip
cd lagom-samples-lagom-java-shopping-cart-example/
mvn package
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1.413 s
[INFO] Finished at: 2019-07-09T08:27:34+09:30
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal pl.project13.maven:git-commit-id-plugin:2.2.6:revision (default) on project shopping-cart-api: .git directory is not found! Please specify a valid [dotGitDirectory] in your pom.xml -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR] mvn <goals> -rf :shopping-cart-api
The problem is that the included git-commit-id-plugin expects to be run inside a git repository, but when the project is downloaded from the project starter, the resulting directory isn't a git repo.
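One possible workaround (a sketch; failOnNoGitDirectory is a real configuration option of this plugin, but whether it's the right fix for the starter-generated projects is a judgment call):

```xml
<!-- pom.xml sketch: let the build proceed when no .git directory exists -->
<plugin>
  <groupId>pl.project13.maven</groupId>
  <artifactId>git-commit-id-plugin</artifactId>
  <version>2.2.6</version>
  <configuration>
    <failOnNoGitDirectory>false</failOnNoGitDirectory>
  </configuration>
</plugin>
```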
This is regarding the shopping-cart-scala app.
I get the following error when I try to publish the Docker images to Docker Hub:
[error] java.lang.RuntimeException: Repository for publishing is not specified.
[error] at scala.sys.package$.error(package.scala:26)
[error] at sbt.Classpaths$.$anonfun$getPublishTo$1(Defaults.scala:2644)
[error] at scala.Option.getOrElse(Option.scala:121)
[error] at sbt.Classpaths$.getPublishTo(Defaults.scala:2644)
[error] at sbt.Classpaths$.$anonfun$ivyBaseSettings$48(Defaults.scala:2089)
[error] at scala.Function1.$anonfun$compose$1(Function1.scala:44)
I pass the Docker username and registry as arguments to sbt like so:
sbt -Ddocker.username=codingkapoor -Ddocker.registry=index.docker.io
Images do get pushed, but I still see the error when I run docker:publish.
Please suggest. TIA.
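One way this is commonly wired up (a build.sbt sketch, assuming sbt-native-packager's Docker plugin is enabled on the project; this is not code from the sample):

```scala
// build.sbt sketch: map the -D system properties onto the Docker publish
// settings so docker:publish knows where to push.
dockerRepository := sys.props.get("docker.registry")
dockerUsername   := sys.props.get("docker.username")
```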
The value of shutdown-after-unsuccessful... is set to 60s, but that value should be aligned with the readiness and liveness settings in the YAML deployment files.
This setting should include extra documentation about its value and how it interacts with other settings in the sample app.
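Assuming the truncated setting refers to Akka's shutdown-after-unsuccessful-join-seed-nodes (my assumption; the issue title elides the full key), the documented entry could look like:

```hocon
# HOCON sketch; the key name is an assumption about what the truncated
# setting refers to. Keep this >= the time Kubernetes allows the pod to
# become ready (readiness initialDelay + periodSeconds * failureThreshold),
# otherwise the JVM may give up before the probes would have.
akka.cluster.shutdown-after-unsuccessful-join-seed-nodes = 60s
```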
Both for Java 8 and Java 11.
To reproduce, run sbt test in the ./shopping-cart/shopping-cart-scala project. The tests pass, but the following exception occurs:
2019-11-01 15:39:12,501 ERROR com.lightbend.lagom.internal.broker.kafka.TopicProducerActor - Unable to locate Kafka service named [kafka_native]. Retrying...
2019-11-01 15:39:12,501 ERROR com.lightbend.lagom.internal.broker.kafka.TopicProducerActor - Unable to locate Kafka service named [kafka_native]. Retrying...
2019-11-01 15:39:12,502 WARN akka.stream.scaladsl.RestartWithBackoffSource - Restarting graph due to failure. stack_trace:
java.lang.IllegalArgumentException: Unable to locate Kafka service named [kafka_native]. Retrying...
at com.lightbend.lagom.internal.broker.kafka.TopicProducerActor.$anonfun$eventualBrokersAndOffset$3(TopicProducerActor.scala:184)
at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:430)
at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
at akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:92)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94)
at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:92)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:47)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:47)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
2019-11-01 15:39:12,502 WARN akka.stream.scaladsl.RestartWithBackoffSource - Restarting graph due to failure. stack_trace:
java.lang.IllegalArgumentException: Unable to locate Kafka service named [kafka_native]. Retrying...
at com.lightbend.lagom.internal.broker.kafka.TopicProducerActor.$anonfun$eventualBrokersAndOffset$3(TopicProducerActor.scala:184)
at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:430)
at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
at akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:92)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94)
at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:92)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:47)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:47)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
It seems that the deploy stage is not being triggered in Travis.
Here is an example of a build from a PR that should have triggered the deployment.
https://travis-ci.com/lagom/lagom-samples/builds/130444259
Note: the PR is NOT sent from a fork and targets the main branches (i.e. 1.5.x).
The shopping cart prod-application.conf references a port by the name management; it would be good to add a comment linking to the YAML file.
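For context, the named port lives in the deployment YAML along these lines (a sketch; 8558 is the conventional Akka Management port and is an assumption here, not the sample's verified value):

```yaml
# deployment.yaml sketch: the port name that prod-application.conf refers to.
ports:
  - name: management
    containerPort: 8558   # assumed Akka Management port
```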
It works with sbt and also when running the test in isolation within IntelliJ. But when running with Maven:
mvn -Dlogback.debug=true test
the following error occurs:
Running com.example.shoppingcart.impl.ShoppingCartReportTest
18:44:33,671 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback-test.xml]
18:44:33,671 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback.groovy]
18:44:33,671 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Found resource [logback.xml] at [file:/Users/marcospereira/Lightbend/lagom/lagom-samples/shopping-cart/shopping-cart-java/shopping-cart/target/test-classes/logback.xml]
18:44:33,722 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [ch.qos.logback.core.ConsoleAppender]
18:44:33,729 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [STDOUT]
18:44:33,734 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [ch.qos.logback.classic.encoder.PatternLayoutEncoder] for [encoder] property
18:44:33,757 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [akka.actor.testkit.typed.internal.CapturingAppender]
18:44:33,824 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [CapturingAppender]
18:44:33,825 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [STDOUT] to Logger[akka.actor.testkit.typed.internal.CapturingAppenderDelegate]
18:44:33,825 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [org.apache.cassandra] to ERROR
18:44:33,825 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [com.datastax.driver] to WARN
18:44:33,825 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [akka] to WARN
18:44:33,826 |-INFO in ch.qos.logback.classic.joran.action.RootLoggerAction - Setting level of ROOT logger to INFO
18:44:33,826 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [STDOUT] to Logger[ROOT]
18:44:33,826 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [CapturingAppender] to Logger[ROOT]
18:44:33,826 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - End of configuration.
18:44:33,827 |-INFO in ch.qos.logback.classic.joran.JoranConfigurator@7fc229ab - Registering current configuration as safe fallback point
18:44:34,296 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [ch.qos.logback.core.ConsoleAppender]
18:44:34,296 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [STDOUT]
18:44:34,297 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [ch.qos.logback.classic.encoder.PatternLayoutEncoder] for [encoder] property
18:44:34,297 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [akka.actor.testkit.typed.internal.CapturingAppender]
18:44:34,297 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [CapturingAppender]
18:44:34,297 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [STDOUT] to Logger[akka.actor.testkit.typed.internal.CapturingAppenderDelegate]
18:44:34,298 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [org.apache.cassandra] to ERROR
18:44:34,298 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [com.datastax.driver] to WARN
18:44:34,298 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [akka] to WARN
18:44:34,298 |-INFO in ch.qos.logback.classic.joran.action.RootLoggerAction - Setting level of ROOT logger to INFO
18:44:34,298 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [STDOUT] to Logger[ROOT]
18:44:34,298 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [CapturingAppender] to Logger[ROOT]
18:44:34,298 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - End of configuration.
18:44:34,298 |-INFO in ch.qos.logback.classic.joran.JoranConfigurator@1921ad94 - Registering current configuration as safe fallback point
2019-10-31 18:44:34,915 INFO play.api.db.HikariCPConnectionPool - Creating Pool for datasource 'default'
2019-10-31 18:44:34,932 INFO com.zaxxer.hikari.HikariDataSource - HikariPool-1 - Starting...
2019-10-31 18:44:34,943 INFO com.zaxxer.hikari.HikariDataSource - HikariPool-1 - Start completed.
2019-10-31 18:44:34,949 INFO play.api.db.HikariCPConnectionPool - datasource [default] bound to JNDI as DefaultDS
2019-10-31 18:44:36,763 INFO com.lightbend.lagom.internal.persistence.cluster.ClusterStartupTaskActor - Executing cluster start task jdbcCreateTables.
2019-10-31 18:44:36,811 INFO com.lightbend.lagom.internal.persistence.cluster.ClusterStartupTaskActor - Executing cluster start task slickOffsetStorePrepare.
2019-10-31 18:44:36,822 INFO com.lightbend.lagom.internal.persistence.cluster.ClusterStartupTaskActor - Cluster start task slickOffsetStorePrepare done.
2019-10-31 18:44:36,827 INFO com.lightbend.lagom.internal.persistence.cluster.ClusterStartupTaskActor - Cluster start task jdbcCreateTables done.
log4j:WARN No appenders could be found for logger (org.jboss.logging).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
2019-10-31 18:44:38,390 INFO com.zaxxer.hikari.HikariDataSource - HikariPool-1 - Shutdown initiated...
2019-10-31 18:44:38,395 INFO com.zaxxer.hikari.HikariDataSource - HikariPool-1 - Shutdown completed.
2019-10-31 18:44:38,418 INFO play.api.db.HikariCPConnectionPool - Shutting down connection pool.
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.975 sec
Running com.example.shoppingcart.impl.ShoppingCartServiceTest
18:44:39,465 |-WARN in Logger[akka.actor.CoordinatedShutdown] - No appenders present in context [default] for logger [akka.actor.CoordinatedShutdown].
18:44:39,534 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [ch.qos.logback.core.ConsoleAppender]
18:44:39,534 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [STDOUT]
18:44:39,534 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [ch.qos.logback.classic.encoder.PatternLayoutEncoder] for [encoder] property
18:44:39,535 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [akka.actor.testkit.typed.internal.CapturingAppender]
18:44:39,535 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [CapturingAppender]
18:44:39,535 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [STDOUT] to Logger[akka.actor.testkit.typed.internal.CapturingAppenderDelegate]
18:44:39,535 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [org.apache.cassandra] to ERROR
18:44:39,535 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [com.datastax.driver] to WARN
18:44:39,535 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [akka] to WARN
18:44:39,535 |-INFO in ch.qos.logback.classic.joran.action.RootLoggerAction - Setting level of ROOT logger to INFO
18:44:39,536 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [STDOUT] to Logger[ROOT]
18:44:39,536 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [CapturingAppender] to Logger[ROOT]
18:44:39,536 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - End of configuration.
18:44:39,536 |-INFO in ch.qos.logback.classic.joran.JoranConfigurator@54234569 - Registering current configuration as safe fallback point
2019-10-31 18:44:39,568 INFO play.api.db.HikariCPConnectionPool - Creating Pool for datasource 'default'
2019-10-31 18:44:39,569 INFO com.zaxxer.hikari.HikariDataSource - HikariPool-2 - Starting...
2019-10-31 18:44:39,569 INFO com.zaxxer.hikari.HikariDataSource - HikariPool-2 - Start completed.
2019-10-31 18:44:39,569 INFO play.api.db.HikariCPConnectionPool - datasource [default] bound to JNDI as DefaultDS
2019-10-31 18:44:39,643 INFO com.lightbend.lagom.internal.persistence.cluster.ClusterStartupTaskActor - Executing cluster start task jdbcCreateTables.
2019-10-31 18:44:39,653 INFO com.lightbend.lagom.internal.persistence.cluster.ClusterStartupTaskActor - Executing cluster start task slickOffsetStorePrepare.
2019-10-31 18:44:39,656 INFO com.lightbend.lagom.internal.persistence.cluster.ClusterStartupTaskActor - Executing cluster start task readSideGlobalPrepare-ShoppingCartReportProcessor.
2019-10-31 18:44:39,659 INFO com.lightbend.lagom.internal.persistence.cluster.ClusterStartupTaskActor - Cluster start task slickOffsetStorePrepare done.
2019-10-31 18:44:39,661 INFO com.lightbend.lagom.internal.persistence.cluster.ClusterStartupTaskActor - Cluster start task jdbcCreateTables done.
2019-10-31 18:44:39,852 INFO com.lightbend.lagom.internal.persistence.cluster.ClusterStartupTaskActor - Cluster start task readSideGlobalPrepare-ShoppingCartReportProcessor done.
2019-10-31 18:44:40,426 INFO com.zaxxer.hikari.HikariDataSource - HikariPool-2 - Shutdown initiated...
2019-10-31 18:44:40,433 INFO com.zaxxer.hikari.HikariDataSource - HikariPool-2 - Shutdown completed.
2019-10-31 18:44:40,445 INFO play.api.db.HikariCPConnectionPool - Shutting down connection pool.
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.996 sec
Running com.example.shoppingcart.impl.ShoppingCartTest
Tests run: 12, Failures: 0, Errors: 12, Skipped: 0, Time elapsed: 0.083 sec <<< FAILURE!
shouldAllowGettingShoppingCartSummary(com.example.shoppingcart.impl.ShoppingCartTest) Time elapsed: 0.002 sec <<< ERROR!
java.lang.IllegalStateException: CapturingAppender not defined for [ROOT] in logback-test.xml
at akka.actor.testkit.typed.internal.CapturingAppender$.get(CapturingAppender.scala:24)
at akka.actor.testkit.typed.javadsl.LogCapturing.<init>(LogCapturing.scala:39)
at com.example.shoppingcart.impl.ShoppingCartTest.<init>(ShoppingCartTest.java:31)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.junit.runners.BlockJUnit4ClassRunner.createTest(BlockJUnit4ClassRunner.java:217)
at org.junit.runners.BlockJUnit4ClassRunner$1.runReflectiveCall(BlockJUnit4ClassRunner.java:266)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.BlockJUnit4ClassRunner.methodBlock(BlockJUnit4ClassRunner.java:263)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)
So, the correct file is picked:
18:44:33,671 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Found resource [logback.xml] at [file:/Users/marcospereira/Lightbend/lagom/lagom-samples/shopping-cart/shopping-cart-java/shopping-cart/target/test-classes/logback.xml]
The appender is there:
18:44:34,297 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [akka.actor.testkit.typed.internal.CapturingAppender]
18:44:34,297 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [CapturingAppender]
And it is attached to the ROOT logger:
18:44:39,536 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [CapturingAppender] to Logger[ROOT]
To get the Akka Cluster running, I had to change this line in prod-conf from coordinated-shutdown.exit-jvm = on to off.
For example:
https://travis-ci.com/lagom/lagom-samples/jobs/251610682#L1512-L1586
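For reference, a sketch of the change described above (the full key path is my assumption; the issue only quotes the trailing part):

```hocon
# prod configuration (HOCON sketch): keep the JVM alive instead of exiting
# when coordinated shutdown runs, as the change above describes.
akka.coordinated-shutdown.exit-jvm = off
```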
This happens for tests using TestServer with defaultSetup().withJdbc():
2019-10-31 20:43:32,013 INFO com.zaxxer.hikari.HikariDataSource - HikariPool-2 - Shutdown completed.
2019-10-31 20:43:32,013 INFO play.api.db.HikariCPConnectionPool - Shutting down connection pool.
2019-10-31 20:43:32,836 WARN slick.basic.BasicBackend.stream - Error scheduling synchronous streaming
java.util.concurrent.RejectedExecutionException: Task slick.basic.BasicBackend$DatabaseDef$$anon$4@258ceb2 rejected from slick.util.AsyncExecutor$$anon$1$$anon$2@7c123a79[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 71]
at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
at slick.util.AsyncExecutor$$anon$1$$anon$4.execute(AsyncExecutor.scala:161)
at slick.basic.BasicBackend$DatabaseDef.scheduleSynchronousStreaming(BasicBackend.scala:302)
at slick.basic.BasicBackend$DatabaseDef.scheduleSynchronousStreaming$(BasicBackend.scala:300)
at slick.jdbc.JdbcBackend$DatabaseDef.scheduleSynchronousStreaming(JdbcBackend.scala:37)
at slick.basic.BasicBackend$DatabaseDef.streamSynchronousDatabaseAction(BasicBackend.scala:295)
at slick.basic.BasicBackend$DatabaseDef.streamSynchronousDatabaseAction$(BasicBackend.scala:293)
at slick.jdbc.JdbcBackend$DatabaseDef.streamSynchronousDatabaseAction(JdbcBackend.scala:37)
at slick.basic.BasicBackend$DatabaseDef.slick$basic$BasicBackend$DatabaseDef$$runInContextInline(BasicBackend.scala:240)
at slick.basic.BasicBackend$DatabaseDef.runInContextSafe(BasicBackend.scala:148)
at slick.basic.BasicBackend$DatabaseDef.runInContext(BasicBackend.scala:142)
at slick.basic.BasicBackend$DatabaseDef.runInContext$(BasicBackend.scala:141)
at slick.jdbc.JdbcBackend$DatabaseDef.runInContext(JdbcBackend.scala:37)
at slick.basic.BasicBackend$DatabaseDef$$anon$1.subscribe(BasicBackend.scala:118)
at akka.stream.impl.fusing.ActorGraphInterpreter$BatchingActorInputBoundary.preStart(ActorGraphInterpreter.scala:134)
at akka.stream.impl.fusing.GraphInterpreter.init(GraphInterpreter.scala:306)
at akka.stream.impl.fusing.GraphInterpreterShell.init(ActorGraphInterpreter.scala:593)
at akka.stream.impl.fusing.ActorGraphInterpreter.tryInit(ActorGraphInterpreter.scala:701)
at akka.stream.impl.fusing.ActorGraphInterpreter.preStart(ActorGraphInterpreter.scala:750)
at akka.actor.Actor.aroundPreStart(Actor.scala:543)
at akka.actor.Actor.aroundPreStart$(Actor.scala:543)
at akka.stream.impl.fusing.ActorGraphInterpreter.aroundPreStart(ActorGraphInterpreter.scala:690)
at akka.actor.ActorCell.create(ActorCell.scala:637)
at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:509)
at akka.actor.ActorCell.systemInvoke(ActorCell.scala:531)
at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:294)
at akka.dispatch.Mailbox.run(Mailbox.scala:229)
at akka.dispatch.Mailbox.exec(Mailbox.scala:242)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
2019-10-31 20:43:32,846 WARN slick.basic.BasicBackend.stream - Error scheduling synchronous streaming
java.util.concurrent.RejectedExecutionException: Task slick.basic.BasicBackend$DatabaseDef$$anon$4@22f716e0 rejected from slick.util.AsyncExecutor$$anon$1$$anon$2@7c123a79[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 71]
at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
at slick.util.AsyncExecutor$$anon$1$$anon$4.execute(AsyncExecutor.scala:161)
at slick.basic.BasicBackend$DatabaseDef.scheduleSynchronousStreaming(BasicBackend.scala:302)
at slick.basic.BasicBackend$DatabaseDef.scheduleSynchronousStreaming$(BasicBackend.scala:300)
at slick.jdbc.JdbcBackend$DatabaseDef.scheduleSynchronousStreaming(JdbcBackend.scala:37)
at slick.basic.BasicBackend$DatabaseDef.streamSynchronousDatabaseAction(BasicBackend.scala:295)
at slick.basic.BasicBackend$DatabaseDef.streamSynchronousDatabaseAction$(BasicBackend.scala:293)
at slick.jdbc.JdbcBackend$DatabaseDef.streamSynchronousDatabaseAction(JdbcBackend.scala:37)
at slick.basic.BasicBackend$DatabaseDef.slick$basic$BasicBackend$DatabaseDef$$runInContextInline(BasicBackend.scala:240)
at slick.basic.BasicBackend$DatabaseDef.runInContextSafe(BasicBackend.scala:148)
at slick.basic.BasicBackend$DatabaseDef.runInContext(BasicBackend.scala:142)
at slick.basic.BasicBackend$DatabaseDef.runInContext$(BasicBackend.scala:141)
at slick.jdbc.JdbcBackend$DatabaseDef.runInContext(JdbcBackend.scala:37)
at slick.basic.BasicBackend$DatabaseDef$$anon$1.subscribe(BasicBackend.scala:118)
at akka.stream.impl.fusing.ActorGraphInterpreter$BatchingActorInputBoundary.preStart(ActorGraphInterpreter.scala:134)
at akka.stream.impl.fusing.GraphInterpreter.init(GraphInterpreter.scala:306)
at akka.stream.impl.fusing.GraphInterpreterShell.init(ActorGraphInterpreter.scala:593)
at akka.stream.impl.fusing.ActorGraphInterpreter.tryInit(ActorGraphInterpreter.scala:701)
at akka.stream.impl.fusing.ActorGraphInterpreter.preStart(ActorGraphInterpreter.scala:750)
at akka.actor.Actor.aroundPreStart(Actor.scala:543)
at akka.actor.Actor.aroundPreStart$(Actor.scala:543)
at akka.stream.impl.fusing.ActorGraphInterpreter.aroundPreStart(ActorGraphInterpreter.scala:690)
at akka.actor.ActorCell.create(ActorCell.scala:637)
at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:509)
at akka.actor.ActorCell.systemInvoke(ActorCell.scala:531)
at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:294)
at akka.dispatch.Mailbox.run(Mailbox.scala:229)
at akka.dispatch.Mailbox.exec(Mailbox.scala:242)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
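The `RejectedExecutionException` in the log above is thrown by the JDK's default `AbortPolicy` when a task is submitted to an executor pool that has already terminated (note the `[Terminated, pool size = 0, ...]` in the message): Slick's `AsyncExecutor` was shut down before the stream tried to schedule work on it. A minimal JDK-only sketch of that failure mode (not Slick itself; the class and method names here are illustrative only):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RejectedDemo {
    /** Submits a task to an already-shut-down pool; returns true if it was rejected. */
    static boolean submitAfterShutdown() {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>());
        pool.shutdown(); // analogous to Slick's AsyncExecutor being in state [Terminated]
        try {
            pool.execute(() -> { });
            return false;
        } catch (RejectedExecutionException e) {
            // The default AbortPolicy rejects any task submitted after shutdown.
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println("rejected: " + submitAfterShutdown());
    }
}
```

This suggests the exception is a symptom of shutdown ordering (the database/executor being closed while an Akka stream is still initializing its subscription), not the root cause of the gRPC `UNAVAILABLE` error itself.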
https://github.com/lagom/lagom-scala-openshift-smoketests was unarchived because it's useful to have a small Lagom app for getting started with deploying to OpenShift on newly upgraded OpenShift clusters (no unnecessary complications like Postgres, etc.).
But it should live in this repo.
These seem to fail almost always on the first run, but usually pass on a re-run. I suspect a race condition.
We might be able to mitigate this by limiting to one concurrent job, at the expense of slowing down the happy path. WDYT?
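One possible way to implement that mitigation (an assumption; the right setting for this repo's Travis configuration would need verifying) is Travis CI's per-repository concurrency limit, which can be set through the `travis` CLI:

```shell
# Hedged sketch: cap this repository at one concurrent build.
# "maximum_number_of_builds" is the Travis CI repository setting behind the
# "Limit concurrent jobs" toggle; requires an authenticated travis CLI session.
travis settings maximum_number_of_builds --set 1 -r lagom/lagom-samples
```

This trades throughput for determinism: unrelated PR builds would also queue behind each other, which is the "slowing down the happy path" cost mentioned above.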
We should be more explicit about which kinds of contributions are welcome, and point out that community samples should be added as links on the main README file.
Deployment to Central Park should be part of the CI build. Since the primary purpose of this repository is to demonstrate deployment to OpenShift, we need to ensure there aren't any regressions and the process documented in the guide works through updates.
There aren't any branch protection rules configured in this repository: https://github.com/lagom/lagom-samples/settings/branches
This could lead to accidental merges of code that hasn't passed the Travis CI or CLA validator checks.
What should they be?