online-auction-scala's Introduction

Deprecated - Lagom - July 1, 2023

This project will only receive security patches until July 1, 2024; after that point, the project will no longer receive any additional patches.

If you are an existing customer of Lightbend and we have not yet contacted you, please reach out to Support.

We recommend migrating any existing work to:

  • Akka for deeply customized projects with complex infrastructure needs. Akka now contains the vast majority of Lagom features.
  • Kalix for a managed scalable environment with an abstraction above the Akka framework layer to allow you to focus only on business logic.

Lagom - The Reactive Microservices Framework

Lagom is a Swedish word meaning just right, sufficient. Microservices are about creating services that are just the right size, that is, they have just the right level of functionality and isolation to be able to adequately implement a scalable and resilient system.

Lagom focuses on ensuring that your application realizes the full potential of the Reactive Manifesto while delivering a high-productivity development environment and a seamless production deployment experience.

Learn More

License

Copyright (C) Lightbend Inc. (https://www.lightbend.com).

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this project except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0.

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

online-auction-scala's People

Contributors

agemooij, beritou, dwijnand, erip, fgfernandez0321, ignasi35, jroper, jsravn, katejim, longmuir, longshorej, marcospereira, octonato, ryanhanks, sethtisue, svard, yg-apaza


online-auction-scala's Issues

Deployment to GCP not working

Hello everyone,
I'm trying to get online-auction-scala running on GCP. The only thing missing now is getting ingress working. The health check for the search service reports UNHEALTHY status, although the search service is up and running and there are no errors in its logs.

The steps I take after building the Docker images and publishing them to the Docker registry:

gcloud container clusters create mt-develop

gcloud container clusters get-credentials mt-develop

kubectl create serviceaccount tiller --namespace kube-system

kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller

helm init --service-account tiller

helm install lightbend-helm-charts/reactive-sandbox --name reactive-sandbox

PRIMARY_ACCOUNT=$(gcloud info --format='value(config.account)')

kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user=$PRIMARY_ACCOUNT

kubectl apply -f RBAC.yml # with the content from #78 (https://github.com/lagom/online-auction-scala/issues/78) posted by @TimMoore

When the reactive-sandbox is ready, I run the script from the KUBERNETES.md file.

#
# NOTE: You must change the secret values below or the applications will crash.
#
# These values are used for Play's application secret. It is important that they are set to a secret value.
# More information: https://www.playframework.com/documentation/latest/ApplicationSecret

secret_bidding=Secret123
secret_item=Secret123
secret_user=Secret123
secret_search=Secret123
secret_web=Secret123

# Configure Play's Allowed Hosts filter.
# More information: https://www.playframework.com/documentation/latest/AllowedHostsFilter

allowed_host=.

# Default addresses for reactive-sandbox, which provides Cassandra, Kafka, Elasticsearch

export service_cassandra=_cql._tcp.reactive-sandbox-cassandra.default.svc.cluster.local
export service_kafka=_broker._tcp.reactive-sandbox-kafka.default.svc.cluster.local
export service_elasticsearch=_http._tcp.reactive-sandbox-elasticsearch.default.svc.cluster.local

# Deploy bidding-impl

rp generate-kubernetes-resources registry.mydomain.com/trg/biddingimpl:1.0.0-SNAPSHOT \
  --generate-pod-controllers --generate-services --service-type="NodePort" \
  --env JAVA_OPTS="-Dplay.http.secret.key=$secret_bidding -Dplay.filters.hosts.allowed.0=$allowed_host" \
  --pod-controller-replicas 2 \
  --external-service "cas_native=$service_cassandra" \
  --external-service "kafka_native=$service_kafka" | kubectl apply -f -

# Deploy item-impl

rp generate-kubernetes-resources registry.mydomain.com/trg/itemimpl:1.0.0-SNAPSHOT \
  --generate-pod-controllers --generate-services --service-type="NodePort" \
  --env JAVA_OPTS="-Dplay.http.secret.key=$secret_item -Dplay.filters.hosts.allowed.0=$allowed_host" \
  --pod-controller-replicas 2 \
  --external-service "cas_native=$service_cassandra" \
  --external-service "kafka_native=$service_kafka" | kubectl apply -f -

# Deploy user-impl

rp generate-kubernetes-resources registry.mydomain.com/trg/userimpl:1.0.0-SNAPSHOT \
  --generate-pod-controllers --generate-services --service-type="NodePort" \
  --env JAVA_OPTS="-Dplay.http.secret.key=$secret_user -Dplay.filters.hosts.allowed.0=$allowed_host" \
  --pod-controller-replicas 2 \
  --external-service "cas_native=$service_cassandra" \
  --external-service "kafka_native=$service_kafka" | kubectl apply -f -

# Deploy search-impl

rp generate-kubernetes-resources registry.mydomain.com/trg/searchimpl:1.0.0-SNAPSHOT \
  --generate-pod-controllers --generate-services --service-type="NodePort" \
  --env JAVA_OPTS="-Dplay.http.secret.key=$secret_search -Dplay.filters.hosts.allowed.0=$allowed_host" \
  --pod-controller-replicas 2 \
  --external-service "cas_native=$service_cassandra" \
  --external-service "kafka_native=$service_kafka" \
  --external-service "elastic-search=$service_elasticsearch" | kubectl apply -f -

# Deploy webgateway

rp generate-kubernetes-resources registry.mydomain.com/trg/webgateway:1.0.0-SNAPSHOT \
  --service-type="NodePort" \
  --generate-pod-controllers --generate-services \
  --env JAVA_OPTS="-Dplay.http.secret.key=$secret_web -Dplay.filters.hosts.allowed.0=$allowed_host" | kubectl apply -f -

# Deploy ingress for everything

# Note that some environments, such as IBM Cloud and Google Kubernetes Engine have slightly different nginx
# implementations. For these, you may need to specify `--ingress-path-suffix '*'` or `--ingress-path-suffix '.*'` as
# part of the command below.

rp generate-kubernetes-resources \
  --generate-ingress --ingress-name online-auction \
  registry.mydomain.com/trg/webgateway:1.0.0-SNAPSHOT \
  registry.mydomain.com/trg/searchimpl:1.0.0-SNAPSHOT \
  registry.mydomain.com/trg/userimpl:1.0.0-SNAPSHOT \
  registry.mydomain.com/trg/itemimpl:1.0.0-SNAPSHOT \
  registry.mydomain.com/trg/biddingimpl:1.0.0-SNAPSHOT | kubectl apply -f -

screenshot

Any help would be much appreciated.

Kafka: Consumer interrupted with WakeupException after timeout.

I'm trying to run the example locally, but I'm receiving lots of warnings:

[warn] a.k.KafkaConsumerActor - Consumer interrupted with WakeupException after timeout. Message: null. Current value of akka.kafka.consumer.wakeup-timeout is 3000 milliseconds
[warn] a.k.KafkaConsumerActor - Consumer interrupted with WakeupException after timeout. Message: null. Current value of akka.kafka.consumer.wakeup-timeout is 3000 milliseconds
[warn] a.k.KafkaConsumerActor - Consumer interrupted with WakeupException after timeout. Message: null. Current value of akka.kafka.consumer.wakeup-timeout is 3000 milliseconds

This usually indicates a Kafka client/server version mismatch, but I can't find the exact version of Kafka used here.

I'm using 0.10.1.0-1 from https://hub.docker.com/r/wurstmeister/kafka/tags/ and can switch quickly to a different version, but I can't pinpoint the version used by this example.
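
For reference, the timeout named in the warning is tunable. A minimal application.conf sketch, assuming the akka-stream-kafka settings of this era (the key is taken from the warning itself; the values are illustrative):

akka.kafka.consumer {
  # Default is 3 seconds, per the warning above; raising it gives slow brokers more time.
  wakeup-timeout = 10s
  # Number of wakeups tolerated before the consumer actor gives up.
  max-wakeups = 10
}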

Thank you for your help

Circuit Breaker Timed out error

When you runAll and go to /, you get this:

! @73ghf5b2j - Internal server error, for (GET) [/createuser] ->
 
play.api.http.HttpErrorHandlerExceptions$$anon$1: Execution exception[[$anon$1: Circuit Breaker Timed out.]]
	at play.api.http.HttpErrorHandlerExceptions$.throwableToUsefulException(HttpErrorHandler.scala:293)
	at play.api.http.DefaultHttpErrorHandler.onServerError(HttpErrorHandler.scala:220)
	at play.core.server.netty.PlayRequestHandler$$anonfun$2$$anonfun$apply$1.applyOrElse(PlayRequestHandler.scala:100)
	at play.core.server.netty.PlayRequestHandler$$anonfun$2$$anonfun$apply$1.applyOrElse(PlayRequestHandler.scala:99)
	at scala.concurrent.Future$$anonfun$recoverWith$1.apply(Future.scala:346)
	at scala.concurrent.Future$$anonfun$recoverWith$1.apply(Future.scala:345)
	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
	at play.api.libs.iteratee.Execution$trampoline$.execute(Execution.scala:70)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248)
Caused by: akka.pattern.CircuitBreaker$$anon$1: Circuit Breaker Timed out.
[error] application - 
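
If the backing services (Cassandra, Kafka) are merely slow to start, the breaker's timeout can be raised. A hedged application.conf sketch, assuming Lagom's standard circuit-breaker configuration namespace:

lagom.circuit-breaker.default {
  # Number of failures before the breaker opens.
  max-failures = 10
  # Raise this if calls legitimately take longer than the 10s default.
  call-timeout = 20s
  # How long the breaker stays open before allowing a retry.
  reset-timeout = 15s
}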

Elastic search throws parsing_exception when using pageNo and pageSize params

When trying to search for an item via localhost:9000/api/search?pageNo=1&pageSize=10, the API throws a parsing_exception:

{"error":{"root_cause":[{"type":"parsing_exception","reason":"Unknown key for a VALUE_NUMBER in [pageNumber].","line":1,"col":15}],"type":"parsing_exception","reason":"Unknown key for a VALUE_NUMBER in [pageNumber].","line":1,"col":15},"status":400}

This must be due to the field names pageSize and pageNumber. I believe the Elasticsearch equivalents are from and size.
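
For illustration, the pagination portion of the query would need to use Elasticsearch's native keys; for pageNo=1 and pageSize=10 (assuming a zero-based offset of pageNo * pageSize), that would be:

{
    "from": 10,
    "size": 10
}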

Why does the example require an internet connection?

When I'm connected to the internet, the example works. When I'm not connected, or I'm behind a firewall, the example does not work.

Does anyone know what part of the code is trying to connect to the internet so that I can cut it out?

Thank you!

My best,
Michael

Question: Separating Commands and Queries

It seems that in the design of this example, the separation between operations that mutate the persistent entity and operations that query its state is rather lax. E.g. placing a bid also returns the current price.

What is the authors' take on separating commands and queries (CQS) when it comes to Lagom persistent entities?
It seems some CQRS proponents on the internet suggest following this rather strictly, e.g. a command that mutates the state should only result in an acknowledgement, not in business data being returned.

If we were to follow that, then to query the state we should use either the read side or, in the Lagom case, ReadOnlyCommands, but not commands that also mutate the state.

Is Lagom opinionated in the same way, or is it considered good practice to return e.g. the updated state from the command handler?
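
For concreteness, here is a minimal sketch of the strict-CQS style described above, using Lagom's Scala PersistentEntity API (the command, event, and state names are hypothetical):

import akka.Done
import com.lightbend.lagom.scaladsl.persistence.PersistentEntity

// The mutating command replies with an acknowledgement only...
final case class PlaceBid(amount: Int) extends PersistentEntity.ReplyType[Done]
// ...while a separate read-only command exposes the current state.
case object GetPrice extends PersistentEntity.ReplyType[Int]

// Inside the entity's behavior (BidPlaced and state.price are hypothetical):
// Actions()
//   .onCommand[PlaceBid, Done] { case (PlaceBid(amount), ctx, _) =>
//     ctx.thenPersist(BidPlaced(amount))(_ => ctx.reply(Done)) // ack only, no business data
//   }
//   .onReadOnlyCommand[GetPrice.type, Int] { case (GetPrice, ctx, state) =>
//     ctx.reply(state.price) // query handled without persisting anything
//   }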

Fix-up Elasticsearch integration

As mentioned in #73, execution of SearchService.search currently incurs a runtime error (formatted for readability):

{
    "error": {
        "root_cause": [
            {
                "type": "parsing_exception",
                "reason": "[itemStatus] query malformed, no start_object after query name",
                "line": 1,
                "col": 64
            }
        ],
        "type": "parsing_exception",
        "reason": "[itemStatus] query malformed, no start_object after query name",
        "line": 1,
        "col": 64
    },
    "status": 400
}

The underlying issue is that the query we send to Elasticsearch is malformed.

The broken Elasticsearch query

Here's a full example of a query that might be generated with the current implementation:

{
    "from": 0,
    "size": 100,
    "query": {
        "bool": {
            "must_not": {
                "itemStatus": "Created"
            },
            "must": [
                {
                    "multi_match": {
                        "query": "",
                        "fields": [
                            "title",
                            "description"
                        ]
                    }
                },
                {
                    "price": {
                        "lte": 0
                    }
                },
                {
                    "match": {
                        "currencyId": "USD"
                    }
                }
            ]
        }
    },
    "sort": [
        {
            "auctionEnd": {
                "order": "desc",
                "unmapped_type": "boolean"
            }
        },
        {
            "price": "asc"
        }
    ]
}

There are two problem areas with this query as it's currently formed: one within the must clause, and the other within the must_not clause.

The broken must clause

Here’s the must clause in isolation:

{
    "must": [
         {
            "price": {
                "lte": 0
            }
        }
    ]
}

To fix this up, we need to inject a range clause to make the price filter valid:

{
    "must": [
        {
            "range": {
                "price": {
                    "gte": 0
                }
            }
        }
    ]
}

The broken must_not clause

Here's the must_not clause in isolation:

{
    "must_not": {
        "itemStatus": "Created"
    }
}

To fix this, we need to inject a match clause, and also change the field name from itemStatus to status:

{
    "must_not": {
        "match": {
            "status": "Created"
        }
    }
}

Solution Proposal

While the changes we need to see in the JSON sent to Elasticsearch are relatively straightforward, implementing those changes surfaces a few questions. The bulk of these questions centers on our implementation of the AST models that build out the query payload.

In the current implementation, we build out an internal representation of the query using a set of models that roughly mirror the AST of the Elasticsearch DSL. Having spent a small amount of time reading the Elasticsearch documentation, it appears that a full implementation of this AST/DSL would be quite an involved effort, and one that we would likely consider beyond the scope of this project's core intentions. Entire projects have been devoted to this effort (see https://github.com/sksamuel/elastic4s for an example).

Using A Domain-Specific Query Model

Rather than creating a data object that mirrors the AST of Elasticsearch and using the JSON serialization of those objects to build our JSON payload, suppose our model for the query more closely matched the particular domain we're dealing with here, and we used Elasticsearch-specific JSON serialization to define how the actual query payload is created. Looking at the SearchService.search method, we find the following case class as the argument to that method:

case class SearchRequest(keywords: Option[String], maxPrice: Option[Int], currency: Option[String])

If we take the approach of modeling a query object after our domain (as opposed to after the Elasticsearch query DSL’s AST), we might end up with a query model that looks something like:

case class ItemQuery(titleDescriptionQueryString: Option[String],
                     minPrice: Option[Int],
                     currencyId: Option[String],
                     pageNumber: Int,
                     pageSize: Int)

JSON serialization for this class could be implemented as follows:

import play.api.libs.json._ // added for completeness: Format, JsValue, JsResult, Json

object ItemQuery {
  implicit val format: Format[ItemQuery] = new Format[ItemQuery] {

    override def reads(json: JsValue): JsResult[ItemQuery] = ??? // left unimplemented: the query is only ever written, never read

    override def writes(itemQuery: ItemQuery): JsObject = {
      val fromOffset = itemQuery.pageNumber * itemQuery.pageSize

      Json.obj(
        "query" -> Json.obj(
          "bool" -> Json.obj(
            "must_not" -> Json.obj(
              "match" -> Json.obj(
                "status" -> ItemStatus.Created)),
            "must" -> Json.arr(
              Seq(
                itemQuery.titleDescriptionQueryString.map(queryString =>
                  Json.obj("multi_match" -> Json.obj(
                    "query" -> queryString,
                    "fields" -> Json.arr("title", "description")
                  ))),
                itemQuery.minPrice.map(minPrice =>
                  Json.obj("range" -> Json.obj(
                    "price" -> Json.obj("gte" -> minPrice)
                  ))),
                itemQuery.currencyId.map(currencyId =>
                  Json.obj("match" -> Json.obj("currencyId" -> currencyId)))
              ).flatten[JsObject]
            )
          )
        ),
        "from" -> fromOffset,
        "size" -> itemQuery.pageSize,
        "sort" -> Json.arr(
          Json.obj(
            "auctionEnd" ->
              Json.obj(
                "order" -> "desc",
                "unmapped_type" -> "boolean"
              )
          ),
          Json.obj("price" -> "asc")
        )
      )
    }
  }
}
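
For illustration, a quick usage sketch of the serializer above (the values and the items index name are hypothetical):

// Build a query for chairs priced in USD, first page of 100 results.
val query = ItemQuery(
  titleDescriptionQueryString = Some("chair"),
  minPrice = Some(0),
  currencyId = Some("USD"),
  pageNumber = 0,
  pageSize = 100)

// The resulting JsValue is the payload to POST to Elasticsearch, e.g. to /items/_search.
val payload = Json.toJson(query)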

I realize this approach is quite a departure from the current implementation. With this departure, we might argue that the ItemQuery object isn't as flexible in the Elasticsearch queries it can produce, and that would be true.

As a counterpoint to that argument, we might point out that this application currently strives to satisfy no use cases beyond the one at hand, and if additional search use cases are added in the future, this approach could be generalized at that time.

If it’s important that we proceed with a clear definition of how that more generalized solution might look, it would be helpful to identify the specific use case(s) that warrant this generalization. Given a concrete set of use cases, we can create an implementation that satisfies a varying set of cases, giving the reader the ability to see why the generalized solution is necessary, and how it supports the particular use cases it was designed for.

@ignasi35, as the author of most of the Elasticsearch integration, what are your thoughts on this? Would you be open to a departure from the current implementation's approach?

For an example of how this might look in total, please see #111.

Let me know if you have any questions or suggestions! Thanks!

Duplication of events

I am new to Lagom.
Can someone explain to me why the same events (AuctionStarted, AuctionFinished, etc.) are duplicated in item-impl while the same events already exist in item-api?

Thanks

Search feature completion

online-auction-scala's version of the Search service is not as complete as the Java equivalent in online-auction-java. For starters: there's no UI.

As raised in #68, there was a runtime issue fixed in #72. But #72 exposed another runtime issue:

$ curl -d'{"keywords":"chair","maxPrice":10, "currency":"USD"}' -X POST -H "Content-Type: application/json" "http://localhost:9000/api/search?pageNo=1&pageSize=10"

returns:

{"name":"UndeserializableException","detail":"{\"error\":{\"root_cause\":[{\"type\":\"parsing_exception\",\"reason\":\"[itemStatus] query malformed, no start_object after query name\",\"line\":1,\"col\":63}],\"type\":\"parsing_exception\",\"reason\":\"[itemStatus] query malformed, no start_object after query name\",\"line\":1,\"col\":63},\"status\":400}"}

a.a.OneForOneStrategy - Ask timed out

When running the example locally, I get the following error:

[error] a.a.OneForOneStrategy - Ask timed out on [Actor[akka://biddingImpl-application/user/cassandraOffsetStorePrepare-singletonProxy#1865177863]] after [20000 ms]. Sender[null] sent message of type "com.lightbend.lagom.internal.persistence.cluster.ClusterStartupTaskActor$Execute$".
akka.pattern.AskTimeoutException: Ask timed out on [Actor[akka://biddingImpl-application/user/cassandraOffsetStorePrepare-singletonProxy#1865177863]] after [20000 ms]. Sender[null] sent message of type "com.lightbend.lagom.internal.persistence.cluster.ClusterStartupTaskActor$Execute$".
	at akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:604) ~[akka-actor_2.11-2.4.16.jar:na]
	at akka.actor.Scheduler$$anon$4.run(Scheduler.scala:126) ~[akka-actor_2.11-2.4.16.jar:na]
	at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601) ~[scala-library-2.11.8.jar:na]
	at scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:109) ~[scala-library-2.11.8.jar:na]
	at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599) ~[scala-library-2.11.8.jar:na]
	at akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(LightArrayRevolverScheduler.scala:329) ~[akka-actor_2.11-2.4.16.jar:na]
	at akka.actor.LightArrayRevolverScheduler$$anon$4.executeBucket$1(LightArrayRevolverScheduler.scala:280) ~[akka-actor_2.11-2.4.16.jar:na]
	at akka.actor.LightArrayRevolverScheduler$$anon$4.nextTick(LightArrayRevolverScheduler.scala:284) ~[akka-actor_2.11-2.4.16.jar:na]
	at akka.actor.LightArrayRevolverScheduler$$anon$4.run(LightArrayRevolverScheduler.scala:236) ~[akka-actor_2.11-2.4.16.jar:na]
	at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_102]

I found someone with a similar symptom here, but cannot extrapolate it to my own issue. Using sbt 0.13.11.
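
If this is just Cassandra being slow to start locally, the 20-second timeout in the log appears to correspond to a configurable prepare timeout. A hedged application.conf sketch, assuming Lagom's read-side settings:

# Default is 20s; raise it if the offset store takes longer to prepare locally.
lagom.persistence.read-side.global-prepare-timeout = 40s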

Thanks for your attention.

Readiness probe failed: Get http://172.17.0.10:10002/platform-tooling/ready: dial tcp 172.17.0.10:10002: getsockopt: connection refused Back-off restarting failed container

Hello,

I am trying to deploy my Lagom Scala project on Kubernetes by following the development workflow, but I am getting this error on the Kubernetes dashboard:

Readiness probe failed: Get http://172.17.0.10:10002/platform-tooling/ready: dial tcp 172.17.0.10:10002: getsockopt: connection refused Back-off restarting failed container

Here are my deployment steps:

$ minikube start --memory 8192
$ minikube addons enable ingress
$ sbt "deploy minikube"

Typo in http://www.lagomframework.com/documentation/1.3.x/scala/ServiceImplementation.html

It currently reads:

play.application.loader += com.example.HelloLoader

But this gives this error:

Cause: com.typesafe.config.ConfigException$WrongType: reference.conf @ jar:file:/Users/tom/.ivy2/cache/com.typesafe.play/play_2.11/jars/play_2.11-2.5.10.jar!/reference.conf: 162: Cannot concatenate object or list with a non-object-or-list, Quoted("play.api.inject.guice.GuiceApplicationLoader") and SimpleConfigList(["demo.api.product.ProductApplicationLoader"]) are not compatible

I think it ought to be (= rather than +=, so the loader replaces the default GuiceApplicationLoader instead of being appended to it as a list, which is what causes the concatenation error above):
play.application.loader = com.example.HelloLoader

Provide transaction api

The transaction api (api and implementation projects) needs to be provided in this repository in Scala code, following the same endpoints and business rules as the online-auction-java repository.

Is there any reason why the transaction api project isn't available yet?

[Suggestions for improvement] User Service

Hi everyone,

I would like to refactor the user service and add a couple of features. Most of it will be inspired by online-auction-java, specifically by its service descriptor:
https://github.com/lagom/online-auction-java/blob/master/user-api/src/main/java/com/example/auction/user/api/UserService.java
As you can see, the Java version supports a registration process, a login endpoint, and read-side support.

I'll submit several PRs this week in order to make the user-service support those.

If you have any other suggestions for this service, let me know.

Chris

Readme.md overhaul

Currently the readme lists a set of services and the operations/messages each of them supports.

The readme should give some context about the project, its purpose, and Lagom, plus instructions for importing it into an IDE, instructions for running it, and a high-level architecture (see #20).

This issue can be split into multiple small PRs.

Execution exception on startup + http://localhost:9000/ (root uri)

I downloaded the latest master as a zip file, ran sbt, then runAll, and then entered http://localhost:9000/ in Safari.

I'm running:

  • macOS Sierra version 10.12.6
  • sbt 0.13.15

I received a number of warnings followed by:

Execution exception
[$anon$1: Circuit Breaker Timed out.]

I put the console log in a public gist since it was pretty large: https://gist.github.com/jewelsjacobs/6da1aea77c88e1107a9e1be92844bde3#file-startup-errors

For more info, I set the sbt log level to debug. I'm guessing it's related to Akka, which I know nothing about. Here is the even larger debug log: https://gist.github.com/jewelsjacobs/ede36b361b02f2e31547c449e7ffbf6e#file-debug-log

Tests can be contaminated by manual test data

Reproduction

  1. Start the sample app with a fresh database by running sbt clean runAll
  2. Create a user and an item
  3. Stop the server
  4. sbt test

Expected behavior

The tests should pass.

Actual behavior

[info] The Item service
[info] - should allow creating items
[info] - should return all items for a given user
[info] - should should emit auction started event *** FAILED ***
[info]   ItemUpdated(5d50b8b0-022b-11e7-87c6-bf22b4dc5f33,51971f4d-da2e-4c8d-a758-c047404df29a,title,description,USD,Created) was not an instance of com.example.auction.item.api.AuctionStarted, but an instance of com.example.auction.item.api.ItemUpdated (ItemServiceImplIntegrationTest.scala:87)

The events created by the test are not isolated in any way.

Workaround

sbt clean test

Integration test fails on current master

When running all tests on the current master, ItemServiceImplIntegrationTest fails consistently on my machine. Unfortunately, I don't know enough about Lagom yet to debug it.

Output:

[info] ItemServiceImplIntegrationTest:
[info] The Item service
[info] - should allow creating items
[info] - should return all items for a given user
[info] - should should emit auction started event *** FAILED ***
[info]   ItemUpdated(aee498e0-f2b8-11e6-906a-5b805e33f129,9f7eaaf1-3dee-4f25-8aff-aae703b68253,title,description,USD,Created) was not an instance of com.example.auction.item.api.AuctionStarted, but an instance of com.example.auction.item.api.ItemUpdated (ItemServiceImplIntegrationTest.scala:83)

Clarification required about data persistence for item entities

Hello,

I am new to the Lagom framework and CQRS. I tried running the online-auction-scala application on my machine and I am trying to figure out where all of the item data is stored, especially the item description...

I have looked at all of the Cassandra tables, especially itemsummarybycreator:

cqlsh:item> select * FROM itemsummarybycreator;

 creatorid                            | itemid                               | currencyid | reserveprice | status    | title
--------------------------------------+--------------------------------------+------------+--------------+-----------+-------------
 7667f80b-7ef0-4123-96e3-1bd48f329299 | a7901470-e744-11e8-a67e-2344d1520887 |        USD |            0 | Completed |          qq

But I was not able to find out where the item description is stored.

Can someone please tell me?

Regards,

Unit tests fail to compile

$ git clone ...
$ vi project/plugins.sbt
# fix version of Lagom plugin, as per PR #2
$ sbt test
...
[error] /Users/deanwampler/projects/lightbend/lagom/online-auction-scala/online-auction-scala-deanw-git/item-impl/src/test/java/com/example/auction/item/impl/ItemEntityTest.java:5: package com.example.auction.item.impl.PItemCommand does not exist
[error] import com.example.auction.item.impl.PItemCommand.*;
[error] /Users/deanwampler/projects/lightbend/lagom/online-auction-scala/online-auction-scala-deanw-git/item-impl/src/test/java/com/example/auction/item/impl/ItemEntityTest.java:6: package com.example.auction.item.impl.PItemEvent does not exist
[error] import com.example.auction.item.impl.PItemEvent.AuctionFinished;
[error] /Users/deanwampler/projects/lightbend/lagom/online-auction-scala/online-auction-scala-deanw-git/item-impl/src/test/java/com/example/auction/item/impl/ItemEntityTest.java:7: package com.example.auction.item.impl.PItemEvent does not exist
[error] import com.example.auction.item.impl.PItemEvent.AuctionStarted;
[error] /Users/deanwampler/projects/lightbend/lagom/online-auction-scala/online-auction-scala-deanw-git/item-impl/src/test/java/com/example/auction/item/impl/ItemEntityTest.java:8: package com.example.auction.item.impl.PItemEvent does not exist
...

sbt-eclipse creates descriptors for transaction-xyz causing errors

When forking online-auction-java, some leftover Java code remained in transaction-api. sbt does a good job of ignoring it because the lazy val root doesn't include the transaction-xxx modules (see aggregate).

sbt-eclipse, on the other hand, creates Eclipse descriptors for both transaction-xxx projects, which do not compile at the moment (and use Java instead of Scala), so importing into Eclipse produces compilation errors.

There are several options here:

  • clean up the transactions code (remove the Java code)
  • configure sbt-eclipse to honour the root project setup (see the sketch after this list)
  • update the README and instruct users not to import transaction-xxx (see #24)
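
A minimal sketch of the second option, assuming sbteclipse's documented EclipseKeys (project names are illustrative):

// In build.sbt: skip Eclipse descriptor generation for the non-compiling projects.
import com.typesafe.sbteclipse.plugin.EclipsePlugin.EclipseKeys

lazy val transactionApi = (project in file("transaction-api"))
  .settings(EclipseKeys.skipProject := true)

lazy val transactionImpl = (project in file("transaction-impl"))
  .settings(EclipseKeys.skipProject := true)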

Search service api expects a request body for GET request

In SearchService.scala, the search call is declared as a GET request while also expecting a request body. This causes issues when trying to execute the request from common REST clients such as Postman.

restCall(Method.GET, "/api/search?pageNo&pageSize", search _),

Changing this to POST works pretty well:

restCall(Method.POST, "/api/search?pageNo&pageSize", search _),
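
With the POST variant, the search can then be exercised from any common REST client, e.g.:

$ curl -d'{"keywords":"chair","maxPrice":10, "currency":"USD"}' -X POST -H "Content-Type: application/json" "http://localhost:9000/api/search?pageNo=1&pageSize=10"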
