
kanadi's People

Contributors

chanhnguyen17, deeds67, domizei385, faolivera, gchudnov, hgiddens, juliaiskra, mdedetrich, nejdaw, perploug, refal, roisin-jin, suryanarb, vadeg, xjrk58

kanadi's Issues

When consuming events don't process in parallel for the same partition

Quoting from coworker

If we read the batch [e1, e2, e3, e4] (one partition)
We process events in parallel
e4 was successfully processed
e2 failed
kanadi commits the cursor after processing e4
e2 is lost

The problem is that Kanadi by default commits cursors in parallel (for a single partition) rather than doing them sequentially.
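
One way to avoid the lost-commit scenario is to process a partition's batch strictly in order, so that a failure stops the cursor from advancing past the failed event. A minimal sketch of that idea, using a hypothetical handler function rather than kanadi's actual API:

    import scala.concurrent.{ExecutionContext, Future}

    // Minimal sketch: fold the batch into a sequential Future chain so that
    // if e2 fails, e3/e4 are never processed and their cursors never commit.
    def processSequentially[A](events: List[A])(handle: A => Future[Unit])(
        implicit ec: ExecutionContext): Future[Unit] =
      events.foldLeft(Future.unit) { (acc, event) =>
        acc.flatMap(_ => handle(event))
      }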

Optimize logging messages and imports

Use logger arguments instead of Scala interpolated strings. The logger performs optimization under the hood, which avoids creating useless String objects when a given log level is turned off.
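
For illustration, a hedged example of the difference (assuming an SLF4J-style logger; the logger name and values are hypothetical):

    import org.slf4j.LoggerFactory

    val logger    = LoggerFactory.getLogger("example") // hypothetical logger
    val partition = "0"                                // hypothetical values
    val cursor    = "001-0001-000000000000000042"

    // Scala interpolation: the String is always built, even if DEBUG is off.
    logger.debug(s"Received cursor $cursor for partition $partition")

    // Parameterized message: formatting is deferred until the logger has
    // checked that DEBUG is actually enabled.
    logger.debug("Received cursor {} for partition {}", cursor, partition)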

Subscriptions.EventTypeStats.partitions should be a list.

When trying to get the subscription stats using the .stats method from Subscriptions, parsing the response fails, because partitions in case class EventTypeStats(eventType: EventTypeName, partitions: EventTypeStats.Partition) should be List[EventTypeStats.Partition].
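
A sketch of the mismatch (EventTypeName and EventTypeStats.Partition come from kanadi):

    // Current definition, which fails to decode Nakadi's stats response:
    //   final case class EventTypeStats(eventType: EventTypeName,
    //                                   partitions: EventTypeStats.Partition)

    // Expected definition, since Nakadi returns an array of partition stats:
    final case class EventTypeStats(eventType: EventTypeName,
                                    partitions: List[EventTypeStats.Partition])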

Integration Tests are not stopping

When the sbt test command finishes, it seems that some parts of the tests (e.g. HTTP servers started on a random port) continue to run.

It would be great to implement afterAll behavior in the tests so that the related resources are shut down.
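
For example, a sketch assuming specs2 (which provides an AfterAll trait); the server handle here is hypothetical:

    import org.specs2.mutable.Specification
    import org.specs2.specification.AfterAll

    class SubscriptionSpec extends Specification with AfterAll {
      // Hypothetical handle for an HTTP server started on a random port.
      private val server = TestHttpServer.startOnRandomPort()

      override def afterAll(): Unit =
        server.shutdown() // release the port once the whole suite finishes

      // ... tests ...
    }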

Tags

Hello,
Is it possible to push tags to the repo?
It looks like the current version is 0.9.0, but the last pushed tag is v0.7.1.

Thank you!

When disconnecting due to NoEmptySlotsOrCursorReset, we don't use noEmptySlotsCursorResetRetryDelay

When we get disconnected due to NoEmptySlotsOrCursorReset, the delay we use for reconnecting is serverDisconnectRetryDelay, whereas it should instead be noEmptySlotsCursorResetRetryDelay.

https://github.com/zalando-nakadi/kanadi/blob/master/src/main/scala/org/zalando/kanadi/api/Subscriptions.scala#L1443 is where we call reconnect; however, the reconnect function at https://github.com/zalando-nakadi/kanadi/blob/master/src/main/scala/org/zalando/kanadi/api/Subscriptions.scala#L1350-L1379 just uses the hardcoded kanadiHttpConfig.serverDisconnectRetryDelay. We should explicitly pass the delay into the reconnect function.
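
A hedged sketch of the proposed change (names abridged; the real reconnect signature differs):

    import scala.concurrent.duration.FiniteDuration

    // Accept the delay as a parameter instead of reading
    // kanadiHttpConfig.serverDisconnectRetryDelay inside reconnect.
    def reconnect(delay: FiniteDuration): Unit = ???

    // At the NoEmptySlotsOrCursorReset call site:
    //   reconnect(kanadiHttpConfig.noEmptySlotsCursorResetRetryDelay)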

Fix flaky tests

There are a few tests that occasionally fail (e.g. SubscriptionSpec). We should fix those.

Decoding failure when decoding commit response.

When committing a cursor using the high-level API (Subscriptions) after an empty batch arrives, I receive the following error:

DecodingFailure at .items[0].partition: Attempt to decode value on failed cursor
DecodingFailure at .items[0].offset: Attempt to decode value on failed cursor
DecodingFailure at .items[0].event_type: Attempt to decode value on failed cursor
DecodingFailure at .items[0].cursor_token: Attempt to decode value on failed cursor

However, if the batch contains events, I receive:

Message entity must not be empty.

I can confirm that, despite the error log messages, the cursor is committed correctly. I'm using Kanadi version 0.2.3.

I prepared a minimal reproduction of this bug: https://github.com/bszwej/kanadi-commit-bug

BUG! Publishing retry mechanism does not handle 207 response codes correctly

I discovered that 207 response codes are not handled correctly by Kanadi. The successful events are retried instead of the failed ones.

Tests run on version : 0.7.1

I tried to publish 4 events in a single request, with eids:

9bb77ac1-382b-46e3-a3d1-0c4e2ade8071
173463fc-f323-402a-a364-69357a12d43f
26bb6d0b-d9bd-4ade-9ed2-4f8ff1eb9d86
5539be2c-b9f5-471b-927c-c68c8fcb4db7

TEST 1

I mocked the nakadi endpoint using a local tool called Mockoon, with the following responses:

  1. [500 - Internal Server Error] : default response
  2. [207 - Multi-Status] : when the request contains 4 items

When calling the Events.publish function, the following requests are generated:

  1. POST /event-types/fake_shipment_shipped/events
    with body:
[
    {"metadata":{"eid":"9bb77ac1-382b-46e3-a3d1-0c4e2ade8071","occurred_at":"2020-08-14T14:46:41.257+02:00","event_type":"fake_shipment_shipped"}, ...}]},
    {"metadata":{"eid":"173463fc-f323-402a-a364-69357a12d43f","occurred_at":"2020-08-14T14:46:41.259+02:00","event_type":"fake_shipment_shipped"}, ...}]},
    {"metadata":{"eid":"26bb6d0b-d9bd-4ade-9ed2-4f8ff1eb9d86","occurred_at":"2020-08-14T14:46:41.26+02:00","event_type":"fake_shipment_shipped"}, ...}]},
    {"metadata":{"eid":"5539be2c-b9f5-471b-927c-c68c8fcb4db7","occurred_at":"2020-08-14T14:46:41.261+02:00","event_type":"fake_shipment_shipped"}, ...}]}
]

Response: 207
body:

[
    {"eid":"173463fc-f323-402a-a364-69357a12d43f","publishing_status":"submitted"},
    {"eid":"26bb6d0b-d9bd-4ade-9ed2-4f8ff1eb9d86","publishing_status":"failed","step":"publishing"},
    {"eid":"5539be2c-b9f5-471b-927c-c68c8fcb4db7","publishing_status":"failed","step":"publishing"},
    {"eid":"9bb77ac1-382b-46e3-a3d1-0c4e2ade8071","publishing_status":"failed","step":"publishing"}
]
  2. POST /event-types/fake_shipment_shipped/events (repeated 3 times)
    with body:
[
    {"metadata":{"eid":"173463fc-f323-402a-a364-69357a12d43f","occurred_at":"2020-08-14T14:46:41.259+02:00","event_type":"fake_shipment_shipped"}, ...}]}
]

Response: 500
empty body

Result: the second request is retried 3 times and then the publish function returns a failed Future

Logs from Kanadi:

[info] 2020-08-14 14:46:41,854 WARN  org.zalando.kanadi.api.Events - Events with eid's 26bb6d0b-d9bd-4ade-9ed2-4f8ff1eb9d86,5539be2c-b9f5-471b-927c-c68c8fcb4db7,9bb77ac1-382b-46e3-a3d1-0c4e2ade8071 failed to submit, retrying in 15000 millis
[info] 2020-08-14 14:46:56,901 WARN  org.zalando.kanadi.api.Events - Events with eid's 173463fc-f323-402a-a364-69357a12d43f failed to submit, retrying in 15000 millis
[info] 2020-08-14 14:47:11,943 WARN  org.zalando.kanadi.api.Events - Events with eid's 173463fc-f323-402a-a364-69357a12d43f failed to submit, retrying in 15000 millis
[info] 2020-08-14 14:47:26,984 ERROR org.zalando.kanadi.api.Events - Max retry failed for publishing events, event id's still not submitted are 173463fc-f323-402a-a364-69357a12d43f

TEST 2

I mocked the nakadi endpoint using the same tool, with the following responses:

  1. [500 - Internal Server Error] : default response
  2. [207 - Multi-Status] : when the request contains 4 items
  3. [200 - Ok] : when the request contains 1 element, specifically eid 173463fc-f323-402a-a364-69357a12d43f

When calling the Events.publish function, the following requests are performed:

  1. POST /event-types/fake_shipment_shipped/events
    with body:
[
    {"metadata":{"eid":"9bb77ac1-382b-46e3-a3d1-0c4e2ade8071","occurred_at":"2020-08-14T14:46:41.257+02:00","event_type":"fake_shipment_shipped"}, ...}]},
    {"metadata":{"eid":"173463fc-f323-402a-a364-69357a12d43f","occurred_at":"2020-08-14T14:46:41.259+02:00","event_type":"fake_shipment_shipped"}, ...}]},
    {"metadata":{"eid":"26bb6d0b-d9bd-4ade-9ed2-4f8ff1eb9d86","occurred_at":"2020-08-14T14:46:41.26+02:00","event_type":"fake_shipment_shipped"}, ...}]},
    {"metadata":{"eid":"5539be2c-b9f5-471b-927c-c68c8fcb4db7","occurred_at":"2020-08-14T14:46:41.261+02:00","event_type":"fake_shipment_shipped"}, ...}]}
]

Response: 207
body:

[
    {"eid":"173463fc-f323-402a-a364-69357a12d43f","publishing_status":"submitted"},
    {"eid":"26bb6d0b-d9bd-4ade-9ed2-4f8ff1eb9d86","publishing_status":"failed","step":"publishing"},
    {"eid":"5539be2c-b9f5-471b-927c-c68c8fcb4db7","publishing_status":"failed","step":"publishing"},
    {"eid":"9bb77ac1-382b-46e3-a3d1-0c4e2ade8071","publishing_status":"failed","step":"publishing"}
]
  2. POST /event-types/fake_shipment_shipped/events
    with body:
[
    {"metadata":{"eid":"173463fc-f323-402a-a364-69357a12d43f","occurred_at":"2020-08-14T14:46:41.259+02:00","event_type":"fake_shipment_shipped"}, ...}]}
]

Response: 200
empty body

Result: the publish function returns a successful Future

Logs from Kanadi:

[info] 2020-08-14 14:46:41,854 WARN  org.zalando.kanadi.api.Events - Events with eid's 26bb6d0b-d9bd-4ade-9ed2-4f8ff1eb9d86,5539be2c-b9f5-471b-927c-c68c8fcb4db7,9bb77ac1-382b-46e3-a3d1-0c4e2ade8071 failed to submit, retrying in 15000 millis

Suggestion

The bug could be in Events.scala line 333

   
    val (notValid, retry) = errors.partition(
        response =>
            response.step
                .contains(Events.Step.Validating) || response.publishingStatus == Events.PublishingStatus.Submitted)
    val toRetry = events.filter { event =>
        eventWithUndefinedEventIdFallback(event) match {
            case Some(eid) => !retry.exists(_.eid.contains(eid))
            case None      => false
        }
    }

The partition function will put the successful events in the notValid variable and all failed events in the retry variable.
But then toRetry will contain all events that are not in the retry list (!retry.exists(_.eid.contains(eid))), i.e. the successful ones.
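
If that analysis is right, one possible fix is to invert the membership check so that only events whose eid appears in the failed responses are retried (a sketch, not a verified patch):

    val toRetry = events.filter { event =>
      eventWithUndefinedEventIdFallback(event) match {
        // Retry only events that are present in the failed (`retry`) responses.
        case Some(eid) => retry.exists(_.eid.contains(eid))
        case None      => false
      }
    }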

Problem cannot be parsed

Hi

It looks like the Problem data type cannot be parsed (for example here).

The Problem case class has the following structure:

final case class Problem(`type`: String,
                         title: Option[String] = None,
                         status: Option[Int] = None,
                         detail: Option[String] = None,
                         instance: Option[String] = None,
                         extraFields: JsonObject = JsonObject.empty)

and as we can see, type is non-optional.

In the documentation it is mandatory as well.

But the reality is different: the server never returns it (or stopped returning it at some point):

{"title":"Forbidden","status":403,"detail":"Access on READ subscription:e1eeeb6d-1819-48de-924e-ded2fe983c9e denied"}

{"title":"Conflict","status":409,"detail":"Resetting subscription cursors request is still in progress"}

According to the conversation in the Zalando ticket ARUHA / issues / 118, there is an intention to fix the documentation, not the server.

Given this, it might be better either to update webmodels and make this field optional, or to import that data type into kanadi and make the field optional, so that Problem can be parsed correctly.
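
A sketch of the adjusted case class with `type` made optional:

    import io.circe.JsonObject

    final case class Problem(`type`: Option[String] = None,
                             title: Option[String] = None,
                             status: Option[Int] = None,
                             detail: Option[String] = None,
                             instance: Option[String] = None,
                             extraFields: JsonObject = JsonObject.empty)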

Nakadi clients resilience to partial outage and partial success

The Nakadi publishing API accepts events in batches. It can fail to publish some events from a batch to the underlying storage (Apache Kafka). In that case the Nakadi publishing API will return an error indicating that the batch was only partially successful.
This can create the following problems, depending on how the Nakadi client and the publishing application deal with the partial-success response:

  • increase in traffic on the Nakadi publishing API due to Nakadi clients retrying the whole batch over and over
  • the application retries identical batches, which prevents the application from progressing

The following should be done to decrease the likelihood of the mentioned problems:

  • The Nakadi client should contain a note to developers that publishing can experience partial success. This should be in the client documentation and ideally also within the self-contained code documentation, raising awareness for the users, e.g. via docstrings.

  • An optional retry method can be provided for the whole batch, but the default strategy must include a backoff solution in case of continued errors publishing to Nakadi.

  • An optional retry method can be provided that only re-publishes unsuccessful events to Nakadi. This retry must also support a backoff strategy by default (a sketch follows after this list).

  • Clients must expose the result of a publishing request in a way that lets developers understand that partial success is possible for batch publishing.
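
A minimal sketch of the per-event retry with exponential backoff (the publish function, Event type, and failure type are hypothetical stand-ins, not any specific client's API):

    import scala.concurrent.{ExecutionContext, Future}
    import scala.concurrent.duration._
    import akka.actor.ActorSystem
    import akka.pattern.after

    // Hypothetical stand-ins for a client's publishing API.
    final case class Event(payload: String)
    final case class PartialPublishFailure(failedEvents: List[Event]) extends Exception
    def publish(events: List[Event]): Future[Unit] = ???

    // Retry only the unsuccessful events, backing off exponentially.
    def publishWithRetry(events: List[Event], attempt: Int = 0, maxAttempts: Int = 5)(
        implicit system: ActorSystem, ec: ExecutionContext): Future[Unit] =
      publish(events).recoverWith {
        case failure: PartialPublishFailure if attempt < maxAttempts =>
          val delay = (500L << attempt).millis // 500ms, 1s, 2s, ...
          after(delay, system.scheduler)(
            publishWithRetry(failure.failedEvents, attempt + 1, maxAttempts))
      }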

akka-streams-json will move to pekko

@gchudnov Since you appear to be maintaining this project, I just want to inform you that a library this project depends on, akka-streams-json, will have its final release in the coming weeks, at which point I will put it under maintenance due to the Akka license change. At the point of the final release I will fork akka-streams-json to use Pekko, an open source fork of Akka and its ecosystem under the Apache Software Foundation (see https://pekko.apache.org/, https://github.com/apache/incubator-pekko and https://lists.apache.org/[email protected]).

From that point on only the fork (which will be named pekko-streams-circe) will be maintained. If you are willing to continue this project and/or have any additional questions please let me know in the issue.

Null values are omitted when publishing events

By default, Kanadi omits all null values from the event JSON when publishing events, because of its default configuration.

For example, when using a schema like

{
    "$schema": "http://json-schema.org/draft-07/schema#",
    "title": "Event",
    "type": "object",
    "properties": {
        "test": {
            "type": [
                "string",
                "null"
            ]
        }
    },
    "required": [
        "test"
    ]
}

An event with data {"test": null} will be published successfully when calling Nakadi directly, but it will fail when publishing through Kanadi, because the field test is dropped.
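
The behaviour can be reproduced with circe's printer (assuming the client serializes with dropNullValues enabled, which matches the described symptom):

    import io.circe.{Json, Printer}

    val event = Json.obj("test" -> Json.Null)

    println(event.printWith(Printer.noSpaces))
    // {"test":null}

    println(event.printWith(Printer.noSpaces.copy(dropNullValues = true)))
    // {} -- "test" is gone, so the schema's `required: ["test"]` check fails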

Consider using Monix BIO/Observable

I am considering using Monix's Task, or even better their new bifunctor, i.e. https://github.com/monix/monix-bio, and also using Observable as an alternative to akka-http streams. The reason for using a bifunctor is that we can represent well-typed errors from the server on the Right and unexpected errors as Exceptions.

Why Monix Task?

  • It is part of the cats ecosystem, so it's actually considered stable from a binary compatibility perspective (we have cats on the classpath anyway due to Circe)
  • It has very good support for opentracing/Future/MDC, i.e. see https://github.com/mdedetrich/monix-mdc and https://github.com/mdedetrich/monix-opentracing. This means that even if you are not buying into PureFP/IO/Task, you can still integrate it within your own ecosystem without any trouble.

Subscriptions.commitCursors throws Unmarshaller.NoContentException$ when offset is submitted successfully

Calling org.zalando.kanadi.api.Subscriptions.commitCursors fails with akka.http.scaladsl.unmarshalling.Unmarshaller.NoContentException$ when the offset is submitted successfully.

This happens because https://nakadi.io/manual.html#/subscriptions/subscription_id/cursors_post returns 204 No Content, so the line at https://github.com/zalando-incubator/kanadi/blob/master/src/main/scala/org/zalando/kanadi/api/Subscriptions.scala#L769, Unmarshal(response.entity.httpEntity.withContentType(ContentTypes.`application/json`)), fails, throwing the exception and making the Future fail.
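
A sketch of a possible guard, handling 204 explicitly before attempting to unmarshal (illustrative only; the empty-response value is an assumption, not kanadi's actual code):

    response.status match {
      case StatusCodes.NoContent =>
        response.discardEntityBytes()
        Future.successful(CommitCursorResponse(List.empty))
      case _ =>
        Unmarshal(response.entity).to[CommitCursorResponse]
    }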

Add timeout window for initial response headers/status code

In Zalando staging instances of nakadi, we had issues where, when Nakadi disconnected a stream, kanadi would try to reconnect; however, the reconnect would never happen, causing the stream to hang.

My suspicion is that the akka-http client has a timeout, but it only applies once the whole response completes (which never happens on a nakadi stream). If this is the case, then we need to add a timeout from when a request is made for a stream until we get the headers/status code. If we don't get them within this timeout window, then we disconnect the stream (causing it to reconnect).

More investigation needs to be done to confirm whether this is the actual cause; #36 added the necessary logging to help diagnose this issue.

Provide default flowId for Subscriptions API methods.

The following functions in Subscriptions have a default value for flowId if it is not provided:

  • eventsStreamed
  • eventsStrictUnsafe
  • eventsStreamedManaged

but the following do not:

  • eventsStreamedSourceManaged
  • eventsStreamedSource

which forces the client to provide a FlowId manually. It is impossible to use the function org.zalando.kanadi.api#randomFlowId because it is private.

I think it is worth providing a default value for flowId for the functions mentioned above as well. What do you think?
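
For reference, a hedged sketch of what exposing it could look like (the FlowId field name is assumed; randomFlowId is reproduced the way the private helper presumably works):

    import java.util.UUID

    final case class FlowId(value: String) extends AnyVal

    // Making this public, or using it as a default argument on the
    // eventsStreamedSource* methods, would remove the manual burden.
    def randomFlowId(): FlowId = FlowId(UUID.randomUUID().toString)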

Use sbt-release

Use sbt-release so we don't have issues with outdated releases anymore

EventCallback.successPredicateFuture and max_uncommitted_events problem

Let's say we have some asynchronous event batch handler:

Async Handler

    import scala.concurrent.Promise
    import scala.concurrent.duration._
    import scala.util.Random
    import cats.implicits._   // for traverse
    import system.dispatcher  // implicit ExecutionContext from the ActorSystem

    val asyncCallback = EventCallback.successPredicateFuture[SomeEvent] { eventCallbackData =>
      eventCallbackData.subscriptionEvent.events.getOrElse(List.empty).traverse { _ =>
        println(s"got offset: ${eventCallbackData.subscriptionEvent.cursor.offset}")
        val promise = Promise[Boolean]()

        system.scheduler.scheduleOnce(Random.nextInt(4).seconds) {
          println(s"committing offset: ${eventCallbackData.subscriptionEvent.cursor.offset}")
          promise.success(true)
        }

        promise.future
      }.map(_.forall(b => b))
    }

If the max_uncommitted_events parameter is greater than batch_limit, then we could potentially receive several batches from the same partition before we commit the offset:

Subscription

subscriptionsClient.eventsStreamedManaged[SomeEvent](
  subscriptionId,
  asyncCallback,
  streamConfig = Subscriptions.StreamConfig(maxUncommittedEvents = Some(5), batchLimit = Some(1))
)

The output

As a result, we have the following output:

got offset: Cursor(Partition(0),001-0001-000000000000002939,EventTypeName(nakadi-client-test),CursorToken(f4455cc6-860a-457f-b52c-dde1beca69b8))
got offset: Cursor(Partition(0),001-0001-000000000000002940,EventTypeName(nakadi-client-test),CursorToken(1734ec11-b3fe-4ece-aa7d-0386fb8d76ac))
got offset: Cursor(Partition(0),001-0001-000000000000002941,EventTypeName(nakadi-client-test),CursorToken(05a22e44-4d56-4c0a-9f7e-64c1dd85ba3b))
got offset: Cursor(Partition(0),001-0001-000000000000002942,EventTypeName(nakadi-client-test),CursorToken(342553ba-fd6b-4753-a18b-3554e7ce2ba3))
got offset: Cursor(Partition(0),001-0001-000000000000002943,EventTypeName(nakadi-client-test),CursorToken(b95d98d5-42af-4b4a-8bc5-31bb68a6b05a))
committing offset: Cursor(Partition(0),001-0001-000000000000002942,EventTypeName(nakadi-client-test),CursorToken(342553ba-fd6b-4753-a18b-3554e7ce2ba3))
got offset: Cursor(Partition(0),001-0001-000000000000002944,EventTypeName(nakadi-client-test),CursorToken(63024c14-e063-4598-8116-9c476d80167c))
got offset: Cursor(Partition(0),001-0001-000000000000002945,EventTypeName(nakadi-client-test),CursorToken(c0632672-dacd-4463-b867-196737ba8345))
got offset: Cursor(Partition(0),001-0001-000000000000002946,EventTypeName(nakadi-client-test),CursorToken(d5189ee0-65ce-464b-b0b7-eb9b1d4127af))
got offset: Cursor(Partition(0),001-0001-000000000000002947,EventTypeName(nakadi-client-test),CursorToken(eba1e032-3be4-4ecc-a7a3-afeed1b2ffb2))
committing offset: Cursor(Partition(0),001-0001-000000000000002939,EventTypeName(nakadi-client-test),CursorToken(f4455cc6-860a-457f-b52c-dde1beca69b8))
committing offset: Cursor(Partition(0),001-0001-000000000000002940,EventTypeName(nakadi-client-test),CursorToken(1734ec11-b3fe-4ece-aa7d-0386fb8d76ac))
committing offset: Cursor(Partition(0),001-0001-000000000000002941,EventTypeName(nakadi-client-test),CursorToken(05a22e44-4d56-4c0a-9f7e-64c1dd85ba3b))
2019-02-20 22:12:36,382 WARN  Subscriptions {org.zalando.kanadi.api.Subscriptions $anonfun$commitCursors$7} - SubscriptionId: c8c40e43-5869-4747-99b5-d1c2c86e6a28, StreamId: 21b07ef9-380b-4c37-b35d-0f2e637c2c84 At least one cursor failed to commit, details are CommitCursorResponse(List(CommitCursorItemResponse(Cursor(Partition(0),001-0001-000000000000002939,EventTypeName(nakadi-client-test),CursorToken(f4455cc6-860a-457f-b52c-dde1beca69b8)),outdated)))
2019-02-20 22:12:36,397 WARN  Subscriptions {org.zalando.kanadi.api.Subscriptions $anonfun$commitCursors$7} - SubscriptionId: c8c40e43-5869-4747-99b5-d1c2c86e6a28, StreamId: 21b07ef9-380b-4c37-b35d-0f2e637c2c84 At least one cursor failed to commit, details are CommitCursorResponse(List(CommitCursorItemResponse(Cursor(Partition(0),001-0001-000000000000002940,EventTypeName(nakadi-client-test),CursorToken(1734ec11-b3fe-4ece-aa7d-0386fb8d76ac)),outdated)))
2019-02-20 22:12:36,411 WARN  Subscriptions {org.zalando.kanadi.api.Subscriptions $anonfun$commitCursors$7} - SubscriptionId: c8c40e43-5869-4747-99b5-d1c2c86e6a28, StreamId: 21b07ef9-380b-4c37-b35d-0f2e637c2c84 At least one cursor failed to commit, details are CommitCursorResponse(List(CommitCursorItemResponse(Cursor(Partition(0),001-0001-000000000000002941,EventTypeName(nakadi-client-test),CursorToken(05a22e44-4d56-4c0a-9f7e-64c1dd85ba3b)),outdated)))

The problem

If the batch with offsets 2939-2941 fails, we do not retry it, because the offset was already indirectly committed by committing 000000000000002942.

commit_timeout not exposed

Hi,
a question: is there any specific reason why commit_timeout is not exposed in the library's API?
Because of this, it is always the default of 60 s.

Nakadi subscriptions endpoint returns non absolute uri, but Kanadi expects an absolute uri

When querying the /subscriptions endpoint in Nakadi, a SubscriptionQuery is returned that contains a list of subscriptions as well as links (pagination links).

These pagination links contain a URI to the next or previous page:

"_links": {
        "next": {
            "href": "/subscriptions?event_type=de.zalando.logistics.laas.hecate.tour-unload_process_status_changed&owning_application=reroute-service-dev&offset=20&limit=20"
        }
    }

The Kanadi URI decoder expects these links to be absolute URIs:

Uri.parseAbsolute(ParserInput(value))

However, as seen above, Nakadi returns the URI without the host, so an exception is thrown here.
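
A sketch of a more tolerant decoder that falls back to relative parsing (an assumption about a possible fix, not the current code):

    import akka.http.scaladsl.model.Uri
    import akka.parboiled2.ParserInput

    // Accept both absolute URIs and the relative references Nakadi actually
    // returns for pagination links.
    def parseLink(value: String): Uri =
      if (value.startsWith("/")) Uri(value) // relative reference
      else Uri.parseAbsolute(ParserInput(value))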

API change proposal

As part of a new project, we are considering using kanadi as a driver for streaming from nakadi.
Going through the source code, I found some inconvenient parts which put low-level responsibility on the client:
1)

val callback = EventCallback.successAlways[SomeEvent] { eventCallbackData =>
  eventCallbackData.subscriptionEvent.events.getOrElse(List.empty).foreach {
    case e: Event.Business[SomeEvent] =>
      ...
    case e: Event.DataChange[_] =>
      ...
    case e: Event.Undefined[_] =>
      ...
  }
}

I suggest refactoring it to:

val callback = EventCallback.successAlways[Event.Business[SomeEvent]] { // or other message type
  events => // which already has type List[Event.Business[SomeEvent]]
    ...
}

The reason for this is that the Nakadi API does not allow you to have several message types in the same event type, so this pattern matching becomes unnecessary boilerplate every time you create an event batch handler.

2)
Change the type of SubscriptionEvent.events from Option[List[Event[T]]] to NonEmptyList[Event[T]].
As a user of the driver, you do not expect to receive an empty batch, and in case of an error, it should be reported internally by the driver.
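
A sketch of the proposed shape (type and field names simplified; kanadi's real SubscriptionEvent carries more fields):

    import cats.data.NonEmptyList

    sealed trait Event[T] // placeholder for kanadi's Event ADT

    // Before: events: Option[List[Event[T]]]
    // After:  a batch that is non-empty by construction.
    final case class SubscriptionEvent[T](events: NonEmptyList[Event[T]])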

sbt release is always releasing to repositories/snapshots repository

When making a release using:

sbt release

it always releases a SNAPSHOT version to the repositories/snapshots repository:

sbt release
...
[info] Main Scala API documentation successful.
[info] 	published kanadi_2.12 to https://oss.sonatype.org/content/repositories/snapshots/org/zalando/kanadi_2.12/0.9.1-SNAPSHOT/kanadi_2.12-0.9.1-SNAPSHOT.jar
[info] 	published kanadi_2.12 to https://oss.sonatype.org/content/repositories/snapshots/org/zalando/kanadi_2.12/0.9.1-SNAPSHOT/kanadi_2.12-0.9.1-SNAPSHOT-javadoc.jar
[info] 	published kanadi_2.12 to https://oss.sonatype.org/content/repositories/snapshots/org/zalando/kanadi_2.12/0.9.1-SNAPSHOT/kanadi_2.12-0.9.1-SNAPSHOT-sources.jar
[info] 	published kanadi_2.12 to https://oss.sonatype.org/content/repositories/snapshots/org/zalando/kanadi_2.12/0.9.1-SNAPSHOT/kanadi_2.12-0.9.1-SNAPSHOT-sources.jar.asc
[info] 	published kanadi_2.12 to https://oss.sonatype.org/content/repositories/snapshots/org/zalando/kanadi_2.12/0.9.1-SNAPSHOT/kanadi_2.12-0.9.1-SNAPSHOT-javadoc.jar.asc
[info] 	published kanadi_2.12 to https://oss.sonatype.org/content/repositories/snapshots/org/zalando/kanadi_2.12/0.9.1-SNAPSHOT/kanadi_2.12-0.9.1-SNAPSHOT.jar.asc
[info] 	published kanadi_2.12 to https://oss.sonatype.org/content/repositories/snapshots/org/zalando/kanadi_2.12/0.9.1-SNAPSHOT/kanadi_2.12-0.9.1-SNAPSHOT.pom.asc
[info] 	published kanadi_2.12 to https://oss.sonatype.org/content/repositories/snapshots/org/zalando/kanadi_2.12/0.9.1-SNAPSHOT/kanadi_2.12-0.9.1-SNAPSHOT.pom
...

the expected behavior is to release to a staging repository:

[info] 	published kanadi_2.12 to https://oss.sonatype.org/service/local/staging/deploy/maven2/org/zalando/kanadi_2.12/0.9.1/kanadi_2.12-0.9.1.jar
[info] 	published kanadi_2.12 to https://oss.sonatype.org/service/local/staging/deploy/maven2/org/zalando/kanadi_2.12/0.9.1/kanadi_2.12-0.9.1-javadoc.jar
[info] 	published kanadi_2.12 to https://oss.sonatype.org/service/local/staging/deploy/maven2/org/zalando/kanadi_2.12/0.9.1/kanadi_2.12-0.9.1-sources.jar
[info] 	published kanadi_2.12 to https://oss.sonatype.org/service/local/staging/deploy/maven2/org/zalando/kanadi_2.12/0.9.1/kanadi_2.12-0.9.1-sources.jar.asc
[info] 	published kanadi_2.12 to https://oss.sonatype.org/service/local/staging/deploy/maven2/org/zalando/kanadi_2.12/0.9.1/kanadi_2.12-0.9.1-javadoc.jar.asc
[info] 	published kanadi_2.12 to https://oss.sonatype.org/service/local/staging/deploy/maven2/org/zalando/kanadi_2.12/0.9.1/kanadi_2.12-0.9.1.jar.asc
[info] 	published kanadi_2.12 to https://oss.sonatype.org/service/local/staging/deploy/maven2/org/zalando/kanadi_2.12/0.9.1/kanadi_2.12-0.9.1.pom.asc
[info] 	published kanadi_2.12 to https://oss.sonatype.org/service/local/staging/deploy/maven2/org/zalando/kanadi_2.12/0.9.1/kanadi_2.12-0.9.1.pom

At the moment, the release is done manually by running +publishSigned with the version set to one without the -SNAPSHOT suffix.

The future of Kanadi (i.e. 1.0)

So I want to finally create a stabilized Kanadi, which is going to be 1.0. This issue tracks all of the open points I want to consider:

  • Using Monix BIO/observable #129
  • Improving the API (even in breaking ways) such as #89
