
googlecloudplatform / spring-cloud-gcp


New home for Spring Cloud GCP development starting with version 2.0.

License: Apache License 2.0

Shell 0.37% Java 99.01% Kotlin 0.11% HTML 0.41% CSS 0.01% FreeMarker 0.01% Python 0.03% Starlark 0.05%

spring-cloud-gcp's Introduction


Spring Framework on Google Cloud

This project makes it easy for Spring users to run their applications on Google Cloud. You can check our project website here.

For a deep dive into the project, refer to the Spring Framework on Google Cloud reference documentation or the Javadocs.

If you prefer to learn by doing, try taking a look at the Spring Framework on Google Cloud sample applications or the Spring on Google Cloud codelabs.

Currently, this repository provides support for a wide range of Google Cloud services; see the reference documentation for the full list of integrations.

If you have any other ideas, suggestions or bug reports, please use our GitHub issue tracker and let us know!

If you want to collaborate on the project, we would also love to get your pull requests. Before you start working on one, please take a look at our collaboration manual.

Compatibility with Spring Project Versions

This project has direct and transitive dependencies on Spring projects. The table below outlines the Spring Cloud, Spring Boot, and Spring Framework versions that are compatible with each Spring Framework on Google Cloud version.

Spring Framework on Google Cloud | Spring Cloud       | Spring Boot   | Spring Framework | Supported
---------------------------------|--------------------|---------------|------------------|----------
5.x                              | 2023.0.x (Leyton)  | 3.2.x         | 6.1.x            | Yes
4.x                              | 2022.0.x (Kilburn) | 3.0.x, 3.1.x  | 6.x              | Yes
3.x                              | 2021.0.x (Jubilee) | 2.6.x, 2.7.x  | 5.3.x            | Yes
2.0.x                            | 2020.0.x (Ilford)  | 2.4.x, 2.5.x  | 5.3.x            | No

Spring Initializr

Spring Initializr contains Spring Framework on Google Cloud auto-configuration support through the GCP Support entry.

GCP Messaging contains the Spring Framework on Google Cloud messaging support with Google Cloud Pub/Sub working out of the box.

Similarly, GCP Storage contains the Google Cloud Storage support with no other dependencies needed.

Spring Framework on Google Cloud Bill of Materials (BOM)

If you’re a Maven user, add our BOM to the <dependencyManagement> section of your pom.xml. This lets you omit versions for the managed Maven dependencies and instead delegate versioning to the BOM.

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>com.google.cloud</groupId>
            <artifactId>spring-cloud-gcp-dependencies</artifactId>
            <version>5.4.1</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

Snapshots Repository

We offer SNAPSHOT versions of the project that always reflect the latest code changes to the underlying GitHub repository for Spring Framework on Google Cloud via the Sonatype Snapshots Repository:

<repositories>
    <repository>
        <id>snapshots-repo</id>
        <url>https://google.oss.sonatype.org/content/repositories/snapshots</url>
        <snapshots>
            <enabled>true</enabled>
        </snapshots>
    </repository>
</repositories>

Spring Boot Starters

Spring Boot greatly simplifies the Spring Framework on Google Cloud experience. Our starters handle the object instantiation and configuration logic so you don’t have to.

Every starter depends on the GCP starter to provide critical bits of configuration, like the GCP project ID or OAuth2 credentials location. You can configure these as properties in, for example, a properties file:

spring.cloud.gcp.project-id=[YOUR_GCP_PROJECT_ID]
spring.cloud.gcp.credentials.location=file:[LOCAL_PRIVATE_KEY_FILE]
spring.cloud.gcp.credentials.scopes=[SCOPE_1],[SCOPE_2],[SCOPE_3]

These properties are optional and, if not specified, Spring Boot will attempt to automatically find them for you. For details on how Spring Boot finds these properties, refer to the documentation.

Note
If your app is running on Google App Engine or Google Compute Engine, in most cases, you should omit the spring.cloud.gcp.credentials.location property and, instead, let the Spring Framework on Google Cloud Core Starter find the correct credentials for those environments.
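For reference, here is a minimal sketch (not part of the project) of how an application could check what the core starter resolved: it assumes the Spring Framework on Google Cloud core starter is on the classpath and simply injects the auto-configured project ID and credentials beans. The class name and log output are illustrative only.

import com.google.api.gax.core.CredentialsProvider;
import com.google.cloud.spring.core.GcpProjectIdProvider;
import org.springframework.boot.CommandLineRunner;
import org.springframework.stereotype.Component;

// Illustrative component: prints the project ID and credentials type resolved by the core starter.
@Component
public class GcpEnvironmentLogger implements CommandLineRunner {

    private final GcpProjectIdProvider projectIdProvider;
    private final CredentialsProvider credentialsProvider;

    public GcpEnvironmentLogger(GcpProjectIdProvider projectIdProvider, CredentialsProvider credentialsProvider) {
        this.projectIdProvider = projectIdProvider;
        this.credentialsProvider = credentialsProvider;
    }

    @Override
    public void run(String... args) throws Exception {
        System.out.println("Resolved project ID: " + projectIdProvider.getProjectId());
        System.out.println("Credentials type: " + credentialsProvider.getCredentials().getClass().getSimpleName());
    }
}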

spring-cloud-gcp's People

Contributors

artembilan, balopat, bijukunjummen, burkedavison, chengyuanzhao, ddixit14, dependabot-preview[bot], dependabot[bot], dhoard, diegomarquezp, dmitry-s, dzou, eddumelendez, elefeint, emmileaf, joaoandremartins, joewang1127, kioie, marcingrzejszczak, meltsufin, mpeddada1, prash-mi, release-please[bot], renovate-bot, s13o, saturnism, spencergibb, suztomo, viniciusccarvalho, zhumin8


spring-cloud-gcp's Issues

Build broken by `spring-boot-maven-plugin:2.4.0-SNAPSHOT`

Seems to be broken on Spring Cloud GCP Code Samples

[ERROR] Failed to execute goal org.springframework.boot:spring-boot-maven-plugin:2.4.0-SNAPSHOT:repackage (repackage) on project spring-cloud-gcp-samples: Execution repackage of goal org.springframework.boot:spring-boot-maven-plugin:2.4.0-SNAPSHOT:repackage failed: A required class was missing while executing org.springframework.boot:spring-boot-maven-plugin:2.4.0-SNAPSHOT:repackage: org/apache/maven/shared/artifact/filter/collection/ArtifactFilterException
[ERROR] -----------------------------------------------------
[ERROR] realm =    plugin>org.springframework.boot:spring-boot-maven-plugin:2.4.0-SNAPSHOT
[ERROR] strategy = org.codehaus.plexus.classworlds.strategy.SelfFirstStrategy
[ERROR] urls[0] = file:/home/ttomsu/.m2/repository/org/springframework/boot/spring-boot-maven-plugin/2.4.0-SNAPSHOT/spring-boot-maven-plugin-2.4.0-SNAPSHOT.jar
[ERROR] urls[1] = file:/home/ttomsu/.m2/repository/org/codehaus/plexus/plexus-utils/1.1/plexus-utils-1.1.jar
[ERROR] Number of foreign imports: 1
[ERROR] import: Entry[import  from realm ClassRealm[maven.api, parent: null]]
[ERROR] 
[ERROR] -----------------------------------------------------
[ERROR] : org.apache.maven.shared.artifact.filter.collection.ArtifactFilterException
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/PluginContainerException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :spring-cloud-gcp-samples

Rename "master" branch to "main"

  • create main branch
  • protect the main branch the same way master is currently protected
  • update CI/CD scripts and config to refer to the new branch
  • delete master

Anything else we would need to do?

Support `spring.config.import` for Secret Manager

Spring Boot 2.4 introduced the ability to import configuration properties from sources other than application.properties, using the spring.config.import property.

See "Supporting Additional Locations" here.
Also see: ConfigDataLocationResolver and ConfigDataLoader.

This might be a useful feature for Secret Manager in addition to our bootstrap properties source support.

WIP PR: #145.
Additional context: #41.
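For illustration, here is a rough sketch of the resolver/resource pair such support would revolve around, assuming the Spring Boot 2.4 ConfigData APIs mentioned above. The sm:// prefix and class names are hypothetical placeholders, not the design of the WIP PR; a matching ConfigDataLoader and the spring.factories registration are omitted.

import java.util.Collections;
import java.util.List;
import org.springframework.boot.context.config.ConfigDataLocation;
import org.springframework.boot.context.config.ConfigDataLocationResolver;
import org.springframework.boot.context.config.ConfigDataLocationResolverContext;
import org.springframework.boot.context.config.ConfigDataResource;

// Hypothetical resource type: carries the secret coordinates parsed from the import location.
class SecretManagerConfigDataResource extends ConfigDataResource {

    private final String location;

    SecretManagerConfigDataResource(String location) {
        this.location = location;
    }

    String getLocation() {
        return this.location;
    }
}

// Hypothetical resolver: recognizes "sm://" locations so that
// spring.config.import=sm://my-secret could be routed to a ConfigDataLoader.
class SecretManagerConfigDataLocationResolver
        implements ConfigDataLocationResolver<SecretManagerConfigDataResource> {

    private static final String PREFIX = "sm://";

    @Override
    public boolean isResolvable(ConfigDataLocationResolverContext context, ConfigDataLocation location) {
        return location.hasPrefix(PREFIX);
    }

    @Override
    public List<SecretManagerConfigDataResource> resolve(ConfigDataLocationResolverContext context,
            ConfigDataLocation location) {
        return Collections.singletonList(
                new SecretManagerConfigDataResource(location.getNonPrefixedValue(PREFIX)));
    }
}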

Unable to save POJO using FirestoreReactiveRepository

Using Spring Cloud GCP 1.2.6.RELEASE:

@Repository
public interface RecommendationRepository extends FirestoreReactiveRepository<Recommendation> {}

@Data
public class Recommendation {
  @DocumentId private String id;
  private String contestId;
  private String authorName;
  private String title;
  private String publisher;
  private String comment;
  private String url;
  private String submissionTypeId;
  private Recommender recommender;
}

@Data
public class Recommender {
  private String name;
  private String email;
}

When saving a recommendation I get the following error:

o.grpc.StatusRuntimeException: INVALID_ARGUMENT: Document name "projects/<PROJECT_NAME>/databases/(default)/documents/recommendation/" has invalid trailing "/".
	at io.grpc.Status.asRuntimeException(Status.java:533) ~[grpc-api-1.33.0.jar:1.33.0]
	Suppressed: reactor.core.publisher.FluxOnAssembly$OnAssemblyException: 
Error has been observed at the following site(s):
	|_ checkpoint ⇢ Handler com.submie.mocksha.controllers.RecommendationController#save(Recommendation, String, ServerHttpRequest) [DispatcherHandler]
	|_ checkpoint ⇢ org.springframework.boot.actuate.metrics.web.reactive.server.MetricsWebFilter [DefaultWebFilterChain]
	|_ checkpoint ⇢ HTTP POST "/api/v1/recommendations" [ExceptionHandlingWebHandler]
Stack trace:
		at io.grpc.Status.asRuntimeException(Status.java:533) ~[grpc-api-1.33.0.jar:1.33.0]
		at io.grpc.stub.ClientCalls$StreamObserverToCallListenerAdapter.onClose(ClientCalls.java:478) ~[grpc-stub-1.33.0.jar:1.33.0]
		at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:616) ~[grpc-core-1.33.0.jar:1.33.0]
		at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:69) ~[grpc-core-1.33.0.jar:1.33.0]
		at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:802) ~[grpc-core-1.33.0.jar:1.33.0]
		at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:781) ~[grpc-core-1.33.0.jar:1.33.0]
		at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37) ~[grpc-core-1.33.0.jar:1.33.0]
		at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123) ~[grpc-core-1.33.0.jar:1.33.0]
		at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) ~[na:na]
		at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) ~[na:na]
		at java.base/java.lang.Thread.run(Thread.java:832) ~[na:na]

Updates needed for Secret Manager Bootstrap Configuration

In a change in spring-cloud-context, we will now need to set a property in order to enable secret bootstrap initialization: spring.config.use-legacy-processing=true

spring-cloud/spring-cloud-commons#703

@meltsufin - Should we ask users to set this themselves? Or I was thinking - we could add this setting in our secret-manager-starter application.properties file? You could say that the addition of our starter implies they want to enable this...

Add support for Logstash Markers

Is your feature request related to a problem? Please describe.
I am trying spring-cloud-gcp-starter-logging (Spring Cloud 2.2.5.RELEASE) and it works great on GKE. Everything is included properly in the JSON payload, except for the Logstash Markers I'm using when logging.

Describe the solution you'd like
I would like StackdriverJsonLayout to be enhanced, so it would also consider Logstash Markers to be included on the JSON payload.

I think this could be easily achieved if the toJsonMap method in StackdriverJsonLayout were changed to look for Logstash Markers and add them to the map if present. Something like this (the code is inspired by net.logstash.logback.composite.loggingevent.LogstashMarkersJsonProvider):

(...)

@Override
protected Map<String, Object> toJsonMap(ILoggingEvent event) {
    (...)

    addLogstashMarkerIfNecessary(map, event.getMarker());

    return map;
}

private void addLogstashMarkerIfNecessary(Map<String, Object> jsonMap, Marker marker) {
    if (marker == null) {
        return;
    }

    if (marker instanceof ObjectAppendingMarker) {
        ObjectAppendingMarker objectAppendingMarker = (ObjectAppendingMarker) marker;
        jsonMap.put(objectAppendingMarker.getFieldName(), objectAppendingMarker.getFieldValue());
    }

    if (marker.hasReferences()) {
        for (Iterator<?> i = marker.iterator(); i.hasNext(); ) {
            Marker next = (Marker) i.next();
            addLogstashMarkerIfNecessary(jsonMap, next);
        }
    }
}

(...)

Note: I created a Layout that extends org.springframework.cloud.gcp.logging.StackdriverJsonLayout and overrides the toJsonMap method, just adding that piece of logic, and it works as expected:

(...)

@Override
protected Map<String, Object> toJsonMap(ILoggingEvent event) {
    Map<String, Object> jsonMap = super.toJsonMap(event);
    addLogstashMarkerIfNecessary(jsonMap, event.getMarker());
    return jsonMap;
}

(...)

So, technically, it is possible. I do understand this adds Logstash as a dependency, but I think it is worth it (I'm not sure if there is a way to do it generically or in a more decoupled way).

Note: maybe it would also be good to add a property to control whether or not these markers should be added (just as happens with MDC, for instance).

Describe alternatives you've considered
I did try setting up logback-spring.xml in some way that would combine the behaviours of org.springframework.cloud.gcp.logging.StackdriverJsonLayout and net.logstash.logback.composite.loggingevent.LogstashMarkersJsonProvider. I had no success with any combination I tried, which led me to look for a solution at a lower level.

FirestoreTemplate.withParent enhancement

Using org.springframework.cloud:spring-cloud-gcp-dependencies:1.2.6.RELEASE and org.springframework.cloud:spring-cloud-gcp-starter-data-firestore.

To work with sub collections, we currently use <T> FirestoreReactiveOperations withParent(T parent). To use this method we have to either:

  • Have an instance of the parent object
  • Create a dummy instance of the parent object.

There are cases where neither of these is feasible. For example, there are situations where we don't have an instance of the parent object, only its id. We then have to create a dummy/fake instance of the parent object, as the sample project demonstrates. But this can result in unreadable code, because the parent object can be big, with many properties. For example:

firestoreTemplate.withParent(new User("the_actual_id", "", "", "", new NestedObject("", new NestedObject2()), ""))

Reading the source code, firestoreTemplate.withParent will only extract the id value from the supplied parent and discard the rest. So, my wish is an overloaded method:

public FirestoreReactiveOperations withParent(Object id, Class<?> clazz)

With the supplied class object, we should be able to find the collection name and so on. We don't need an actual instance of the parent object, and we don't need to create fake objects.
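For clarity, here is how the proposed overload might read at the call site (hypothetical, since the method does not exist yet):

// Hypothetical usage of the proposed overload: only the parent id and class are needed.
FirestoreReactiveOperations userScoped = firestoreTemplate.withParent("the_actual_id", User.class);

// Sub-collection operations can then be performed without building a dummy User instance.
userScoped.findAll(Recommendation.class);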

Trace ID not populated when using AsyncAppender

When logging configuration is modified to use the AsyncAppender, the trace id is not populated in the Stackdriver log entry.

	<appender name="ASYNC" class="ch.qos.logback.classic.AsyncAppender">
		<appender-ref ref="STACKDRIVER" />
	</appender>

	<root level="INFO">
		<appender-ref ref="ASYNC" />
	</root>

Internal ref: b/170958814

Add reactive support for Spring Cloud GCP PubSub Stream Binder

Spring Cloud GCP PubSub Stream Binder currently doesn't fully support reactive programming with non-blocking I/O. PubSubMessageChannelBinder uses PubSubTemplate instead of PubSubReactiveFactory, and makes use of the org.springframework.cloud.gcp.pubsub.integration.[inbound,outbound] packages from Spring Cloud GCP PubSub, which currently don't support reactive types.

Remove spring-javaformat-checkstyle.version override

Spring Cloud Build keeps the version up to date on master, but the old 2.3.x branch is still on a very old version due to some downstream version incompatibilities.

In this project, we can remove the override in both the main pom.xml and the samples' pom.xml.

Datastore tests compilation seems to be broken

The datastore tests seem to have a compilation error when run with mvn test. This is reproducible locally on the latest master branch sync.

Reported by @suztomo, see #50 for context.

My guess is there was an upstream change in the snapshots we depend on which caused this breakage.

Pub/Sub Spring Cloud Bus Config incompatible with Ilford

I believe there are API changes upstream in spring-cloud-context which break something for our Pub/Sub integration for Spring Cloud Bus. This is related to the Secret Manager breakage here too: #41 (see context there, same root cause PR).

Marking as P0 as a reminder to resolve this before release.

To reproduce the error:

Run:

cd spring-cloud-gcp-pubsub-bus-config-sample/spring-cloud-gcp-pubsub-bus-config-sample-server-local
mvn spring-boot:run

Then visit: http://localhost:8888/application/default

You will get the following IllegalStateException

java.lang.IllegalStateException: ConfigFileApplicationListener [org.springframework.boot.context.config.ConfigFileApplicationListener] is deprecated and can only be used as an EnvironmentPostProcessor
	at org.springframework.boot.context.config.ConfigFileApplicationListener.onApplicationEvent(ConfigFileApplicationListener.java:195) ~[spring-boot-2.4.0-SNAPSHOT.jar:2.4.0-SNAPSHOT]
	at org.springframework.context.event.SimpleApplicationEventMulticaster.doInvokeListener(SimpleApplicationEventMulticaster.java:203) ~[spring-context-5.3.0-M2.jar:5.3.0-M2]
	at org.springframework.context.event.SimpleApplicationEventMulticaster.invokeListener(SimpleApplicationEventMulticaster.java:196) ~[spring-context-5.3.0-M2.jar:5.3.0-M2]
	at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:170) ~[spring-context-5.3.0-M2.jar:5.3.0-M2]
	at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:148) ~[spring-context-5.3.0-M2.jar:5.3.0-M2]
	at org.springframework.boot.context.event.EventPublishingRunListener.environmentPrepared(EventPublishingRunListener.java:80) ~[spring-boot-2.4.0-SNAPSHOT.jar:2.4.0-SNAPSHOT]
	at org.springframework.boot.SpringApplicationRunListeners.lambda$environmentPrepared$0(SpringApplicationRunListeners.java:58) ~[spring-boot-2.4.0-SNAPSHOT.jar:2.4.0-SNAPSHOT]
	at java.base/java.util.ArrayList.forEach(ArrayList.java:1540) ~[na:na]

Provide helper classes to make it easier to write custom serializers for FirestoreTemplate

Spring Cloud GCP already provides a way to write custom mappers for FirestoreTemplate (FirestoreClassMapper). This feature is particularly important if you use data classes in Kotlin (or records in Java 14), because the default deserializer works poorly with these.

It's, however, hard to write custom converters, mostly because all the convenient utility methods in the Google libraries are package-private (which makes sense, because Google's API does not support custom mappers, AFAIK).
Currently, all we have is public Map<String, Value> getFieldsMap() in class Document. Value can be one of 11 different types and can contain nested maps. To sum up, here's what every developer using Spring who wants to write a custom serializer has to do:

  • Substring the document name to get the id
  • Use reflection to find the id field name
  • Traverse the map (returned by getFieldsMap)
  • Decode Value (11 different types)

It's also worth noting that it's not possible to copy Google's decode function in full, because it uses objects that we can't create or don't have access to.
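To illustrate how much has to be hand-rolled today, here is a rough sketch (my own code, not from the project) of the recursive Value decoding a custom mapper ends up implementing; it covers only a handful of the 11 value types mentioned above.

import com.google.firestore.v1.Value;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

// Sketch only: decodes a subset of the possible Value types into plain Java objects.
class ValueDecoderSketch {

    static Object decode(Value value) {
        switch (value.getValueTypeCase()) {
            case NULL_VALUE:
                return null;
            case STRING_VALUE:
                return value.getStringValue();
            case INTEGER_VALUE:
                return value.getIntegerValue();
            case DOUBLE_VALUE:
                return value.getDoubleValue();
            case BOOLEAN_VALUE:
                return value.getBooleanValue();
            case MAP_VALUE: {
                // Nested maps require recursion.
                Map<String, Object> decoded = new LinkedHashMap<>();
                value.getMapValue().getFieldsMap().forEach((name, nested) -> decoded.put(name, decode(nested)));
                return decoded;
            }
            case ARRAY_VALUE:
                return value.getArrayValue().getValuesList().stream()
                        .map(ValueDecoderSketch::decode)
                        .collect(Collectors.toList());
            default:
                throw new IllegalArgumentException("Unhandled value type: " + value.getValueTypeCase());
        }
    }
}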

App Engine Standard: Cloud Logging lines of the same request are grouped but no main log level

Hello,

After I managed to fix a problem with grouping log lines of the same request, I discovered a behaviour that I don't know is intentional, or whether I'm missing a configuration.

In the first generation of App Engine (now Java 8), the log level of the main request reflected the "worst" log level of every log line under that request. For example:

  • if the request produced 1 INFO, 2 WARNING and 1 SEVERE, the log-level of the main request was SEVERE
  • if the request produced 3 INFO the log-level of the main request was INFO

With the current implementation of Spring logging, I can correctly group all the log lines of the same request, but the log level of the main request is ANY.

This is the main request log
[screenshot]

Is there a way to replicate the previous App Engine behaviour, where the log level of the main request took into consideration the actual log levels of each line?

This behaviour was very nice because, with the filter component, you could retrieve all the requests with at least one warning or severe log:
[screenshot]

But right now, because all of them have the ANY value, this filter is not possible anymore:
[screenshot]

Of course I can still filter by HTTP status code, but some requests end successfully while still creating warning logs, for example, so filtering on them this way is more difficult.

The CI does not appear to run all the tests of a module

The GitHub Actions CI is not running all the tests. See logs: https://github.com/GoogleCloudPlatform/spring-cloud-gcp/runs/945379936

Typically you will see something like the following: 0 tests run in modules that do have tests. Will investigate what is going on.

[INFO] --- maven-surefire-plugin:2.22.2:test (default-test) @ spring-cloud-gcp-logging ---
[INFO] 
[INFO] -------------------------------------------------------
[INFO]  T E S T S
[INFO] -------------------------------------------------------
[INFO] 
[INFO] Results:
[INFO] 
[INFO] Tests run: 0, Failures: 0, Errors: 0, Skipped: 0
[INFO] 
[INFO] 

Bad Request This combination of host and port requires TLS.

Describe the bug
I am using spring-cloud-gcp-starter with a custom domain and a custom certificate (not Google-managed).
Although the configuration in app.yaml works fine, at the backend the redirection doesn't work as intended: I am seeing the error
"Bad Request: This combination of host and port requires TLS." on an HTTPS connection.
This is my app.yaml

runtime: java11
instance_class: F2
handlers:
  - url: /.*
    script: auto
    secure: always
    redirect_http_response_code: 301

And the related application.properties

server.ssl.enabled=true
server.ssl.protocol=TLS
server.port=${PORT:8080}
server.ssl.client-auth=none
server.ssl.key-store=classpath:data/someKeystore.p12
server.ssl.key-store-password=somePassword
server.ssl.key-store-type=PKCS12
server.ssl.key-alias=something
server.ssl.key-password=anotherpassword

Sample
I have tested the app locally; for some reason the redirection on localhost doesn't work at all, but if I add https:// the request processes without the 'Bad Request' error.
To reproduce: get a cert from a CA > add it to GAE > use the same cert to configure the Spring Boot server for SSL > deploy the app > see the error.

Docs: To Host or Not To Host?

That is the question 💀

The Spring team has mentioned that it's better for them if the reference docs ("Reference Doc" links in https://spring.io/projects/spring-cloud-gcp#learn) contain links to documentation we can control, because it's easier for us to update our own content than for them to republish.

So, where could we do that? One easy answer is GitHub Pages: it's all in here anyway, so we might as well use it. One big requirement would be to have each docset versioned, preferably with a version-picker widget like https://googleapis.dev/java/spring-cloud-gcp/1.2.6.RELEASE/index.html, but having the version in the URL, as Spring does now, is fine too (example: https://docs.spring.io/spring-cloud-gcp/docs/1.2.6.RELEASE/reference/html/).

Is this repo active now

Hey guys, is this repo now active? If not, any chance you'd let us know what the plans are for it?

Refdoc generation broken

It looks like docs/generate-docs.sh is missing a small Maven configuration to bind to a lifecycle phase, so it's not actually doing any AsciiDoc-to-HTML conversion.

Autoconfigure R2DBC support for Cloud SQL

When using the Spring Cloud GCP Cloud SQL PostgreSQL starter (spring-cloud-gcp-starter-sql-postgresql) you get a JDBC-based stack using org.springframework.boot:spring-boot-starter-jdbc and the org.postgresql:postgresql JDBC driver.

For Spring WebFlux applications I would like to be able to use Cloud SQL via R2DBC instead of JDBC.

I can use R2DBC with Spring Boot and a plain PostgreSQL database via org.springframework.boot:spring-boot-starter-data-r2dbc and the io.r2dbc:r2dbc-postgresql R2DBC driver for PostgreSQL.

I have found https://github.com/GoogleCloudPlatform/cloud-sql-jdbc-socket-factory/blob/master/r2dbc-postgres and an example project at https://github.com/GoogleCloudPlatform/java-docs-samples/tree/master/cloud-sql/r2dbc that show what dependencies to add and how to create a ConnectionFactory bean, but it would be great if Spring Cloud GCP had a starter and auto-configuration for this, so Spring Cloud GCP users wouldn't need to manage and configure these dependencies.
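Until such a starter exists, here is a rough sketch of the kind of ConnectionFactory bean an auto-configuration could provide, following the pattern in the linked sample. The "gcp" driver option and the use of the instance connection name as the host come from the Cloud SQL R2DBC socket factory sample and should be treated as assumptions; the project, instance, and credential values are placeholders.

import io.r2dbc.spi.ConnectionFactories;
import io.r2dbc.spi.ConnectionFactory;
import io.r2dbc.spi.ConnectionFactoryOptions;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Sketch of a manually configured ConnectionFactory for Cloud SQL PostgreSQL over R2DBC,
// assuming the r2dbc-postgres socket factory and io.r2dbc:r2dbc-postgresql are on the classpath.
@Configuration
public class CloudSqlR2dbcConfig {

    @Bean
    public ConnectionFactory cloudSqlConnectionFactory() {
        return ConnectionFactories.get(
                ConnectionFactoryOptions.builder()
                        .option(ConnectionFactoryOptions.DRIVER, "gcp")          // socket factory driver (assumed)
                        .option(ConnectionFactoryOptions.PROTOCOL, "postgresql") // delegate to the Postgres R2DBC driver
                        .option(ConnectionFactoryOptions.HOST, "my-project:my-region:my-instance") // instance connection name
                        .option(ConnectionFactoryOptions.USER, "my-user")
                        .option(ConnectionFactoryOptions.PASSWORD, "my-password")
                        .option(ConnectionFactoryOptions.DATABASE, "my-database")
                        .build());
    }
}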

PubSub binder integration test broken on Boot 2.4 + Config Server 3.0.0.M3

Our https://github.com/GoogleCloudPlatform/spring-cloud-gcp/blob/master/spring-cloud-gcp-samples/spring-cloud-gcp-pubsub-bus-config-sample/spring-cloud-gcp-pubsub-bus-config-sample-test/src/test/java/com/example/LocalSampleAppIntegrationTest.java test is currently broken because of an incompatibility between our floating Spring Boot version (currently 2.4-SNAPSHOT) and pinned Config server (currently, 2.2.3.BUILD-SNAPSHOT).

The break manifests as the Config Server not starting up. I think the source of the incompatibility is https://spring.io/blog/2020/08/14/config-file-processing-in-spring-boot-2-4

Upgrading the Config Server to 3.0.0-M3 fixes the server startup, but the client app doesn't appear to pick up the config on bootstrap. I think it's related to spring-cloud/spring-cloud-config#1695.

Spring Cloud KMS integration

Is your feature request related to a problem? Please describe.
Key Management Service has various uses. It is integrated with the GCP infrastructure and can be used to encrypt/decrypt text in an application (for example, database field encryption).

Describe the solution you'd like
A Spring GCP KMS integration that encrypts and decrypts text.
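As a point of reference, here is a minimal sketch (my own, not an existing Spring Cloud GCP API) of the kind of encrypt/decrypt helper such an integration could wrap, using the existing google-cloud-kms client directly. The class name and key path are placeholders.

import com.google.cloud.kms.v1.CryptoKeyName;
import com.google.cloud.kms.v1.DecryptResponse;
import com.google.cloud.kms.v1.EncryptResponse;
import com.google.cloud.kms.v1.KeyManagementServiceClient;
import com.google.protobuf.ByteString;
import java.io.IOException;
import java.util.Base64;

// Sketch only: encrypts and decrypts text with a single KMS crypto key.
public class KmsTextEncryptorSketch {

    private final CryptoKeyName keyName; // e.g. CryptoKeyName.of("my-project", "global", "my-ring", "my-key")

    public KmsTextEncryptorSketch(CryptoKeyName keyName) {
        this.keyName = keyName;
    }

    public String encrypt(String plaintext) throws IOException {
        try (KeyManagementServiceClient client = KeyManagementServiceClient.create()) {
            EncryptResponse response = client.encrypt(keyName, ByteString.copyFromUtf8(plaintext));
            return Base64.getEncoder().encodeToString(response.getCiphertext().toByteArray());
        }
    }

    public String decrypt(String base64Ciphertext) throws IOException {
        try (KeyManagementServiceClient client = KeyManagementServiceClient.create()) {
            DecryptResponse response =
                    client.decrypt(keyName, ByteString.copyFrom(Base64.getDecoder().decode(base64Ciphertext)));
            return response.getPlaintext().toStringUtf8();
        }
    }
}

A real integration would presumably reuse a single client and expose this through a Spring-friendly abstraction rather than creating a client per call.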


Can't create ConsumerDestination with group value only (Spring Cloud GCP PubSub Stream Binder)

Currently I can't create a consumer bean whose subscription comes from the group value only.
I also checked the current code; it doesn't support deriving the subscription from just the group value,
as it uses <topicName>.<group> as the subscription name.
[screenshot]

Below is my sample
spring.cloud.stream.bindings.consumeMessage-in-0.destination=order
spring.cloud.stream.bindings.consumeMessage-in-0.group=order-supscription

Can I create a new consumer which consumes messages from the order-supscription subscription?

Firestore transaction error - Async resource cleanup failed after onComplete

When an error occurs on server response, the original exception is replaced by

java.lang.RuntimeException: Async resource cleanup failed after onComplete

Test that replicates the issue:

@Test
public void writeTransactionException() {

   FirestoreTemplate template = getFirestoreTemplate();

   ReactiveFirestoreTransactionManager txManager =
         new ReactiveFirestoreTransactionManager(this.firestoreStub, this.parent, this.classMapper);
   TransactionalOperator operator = TransactionalOperator.create(txManager);

   doAnswer(invocation -> {
      StreamObserver<CommitResponse> streamObserver = invocation.getArgument(1);

      streamObserver.onError(new RuntimeException("Our exception"));

      return null;
   }).when(this.firestoreStub).commit(any(), any());

   template.save(new FirestoreTemplateTests.TestEntity("e2", 100L))
         .as(operator::transactional)
         .as(StepVerifier::create)
         .verifyErrorMessage("Our exception");

   verify(this.firestoreStub).beginTransaction(any(), any());
   verify(this.firestoreStub).commit(any(), any());
}

Produces the following error:

16:41:38.536 [main] ERROR o.s.t.r.TransactionalOperatorImpl - Application exception overridden by rollback exception
java.lang.RuntimeException: Async resource cleanup failed after onComplete
	at reactor.core.publisher.FluxUsingWhen$CommitInner.onError(FluxUsingWhen.java:533)
	at reactor.core.publisher.MonoFlatMap$FlatMapMain.secondError(MonoFlatMap.java:192)
	at reactor.core.publisher.MonoFlatMap$FlatMapInner.onError(MonoFlatMap.java:259)
	at reactor.core.publisher.Operators$MonoSubscriber.onError(Operators.java:1831)
	at reactor.core.publisher.MonoIgnoreThen$ThenIgnoreInner.onError(MonoIgnoreThen.java:243)
	at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onError(FluxOnErrorResume.java:106)
	at reactor.core.publisher.MonoIgnoreThen$ThenIgnoreMain.drain(MonoIgnoreThen.java:139)
	at reactor.core.publisher.MonoIgnoreThen.subscribe(MonoIgnoreThen.java:56)
	at reactor.core.publisher.Mono.subscribe(Mono.java:3987)
	at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onError(FluxOnErrorResume.java:103)
	at reactor.core.publisher.Operators$MonoSubscriber.onError(Operators.java:1831)
	at reactor.core.publisher.MonoIgnoreThen$ThenIgnoreInner.onError(MonoIgnoreThen.java:243)
	at reactor.core.publisher.MonoFlatMap$FlatMapMain.onError(MonoFlatMap.java:172)
	at reactor.core.publisher.MonoPeekTerminal$MonoTerminalPeekSubscriber.onError(MonoPeekTerminal.java:258)
	at reactor.core.publisher.MonoCreate$DefaultMonoSink.error(MonoCreate.java:189)
	at com.google.cloud.spring.data.firestore.util.ObservableReactiveUtil$UnaryStreamObserver.onError(ObservableReactiveUtil.java:137)
	at com.google.cloud.spring.data.firestore.transaction.ReactiveFirestoreTransactionManagerTest.lambda$writeTransactionException$5(ReactiveFirestoreTransactionManagerTest.java:173)
	at org.mockito.internal.stubbing.StubbedInvocationMatcher.answer(StubbedInvocationMatcher.java:40)
	at org.mockito.internal.handler.MockHandlerImpl.handle(MockHandlerImpl.java:99)
	at org.mockito.internal.handler.NullResultGuardian.handle(NullResultGuardian.java:29)
	at org.mockito.internal.handler.InvocationNotifierHandler.handle(InvocationNotifierHandler.java:33)
	at org.mockito.internal.creation.bytebuddy.MockMethodInterceptor.doIntercept(MockMethodInterceptor.java:82)
	at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.handle(MockMethodAdvice.java:147)
	at com.google.firestore.v1.FirestoreGrpc$FirestoreStub.commit(FirestoreGrpc.java:1187)
	at com.google.cloud.spring.data.firestore.transaction.ReactiveFirestoreTransactionManager.lambda$doCommit$3(ReactiveFirestoreTransactionManager.java:112)
	at com.google.cloud.spring.data.firestore.util.ObservableReactiveUtil.lambda$unaryCall$0(ObservableReactiveUtil.java:51)
	at reactor.core.publisher.MonoCreate.subscribe(MonoCreate.java:57)
	at reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:64)
	at reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:52)
	at reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:52)
	at reactor.core.publisher.Mono.subscribe(Mono.java:3987)
	at reactor.core.publisher.MonoIgnoreThen$ThenIgnoreMain.drain(MonoIgnoreThen.java:173)
	at reactor.core.publisher.MonoIgnoreThen.subscribe(MonoIgnoreThen.java:56)
	at reactor.core.publisher.Mono.subscribe(Mono.java:3987)
	at reactor.core.publisher.MonoIgnoreThen$ThenIgnoreMain.drain(MonoIgnoreThen.java:173)
	at reactor.core.publisher.MonoIgnoreThen.subscribe(MonoIgnoreThen.java:56)
	at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:157)
	at reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:120)
	at reactor.core.publisher.Operators$ScalarSubscription.request(Operators.java:2346)
	at reactor.core.publisher.FluxMap$MapSubscriber.request(FluxMap.java:162)
	at reactor.core.publisher.MonoFlatMap$FlatMapMain.onSubscribe(MonoFlatMap.java:110)
	at reactor.core.publisher.FluxMap$MapSubscriber.onSubscribe(FluxMap.java:92)
	at reactor.core.publisher.MonoJust.subscribe(MonoJust.java:54)
	at reactor.core.publisher.MonoDeferContextual.subscribe(MonoDeferContextual.java:55)
	at reactor.core.publisher.Mono.subscribe(Mono.java:3987)
	at reactor.core.publisher.FluxUsingWhen$UsingWhenSubscriber.onComplete(FluxUsingWhen.java:397)
	at reactor.core.publisher.MonoNext$NextSubscriber.onComplete(MonoNext.java:102)
	at reactor.core.publisher.MonoNext$NextSubscriber.onNext(MonoNext.java:83)
	at reactor.core.publisher.MonoFlatMapMany$FlatMapManyInner.onNext(MonoFlatMapMany.java:250)
	at reactor.core.publisher.FluxPeekFuseable$PeekFuseableSubscriber.onNext(FluxPeekFuseable.java:210)
	at reactor.core.publisher.FluxJust$WeakScalarSubscription.request(FluxJust.java:100)
	at reactor.core.publisher.FluxPeekFuseable$PeekFuseableSubscriber.request(FluxPeekFuseable.java:144)
	at reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain.onSubscribeInner(MonoFlatMapMany.java:150)
	at reactor.core.publisher.MonoFlatMapMany$FlatMapManyInner.onSubscribe(MonoFlatMapMany.java:245)
	at reactor.core.publisher.FluxPeekFuseable$PeekFuseableSubscriber.onSubscribe(FluxPeekFuseable.java:178)
	at reactor.core.publisher.FluxJust.subscribe(FluxJust.java:70)
	at reactor.core.publisher.Flux.subscribe(Flux.java:8095)
	at reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain.onNext(MonoFlatMapMany.java:195)
	at reactor.core.publisher.Operators$ScalarSubscription.request(Operators.java:2346)
	at reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain.onSubscribe(MonoFlatMapMany.java:141)
	at reactor.core.publisher.MonoCurrentContext.subscribe(MonoCurrentContext.java:36)
	at reactor.core.publisher.MonoFromFluxOperator.subscribe(MonoFromFluxOperator.java:81)
	at reactor.core.publisher.MonoUsingWhen.subscribe(MonoUsingWhen.java:87)
	at reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:64)
	at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:157)
	at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1784)
	at reactor.core.publisher.MonoFlatMap$FlatMapInner.onNext(MonoFlatMap.java:249)
	at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1784)
	at reactor.core.publisher.MonoFlatMap$FlatMapInner.onNext(MonoFlatMap.java:249)
	at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1784)
	at reactor.core.publisher.MonoFlatMap$FlatMapInner.onNext(MonoFlatMap.java:249)
	at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:79)
	at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1784)
	at reactor.core.publisher.MonoIgnoreThen$ThenIgnoreMain.drain(MonoIgnoreThen.java:148)
	at reactor.core.publisher.MonoIgnoreThen.subscribe(MonoIgnoreThen.java:56)
	at reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:52)
	at reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:64)
	at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:157)
	at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1784)
	at reactor.core.publisher.FluxDefaultIfEmpty$DefaultIfEmptySubscriber.onComplete(FluxDefaultIfEmpty.java:107)
	at reactor.core.publisher.FluxMap$MapSubscriber.onComplete(FluxMap.java:142)
	at reactor.core.publisher.Operators.complete(Operators.java:135)
	at reactor.core.publisher.MonoEmpty.subscribe(MonoEmpty.java:45)
	at reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:64)
	at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:157)
	at reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:120)
	at reactor.core.publisher.Operators$ScalarSubscription.request(Operators.java:2346)
	at reactor.core.publisher.FluxMap$MapSubscriber.request(FluxMap.java:162)
	at reactor.core.publisher.MonoFlatMap$FlatMapMain.onSubscribe(MonoFlatMap.java:110)
	at reactor.core.publisher.FluxMap$MapSubscriber.onSubscribe(FluxMap.java:92)
	at reactor.core.publisher.MonoJust.subscribe(MonoJust.java:54)
	at reactor.core.publisher.MonoDeferContextual.subscribe(MonoDeferContextual.java:55)
	at reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:64)
	at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:157)
	at reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:120)
	at reactor.core.publisher.Operators$ScalarSubscription.request(Operators.java:2346)
	at reactor.core.publisher.FluxMap$MapSubscriber.request(FluxMap.java:162)
	at reactor.core.publisher.MonoFlatMap$FlatMapMain.onSubscribe(MonoFlatMap.java:110)
	at reactor.core.publisher.FluxMap$MapSubscriber.onSubscribe(FluxMap.java:92)
	at reactor.core.publisher.MonoJust.subscribe(MonoJust.java:54)
	at reactor.core.publisher.MonoDeferContextual.subscribe(MonoDeferContextual.java:55)
	at reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:64)
	at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:157)
	at reactor.core.publisher.Operators$ScalarSubscription.request(Operators.java:2346)
	at reactor.core.publisher.MonoFlatMap$FlatMapMain.onSubscribe(MonoFlatMap.java:110)
	at reactor.core.publisher.MonoJust.subscribe(MonoJust.java:54)
	at reactor.core.publisher.MonoDeferContextual.subscribe(MonoDeferContextual.java:55)
	at reactor.core.publisher.Mono.subscribe(Mono.java:3987)
	at reactor.test.DefaultStepVerifierBuilder$DefaultStepVerifier.toVerifierAndSubscribe(DefaultStepVerifierBuilder.java:868)
	at reactor.test.DefaultStepVerifierBuilder$DefaultStepVerifier.verify(DefaultStepVerifierBuilder.java:824)
	at reactor.test.DefaultStepVerifierBuilder$DefaultStepVerifier.verify(DefaultStepVerifierBuilder.java:816)
	at reactor.test.DefaultStepVerifierBuilder.verifyErrorMessage(DefaultStepVerifierBuilder.java:668)
	at com.google.cloud.spring.data.firestore.transaction.ReactiveFirestoreTransactionManagerTest.writeTransactionException(ReactiveFirestoreTransactionManagerTest.java:182)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
	at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
	at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
	at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
	at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
	at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:69)
	at com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:33)
	at com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:220)
	at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:53)
Caused by: java.lang.RuntimeException: Our exception
	... 122 common frames omitted

Might be similar to spring-projects/spring-data-r2dbc#48

Add support for Pub/Sub message ordering

PubSubAdmin.createSubscription() will need to be expanded to allow enabling ordering, and whenever we create a PubsubMessage, we need to be able to set the ordering key for it.
Also, the Pub/Sub Cloud Stream Binder will need to be updated to support the option.
See: https://cloud.google.com/pubsub/docs/ordering
See: https://cloud.google.com/pubsub/docs/publisher#using_ordering_keys

Q: Does it make sense to expose the keys as partitions in a Kafka-binder-like API?
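For reference, here is a minimal sketch of what enabling ordering looks like with the underlying Pub/Sub client (topic name and ordering key are placeholders); these are the two knobs the Spring abstractions would need to expose.

import com.google.cloud.pubsub.v1.Publisher;
import com.google.protobuf.ByteString;
import com.google.pubsub.v1.PubsubMessage;
import com.google.pubsub.v1.TopicName;

class OrderedPublishSketch {

    static void publishOrdered() throws Exception {
        // Ordering must be enabled on the publisher before ordering keys are honored.
        Publisher publisher = Publisher.newBuilder(TopicName.of("my-project", "my-topic"))
                .setEnableMessageOrdering(true)
                .build();

        // Messages carrying the same ordering key are delivered in publish order.
        PubsubMessage message = PubsubMessage.newBuilder()
                .setData(ByteString.copyFromUtf8("payload"))
                .setOrderingKey("customer-42")
                .build();

        publisher.publish(message);
        publisher.shutdown();
    }
}

The subscription also needs message ordering enabled when it is created, which is where the PubSubAdmin.createSubscription() change above comes in.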

Ability to provide ConsumerEndpointCustomizer for PubSubMessageChannelBinder

I'm evaluating the Pub/Sub emulator for development purposes (using a dedicated @Profile). I'm unable to force the emulator to stop redelivering messages (retry policies are not supported), which undermines the whole concept. What I want to achieve is to ack all messages regardless of whether an exception is thrown or not.

As there is no AckMode supporting this case, I'd like to be able to provide my own implementation of ConsumerEndpointCustomizer to the PubSubMessageChannelBinder.
Due to the different way Spring Cloud binders are bootstrapped (compared to other Spring Boot / Cloud components), I'm not able to get the Binder bean in my Configuration class and simply invoke setConsumerEndpointCustomizer on it.

The only way I was able to achieve my goal was to duplicate the whole PubSubBinderConfiguration; thanks to @ConditionalOnMissingBean(Binder.class), I could provide my own PubSubMessageChannelBinder instantiation, where I set up a different error handler (via the mentioned ConsumerEndpointCustomizer) and swallow any exception that my @StreamListener throws.

I'm using spring-cloud-gcp-pubsub-stream-binder:1.2.5.RELEASE
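For context, here is a sketch of the kind of customizer bean I would like to be able to plug in, assuming spring-cloud-stream's ConsumerEndpointCustomizer contract; whether (and how) the binder picks it up is exactly what this request is about. Switching the ack mode is only a placeholder here; the real goal would be to install an error handler that acks failed messages.

import org.springframework.cloud.gcp.pubsub.integration.AckMode;
import org.springframework.cloud.gcp.pubsub.integration.inbound.PubSubInboundChannelAdapter;
import org.springframework.cloud.stream.config.ConsumerEndpointCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Sketch only: customizes every Pub/Sub inbound channel adapter created by the binder.
@Configuration
public class EmulatorBinderCustomization {

    @Bean
    public ConsumerEndpointCustomizer<PubSubInboundChannelAdapter> pubSubEndpointCustomizer() {
        return (endpoint, destinationName, group) -> endpoint.setAckMode(AckMode.AUTO_ACK);
    }
}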

Could not autowire. no beans of type FirestoreTemplate found

Describe the bug
When trying to initialize the Firestore template, IntelliJ reports "Could not autowire. No beans of type FirestoreTemplate found". Is there a simple way to resolve this?

Using:
'org.springframework.boot' version '2.3.5.RELEASE'
"org.springframework.cloud:spring-cloud-gcp-data-firestore:1.2.6.RELEASE"
"org.springframework.cloud:spring-cloud-gcp-starter-data-firestore:1.2.6.RELEASE"

Sample
[screenshot]
