Amazon Web Services (AWS) JDBC Driver

The Amazon Web Services (AWS) JDBC Driver has been redesigned as an advanced JDBC wrapper.

The wrapper is complementary to an existing JDBC driver and aims to extend the functionality of the driver to enable applications to take full advantage of the features of clustered databases such as Amazon Aurora. In other words, the AWS JDBC Driver does not connect directly to any database, but enables support of AWS and Aurora functionalities on top of an underlying JDBC driver of the user's choice.

The AWS JDBC Driver is targeted to work with any existing JDBC driver. Currently, the AWS JDBC Driver has been validated to support the PostgreSQL JDBC Driver, MySQL JDBC Driver, and MariaDB JDBC Driver.

In conjunction with the JDBC Drivers for PostgreSQL, MySQL, and MariaDB, the AWS JDBC Driver enables functionalities from Amazon Aurora such as fast failover for PostgreSQL and MySQL Aurora clusters. It also introduces integration with AWS authentication services such as AWS Identity and Access Management (IAM) and AWS Secrets Manager.

About the Wrapper

Hosting a database cluster in the cloud with Aurora provides features and configurations for maximum performance and availability, such as database failover. However, most existing drivers either do not support those features or cannot take full advantage of them.

The main idea behind the AWS JDBC Driver is to add a software layer on top of an existing JDBC driver that would enable all the enhancements brought by Aurora, without requiring users to change their workflow with their databases and existing JDBC drivers.

What is Failover?

In an Amazon Aurora database cluster, failover is a mechanism by which Aurora automatically repairs the cluster status when a primary DB instance becomes unavailable. It achieves this goal by electing an Aurora Replica to become the new primary DB instance, so that the DB cluster can provide maximum availability to a primary read-write DB instance. The AWS JDBC Driver is designed to understand the situation and coordinate with the cluster in order to provide minimal downtime and allow connections to be very quickly restored in the event of a DB instance failure.

Benefits of the AWS JDBC Driver

Although Aurora is able to provide maximum availability through the use of failover, existing client drivers do not currently support this functionality. This is partially due to the time required for the DNS record of the new primary DB instance to be fully resolved so that the connection can be properly directed. The AWS JDBC Driver allows customers to continue using their existing community drivers while fully exploiting failover behavior: it maintains a cache of the Aurora cluster topology and each DB instance's role (Aurora Replica or primary DB instance). This topology is obtained via a direct query to the Aurora database, essentially providing a shortcut that bypasses the delays caused by DNS resolution. With this knowledge, the AWS JDBC Driver can monitor the Aurora DB cluster status more closely, so that a connection to the new primary DB instance can be established as quickly as possible.

Enhanced Failure Monitoring

Since a database failover is usually only detected when a network or connection timeout is reached, the AWS JDBC Driver introduces an enhanced, customizable way to identify a database outage faster.

Enhanced Failure Monitoring (EFM) is a feature available from the Host Monitoring Connection Plugin that periodically checks the connected database node's health and availability. If a database node is determined to be unhealthy, the connection is aborted (and potentially routed to another healthy node in the cluster).
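
As a rough sketch (not an official configuration), EFM can be enabled through connection properties. The failover,efm plugin list matches plugin codes used elsewhere in this document, and the property names come from the Properties table below; the endpoint, credentials, and timing values are placeholders rather than recommendations.

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class EfmExample {
  public static void main(String[] args) throws Exception {
    Properties props = new Properties();
    props.setProperty("user", "myUser");         // placeholder credentials
    props.setProperty("password", "myPassword");
    // Enable the failover plugin together with host monitoring (EFM).
    props.setProperty("wrapperPlugins", "failover,efm");
    // Example values: start probing after 30s of inactivity, probe every 5s,
    // and declare the node unhealthy after 3 consecutive failed probes.
    props.setProperty("failureDetectionTime", "30000");
    props.setProperty("failureDetectionInterval", "5000");
    props.setProperty("failureDetectionCount", "3");

    try (Connection conn = DriverManager.getConnection(
        "jdbc:aws-wrapper:postgresql://my-cluster.cluster-xyz.us-east-1.rds.amazonaws.com:5432/mydb",
        props)) {
      // Use the connection as usual; if the monitored node is deemed
      // unhealthy, the connection is aborted so failover can take over.
    }
  }
}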

Using the AWS JDBC Driver with RDS Multi-AZ DB Clusters

AWS RDS Multi-AZ DB Clusters can switch the current writer node over to another node in the cluster in approximately one second or less, for example during a minor engine version upgrade or OS maintenance operations. The AWS JDBC Driver has been optimized for such fast failover when working with AWS RDS Multi-AZ DB Clusters.

With the failover plugin, the downtime during certain DB cluster operations, such as engine minor version upgrades, can be reduced to one second or even less with finely tuned parameters. It supports both MySQL and PostgreSQL clusters.
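
A minimal sketch of such tuning, using the failover parameters from the Properties table below; the values shown are illustrative, not recommendations:

import java.util.Properties;

final class FailoverTuning {

  // Builds example failover settings; all values below are placeholders.
  static Properties failoverProps() {
    Properties props = new Properties();
    props.setProperty("wrapperPlugins", "failover");
    props.setProperty("failoverTimeoutMs", "30000");                   // overall failover budget
    props.setProperty("failoverClusterTopologyRefreshRateMs", "2000"); // topology refresh rate during failover
    props.setProperty("failoverReaderConnectTimeoutMs", "10000");      // per-reader connect timeout
    props.setProperty("failoverWriterReconnectIntervalMs", "2000");    // writer reconnect retry interval
    return props;
  }
}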

Visit this page for more details.

Using the AWS JDBC Driver with plain RDS databases

The AWS JDBC Driver also works with RDS provided databases that are not Aurora.

Please visit this page for more information.

Getting Started

For more information on how to download the AWS JDBC Driver, minimum requirements to use it, and how to integrate it within your project and with your JDBC driver of choice, please visit the Getting Started page.

Maven Central

You can find our driver by searching The Central Repository for GroupId software.amazon.jdbc and ArtifactId aws-advanced-jdbc-wrapper.

<!-- Add the following dependency to your pom.xml, -->
<!-- replacing LATEST with the specific version as required -->

<dependency>
  <groupId>software.amazon.jdbc</groupId>
  <artifactId>aws-advanced-jdbc-wrapper</artifactId>
  <version>LATEST</version>
</dependency>
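
With the jar on the classpath, JDBC 4+ service loading registers the wrapper automatically. As an optional sanity check, the driver class can be loaded and looked up explicitly (the URL below is a placeholder):

import java.sql.DriverManager;

public class DriverCheck {
  public static void main(String[] args) throws Exception {
    // Redundant under JDBC 4+ auto-loading, but confirms the jar is present.
    Class.forName("software.amazon.jdbc.Driver");
    // Throws SQLException if no registered driver accepts the URL.
    System.out.println(DriverManager.getDriver("jdbc:aws-wrapper:postgresql://localhost:5432/mydb"));
  }
}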

Properties

Parameter Reference Documentation Link
wrapperDialect DialectManager.DIALECT Dialects, and whether you should include it.
wrapperPlugins PropertyDefinition.PLUGINS
secretsManagerSecretId AwsSecretsManagerConnectionPlugin.SECRET_ID_PROPERTY SecretsManagerPlugin
secretsManagerRegion AwsSecretsManagerConnectionPlugin.REGION_PROPERTY SecretsManagerPlugin
wrapperDriverName DriverMetaDataConnectionPlugin.WRAPPER_DRIVER_NAME DriverMetaDataConnectionPlugin
failoverMode FailoverConnectionPlugin.FAILOVER_MODE FailoverPlugin
clusterInstanceHostPattern AuroraHostListProvider.CLUSTER_INSTANCE_HOST_PATTERN FailoverPlugin
enableClusterAwareFailover FailoverConnectionPlugin.ENABLE_CLUSTER_AWARE_FAILOVER FailoverPlugin
failoverClusterTopologyRefreshRateMs FailoverConnectionPlugin.FAILOVER_CLUSTER_TOPOLOGY_REFRESH_RATE_MS FailoverPlugin
failoverReaderConnectTimeoutMs FailoverConnectionPlugin.FAILOVER_READER_CONNECT_TIMEOUT_MS FailoverPlugin
failoverTimeoutMs FailoverConnectionPlugin.FAILOVER_TIMEOUT_MS FailoverPlugin
failoverWriterReconnectIntervalMs FailoverConnectionPlugin.FAILOVER_WRITER_RECONNECT_INTERVAL_MS FailoverPlugin
failureDetectionCount HostMonitoringConnectionPlugin.FAILURE_DETECTION_COUNT HostMonitoringPlugin
failureDetectionEnabled HostMonitoringConnectionPlugin.FAILURE_DETECTION_ENABLED HostMonitoringPlugin
failureDetectionInterval HostMonitoringConnectionPlugin.FAILURE_DETECTION_INTERVAL HostMonitoringPlugin
failureDetectionTime HostMonitoringConnectionPlugin.FAILURE_DETECTION_TIME HostMonitoringPlugin
monitorDisposalTime MonitorServiceImpl.MONITOR_DISPOSAL_TIME_MS HostMonitoringPlugin
iamDefaultPort IamAuthConnectionPlugin.IAM_DEFAULT_PORT IamAuthenticationPlugin
iamHost IamAuthConnectionPlugin.IAM_HOST IamAuthenticationPlugin
iamRegion IamAuthConnectionPlugin.IAM_REGION IamAuthenticationPlugin
iamExpiration IamAuthConnectionPlugin.IAM_EXPIRATION IamAuthenticationPlugin
awsProfile PropertyDefinition.AWS_PROFILE AWS Advanced JDBC Driver Parameters
wrapperLogUnclosedConnections PropertyDefinition.LOG_UNCLOSED_CONNECTIONS AWS Advanced JDBC Driver Parameters
wrapperLoggerLevel PropertyDefinition.LOGGER_LEVEL Logging
wrapperProfileName PropertyDefinition.PROFILE_NAME Configuration Profiles
autoSortWrapperPluginOrder PropertyDefinition.AUTO_SORT_PLUGIN_ORDER Plugins
loginTimeout PropertyDefinition.LOGIN_TIMEOUT AWS Advanced JDBC Driver Parameters
connectTimeout PropertyDefinition.CONNECT_TIMEOUT AWS Advanced JDBC Driver Parameters
socketTimeout PropertyDefinition.SOCKET_TIMEOUT AWS Advanced JDBC Driver Parameters
tcpKeepAlive PropertyDefinition.TCP_KEEP_ALIVE AWS Advanced JDBC Driver Parameters

A Secret ARN has the following format: arn:aws:secretsmanager:<Region>:<AccountId>:secret:SecretName-6RandomCharacters
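
As a hedged sketch, the Secrets Manager plugin can be wired up through plain connection properties. The awsSecretsManager plugin code also appears in the Tomcat configuration later in this document, and the property names are from the Properties table above; the secret ID, region, and endpoint are placeholders.

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class SecretsManagerExample {
  public static void main(String[] args) throws Exception {
    Properties props = new Properties();
    props.setProperty("wrapperPlugins", "awsSecretsManager");
    // Either the secret's friendly name or its full ARN (placeholder value).
    props.setProperty("secretsManagerSecretId", "myDbSecret");
    props.setProperty("secretsManagerRegion", "us-east-1");

    // No user/password set here; the plugin retrieves them from the secret.
    try (Connection conn = DriverManager.getConnection(
        "jdbc:aws-wrapper:postgresql://my-cluster.cluster-xyz.us-east-1.rds.amazonaws.com:5432/mydb",
        props)) {
      // ...
    }
  }
}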

Logging

Logging is a useful tool for troubleshooting any issue you might experience while using the AWS JDBC Driver.

To learn how to enable and configure logging, check out the Logging section.
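
For instance, assuming wrapperLoggerLevel (from the Properties table above) accepts java.util.logging level names, verbose driver logging could be switched on per connection like this:

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class LoggingExample {
  public static void main(String[] args) throws Exception {
    Properties props = new Properties();
    props.setProperty("user", "myUser");         // placeholder credentials
    props.setProperty("password", "myPassword");
    // Assumed to accept java.util.logging level names; FINEST is the most verbose.
    props.setProperty("wrapperLoggerLevel", "FINEST");
    try (Connection conn = DriverManager.getConnection(
        "jdbc:aws-wrapper:postgresql://localhost:5432/mydb", props)) {
      // Driver-internal messages are now emitted through java.util.logging.
    }
  }
}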

Documentation

Technical documentation regarding the functionality of the AWS JDBC Driver will be maintained in this GitHub repository. Since the AWS JDBC Driver requires an underlying JDBC driver, please refer to the individual driver's documentation for driver-specific information.

Using the AWS JDBC Driver

To find all the documentation and concrete examples on how to use the AWS JDBC Driver, please refer to the AWS JDBC Driver Documentation page.

Known Limitations

Amazon RDS Blue/Green Deployments

Although the AWS Advanced JDBC Wrapper does not officially support AWS Blue/Green Deployments, the combination of the wrapper and the Failover Plugin has been validated for use with clusters that employ them. General connectivity to both Blue and Green clusters is in place, but some failover cases are not fully supported.

The current limitations are:

  • After a Blue/Green switchover, the wrapper may not be able to properly detect the new topology and handle failover, as there are discrepancies between the metadata and the available endpoints.
  • The specific version requirements for Aurora MySQL versus Aurora PostgreSQL may vary, as the internal systems used by the wrapper can differ.[1]

The development team is aware of these limitations and is working to improve the wrapper's awareness and handling of Blue/Green switchovers. In the meantime, users can consider utilizing the enableGreenNodeReplacement configuration parameter, which allows the driver to override incorrect topology metadata and try to connect to available new Blue endpoints.
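
A minimal sketch, assuming enableGreenNodeReplacement is a boolean connection property passed like any other:

import java.util.Properties;

final class BlueGreenWorkaround {
  static Properties greenNodeReplacementProps() {
    Properties props = new Properties();
    // Assumed boolean flag: lets the driver override stale topology metadata
    // and attempt connections to the new Blue endpoints after a switchover.
    props.setProperty("enableGreenNodeReplacement", "true");
    return props;
  }
}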

Amazon Aurora Global Databases

This driver currently does not support failover with Amazon Aurora Global Databases. While it is possible to connect to global databases, failing over to a secondary cluster will result in errors, and other unforeseen issues may occur when working with them. Support for Amazon Aurora Global Databases is in the backlog, but we cannot comment on a timeline right now.

Virtual Threading

Due to the use of synchronized blocks in the AWS JDBC Driver, pinning may occur with virtual threads. This will not cause the AWS JDBC Driver to behave incorrectly, but it may hinder scalability when using virtual threads with the AWS JDBC Driver.

Examples

Description Examples
Using the AWS JDBC Driver to get a simple connection PostgreSQL
Using the AWS JDBC Driver with failover handling PostgreSQL
Using the AWS IAM Authentication Plugin with DriverManager PostgreSQL, MySQL, MariaDB
Using the AWS Secrets Manager Plugin with DriverManager PostgreSQL, MySQL
Using the AWS Credentials Manager to configure an alternative AWS credentials provider PostgreSQL and MySQL
Using the AWS JDBC Driver with AWSWrapperDatasource PostgreSQL
Using the Driver Metadata Plugin to override the driver name, enabling database features that may only be available to specific target drivers PostgreSQL
Using the Read/Write Splitting Plugin with DriverManager PostgreSQL, MySQL
Using the Read/Write Splitting Plugin with Spring PostgreSQL, MySQL
Using HikariCP with the AWSWrapperDatasource PostgreSQL
Using HikariCP with the AWSWrapperDatasource with failover handling PostgreSQL
Using Spring and HikariCP with the AWS JDBC Driver PostgreSQL
Using Spring and HikariCP with the AWS JDBC Driver and failover handling PostgreSQL
Using Spring and Hibernate with the AWS JDBC Driver PostgreSQL
Using Spring and Wildfly with the AWS JDBC Driver PostgreSQL
Using Vert.x and c3p0 with the AWS JDBC Driver PostgreSQL
Using the AWS JDBC Driver with Telemetry and the AWS Distro for OpenTelemetry Collector PostgreSQL
Using the AWS JDBC Driver with Telemetry and the AWS X-Ray Daemon PostgreSQL

Getting Help and Opening Issues

If you encounter a bug with the AWS JDBC Driver, we would like to hear about it. Please search the existing issues to see if others are also experiencing the issue before reporting the problem in a new issue. GitHub issues are intended for bug reports and feature requests.

When opening a new issue, please fill in all required fields in the issue template to help expedite the investigation process.

For all other questions, please use GitHub discussions.

How to Contribute

  1. Set up your environment by following the directions in the Development Guide.
  2. To contribute, first make a fork of this project.
  3. Make any changes on your fork. Make sure you are aware of the requirements for the project (e.g. do not require Java 7 if we are supporting Java 8 and higher).
  4. Create a pull request from your fork.
  5. Pull requests need to be approved and merged by maintainers into the main branch.
    Note: Before making a pull request, run all tests and verify everything is passing.

Code Style

The project source code follows the Google Checkstyle style guide, and the style is strictly enforced in our automation pipelines. Any contribution that does not satisfy the style checks will automatically fail at build time.

Releases

The aws-advanced-jdbc-wrapper has a regular monthly release cadence. A new release will occur during the last week of each month. However, if there are no changes since the latest release, then a release will not occur.

Aurora Engine Version Testing

The aws-advanced-jdbc-wrapper is tested against the following Community and Aurora database versions in our test suite:

Database Versions
MySQL 8.0.36
PostgreSQL 16.2
Aurora MySQL LTS version (see here for more details) and latest release (as shown on this page)
Aurora PostgreSQL LTS version (see here for more details) and latest release (as shown on this page)

The aws-advanced-jdbc-wrapper is compatible with MySQL 5.7 and MySQL 8.0 as per the Community MySQL Connector/J 8.0 Driver.

License

This software is released under the Apache 2.0 license.

Footnotes

  1. Aurora MySQL requires v3.07 or later.

aws-advanced-jdbc-wrapper's Issues

Accept URLs without wrapper prefix in the scheme

Describe the feature

Today, the driver class only accepts URLs that start with jdbc:aws-wrapper:. I've hit some cases where this is a limitation. Could the class also accept "un-wrapped" schemes like jdbc:postgresql://?

Use Case

When using Apache Spark to read/write with JDBC, there are per-database dialects to map the database types from and to internal data types. For PostgreSQL, the dialect is enabled only when the URL starts with jdbc:postgresql://. Hence, it is not currently possible to use jdbc:aws-wrapper:postgresql with Apache Spark and JDBC.

However, Apache Spark supports setting the driver class name explicitly: i.e., we can pass software.amazon.jdbc.Driver. If this driver class accepted regular URLs, it would thus be possible to use the JDBC source/sink with a postgresql:// URL and the AWS JDBC wrapper.

Proposed Solution

  • Accepting jdbc:postgresql:// or similar in software.amazon.jdbc.Driver would be a good first step.
  • Overall, it'd be useful to be able to use the wrapper with the standard schemes - i.e., have some way to have the wrapper register itself at jdbc:postgresql:// directly. This would also help with integration in other systems.

Other Information

No response

Acknowledgements

  • I may be able to implement this feature request
  • This feature might incur a breaking change

The AWS Advanced JDBC Driver version used

1.0.0

JDK version used

1.11

Operating System and version

Amazon Linux 2

Reduced performance with aws-advanced-jdbc-wrapper with IAM auth compared to org.postgresql.Driver with password auth

Describe the bug

When using aws-advanced-jdbc-wrapper with the IAM authentication plugin and the Postgres driver to access my Aurora Postgres database, the performance of my application drops by 10x compared to using org.postgresql.Driver with password authentication. This issue is observed when running k6 tests, where throughput drops from 850 messages per second to ~85 when using IAM authentication.

Code Snippets
Using AwsWrapperDataSource with IAM authentication:

    AwsWrapperDataSource dataSource = new AwsWrapperDataSource();
    dataSource.setJdbcProtocol(AWS_JDBC_POSTGRESQL_DRIVER_PROTOCOL);

    dataSource.setDatabasePropertyName(DATABASE_PROPERTY_NAME);
    dataSource.setServerPropertyName(SERVER_PROPERTY_NAME);
    dataSource.setPortPropertyName(PORT_PROPERTY_NAME);

    dataSource.setTargetDataSourceClassName("org.postgresql.ds.PGSimpleDataSource");

    Properties targetDataSourceProps = new Properties();
    targetDataSourceProps.setProperty(SERVER_PROPERTY_NAME, url);
    targetDataSourceProps.setProperty(DATABASE_PROPERTY_NAME, databaseName);
    targetDataSourceProps.setProperty(PORT_PROPERTY_NAME, port);
    targetDataSourceProps.setProperty(PropertyDefinition.PLUGINS.name, "iam");
    targetDataSourceProps.setProperty(PropertyDefinition.USER.name, username);
    dataSource.setTargetDataSourceProperties(targetDataSourceProps);
    return dataSource;

Using org.postgresql.Driver with password authentication:

public DataSource getDataSourceWithPasswordAuth() {
    DataSourceBuilder dataSourceBuilder = DataSourceBuilder.create();
    dataSourceBuilder.driverClassName(POSTGRESQL_DRIVER);
    dataSourceBuilder.url(url);
    dataSourceBuilder.username(username);
    dataSourceBuilder.password(password);
    return dataSourceBuilder.build();
}

Expected Behavior

The performance of the application when using aws-advanced-jdbc-wrapper with IAM plugin should be comparable to the performance when using the org.postgresql.Driver with password authentication.

What plugins are used? What other connection properties were set?

IAM

Current Behavior

The application's performance drops significantly when using aws-advanced-jdbc-wrapper with IAM authentication plugin compared to using org.postgresql.Driver with password authentication.

Reproduction Steps

  1. Implement two separate configurations for database connections, one returning an AwsWrapperDataSource with the IAM authentication plugin and another using org.postgresql.Driver with password authentication (see the code snippets above).
  2. Run performance tests that access the database on the application.
  3. Observe that the throughput drops.

Possible Solution

Is it possible/recommended to use connection pooling with the IAM plugin when using AwsWrapperDataSource to improve performance? If yes, how can it be implemented, and could an example be included with the IAM auth example?

Additional Information/Context

No response

The AWS Advanced JDBC Driver version used

2.1.1

JDK version used

17.0.7

Operating System and version

Amazon Linux 2

Auto-scaling support for Aurora Postgres

Describe the feature

When running deployments using the JDBC wrapper against auto-scaling enabled clusters, we have observed that when the cluster scales up, all the existing connections remain with the instances that they were attached to and the new instances added by auto-scaling are never utilized.

Given that we are able to track cluster topology changes over time and we are able to determine which connections are currently active, it should be possible to add support for re-balancing connections across read replicas in scale up and down events.

Use Case

When running our deployments with the vanilla jdbc driver configuration, scale up events seem to get a relatively even distribution of connections/requests across all the nodes in the autoscaling group (including new nodes that were provisioned when the cluster is under load). The downside is this requires us to run very aggressive health checks to ensure that when the cluster scales down the read replica pool, our deployments are able to quickly detect when their connections to the cluster are dead and re-establish them. This polling approach does result in failed requests in down scaling scenarios and therefore this limits the use-cases to background jobs that are able to resume when some subset of DB requests fail.

In an ideal scenario, knowledge of cluster topology changes should enhance the application's ability to move inactive connections during auto-scaling events. Especially in the case of high-throughput, low-latency requests, this would enable the cluster to accommodate the needs of applications that rely on application-level auto-scaling (horizontal pod auto-scaling).

Proposed Solution

One idea would be to assign connections to a read-only endpoint across all the reader instances in a round robin fashion. Then periodically re-evaluate the cluster topology and when a scale up/down event occurs, gracefully rebalance the connections that are not currently in use across the latest state of the cluster.

An additional measure to mitigate downtime during scale-down events could be to pin a percentage of the connections in a connection pool to the fixed instances in the cluster; this way, a subset of connections is guaranteed to be available to the application for ongoing requests. While this may temporarily limit throughput, it ensures that the connections left for auto-scaling are able to readily move to new nodes to take load off of the fixed instances.

Other Information

No response

Acknowledgements

  • I may be able to implement this feature request
  • This feature might incur a breaking change

The AWS Advanced JDBC Driver version used

1.0.0

JDK version used

jdk-17.0.5+8

Operating System and version

Ubuntu 20.04

actions need to be upgraded

Many actions have been deprecated

Node.js 12 actions are deprecated. Please update the following actions to use Node.js 16: actions/checkout@v2, actions/setup-java@v1, crazy-max/ghaction-import-gpg@v4. For more information see: https://github.blog/changelog/2022-09-22-github-actions-all-actions-will-begin-running-on-node16-instead-of-node12/.

feat: Default Profile for Plugin Order

Description

  • Implement a default profile in which plugins and ordering are optimal to improve the "out of the box" experience.

Design

If no wrapper plugin codes were specified as part of the configuration parameters, the driver will use a default list of plugin codes:
https://github.com/awslabs/aws-advanced-jdbc-wrapper/blob/main/wrapper/src/main/java/software/amazon/jdbc/ConnectionPluginManager.java#L175

This list is currently empty but we propose to include the following plugins:

  • auroraConnectionTracker, a new plugin to track opened connections and handle impacted connections after a failover. It is pending merge in PR #298
  • failover
  • efm

Runtime Exception: No wrapper class exists for 'org.postgresql.jdbc.PgArray'. when calling resultSet.getObject

We are looking at using the AWS JDBC wrapper with Aurora Postgres to improve our database failovers, but during testing we are encountering runtime exceptions related to wrapper classes when we call getObject on query results.

ex:

Caused by: java.lang.RuntimeException: No wrapper class exists for 'org.postgresql.jdbc.PgArray'.
	at software.amazon.jdbc.util.WrapperUtils.wrapWithProxyIfNeeded(WrapperUtils.java:268)
	at software.amazon.jdbc.util.WrapperUtils.executeWithPlugins(WrapperUtils.java:228)
	at software.amazon.jdbc.wrapper.ResultSetWrapper.getObject(ResultSetWrapper.java:708)
...Truncated irrelevant stack

Custom driver name disables hibernate support of advanced postgres types

Describe the feature

For Postgres-like databases, Hibernate ORM uses the driver name obtained via DatabaseMetaData::getDriverName to determine the driver kind, which affects the way advanced types are handled. Would it be possible to allow passthrough of the underlying driver name in certain situations?

https://github.com/hibernate/hibernate-orm/blob/main/hibernate-core/src/main/java/org/hibernate/dialect/PostgreSQLDriverKind.java#L27
https://github.com/hibernate/hibernate-orm/blob/6.2/hibernate-core/src/main/java/org/hibernate/dialect/PostgreSQLDialect.java#L1342

Use Case

I'd like to be able to configure wrapped driver to use original driver name, so that hibernate correctly recognises it and enables native support for advanced types.

Proposed Solution

Perhaps an additional property could be defined to enable driver-name passthrough from the wrapped driver?

Other Information

No response

Acknowledgements

  • I may be able to implement this feature request
  • This feature might incur a breaking change

The AWS Advanced JDBC Driver version used

1.0.1

JDK version used

Temurin-17.0.6+10

Operating System and version

macOS 13.2.1

Should return the value from the function to avoid Dead Code Elimination

feat: Support MariaDB JDBC Connector

Describe the feature

Currently, JDBC schemes are hard-coded to Postgres and MySQL. Remove hard-coded schemes in favour of a dialect.

Use Case

No support for MariaDB.

Proposed Solution

Use a dialect to figure out which one to load.

Other Information

#342

Acknowledgements

  • I may be able to implement this feature request
  • This feature might incur a breaking change

The AWS Advanced JDBC Driver version used

Latest

JDK version used

1.8

Operating System and version

any

Internal connection pooling does not work with Read-Write Splitting Plugin when cluster url is used

Describe the bug

Internal connection pooling does not work with Read-Write Splitting Plugin when cluster url is used.

Expected Behavior

  • Perform internal connection pooling also when aurora cluster url is used in the jdbc url.
  • Cluster topology is discovered by the driver and connections are opened to the available reader and writer instances.
  • Read and write connections are kept in dedicated connection pools

What plugins are used? What other connection properties were set?

readWriteSplitting

Current Behavior

Application startup logs when cluster url is used

Redacted url: jdbc:aws-wrapper:postgresql://us-dev-pg-cluster.cluster-xxxxxxxxxx.us-east-1.rds.amazonaws.com/xxxxxxx

15:47:41.386 [main] INFO  o.h.validator.internal.util.Version - HV000001: Hibernate Validator 6.1.7.Final - {} 
15:47:41.798 [main] INFO  o.h.jpa.internal.util.LogHelper - HHH000204: Processing PersistenceUnitInfo [name: default] - {} 
15:47:41.854 [main] INFO  org.hibernate.Version - HHH000412: Hibernate ORM core version 5.4.33 - {} 
15:47:42.009 [main] INFO  o.h.annotations.common.Version - HCANN000001: Hibernate Commons Annotations {5.1.2.Final} - {} 
15:47:44.556 [main] INFO  org.hibernate.dialect.Dialect - HHH000400: Using dialect: infra.db.boundary.CustomPostgresqlDialect - {} 
15:47:46.687 [main] WARN  org.hibernate.cfg.AnnotationBinder - HHH000491: The [createdBy] association in the [features.commerceorders.entity.Status] entity uses both @NotFound(action = NotFoundAction.IGNORE) and FetchType.LAZY. The NotFoundAction.IGNORE @ManyToOne and @OneToOne associations are always fetched eagerly. - {} 
15:47:48.782 [main] INFO  o.h.e.t.j.p.i.JtaPlatformInitiator - HHH000490: Using JtaPlatform implementation: [org.hibernate.engine.transaction.jta.platform.internal.NoJtaPlatform] - {} 
15:47:48.786 [main] INFO  o.s.o.j.LocalContainerEntityManagerFactoryBean - Initialized JPA EntityManagerFactory for persistence unit 'default' - {} 

Application startup logs when instance url is used

Redacted url: jdbc:aws-wrapper:postgresql://backenddevusstoragestack-databas-auroradbinstance1-jjaw8nsn0cul.xxxxxxxxxx.us-east-1.rds.amazonaws.com/xxxxxxxx

15:50:45.312 [main] INFO  o.h.validator.internal.util.Version - HV000001: Hibernate Validator 6.1.7.Final - {} 
15:50:45.716 [main] INFO  o.h.jpa.internal.util.LogHelper - HHH000204: Processing PersistenceUnitInfo [name: default] - {} 
15:50:45.758 [main] INFO  org.hibernate.Version - HHH000412: Hibernate ORM core version 5.4.33 - {} 
15:50:45.878 [main] INFO  o.h.annotations.common.Version - HCANN000001: Hibernate Commons Annotations {5.1.2.Final} - {} 
15:50:46.100 [main] INFO  com.zaxxer.hikari.HikariDataSource - HikariPool-1 - Starting... - {} 
15:50:47.938 [main] INFO  com.zaxxer.hikari.HikariDataSource - HikariPool-1 - Start completed. - {} 
15:50:48.768 [main] INFO  org.hibernate.dialect.Dialect - HHH000400: Using dialect: infra.db.boundary.CustomPostgresqlDialect - {} 
15:50:51.060 [main] WARN  org.hibernate.cfg.AnnotationBinder - HHH000491: The [createdBy] association in the [features.commerceorders.entity.Status] entity uses both @NotFound(action = NotFoundAction.IGNORE) and FetchType.LAZY. The NotFoundAction.IGNORE @ManyToOne and @OneToOne associations are always fetched eagerly. - {} 
15:50:53.565 [main] INFO  o.h.e.t.j.p.i.JtaPlatformInitiator - HHH000490: Using JtaPlatform implementation: [org.hibernate.engine.transaction.jta.platform.internal.NoJtaPlatform] - {} 
15:50:53.573 [main] INFO  o.s.o.j.LocalContainerEntityManagerFactoryBean - Initialized JPA EntityManagerFactory for persistence unit 'default' - {} 

Reproduction Steps

  • Use cluster url in the connection string
  • Set plugins to either auroraHostList,readWriteSplitting or readWriteSplitting,failover,efm
  • Startup the application

Possible Solution

HikariPooledConnectionProvider works only for URLs of type RdsUrlType.RDS_INSTANCE, but an RDS_WRITER_CLUSTER URL is supplied by the wrapper.

Additional Information/Context

The behavior is identical no matter what set of plugins I use, be it readWriteSplitting,failover,efm or auroraHostList,readWriteSplitting

The AWS Advanced JDBC Driver version used

2.1.2

JDK version used

17.0.7

Operating System and version

Linux pop-os 6.2.6-76060206-generic #202303130630168375320722.04~77c1465 SMP PREEMPT_DYNAMIC Wed M x86_64 x86_64 x86_64 GNU/Linux

Question about properties

Why do we need wrapperUserName and wrapperPassword, etc.? Why don't we just pass the properties to the underlying driver?

IAM database authentication fails when local port is set other than 3306 in SSH port forwarding.

Describe the bug

IamAuthConnectionPlugin will generate a wrong token if MYSQL_CONNECTION_STRING is set up with a non-default port such as 23306.
The port 23306 comes from SSH port forwarding, as shown below.

local_port=23306
remote_port=3306
ssh -fNL ${local_port}:${db_host}:${remote_port} ${login_user}@${gateway_host}

sample code:

package software.amazon;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.Properties;
import software.amazon.awssdk.auth.credentials.DefaultCredentialsProvider;
import software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider;
import software.amazon.awssdk.auth.credentials.SystemPropertyCredentialsProvider;
import software.amazon.jdbc.AwsWrapperProperty;
import software.amazon.jdbc.PropertyDefinition;
import software.amazon.jdbc.authentication.AwsCredentialsManager;
import software.amazon.jdbc.plugin.IamAuthConnectionPlugin;

public class AwsIamAuthenticationMysqlExample {
  public static final String MYSQL_CONNECTION_STRING =
      "jdbc:aws-wrapper:mysql://127.0.0.1:23306";
  private static final String USERNAME = "iam-db-auth-user";

  public static void main(String[] args) throws SQLException {

    final Properties properties = new Properties();

    // Enable AWS IAM database authentication and configure driver property values
    properties.setProperty(PropertyDefinition.PLUGINS.name, "iam");
    properties.setProperty(PropertyDefinition.USER.name, USERNAME);
    properties.setProperty(IamAuthConnectionPlugin.IAM_HOST.name, "my-aurora-cluster.cluster-xxxxxxxx.ap-northeast-1.rds.amazonaws.com");
    properties.setProperty(IamAuthConnectionPlugin.IAM_DEFAULT_PORT.name, "3306");
    properties.setProperty(IamAuthConnectionPlugin.IAM_REGION.name, "ap-northeast-1");

    // Attempt a connection
    try (Connection conn = DriverManager.getConnection(MYSQL_CONNECTION_STRING, properties);
        Statement statement = conn.createStatement();
        ResultSet result = statement.executeQuery("select 1")) {

      System.out.println(Util.getResult(result));
    }
  }
}

IamAuthConnectionPlugin will generate token using following parameters:

  • database host: my-aurora-cluster.cluster-xxxxxxxx.ap-northeast-1.rds.amazonaws.com
  • database port: 23306 from MYSQL_CONNECTION_STRING
  • database region: ap-northeast-1
  • database user: iam-db-auth-user

Expected Behavior

IamAuthConnectionPlugin will generate a proper token.

What plugins are used? What other connection properties were set?

iam

Current Behavior

IamAuthConnectionPlugin generates wrong token and fails database authentication.

Reproduction Steps

  1. Clone this repository
  2. Build examples/AWSDriverExample/src/main/java/software/amazon/AwsIamAuthenticationMysqlExample
  3. Paste the sample code above into AwsIamAuthenticationMysqlExample.java
  4. Run main
    • Set credentials as you like. (I use environment variables AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)

Possible Solution

Define a new property such as IAM_PORT to override the port derived from hostSpec.

https://github.com/awslabs/aws-advanced-jdbc-wrapper/blob/9be3a6a49a81118c294052e62dd0fd64cac9e426/wrapper/src/main/java/software/amazon/jdbc/plugin/IamAuthConnectionPlugin.java#L111

Additional Information/Context

I want to use aws-advanced-jdbc-wrapper with DBeaver, because I found this sentence here.

But I cannot find documentation about DBeaver configuration in this repository.

The AWS Advanced JDBC Driver version used

2.1.2

JDK version used

17

Operating System and version

Debian GNU/Linux sid

Provide option to bypass ResultSetWrapper

Describe the feature

I'd like a method to return the underlying ResultSet, which can be used to directly access the returned rows. This could be used in cases where many fields are being read and the overhead from all the wrapping isn't needed.

Use Case

Based on some performance testing, using this jdbc wrapper around our postgres caused our CPU usage to increase by about 100%. The majority of the costs seem to come from the calls to ResultSet.getLong, ResultSet.getBoolean, etc since we call these many times for each record returned and some queries return thousands of rows.

Proposed Solution

I'd like a method to return the underlying ResultSet, e.g.:

public ResultSet getRawResultSet() {
  return this.resultSet;
}

Other Information

No response

Acknowledgements

  • I may be able to implement this feature request
  • This feature might incur a breaking change

The AWS Advanced JDBC Driver version used

1.0.1

JDK version used

17.0.6

Operating System and version

Linux x64 on AWS

Option for reader failover to respect the topology

Describe the feature

This morning I ran into a production issue that I believe was related to this library and the failover plugin specifically. We have an RDS Aurora cluster with a reader and writer node. We typically use the reader node to offload read-only traffic from the writer node (ie. to support UI workflows). As such, our reader node is 2-4 times the size of a writer node.

We ran into a situation where our reader node got overwhelmed and temporarily refused DB connections. This situation caused the failover plugin to attempt a reader failover and since the reader node was refusing DB connections the writer node won the connection race. Unfortunately the overwhelming read load ended up spilling over to the writer node, where it overloaded the writer node, and finally broke critical workloads.

Prior to using this JDBC driver we would rely on the RDS Aurora cluster endpoints, which remained consistent with the underlying database topology and didn't change when a DB instance was under heavy load. I understand that this behavior is documented, and frankly I see a use case for having read traffic fail over to the writer node.

Having said that I think it would be useful to have an option for reader failover to respect the topology in a similar manner to how writer failover works. In this mode read-only connections would only be made to the writer node if and only if it was the only remaining host in the topology.

For our team we specifically write our applications to leverage read replicas because we don't want to overwhelm the writer node since it is a finite resource (relative to readers) that can only scale vertically.

Use Case

We use read replicas to offload traffic from our writer node. It is not acceptable for us to send overflow read traffic to our writer nodes. We would like to pin read traffic to reader nodes as long as they exist in the topology.

Proposed Solution

I suggest adding a feature flag to the failover plugin, like strictFailover or strictReaderFailover, which ensures that reader failover respects the topology and only fails over to a writer node as a last resort.

Other Information

Happy to submit a PR.

Acknowledgements

  • I may be able to implement this feature request
  • This feature might incur a breaking change

The AWS Advanced JDBC Driver version used

1.0.0

JDK version used

JDK 11

Operating System and version

Ubuntu

fix: PSQLException when persisting json with Hibernate

Describe the bug

Persisting JSON data with the wrapper throws a PSQLException:


Caused by: org.postgresql.util.PSQLException: ERROR: column "jsonstring" is of type jsonb but expression is of type character varying
  Hint: You will need to rewrite or cast the expression.
  Position: 79
  Location: File: parse_target.c, Routine: transformAssignedExpr, Line: 785
  Server SQLState: 42804
	at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2676)
	at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2366)
	at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:356)
	at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:496)
	at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:413)
	at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:190)
	at org.postgresql.jdbc.PgPreparedStatement.executeUpdate(PgPreparedStatement.java:152)
	at software.amazon.jdbc.wrapper.PreparedStatementWrapper.lambda$executeUpdate$17(PreparedStatementWrapper.java:253)
	at software.amazon.jdbc.plugin.DefaultConnectionPlugin.execute(DefaultConnectionPlugin.java:98)
	at software.amazon.jdbc.ConnectionPluginManager.lambda$execute$3(ConnectionPluginManager.java:329)
	at software.amazon.jdbc.ConnectionPluginManager.lambda$makePluginChainFunc$0(ConnectionPluginManager.java:269)
	at software.amazon.jdbc.ConnectionPluginManager.executeWithSubscribedPlugins(ConnectionPluginManager.java:251)
	at software.amazon.jdbc.ConnectionPluginManager.execute(ConnectionPluginManager.java:326)
	at software.amazon.jdbc.util.WrapperUtils.executeWithPlugins(WrapperUtils.java:226)
	at software.amazon.jdbc.wrapper.PreparedStatementWrapper.executeUpdate(PreparedStatementWrapper.java:247)
	at org.hibernate.engine.jdbc.internal.ResultSetReturnImpl.executeUpdate(ResultSetReturnImpl.java:197)
	... 20 more

Expected Behavior

No exceptions. Same code works fine with the community Postgres driver.

What plugins are used? What other connection properties were set?

None

Current Behavior

PSQLException

Reproduction Steps

Add the following files to a Java project and run TestJsonHibernate.java.

Disclaimer: since the error was first seen when running the Hibernate ORM test suite, parts of the code snippets below are derived from that test code, including EntityWithJson and StringNode.

TestJsonHibernate.java

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.annotations.JdbcTypeCode;
import org.hibernate.cfg.Configuration;
import org.hibernate.type.SqlTypes;
import jakarta.persistence.Entity;
import jakarta.persistence.Id;
import jakarta.persistence.Table;

public class TestJsonHibernate {

  private static final String create = "CREATE TABLE IF NOT EXISTS EntityWithJson\n"
      + "(\n"
      + "    id INTEGER NOT NULL,\n"
      + "    jsonString jsonb,\n"
      + "    list jsonb,\n"
      + "    objectMap jsonb,\n"
      + "    payload jsonb,\n"
      + "    primary key (id)\n"
      + ")";

  public static void main(String[] args) throws SQLException {
    try (Connection connection = DriverManager.getConnection("jdbc:aws-wrapper:postgresql://cluster-domain/db", "user", "pass");
        Statement statement = connection.createStatement()) {
      statement.executeUpdate(create);
    }

    Configuration configuration = new Configuration();
    configuration.configure("hibernate.cfg.xml");
    configuration.addAnnotatedClass(EntityWithJson.class);

    // Create Session Factory
    SessionFactory sessionFactory = configuration.buildSessionFactory();

    // Initialize Session Object
    Session session = sessionFactory.openSession();

    Map<String, String> stringMap = Map.of("name", "ABC");
    Map<StringNode, StringNode> objectMap = Map.of(new StringNode("name"), new StringNode("ABC"));
    List<StringNode> list = List.of(new StringNode("ABC"));
    String json = "{\"name\":\"abc\"}";

    EntityWithJson entity = new EntityWithJson(3, stringMap, objectMap, list, json);

    session.beginTransaction();
    session.persist(entity);
    session.getTransaction().commit();
  }

  @Entity(name = "EntityWithJson")
  @Table(name = "EntityWithJson")
  public static class EntityWithJson {

    @Id
    private Integer id;

    @JdbcTypeCode(SqlTypes.JSON)
    private Map<String, String> payload;

    @JdbcTypeCode(SqlTypes.JSON)
    private Map<StringNode, StringNode> objectMap;

    @JdbcTypeCode(SqlTypes.JSON)
    private List<StringNode> list;

    @JdbcTypeCode(SqlTypes.JSON)
    private String jsonString;

    public EntityWithJson() {
    }

    public EntityWithJson(
        Integer id,
        Map<String, String> payload,
        Map<StringNode, StringNode> objectMap,
        List<StringNode> list,
        String jsonString) {
      this.id = id;
      this.payload = payload;
      this.objectMap = objectMap;
      this.list = list;
      this.jsonString = jsonString;
    }
  }

  public static class StringNode {

    private String string;

    public StringNode() {
    }

    public StringNode(String string) {
      this.string = string;
    }

    public String getString() {
      return string;
    }

    public void setString(String string) {
      this.string = string;
    }

    @Override
    public boolean equals(Object o) {
      if (this == o) {
        return true;
      }
      if (o == null || getClass() != o.getClass()) {
        return false;
      }

      StringNode that = (StringNode) o;

      return Objects.equals(string, that.string);
    }

    @Override
    public int hashCode() {
      return string != null ? string.hashCode() : 0;
    }
  }
}

JacksonJsonFormatMapper.java

https://github.com/hibernate/hibernate-orm/blob/2ca0e1042952dbb3f80cecde9af87ddb0bc34cea/hibernate-core/src/main/java/org/hibernate/type/jackson/JacksonJsonFormatMapper.java

hibernate.cfg.xml

<?xml version = "1.0" encoding = "utf-8"?>
<!DOCTYPE hibernate-configuration PUBLIC
  "-//Hibernate/Hibernate Configuration DTD 3.0//EN"
  "http://www.hibernate.org/dtd/hibernate-configuration-3.0.dtd">
<hibernate-configuration>
  <session-factory>
    <property name="hibernate.connection.url">
      <!--      jdbc:postgresql://cluster-domain/db-->
      jdbc:aws-wrapper:postgresql://cluster-domain/db
    </property>
    <property name="hibernate.connection.username">foo</property>
    <property name="hibernate.connection.password">bar</property>
    <property name="hibernate.connection.driver_class">software.amazon.jdbc.Driver</property>
    <!--    <property name="hibernate.connection.driver_class">org.postgresql.Driver</property>-->
    <property name="hibernate.show_sql">true</property>
  </session-factory>
</hibernate-configuration>

Possible Solution

No response

Additional Information/Context

No response

The AWS Advanced JDBC Driver version used

1.0.1

JDK version used

11

Operating System and version

Windows 11 & Ubuntu

Custom AwsCredentialsProvider

Describe the feature

The IAM authentication plugin currently uses the DefaultCredentialsProvider. It would be great if there were a way to customise this.

Use Case

This would be useful in cases where we need to use a role assumed through STS to do the IAM auth.

Proposed Solution

The connection string could take a parameter specifying the name of a user-defined factory class to use to provide the AwsCredentialsProvider. There might also need to be a mechanism to pass some configuration to that factory class, eg. through a second parameter.

Other Information

No response

Acknowledgements

  • I may be able to implement this feature request
  • This feature might incur a breaking change

The AWS Advanced JDBC Driver version used

1.0.0

JDK version used

N/A

Operating System and version

N/A

Datasources and connection pooling in presence of RDS proxy.

Describe the issue

For the datasource documentation, it would also be good to mention RDS Proxy.

See the Failover section of the RDS Proxy docs:

For applications that maintain their own connection pool, going through RDS Proxy means that most connections stay alive during failovers or other disruptions. Only connections that are in the middle of a transaction or SQL statement are canceled. RDS Proxy immediately accepts new connections. When the database writer is unavailable, RDS Proxy queues up incoming requests.

For applications that don't maintain their own connection pools, RDS Proxy offers faster connection rates and more open connections. It offloads the expensive overhead of frequent reconnects from the database. It does so by reusing database connections maintained in the RDS Proxy connection pool. This approach is particularly important for TLS connections, where setup costs are significant.

I also have follow up questions regarding this:

  1. So does this mean I should not create a connection pool inside the Java application?
  2. The RDS Proxy pool is internal to the VPC, as I understand it, so how will we create a datasource in this case?

Please help me clarify how RDS Proxy fits in and whether it is safe to use any kind of connection pool, such as Hikari or the Tomcat pool.

Links

https://github.com/awslabs/aws-advanced-jdbc-wrapper/blob/main/docs/using-the-jdbc-driver/DataSource.md

AwsSecretsManager Plugin with Application Server (Tomcat) Resource Configuration - Credentials not read from secret

I couldn't find any information on whether it is possible to use this wrapper with Tomcat XML configuration only, but I tried anyway. After a lot of trial and error I'm stuck now and would kindly ask for help.

I'm trying to connect my Java application (Liferay), running on Tomcat 9.0.67 in an EKS 1.24.7 cluster, to an Aurora PostgreSQL 11.13 database. My database configuration is done in tomcat/conf/Catalina/localhost/ROOT.xml and looks like this:

<Context crossContext="true">
    ...
    <Resource name="jdbc/LiferayPool"
              auth="Container"
              type="software.amazon.jdbc.ds.AwsWrapperDataSource"
              factory="software.amazon.jdbc.ds.AwsWrapperDataSourceFactory"
              driverClassName="software.amazon.jdbc.Driver"
              jdbcUrl="jdbc:aws-wrapper:postgresql://${jdbc.host}/${jdbc.database}?sslmode=require"
              user="-"
              password="-"
              wrapperPlugins="awsSecretsManager"
              secretsManagerSecretId="liferay-db-secrets"
              secretsManagerRegion="eu-central-1"
              targetDataSourceClassName="org.postgresql.ds.PGSimpleDataSource"
              jdbcProtocol="jdbc:postgresql:"
              serverPropertyName="serverName"
              portPropertyName="port"
              urlPropertyName="jdbcUrl"
              databasePropertyName="databaseName"
              connectionTimeout="30000"
              idleTimeout="600000"
              maxLifetime="1800000"
              connectionTestQuery="SELECT 1"
              minimumIdle="10"
              maxTotal="30"
              maximumPoolSize="40"
              registerMbeans="true"
    />
</Context>

and I have the following dependencies as jar files in my tomcat/lib/ext directory:

com.fasterxml.jackson.core:jackson-annotations:2.14.1
com.fasterxml.jackson.core:jackson-core:2.14.1
com.fasterxml.jackson.core:jackson-databind:2.14.1
org.reactivestreams:reactive-streams:1.0.4
org.slf4j:slf4j-api:2.0.6
software.amazon.awssdk:auth:2.19.13
software.amazon.awssdk:aws-core:2.19.13
software.amazon.awssdk:aws-json-protocol:2.19.13
software.amazon.awssdk:aws-query-protocol:2.19.13
software.amazon.awssdk:endpoints-spi:2.19.13
software.amazon.awssdk:http-client-spi:2.19.13
software.amazon.awssdk:json-utils:2.19.13
software.amazon.awssdk:metrics-spi:2.19.13
software.amazon.awssdk:profiles:2.19.13
software.amazon.awssdk:protocol-core:2.19.13
software.amazon.awssdk:rds:2.19.13
software.amazon.awssdk:regions:2.19.13
software.amazon.awssdk:sdk-core:2.19.13
software.amazon.awssdk:secretsmanager:2.19.13
software.amazon.awssdk:sts:2.19.13
software.amazon.awssdk:third-party-jackson-core:2.19.13
software.amazon.awssdk:url-connection-client:2.19.13
software.amazon.awssdk:utils:2.19.13
software.amazon.jdbc:aws-advanced-jdbc-wrapper:1.0.0

My problem is that I cannot omit (or provide an empty string for) the two attributes "user" and "password", as I receive the following exception if I do:

12-Jan-2023 06:51:30.837 WARNING [main] org.apache.naming.NamingContext.lookup Unexpected exception resolving reference
        java.lang.NullPointerException
                at software.amazon.jdbc.ds.AwsWrapperDataSourceFactory.getObjectInstance(AwsWrapperDataSourceFactory.java:58)
                at org.apache.naming.factory.FactoryBase.getObjectInstance(FactoryBase.java:96)
                at java.naming/javax.naming.spi.NamingManager.getObjectInstance(NamingManager.java:341)
                at org.apache.naming.NamingContext.lookup(NamingContext.java:864)
                at org.apache.naming.NamingContext.lookup(NamingContext.java:158)
                at org.apache.naming.NamingContext.lookup(NamingContext.java:850)
                at org.apache.naming.NamingContext.lookup(NamingContext.java:158)
                at org.apache.naming.NamingContext.lookup(NamingContext.java:850)
                at org.apache.naming.NamingContext.lookup(NamingContext.java:158)
                at org.apache.naming.NamingContext.lookup(NamingContext.java:850)
                at org.apache.naming.NamingContext.lookup(NamingContext.java:172)
                at org.apache.naming.SelectorContext.lookup(SelectorContext.java:161)
                at java.naming/javax.naming.InitialContext.lookup(InitialContext.java:409)
                at com.liferay.portal.kernel.jndi.JNDIUtil._lookup(JNDIUtil.java:134)
                at com.liferay.portal.kernel.jndi.JNDIUtil.lookup(JNDIUtil.java:33)
                at com.liferay.portal.dao.jdbc.DataSourceFactoryImpl.initDataSource(DataSourceFactoryImpl.java:172)
                at com.liferay.portal.kernel.dao.jdbc.DataSourceFactoryUtil.initDataSource(DataSourceFactoryUtil.java:39)
                at com.liferay.portal.dao.init.DBInitUtil._initDataSource(DBInitUtil.java:223)
                at com.liferay.portal.dao.init.DBInitUtil.init(DBInitUtil.java:70)
                at com.liferay.portal.spring.context.PortalContextLoaderListener.contextInitialized(PortalContextLoaderListener.java:240)
                at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4769)
                at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5231)
                at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
                at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:726)
                at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:698)
                at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:696)
                at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:690)
                at org.apache.catalina.startup.HostConfig$DeployDescriptor.run(HostConfig.java:1889)
                at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
                at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
                at org.apache.tomcat.util.threads.InlineExecutorService.execute(InlineExecutorService.java:75)
                at java.base/java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:118)
                at org.apache.catalina.startup.HostConfig.deployDescriptors(HostConfig.java:583)
                at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:473)
                at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1618)
                at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:319)
                at org.apache.catalina.util.LifecycleBase.fireLifecycleEvent(LifecycleBase.java:123)
                at org.apache.catalina.util.LifecycleBase.setStateInternal(LifecycleBase.java:423)
                at org.apache.catalina.util.LifecycleBase.setState(LifecycleBase.java:366)
                at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:946)
                at org.apache.catalina.core.StandardHost.startInternal(StandardHost.java:835)
                at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
                at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1396)
                at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1386)
                at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
                at org.apache.tomcat.util.threads.InlineExecutorService.execute(InlineExecutorService.java:75)
                at java.base/java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:140)
                at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:919)
                at org.apache.catalina.core.StandardEngine.startInternal(StandardEngine.java:265)
                at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
                at org.apache.catalina.core.StandardService.startInternal(StandardService.java:432)
                at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
                at org.apache.catalina.core.StandardServer.startInternal(StandardServer.java:930)
                at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
                at org.apache.catalina.startup.Catalina.start(Catalina.java:772)
                at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
                at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
                at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
                at java.base/java.lang.reflect.Method.invoke(Method.java:566)
                at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:345)
                at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:476)

But if I provide a dummy string like "-", that value is used instead of the actual secret value, even though I can see in the stack trace that the awsSecretsManager plugin is used:

12-Jan-2023 07:04:07.055 SEVERE [main] org.apache.catalina.core.StandardContext.listenerStart Exception sending context initialized event to listener instance of class [com.liferay.portal.spring.context.PortalContextLoaderListener]
      java.lang.RuntimeException: org.postgresql.util.PSQLException: FATAL: password authentication failed for user "-"
              at com.liferay.portal.spring.context.PortalContextLoaderListener.contextInitialized(PortalContextLoaderListener.java:250)
              at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4769)
              at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5231)
              at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
              at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:726)
              at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:698)
              at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:696)
              at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:690)
              at org.apache.catalina.startup.HostConfig$DeployDescriptor.run(HostConfig.java:1889)
              at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
              at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
              at org.apache.tomcat.util.threads.InlineExecutorService.execute(InlineExecutorService.java:75)
              at java.base/java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:118)
              at org.apache.catalina.startup.HostConfig.deployDescriptors(HostConfig.java:583)
              at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:473)
              at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1618)
              at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:319)
              at org.apache.catalina.util.LifecycleBase.fireLifecycleEvent(LifecycleBase.java:123)
              at org.apache.catalina.util.LifecycleBase.setStateInternal(LifecycleBase.java:423)
              at org.apache.catalina.util.LifecycleBase.setState(LifecycleBase.java:366)
              at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:946)
              at org.apache.catalina.core.StandardHost.startInternal(StandardHost.java:835)
              at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
              at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1396)
              at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1386)
              at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
              at org.apache.tomcat.util.threads.InlineExecutorService.execute(InlineExecutorService.java:75)
              at java.base/java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:140)
              at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:919)
              at org.apache.catalina.core.StandardEngine.startInternal(StandardEngine.java:265)
              at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
              at org.apache.catalina.core.StandardService.startInternal(StandardService.java:432)
              at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
              at org.apache.catalina.core.StandardServer.startInternal(StandardServer.java:930)
              at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
              at org.apache.catalina.startup.Catalina.start(Catalina.java:772)
              at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
              at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
              at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
              at java.base/java.lang.reflect.Method.invoke(Method.java:566)
              at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:345)
              at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:476)
      Caused by: org.postgresql.util.PSQLException: FATAL: password authentication failed for user "-"
              at org.postgresql.core.v3.ConnectionFactoryImpl.doAuthentication(ConnectionFactoryImpl.java:646)
              at org.postgresql.core.v3.ConnectionFactoryImpl.tryConnect(ConnectionFactoryImpl.java:180)
              at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:235)
              at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49)
              at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:247)
              at org.postgresql.Driver.makeConnection(Driver.java:434)
              at org.postgresql.Driver.connect(Driver.java:291)
              at java.sql/java.sql.DriverManager.getConnection(DriverManager.java:677)
              at java.sql/java.sql.DriverManager.getConnection(DriverManager.java:228)
              at org.postgresql.ds.common.BaseDataSource.getConnection(BaseDataSource.java:103)
              at org.postgresql.ds.common.BaseDataSource.getConnection(BaseDataSource.java:87)
              at software.amazon.jdbc.DataSourceConnectionProvider.connect(DataSourceConnectionProvider.java:104)
              at software.amazon.jdbc.plugin.DefaultConnectionPlugin.connect(DefaultConnectionPlugin.java:126)
              at software.amazon.jdbc.ConnectionPluginManager.lambda$connect$4(ConnectionPluginManager.java:314)
              at software.amazon.jdbc.ConnectionPluginManager.lambda$makePluginChainFunc$0(ConnectionPluginManager.java:246)
              at software.amazon.jdbc.ConnectionPluginManager.lambda$null$1(ConnectionPluginManager.java:250)
              at software.amazon.jdbc.plugin.AwsSecretsManagerConnectionPlugin.connect(AwsSecretsManagerConnectionPlugin.java:151)
              at software.amazon.jdbc.ConnectionPluginManager.lambda$connect$4(ConnectionPluginManager.java:314)
              at software.amazon.jdbc.ConnectionPluginManager.lambda$makePluginChainFunc$2(ConnectionPluginManager.java:250)
              at software.amazon.jdbc.ConnectionPluginManager.executeWithSubscribedPlugins(ConnectionPluginManager.java:228)
              at software.amazon.jdbc.ConnectionPluginManager.connect(ConnectionPluginManager.java:311)
              at software.amazon.jdbc.wrapper.ConnectionWrapper.init(ConnectionWrapper.java:131)
              at software.amazon.jdbc.wrapper.ConnectionWrapper.<init>(ConnectionWrapper.java:85)
              at software.amazon.jdbc.ds.AwsWrapperDataSource.getConnection(AwsWrapperDataSource.java:123)
              at software.amazon.jdbc.ds.AwsWrapperDataSource.getConnection(AwsWrapperDataSource.java:77)
              at com.liferay.portal.dao.jdbc.util.DBInfoUtil.lambda$getDBInfo$0(DBInfoUtil.java:47)
              at com.liferay.petra.concurrent.ConcurrentMapperHashMap.lambda$computeIfAbsent$1(ConcurrentMapperHashMap.java:114)
              at java.base/java.util.concurrent.ConcurrentHashMap.computeIfAbsent(ConcurrentHashMap.java:1705)
              at com.liferay.petra.concurrent.ConcurrentMapperHashMap.computeIfAbsent(ConcurrentMapperHashMap.java:111)
              at com.liferay.portal.dao.jdbc.util.DBInfoUtil.getDBInfo(DBInfoUtil.java:44)
              at com.liferay.portal.spring.hibernate.DialectDetector.getDialect(DialectDetector.java:51)
              at com.liferay.portal.dao.init.DBInitUtil._initDataSource(DBInitUtil.java:226)
              at com.liferay.portal.dao.init.DBInitUtil.init(DBInitUtil.java:70)
              at com.liferay.portal.spring.context.PortalContextLoaderListener.contextInitialized(PortalContextLoaderListener.java:240)
              ... 41 more

The secret is provided through a k8s service account that references an IAM role with the permission to read the secret in question. My environment variables look like this:

env | grep AWS
AWS_ROLE_ARN=arn:aws:iam::XXXXXXXXXX:role/YYYYYYYYYYYYYYYYYYYYYYYY
AWS_WEB_IDENTITY_TOKEN_FILE=/var/run/secrets/eks.amazonaws.com/serviceaccount/token
AWS_STS_REGIONAL_ENDPOINTS=regional
AWS_DEFAULT_REGION=eu-central-1
AWS_REGION=eu-central-1

I have verified that the connection and permissions are correctly set up by modifying one of the sample projects you provided and running it on the pod in question with tomcat/lib/ext/ on the classpath:

package com.example.aws.sdk.test;

import software.amazon.awssdk.auth.credentials.DefaultCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.secretsmanager.SecretsManagerClient;
import software.amazon.awssdk.services.secretsmanager.model.GetSecretValueRequest;
import software.amazon.awssdk.services.secretsmanager.model.GetSecretValueResponse;
import software.amazon.awssdk.services.secretsmanager.model.SecretsManagerException;

public class AwsTest {
    public static void main(String[] args) {
        try {
            SecretsManagerClient secretsClient = SecretsManagerClient.builder().region(Region.EU_CENTRAL_1)
                .credentialsProvider(DefaultCredentialsProvider.create()).build();

            GetSecretValueRequest valueRequest = GetSecretValueRequest.builder().secretId("liferay-db-secrets").build();

            GetSecretValueResponse valueResponse = secretsClient.getSecretValue(valueRequest);
            String secret = valueResponse.secretString();
            System.out.println(secret);

        } catch (SecretsManagerException e) {
            System.err.println(e.awsErrorDetails().errorMessage());
            System.exit(1);
        }
    }
}

Am I missing something, or is this not intended to work this way?
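
For reference, a minimal configuration sketch for the Secrets Manager plugin through the wrapper DataSource, mirroring the data source examples elsewhere in this document. The host, database, and secret names are placeholders; the property names follow the plugin's documentation.

import java.sql.Connection;
import java.util.Properties;

import software.amazon.jdbc.ds.AwsWrapperDataSource;

public class SecretsManagerConfigSketch {

    public static Connection connect() throws Exception {
        AwsWrapperDataSource ds = new AwsWrapperDataSource();
        ds.setJdbcProtocol("jdbc:postgresql:");
        ds.setDatabasePropertyName("databaseName");
        ds.setServerPropertyName("serverName");
        ds.setPortPropertyName("port");
        ds.setTargetDataSourceClassName("org.postgresql.ds.PGSimpleDataSource");

        Properties props = new Properties();
        props.setProperty("serverName", "mydb.cluster-xyz.eu-central-1.rds.amazonaws.com"); // placeholder
        props.setProperty("databaseName", "lportal"); // placeholder
        props.setProperty("port", "5432");
        // Enable the plugin and point it at the secret; credentials then come
        // from Secrets Manager rather than from user/password properties.
        props.setProperty("wrapperPlugins", "awsSecretsManager");
        props.setProperty("secretsManagerSecretId", "liferay-db-secrets");
        props.setProperty("secretsManagerRegion", "eu-central-1");
        ds.setTargetDataSourceProperties(props);

        return ds.getConnection();
    }
}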

NoClassDefError when connecting to Aurora Postgres

Describe the bug

Upgrading our code from version 1.0.2 to 2.1.0 results in the exception below.

Here is how we create the connection (Kotlin, configuring HikariCP):

        username = creds.username
        password = creds.password
        dataSourceClassName = AwsWrapperDataSource::class.java.name
        addDataSourceProperty("jdbcProtocol", "jdbc:postgresql:")
        addDataSourceProperty("databasePropertyName", "databaseName")
        addDataSourceProperty("portPropertyName", "portNumber")
        addDataSourceProperty("serverPropertyName", "serverName")
        addDataSourceProperty("targetDataSourceClassName", PGSimpleDataSource::class.java.name)
        val targetDataSourceProps = Properties()
        targetDataSourceProps.setProperty("serverName", dbDetails.host.value)
        targetDataSourceProps.setProperty("databaseName", db.name)
        targetDataSourceProps.setProperty("portNumber", dbDetails.port.value.toString())
        targetDataSourceProps.setProperty("wrapperLoggerLevel", "CONFIG")
        targetDataSourceProps.setProperty("wrapperLogUnclosedConnections", "true")
        val plugins = if (dbDetails.isAurora) {
            // https://github.com/awslabs/aws-advanced-jdbc-wrapper/blob/main/docs/using-the-jdbc-driver/UsingTheJdbcDriver.md#list-of-available-plugins
            // TODO: This is where you can enable IAM auth
            "auroraConnectionTracker,failover,efm"
        } else {
            ""
        }
        targetDataSourceProps.setProperty("wrapperPlugins", plugins)
        addDataSourceProperty("targetDataSourceProperties", targetDataSourceProps)

Expected Behavior

Connect to Aurora

What plugins are used? What other connection properties were set?

"auroraConnectionTracker,failover,efm"

Current Behavior

java.lang.NoClassDefFoundError: com/mysql/cj/exceptions/WrongArgumentException
at software.amazon.jdbc.plugin.failover.FailoverConnectionPlugin.lambda$initHostProvider$0(FailoverConnectionPlugin.java:213) ~[aws-advanced-jdbc-wrapper-2.1.0.jar:?]
at software.amazon.jdbc.plugin.failover.FailoverConnectionPlugin.initHostProvider(FailoverConnectionPlugin.java:245) ~[aws-advanced-jdbc-wrapper-2.1.0.jar:?]
at software.amazon.jdbc.plugin.failover.FailoverConnectionPlugin.initHostProvider(FailoverConnectionPlugin.java:209) ~[aws-advanced-jdbc-wrapper-2.1.0.jar:?]
at software.amazon.jdbc.ConnectionPluginManager.lambda$initHostProvider$8(ConnectionPluginManager.java:517) ~[aws-advanced-jdbc-wrapper-2.1.0.jar:?]
at software.amazon.jdbc.ConnectionPluginManager.lambda$makePluginChainFunc$2(ConnectionPluginManager.java:278) ~[aws-advanced-jdbc-wrapper-2.1.0.jar:?]
at software.amazon.jdbc.ConnectionPluginManager.executeWithSubscribedPlugins(ConnectionPluginManager.java:256) ~[aws-advanced-jdbc-wrapper-2.1.0.jar:?]
at software.amazon.jdbc.ConnectionPluginManager.initHostProvider(ConnectionPluginManager.java:513) ~[aws-advanced-jdbc-wrapper-2.1.0.jar:?]
at software.amazon.jdbc.wrapper.ConnectionWrapper.init(ConnectionWrapper.java:128) ~[aws-advanced-jdbc-wrapper-2.1.0.jar:?]
at software.amazon.jdbc.wrapper.ConnectionWrapper.<init>(ConnectionWrapper.java:88) ~[aws-advanced-jdbc-wrapper-2.1.0.jar:?]
at software.amazon.jdbc.ds.AwsWrapperDataSource.createConnectionWrapper(AwsWrapperDataSource.java:163) ~[aws-advanced-jdbc-wrapper-2.1.0.jar:?]
at software.amazon.jdbc.ds.AwsWrapperDataSource.getConnection(AwsWrapperDataSource.java:133) ~[aws-advanced-jdbc-wrapper-2.1.0.jar:?]
at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:359) ~[HikariCP-5.0.1.jar:?]
at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:201) ~[HikariCP-5.0.1.jar:?]
at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:470) ~[HikariCP-5.0.1.jar:?]
at com.zaxxer.hikari.pool.HikariPool.checkFailFast(HikariPool.java:561) ~[HikariCP-5.0.1.jar:?]
at com.zaxxer.hikari.pool.HikariPool.<init>(HikariPool.java:100) ~[HikariCP-5.0.1.jar:?]
at com.zaxxer.hikari.HikariDataSource.<init>(HikariDataSource.java:81) ~[HikariCP-5.0.1.jar:?]
at com.paxos.messaging.db.HikariDataSourceFactoryKt.createHikariDatasource-dWUq8MI(HikariDataSourceFactory.kt:214) ~[messaging-utils-db-db-impl-1679-d6375b82.jar:?]
at com.paxos.messaging.db.HikariDataSourceFactoryKt.createHikariDatasource-9VgGkz4(HikariDataSourceFactory.kt:139) ~[messaging-utils-db-db-impl-1679-d6375b82.jar:?]
at com.paxos.messaging.db.DefaultHikariDataSourceFactory$createHikariDataSource$2.invoke(HikariDataSourceFactory.kt:66) ~[messaging-utils-db-db-impl-1679-d6375b82.jar:?]
at com.paxos.messaging.db.DefaultHikariDataSourceFactory$createHikariDataSource$2.invoke(HikariDataSourceFactory.kt:63) ~[messaging-utils-db-db-impl-1679-d6375b82.jar:?]
at com.paxos.messaging.db.DefaultHikariDataSourceFactory.createHikariDataSource$lambda$0(HikariDataSourceFactory.kt:63) ~[messaging-utils-db-db-impl-1679-d6375b82.jar:?]
at java.util.concurrent.ConcurrentHashMap.computeIfAbsent(ConcurrentHashMap.java:1708) ~[?:?]
at com.paxos.messaging.db.DefaultHikariDataSourceFactory.createHikariDataSource(HikariDataSourceFactory.kt:63) ~[messaging-utils-db-db-impl-1679-d6375b82.jar:?]
at com.paxos.messaging.db.DefaultDataSourceFactory.createDataSource(DefaultDataSourceFactory.kt:12) ~[messaging-utils-db-db-impl-1679-d6375b82.jar:?]
at com.paxos.messaging.db.DefaultDbTransactionFactory.createReadOnlyDbTransaction(DefaultDbTransactionFactory.kt:15) ~[messaging-utils-db-db-impl-1679-d6375b82.jar:?]
at com.paxos.messaging.db.AbstractDbTransactionFactory$roTx$2.invokeSuspend(AbstractDbTransactionFactory.kt:20) ~[messaging-utils-db-db-impl-1679-d6375b82.jar:?]
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33) ~[kotlin-stdlib-1.8.21.jar:1.8.21-release-380(1.8.21)]
at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:106) ~[kotlinx-coroutines-core-jvm-1.6.4.jar:?]
at kotlinx.coroutines.internal.LimitedDispatcher.run(LimitedDispatcher.kt:42) ~[kotlinx-coroutines-core-jvm-1.6.4.jar:?]
at kotlinx.coroutines.scheduling.TaskImpl.run(Tasks.kt:95) ~[kotlinx-coroutines-core-jvm-1.6.4.jar:?]
at kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:570) ~[kotlinx-coroutines-core-jvm-1.6.4.jar:?]
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.executeTask(CoroutineScheduler.kt:750) ~[kotlinx-coroutines-core-jvm-1.6.4.jar:?]
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker(CoroutineScheduler.kt:677) ~[kotlinx-coroutines-core-jvm-1.6.4.jar:?]
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:664) ~[kotlinx-coroutines-core-jvm-1.6.4.jar:?]
Caused by: java.lang.ClassNotFoundException: com.mysql.cj.exceptions.WrongArgumentException
at jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:641) ~[?:?]
at jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:188) ~[?:?]
at java.lang.ClassLoader.loadClass(ClassLoader.java:520) ~[?:?]
... 35 more
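
The missing class suggests that in 2.1.0 the failover plugin references a MySQL Connector/J type even when the target driver is PostgreSQL. A quick diagnostic sketch (not a fix) to confirm whether that class is on a given deployment's classpath:

public class MySqlClassCheck {
    public static void main(String[] args) {
        try {
            Class.forName("com.mysql.cj.exceptions.WrongArgumentException");
            System.out.println("MySQL Connector/J is on the classpath.");
        } catch (ClassNotFoundException e) {
            // Matches the NoClassDefFoundError above: the type the failover
            // plugin touches is not available at runtime.
            System.out.println("MySQL Connector/J is missing from the classpath.");
        }
    }
}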

Reproduction Steps

Upgrade to 2.1.0
Try to connect to Aurora Postgres

Possible Solution

No response

Additional Information/Context

No response

The AWS Advanced JDBC Driver version used

2.1.0

JDK version used

17.0.6

Operating System and version

linux

Rollback to or release of savepoint fails with ClassCastException

Describe the bug

Calling Connection.rollback(savepoint) or Connection.releaseSavepoint(savepoint) with a savepoint that was returned by Connection.setSavepoint() will fail with a ClassCastException.

Expected Behavior

No exception, instead the operation should succeed as it does when not using the wrapper library.

What plugins are used? What other connection properties were set?

None

Current Behavior

Exception in thread "main" java.lang.ClassCastException: class software.amazon.jdbc.wrapper.SavepointWrapper cannot be cast to class org.postgresql.jdbc.PSQLSavepoint (software.amazon.jdbc.wrapper.SavepointWrapper and org.postgresql.jdbc.PSQLSavepoint are in unnamed module of loader 'app')
	at org.postgresql.jdbc.PgConnection.rollback(PgConnection.java:1809)
	at software.amazon.jdbc.wrapper.ConnectionWrapper.lambda$rollback$39(ConnectionWrapper.java:678)
	at software.amazon.jdbc.util.WrapperUtils.lambda$runWithPlugins$1(WrapperUtils.java:169)
	at software.amazon.jdbc.plugin.DefaultConnectionPlugin.execute(DefaultConnectionPlugin.java:98)
	at software.amazon.jdbc.ConnectionPluginManager.lambda$execute$3(ConnectionPluginManager.java:327)
	at software.amazon.jdbc.ConnectionPluginManager.lambda$makePluginChainFunc$0(ConnectionPluginManager.java:267)
	at software.amazon.jdbc.ConnectionPluginManager.executeWithSubscribedPlugins(ConnectionPluginManager.java:249)
	at software.amazon.jdbc.ConnectionPluginManager.execute(ConnectionPluginManager.java:324)
	at software.amazon.jdbc.util.WrapperUtils.executeWithPlugins(WrapperUtils.java:226)
	at software.amazon.jdbc.util.WrapperUtils.runWithPlugins(WrapperUtils.java:162)
	at software.amazon.jdbc.wrapper.ConnectionWrapper.rollback(ConnectionWrapper.java:672)
	at AwsWrapperTest.main(AwsWrapperTest.java:9)

Reproduction Steps

Run the code below, using your own values for the parameters to DriverManager.getConnection.

import java.sql.*;

public class AwsWrapperTest {

    public static void main(String[] args) throws SQLException {
        Connection connection = DriverManager.getConnection("jdbc:aws-wrapper:postgresql://localhost:49153/postgres", "username", "password");
        connection.setAutoCommit(false);
        Savepoint savepoint = connection.setSavepoint();
        connection.rollback(savepoint);
//        connection.releaseSavepoint(savepoint); // fails too when used instead of previous line
    }
}

Possible Solution

No response

Additional Information/Context

PostgreSQL version 15.1.

PostgreSQL JDBC driver "org.postgresql:postgresql:42.5.2"

The AWS Advanced JDBC Driver version used

1.0.1

JDK version used

openjdk version "17.0.3" 2022-04-19 OpenJDK Runtime Environment Temurin-17.0.3+7 (build 17.0.3+7) OpenJDK 64-Bit Server VM Temurin-17.0.3+7 (build 17.0.3+7, mixed mode, sharing)

Operating System and version

Windows 11

Check for ordering of plugins

There are comments in the docs suggesting that there is a preferred order of plugins. Do we validate this ordering when the plugins are loaded?

Ability to use IAM over Jump Boxes

Describe the feature

Be able to use the iam plugin when going through a jump box. Currently, it fails with an error saying that localhost does not match the RDS endpoint structure.

Use Case

I'm using JDBC to test applications locally that connect to an Aurora database, where a jump box is required. With the jump box configuration, you have to use a localhost address when connecting to the database.

Proposed Solution

Add a field to the wrapper holding the endpoint of the database instance, so that when localhost is used, the endpoint field could be used in its place (how this interacts with failover is an open question); see the sketch below.
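
A hedged sketch of what the proposal could look like from the application side; iamHost is a hypothetical property name for the real endpoint used to sign the IAM token while the socket goes to the local tunnel:

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class IamOverJumpBoxSketch {
    public static Connection connect() throws Exception {
        Properties props = new Properties();
        props.setProperty("wrapperPlugins", "iam");
        props.setProperty("user", "iam_db_user"); // placeholder IAM database user
        // Hypothetical property: the actual RDS endpoint to generate the auth
        // token for, while the TCP connection targets the jump-box tunnel.
        props.setProperty("iamHost", "mydb.cluster-xyz.us-east-1.rds.amazonaws.com");
        return DriverManager.getConnection(
            "jdbc:aws-wrapper:postgresql://localhost:5432/postgres", props);
    }
}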

Other Information

Unsupported AWS hostname localhost. Amazon domain name in format *.AWS-Region.rds.amazonaws.com or *.rds.AWS-Region.amazonaws.com.cn is expected.

Acknowledgements

  • I may be able to implement this feature request
  • This feature might incur a breaking change

The AWS Advanced JDBC Driver version used

1.0.0

JDK version used

OpenJDK 18.0.02

Operating System and version

Windows 11

chore: confirm that all PostgreSQL extensions work

Describe the bug

  • Confirm that CopyManager works
  • Confirm that geometry types work

Expected Behavior

CopyManager works

What plugins are used? What other connection properties were set?

N/A

Current Behavior

N/A

Reproduction Steps

N/A

Possible Solution

No response

Additional Information/Context

No response

The AWS Advanced JDBC Driver version used

Latest

JDK version used

1.8

Operating System and version

any

v1.0.2 cannot connect to an RDS PostgreSQL instance

Describe the bug

When any connection is created, I get the following error:

PSQLException
ERROR: function aurora_replica_status() does not exist
  Hint: No function matches the given name and argument types. You might need to add explicit type casts.
  Position: 35

Expected Behavior

Same behavior as with v1.0.1 of the wrapper.

What plugins are used? What other connection properties were set?

None

Current Behavior

org.postgresql.util.PSQLException: ERROR: function aurora_replica_status() does not exist
  Hint: No function matches the given name and argument types. You might need to add explicit type casts.
  Position: 35
    at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2713)
    at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2401)
    at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:368)
    at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:498)
    at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:415)
    at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:335)
    at org.postgresql.jdbc.PgStatement.executeCachedSql(PgStatement.java:321)
    at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:297)
    at org.postgresql.jdbc.PgStatement.executeQuery(PgStatement.java:246)
    at software.amazon.jdbc.hostlistprovider.AuroraHostListProvider.queryForTopology(AuroraHostListProvider.java:349)
    at software.amazon.jdbc.hostlistprovider.AuroraHostListProvider.getTopology(AuroraHostListProvider.java:262)
    at software.amazon.jdbc.hostlistprovider.AuroraHostListProvider.refresh(AuroraHostListProvider.java:478)
    at software.amazon.jdbc.PluginServiceImpl.refreshHostList(PluginServiceImpl.java:320)
    at software.amazon.jdbc.plugin.failover.FailoverConnectionPlugin.connect(FailoverConnectionPlugin.java:795)
    at software.amazon.jdbc.ConnectionPluginManager.lambda$connect$4(ConnectionPluginManager.java:348)
    at software.amazon.jdbc.ConnectionPluginManager.lambda$makePluginChainFunc$2(ConnectionPluginManager.java:275)
    at software.amazon.jdbc.ConnectionPluginManager.lambda$null$1(ConnectionPluginManager.java:275)
    at software.amazon.jdbc.plugin.AuroraConnectionTrackerPlugin.connect(AuroraConnectionTrackerPlugin.java:96)
    at software.amazon.jdbc.ConnectionPluginManager.lambda$connect$4(ConnectionPluginManager.java:348)
    at software.amazon.jdbc.ConnectionPluginManager.lambda$makePluginChainFunc$2(ConnectionPluginManager.java:275)
    at software.amazon.jdbc.ConnectionPluginManager.executeWithSubscribedPlugins(ConnectionPluginManager.java:253)
    at software.amazon.jdbc.ConnectionPluginManager.connect(ConnectionPluginManager.java:345)
    at software.amazon.jdbc.wrapper.ConnectionWrapper.init(ConnectionWrapper.java:131)
    at software.amazon.jdbc.wrapper.ConnectionWrapper.<init>(ConnectionWrapper.java:85)
    at software.amazon.jdbc.ds.AwsWrapperDataSource.createConnectionWrapper(AwsWrapperDataSource.java:164)
    at software.amazon.jdbc.ds.AwsWrapperDataSource.getConnection(AwsWrapperDataSource.java:134)
    at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:359)
    at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:201)
    at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:470)
    at com.zaxxer.hikari.pool.HikariPool.checkFailFast(HikariPool.java:561)
    at com.zaxxer.hikari.pool.HikariPool.<init>(HikariPool.java:100)
    at com.zaxxer.hikari.HikariDataSource.<init>(HikariDataSource.java:81)

Reproduction Steps

Create a new maven project and make sure to include the following dependencies:

<dependency>
    <groupId>org.postgresql</groupId>
    <artifactId>postgresql</artifactId>
    <version>42.6.0</version>
</dependency>

<dependency>
    <groupId>software.amazon.jdbc</groupId>
    <artifactId>aws-advanced-jdbc-wrapper</artifactId>
    <version>1.0.2</version>
</dependency>

Add the following Java file and run it against an RDS instance running Postgres 15.2 (i.e., not an Aurora cluster).

ReproduceRdsConnectionIssue.java

import software.amazon.jdbc.ds.AwsWrapperDataSource;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

public class ReproduceRdsConnectionIssue {

    public static void main(String[] args) throws Exception {
        AwsWrapperDataSource ds = getDataSource();
        try (final Connection conn = ds.getConnection("myUser", "myPassword");
             final Statement statement = conn.createStatement();
             final ResultSet rs = statement.executeQuery("SELECT 1")) {
            if (rs.next()) {
                System.out.println("Got " + rs.getInt(1));
            }
        }
    }

    static AwsWrapperDataSource getDataSource() {
        AwsWrapperDataSource ds = new AwsWrapperDataSource();
        ds.setJdbcProtocol("jdbc:postgresql:");
        ds.setDatabasePropertyName("databaseName");
        ds.setServerPropertyName("serverName");
        ds.setPortPropertyName("port");
        ds.setTargetDataSourceClassName("org.postgresql.ds.PGSimpleDataSource");
        Properties targetDataSourceProps = new Properties();
        targetDataSourceProps.setProperty("serverName", "myServerName");
        targetDataSourceProps.setProperty("databaseName", "myDatabaseName");
        targetDataSourceProps.setProperty("port", "myPort");
        ds.setTargetDataSourceProperties(targetDataSourceProps);
        return ds;
    }
}

Possible Solution

No response

Additional Information/Context

  • Postgres 15.2 on RDS
  • Java 17
  • Used HikariCP for database pooling but don't think it matters

The AWS Advanced JDBC Driver version used

1.0.2

JDK version used

17

Operating System and version

Ubuntu

Set wrapperPlugins without writing code

The latest announcement advertises this driver as drop-in compatible. I understand that I have to add the jar to my project and change the protocol prefix in the JDBC connection URL to jdbc:aws-wrapper:. But how do I set wrapperPlugins without changing my code? The docs do not suggest any method for that. Can I pass the connection plugin manager parameters as URL parameters in the JDBC connection URL?
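
If the wrapper accepts its parameters as URL query parameters (the documentation's connection-string examples suggest it does), the only change would be to the URL itself. A sketch, with hypothetical host and credentials:

import java.sql.Connection;
import java.sql.DriverManager;

public class UrlPluginsSketch {
    public static void main(String[] args) throws Exception {
        // wrapperPlugins is passed as a query parameter instead of being
        // set programmatically.
        Connection conn = DriverManager.getConnection(
            "jdbc:aws-wrapper:postgresql://mydb.cluster-xyz.us-east-1.rds.amazonaws.com:5432/postgres"
                + "?wrapperPlugins=failover,efm",
            "username", "password");
        conn.close();
    }
}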

Option to specify priority/weight in HostSpec

Describe the feature

I'm currently working on a HostListProvider that does topology discovery using AWS APIs to support Aurora Global Database. I'm trying to make it such that DB connections will be made to local regions when possible.

When reader failover occurs, the available reader hosts are shuffled and connected to at random. I think this is a good strategy for distributing connections among a local cluster with many readers, but it is less optimal in a global database, where reads to a distant region can add significant latency to DB queries.

Use Case

I'm currently working on a HostListProvider that does topology discovery using AWS APIs to support Aurora Global Database. I'm trying to make it such that DB connections will be made to local regions when possible. I need a way to weight/prioritize HostSpec objects so that clients fail over to local readers rather than remote readers.

Proposed Solution

I propose adding an optional priority/weight (defaulting to 0) to each HostSpec object so that certain hosts take precedence over others. The reader failover behavior could then be updated to group hosts by priority and shuffle within each priority group, as sketched below. For the AuroraHostListProvider there would be no behavior change.
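
A minimal sketch of the proposed ordering, assuming a hypothetical mapping from host name to priority (HostSpec itself carries no weight field today):

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class WeightedHostOrderingSketch {

    // Orders host names for reader failover: grouped by priority
    // (lower value first), shuffled within each priority group.
    static List<String> failoverOrder(Map<String, Integer> hostPriorities) {
        Map<Integer, List<String>> byPriority = new TreeMap<>();
        for (Map.Entry<String, Integer> entry : hostPriorities.entrySet()) {
            byPriority.computeIfAbsent(entry.getValue(), k -> new ArrayList<>())
                .add(entry.getKey());
        }
        List<String> ordered = new ArrayList<>();
        for (List<String> group : byPriority.values()) {
            Collections.shuffle(group); // keep the random spread within a priority tier
            ordered.addAll(group);
        }
        return ordered;
    }
}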

Other Information

No response

Acknowledgements

  • I may be able to implement this feature request
  • This feature might incur a breaking change

The AWS Advanced JDBC Driver version used

1.0.0

JDK version used

JDK 11

Operating System and version

Ubuntu 20.04

Datasource and IAM plugin

Hey, I have a question about how to configure a DataSource with the IAM plugin. I tried the following configuration:

ds.setJdbcProtocol("jdbc:postgresql:");
ds.setDatabasePropertyName("databaseName");
ds.setServerPropertyName("serverName");
ds.setPortPropertyName("port");
ds.setUser("app_user");
// ds.setPassword("pass");

// Specify the driver-specific data source:
ds.setTargetDataSourceClassName("org.postgresql.ds.PGSimpleDataSource");

// Configure the driver-specific data source:
Properties targetDataSourceProps = new Properties();
targetDataSourceProps.setProperty("serverName", "x");
targetDataSourceProps.setProperty("databaseName", "database_name");
targetDataSourceProps.setProperty("port", "5432");
targetDataSourceProps.setProperty("currentSchema", "schema");
targetDataSourceProps.setProperty("ssl", "true");
targetDataSourceProps.setProperty("sslmode", "require");
targetDataSourceProps.setProperty(PropertyDefinition.PLUGINS.name, "iam");
ds.setTargetDataSourceProperties(targetDataSourceProps);

return ds;

Unfortunately, no joy. The plugin generates a token that ends up in the properties under password, but at a later stage DefaultConnectionProvider removes it, and the connection attempt tries to use a regular password instead of the token.

MySQL Aurora failover ends up on reader instead of writer

Describe the bug

We are seeing our connections to RDS MySQL Aurora 'failover' incorrectly to the reader node rather than the writer node when using the following configuration:

  • JDBC Wrapper 1.0.2
  • MySQL Connector 8.0.32
  • Hikari 5.0.1
  • A Custom Domain CNAME to the cluster writer (TTL = 1s)
  • A jdbc url like this: jdbc:aws-wrapper:mysql://[custom-domain]:3306/db?clusterInstanceHostPattern=?.[clusterid].us-east-1.rds.amazonaws.com&wrapperLoggerLevel=ALL

Our test is similar to the one described in: awslabs/aws-mysql-jdbc#377

We simply have a loop that periodically gets the one and only connection from the connection pool, does some transactions, and releases the connection. We then initiate a failover from the writer to a reader. We see this issue when the connection is idle in the pool and subsequently checked out. We then get this sort of error:

java.sql.SQLException: The MySQL server is running with the --read-only option so it cannot execute this statement

Attached is the anonymized debug log captured during the cutover.

When using the cluster writer endpoint directly the issue does not seem to happen.

Expected Behavior

The failover should end up on the writer node.

What plugins are used? What other connection properties were set?

Default plugins are used. No additional properties other than in the JDBC url.

Current Behavior

The connection ends up on the reader instead of the writer.

Reproduction Steps

See description. Attached is the debug log.
wrapper.txt

Possible Solution

No response

Additional Information/Context

No response

The AWS Advanced JDBC Driver version used

1.0.2

JDK version used

Corretto 11.0.17.8.1

Operating System and version

Amazon Linux 2 (4.14.296-222.539.amzn2.x86_64)

Mysterious Hibernate dialect class `s.a.u.AuroraPostgreSQLDialect`

In https://github.com/awslabs/aws-advanced-jdbc-wrapper/blob/main/examples/HibernateExample/src/main/resources/META-INF/persistence.xml which accompanies the Hibernate example, there's a reference to this class:

        <property name="hibernate.dialect" value="software.amazon.utility.AuroraPostgreSQLDialect"/>

Unless I've missed it somewhere, this class doesn't seem to be present in the codebase.
Could you clarify what it's supposed to do, and whether it's actually necessary?

ReadWriteSplittingPlugin is not available in release version 1.0.0, and the aws-advanced-jdbc-wrapper cannot be built locally

Hi Team,

There are two issues/queries with aws-advanced-jdbc-wrapper.

  1. As per the documentation, we tried to use the ReadWriteSplittingPlugin in our project, but it looks like the ReadWriteSplittingPlugin feature has not been released yet, as it is not available in version 1.0.0.

https://github.com/awslabs/aws-advanced-jdbc-wrapper/blob/main/docs/using-the-jdbc-driver/using-plugins/UsingTheReadWriteSplittingPlugin.md

Could you please advise when this plugin will be available?

  2. Hence, we tried to clone the repository locally and build the project by running "./gradlew build", but it generates a jar containing only the manifest file. We could not get the build to work locally. Please advise on this.

Attached for your reference.

aws-advanced-jdbc-wrapper-1.0.0.jar.zip

Password change not picked up

Describe the bug

When using the wrapper in conjunction with Hikari, changing the username and password has no effect; the wrapper appears to cache the credentials and not update them.
In our application we have rotating credentials, and this causes a failure.
This is a regression in 1.0.1 from 1.0.0.

Expected Behavior

Changing the username and password in Hikari should update the AWS wrapper credentials, as in the sketch below.
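
A minimal sketch of the rotation pattern this implies, using Hikari's MXBeans to swap credentials at runtime (new physical connections should then authenticate with the new values):

import com.zaxxer.hikari.HikariDataSource;

public class CredentialRotationSketch {
    static void rotate(HikariDataSource ds, String newUser, String newPassword) {
        // Update the pool's credentials; Hikari applies these to new connections.
        ds.getHikariConfigMXBean().setUsername(newUser);
        ds.getHikariConfigMXBean().setPassword(newPassword);
        // Gently retire existing connections so the pool reconnects
        // with the rotated credentials.
        ds.getHikariPoolMXBean().softEvictConnections();
    }
}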

What plugins are used? What other connection properties were set?

none

Current Behavior

Hikari connections keep using the old credentials.

Reproduction Steps

I have attached a gradle project with a unit test which fails with version 1.0.1 and passes with version 1.0.0
To run the test simply run
./gradlew test
bug.tar.gz

Possible Solution

No response

Additional Information/Context

No response

The AWS Advanced JDBC Driver version used

1.0.1

JDK version used

17.0.4

Operating System and version

MacOS Ventura and Linux

[SecretsManagerPlugin] secretsManagerSecretId in ARN form already contains Region

Describe the bug

secretsManagerRegion should not be mandatory if an ARN is provided for secretsManagerSecretId.

Ref https://github.com/awslabs/aws-advanced-jdbc-wrapper/blob/main/docs/using-the-jdbc-driver/using-plugins/UsingTheAwsSecretsManagerPlugin.md

Expected Behavior

The Region is parsed from the secretsManagerSecretId ARN, as sketched below.
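
A minimal sketch of the parsing this would take; the ARN value below is a made-up example:

public class ArnRegionSketch {
    public static void main(String[] args) {
        // ARN layout: arn:aws:secretsmanager:<region>:<account-id>:secret:<name>
        String arn = "arn:aws:secretsmanager:eu-central-1:123456789012:secret:my-db-secret";
        String[] parts = arn.split(":");
        if (parts.length > 5 && "secretsmanager".equals(parts[2])) {
            System.out.println("Region: " + parts[3]); // prints "eu-central-1"
        }
    }
}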

What plugins are used? What other connection properties were set?

SecretsManagerPlugin

Current Behavior

secretsManagerRegion is mandatory

Reproduction Steps

Documentation states secretsManagerRegion is mandatory
https://github.com/awslabs/aws-advanced-jdbc-wrapper/blob/main/docs/using-the-jdbc-driver/using-plugins/UsingTheAwsSecretsManagerPlugin.md#aws-secrets-manager-connection-plugin-parameters

Possible Solution

No response

Additional Information/Context

No response

The AWS Advanced JDBC Driver version used

any

JDK version used

any

Operating System and version

1.0.2

fix: NullPointerException from WrapperUtils

Describe the bug

The method WrapperUtils#getConnectionFromSqlObject may throw an NPE if rs.getStatement() returns null.

Caused by: java.lang.NullPointerException
	at software.amazon.jdbc.util.WrapperUtils.getConnectionFromSqlObject(WrapperUtils.java:535)
	at software.amazon.jdbc.ConnectionPluginManager.execute(ConnectionPluginManager.java:318)
	at software.amazon.jdbc.util.WrapperUtils.executeWithPlugins(WrapperUtils.java:226)
	at software.amazon.jdbc.wrapper.ResultSetWrapper.next(ResultSetWrapper.java:1202)
	at org.hibernate.tool.schema.extract.internal.AbstractInformationExtractorImpl.extractNameSpaceTablesInformation(AbstractInformationExtractorImpl.java:688)
	at org.hibernate.tool.schema.extract.internal.AbstractInformationExtractorImpl.lambda$getTables$3(AbstractInformationExtractorImpl.java:570)
	at org.hibernate.tool.schema.extract.internal.InformationExtractorJdbcDatabaseMetaDataImpl.processTableResultSet(InformationExtractorJdbcDatabaseMetaDataImpl.java:69)
	at org.hibernate.tool.schema.extract.internal.AbstractInformationExtractorImpl.getTables(AbstractInformationExtractorImpl.java:564)
	at org.hibernate.tool.schema.extract.internal.DatabaseInformationImpl.getTablesInformation(DatabaseInformationImpl.java:122)
	at org.hibernate.tool.schema.internal.GroupedSchemaMigratorImpl.performTablesMigration(GroupedSchemaMigratorImpl.java:71)
	at org.hibernate.tool.schema.internal.AbstractSchemaMigrator.performMigration(AbstractSchemaMigrator.java:225)
	at org.hibernate.tool.schema.internal.AbstractSchemaMigrator.doMigration(AbstractSchemaMigrator.java:126)
	at org.hibernate.tool.schema.spi.SchemaManagementToolCoordinator.performDatabaseAction(SchemaManagementToolCoordinator.java:284)
	at org.hibernate.tool.schema.spi.SchemaManagementToolCoordinator.lambda$process$5(SchemaManagementToolCoordinator.java:143)
	at java.base/java.util.HashMap.forEach(HashMap.java:1337)
	at org.hibernate.tool.schema.spi.SchemaManagementToolCoordinator.process(SchemaManagementToolCoordinator.java:140)
	at org.hibernate.internal.SessionFactoryImpl.<init>(SessionFactoryImpl.java:336)
	at org.hibernate.boot.internal.SessionFactoryBuilderImpl.build(SessionFactoryBuilderImpl.java:415)
	at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl.build(EntityManagerFactoryBuilderImpl.java:1423)

Expected Behavior

No NullPointerException.

What plugins are used? What other connection properties were set?

None

Current Behavior

NullPointerException

Reproduction Steps

This error is thrown when running the Hibernate-ORM test suite with the AWS Advanced JDBC driver.

Possible Solution

Add a null check, as sketched below.
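
A hedged sketch of the null check, following the call in the stack trace; per the JDBC spec, ResultSet#getStatement may return null for result sets not produced by a Statement (e.g. DatabaseMetaData results):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class NullSafeConnectionLookup {
    static Connection getConnectionFromResultSet(ResultSet rs) throws SQLException {
        Statement stmt = rs.getStatement(); // may be null per the JDBC spec
        return (stmt != null) ? stmt.getConnection() : null;
    }
}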

Additional Information/Context

No response

The AWS Advanced JDBC Driver version used

1.0.1

JDK version used

Java 11

Operating System and version

Ubuntu
