aws-broker's People

Contributors

afeld, apburnes, bengerman13, ccostino, chrismcgowan, cnelson, cweibel, dandersonsw, dependabot[bot], dlapiduz, ephraim-g, folksgl, geramirez, jameshochadel, jasonthemain, jcscottiii, jmcarp, kwadwok15, linuxbozo, markdboyd, mogul, pburkholder, rbogle, rogeruiz, sharms, siennathesane, soutenniza, tammersaleh, timothy-spencer, wjwoodson


aws-broker's Issues

Readonly S3 access key and secret using the cloud.gov S3 service

In order to support auditors for our FDIC website, we want auditors to have read-only access to an S3 bucket so they can audit content without being able to modify, create, or delete it. Currently we circumvent this by building a web-based read-only user interface to drill into the contents of the S3 bucket. However, this is limited, since auditors would also like to use the credentials to pull content using the S3 API.

Acceptance Criteria

  • GIVEN an S3 instance
    WHEN entering cf create-service-key
    THEN developers should have the ability to pass a READONLY attribute
    AND an S3 service key with READONLY access id/secret should be provisioned
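If implemented via IAM, a read-only service key would presumably carry a policy along these lines (a sketch only; the action list and bucket ARN are illustrative, not the broker's actual policy):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket", "s3:GetBucketLocation"],
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ]
    }
  ]
}
```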

Security considerations

[note any potential changes to security boundaries, practices, documentation, risk that arise directly from this story]

Implementation sketch

[links to background notes, sketches, and/or relevant documentation]

  • [first thing to do]
  • [another thing to do]

Provision a bucket for RDS logs

In order to ship RDS logs to a customer via S3, the broker should be able to provision a bucket if necessary.

Acceptance Criteria

  • GIVEN a customer opting in to RDS log shipping to S3
    AND without an existing bucket for RDS logs in the space
    WHEN the customer opts in to ship RDS logs
    THEN the broker provisions an S3 bucket (service instance) in the customer space

  • GIVEN a customer opting in to RDS log shipping to S3
    AND with an existing bucket for RDS logs in the space
    WHEN the customer opts in to ship RDS logs
    THEN the broker uses the existing S3 bucket (service instance) in the customer space


Security considerations

By leveraging the broker, we minimize any security risks (they are the same with any other S3 bucket). Because the service instance is in the customer space, the customer can generate bindings and service keys as needed.

Allow enabling performance schema for mysql instances

In order to give users greater visibility into performance, we want to allow enabling the performance schema on MySQL databases (this is MySQL-specific; Postgres always has the equivalent enabled). We should consider whether we want this always on, on by default and toggleable, or off by default and toggleable.

Add support for setting custom parameters when creating RDS instances

In order to allow customers to set custom parameters that are offered by RDS, we want to expand the support for custom parameter groups and allow users to set the parameters directly when brokering a new RDS instance, e.g., cf create-service aws-rds $PLAN $SERVICE_NAME -c '{"custom_param_1": $value, "custom_param_2": $value}'.

Custom parameter groups are already implemented in the broker and are created per instance, when they are instantiated. That is currently only for enabling functions for MySQL, though. This would be expanding the functionality to allow for more.

Acceptance Criteria

  • GIVEN custom parameters provided by the -c argument in cf create-service
    WHEN the broker processes the arguments
    AND confirms that they are valid parameters with acceptable values
    THEN a new custom parameter group is created with the new parameters set
    AND associated with instantiated RDS instance
  • GIVEN an existing brokered RDS instance with custom parameters is deleted
    WHEN the broker processes the removal of the instance
    THEN the associated custom parameter group is also deleted

Security considerations

We'll want to make sure the parameters that we allow customers to set do not add any risk to the platform and that they will be supported by the automated provisioning. This could mean we only allow a subset of custom parameters to be configured, and the subset may vary between database engines (e.g., MySQL and PostgreSQL).

Implementation sketch

  • Research which custom parameters would make sense to enable in the platform - check with our compliance lead, especially if this involves having to file an SCR
  • Expand the custom parameter group functionality in createDB to allow for these new options
  • Verify that the removal of an RDS instance also deletes its associated custom parameter group (handled by cleanupCustomParameterGroups)
  • Update the RDS documentation in cg-site
  • Write unit tests to cover the new functionality
  • Write smoke tests to cover the new functionality

Users can make databases undeletable

Using the shared plan, user databases can become undeletable:

FAILED
Server error, status code: 502, error code: 10001, message: Service instance databasename: Service broker error: There was an error deleting the instance. Error: pq: role "username" cannot be dropped because some objects depend on it
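The underlying Postgres failure is a role that still owns (or has privileges on) objects. A manual cleanup sketch, assuming superuser access; "username" and new_owner are placeholder names, not the broker's actual cleanup logic:

```sql
-- Hand the stuck role's objects to another role, then drop its
-- remaining objects/privileges and finally the role itself
REASSIGN OWNED BY "username" TO new_owner;
DROP OWNED BY "username";
DROP ROLE "username";
```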

Log-exporter: Support shipping RDS logs to an S3 bucket

In order to support shipping RDS logs to customers via S3, the log-exporter needs to support copying download archives to S3 buckets.

Depends on: #134

Acceptance Criteria

The log-exporter needs to:

  • Be able to copy downloaded archives to a customer S3 bucket
  • Ensure logs are copied to the correct bucket (and only the correct bucket).

Security considerations

None provided the log exporter lands archives in the correct buckets.

Configure MySQL logging to go to a file

In order to support downloading of MySQL logs for customers, we need MySQL to log to a file.

Acceptance Criteria

  • GIVEN MySQL instances that are logging
    WHEN logs are generated
    THEN they are written to a file
    AND downloadable via the AWS CLI

Security considerations

Logs are currently stored within AWS. This will not change: logs will remain in AWS, just in files instead of a table. Postgres already logs to files.

Implementation sketch

Setting changes: https://aws.amazon.com/premiumsupport/knowledge-center/rds-download-instance-logs/

  • Update existing standalone MySQL RDS instances to log to a file
  • Update the broker to ensure logging to a file is the default setting for standalone instances

Expose RDS disk usage/capacity to customers

As developer of an app hosted on cloud.gov, I want to be able to monitor the disk usage/capacity of a managed RDS instance so I can capacity plan and avoid unexpected downtime. See related ticket #18.

I would like to be able to query RDS disk space utilization and provisioned capacity using the cf CLI, with the response in a machine-readable format. I'd also like to be able to view these figures on the cloud.gov dashboard.

For RDS for PostgreSQL specifically, I can query the disk usage via:

SELECT pg_size_pretty( pg_database_size('my-db') );

but don't have a way to query for the provisioned capacity.

Retain Elasticsearch Service backups for 14 days after deletion (1/18)

In order to maintain backups as documented, we want to keep backups of Elasticsearch instances for 14 days after service deletion.

Acceptance Criteria

  • GIVEN an aws-es service instance
    WHEN a user calls cf delete-service on the instance
    AND the broker finishes deleting the service instance
    THEN existing, automated backups are still available for 14 days
    AND existing automated backups are deleted after 15 days

Security considerations

Protects data in case of accident or malicious deletion

Implementation sketch

  • check if this is possible automatically using current AWS API calls
  • reach out to AWS rep if not
  • consider alternatives if AWS rep doesn't provide timeline for this to be possible with current API

Upgrade shared and internal DB in prod from pg 9.5 (2/8)

Upgrade your RDS for PostgreSQL 9.5 databases before Feb 16, 2021
The RDS for PostgreSQL 9.5 end-of-life date is approaching. Your database is using a version that must be upgraded to 12 or higher as soon as possible. We plan to automatically upgrade RDS for PostgreSQL 9.5 databases to 12 starting February 16, 2021 00:00:01 AM UTC.

For more information, see the RDS for PostgreSQL deprecation timeline in the Amazon RDS forum.

Provide way to provision AWS Buckets with access restricted to agency IP addresses

In almost every case, we would like access to AWS S3 buckets to be restricted to agency IPs or cloud.gov IPs. A security gap here is that developers who leave an organization would still have access to the S3 buckets from home or elsewhere. It's not easy to guarantee that keys are rotated on time, especially when the developers have access to all keys in the space. One possible implementation: while provisioning an AWS S3 service, IP ranges could be passed as a parameter to restrict access to agency IPs. If possible, this should also apply to public AWS S3 buckets to prevent anyone from logging in and wiping contents (our website is an example of this).

Acceptance Criteria

WHEN provisioning a new AWS S3 service
THEN allow Agency IP addresses to be passed to restrict access
AND when service keys are created OR application is bound to service, resulting access keys can be usable only from Agency IPs and cloud.gov IPs
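One way this could be implemented is a bucket policy that denies any request originating outside the approved ranges (a sketch; the bucket name and CIDR blocks are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ],
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": ["203.0.113.0/24", "198.51.100.0/24"]
        }
      }
    }
  ]
}
```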


Security considerations

[note any potential changes to security boundaries, practices, documentation, risk that arise directly from this story]

Implementation sketch

[links to background notes, sketches, and/or relevant documentation]

  • [first thing to do]
  • [another thing to do]

As a cloud.gov user, I want to be able to resize RDS databases when the storage is full.

The FDIC team filled up a MySQL database provisioned by the broker. Once storage was full, running cf update-service to increase storage would result in an error. The workaround was for support staff to use the AWS console to increase the storage.

Acceptance Criteria

  • GIVEN an RDS database that has used all of its storage
    WHEN the user runs a cf update-service to increase storage
    THEN the update succeeds

Security considerations

Updates are controlled by the same RBAC

Log-exporter: Support sending RDS logs to Kibana

In order to support shipping RDS logs to Kibana, the log-exporter needs to support sending downloaded archives to Kibana ingestors.

Depends on:

Acceptance Criteria

The log-exporter needs to:

  • Be able to send downloaded archives to Kibana ingestors
  • Customer RDS logs must be scoped correctly to a single org.

Security considerations

See: #132

Allow users to subscribe to RDS alerts

From @afeld:

hey, so @lindsay mentioned that FEC's database ran out of disk space last night. that one happens to be managed outside of cloud.gov so they are able to set up alerts through AWS directly, but made me realize that we don't have a way for users to get those kinds of alerts for managed databases. thoughts?

RDS service instances that fail to create are left in an orphaned state and can't be deleted

When a user creates a new RDS service instance with cf create-service... but the command fails, the service instance can be left in an orphaned state with no actual service and the user is not able to delete the service instance to try again. For example, this is the error output we recently saw that caused this situation:

TIMESTAMP [APP/PROC/WEB/0] OUT InvalidParameterCombination RDS does not support creating a DB instance with the following combination: DBInstanceClass=db.m3.medium, Engine=postgres, EngineVersion=13.3, LicenseModel=postgresql-license. For supported combinations of instance class and database engine version, see the documentation. <nil>
TIMESTAMP [APP/PROC/WEB/0] OUT InvalidParameterCombination RDS does not support creating a DB instance with the following combination: DBInstanceClass=db.m3.medium, Engine=postgres, EngineVersion=13.3, LicenseModel=postgresql-license. For supported combinations of instance class and database engine version, see the documentation. 400
TIMESTAMP [APP/PROC/WEB/0] OUT [martini] Completed 400 Bad Request in 3.33056033s

We need to adjust the error handling in the createDB functions to make sure AWS API calls that result in 400s are handled and cleaned up properly.

Acceptance criteria:

  • When a user runs cf create-service... and the command fails, it handles the API error gracefully and cleans things up properly so orphaned services aren't left over.

Security considerations:

  • Fixing this helps prevent cruft from building up in the platform and helps our users with better error handling

Implementation sketch

  • Fix/improve the error handling in the createDB functions where AWS API calls are made

Add support for upgrading existing PostgreSQL DB instance versions

In order to make database updates easier, we want to enable the ability to update existing brokered DB instances to newer versions.

You would not be able to go back to an older version (AWS doesn't allow this anyway), but it would allow customers to manage a database update themselves instead of having to make a support request.

Acceptance Criteria

  • GIVEN an existing brokered DB service instance
    WHEN a user calls cf update-service -c '{"version": <version>}'
    THEN the broker will verify the version is valid and compatible with their DB instance
    AND perform the database upgrade

Security considerations

  • Helps our customers stay secure and compliant by providing the ability to upgrade their database instances
  • Helps platform operators manage RDS instances more easily

Implementation sketch

  • Add ability to support the version option we added to the CreateInstance method in the ModifyInstance method
  • Adjust the AWS API call to make sure it takes in the DBVersion parameter as a part of the payload

Readonly credentials for RDS during service key creation

In order to support developer access to production databases during debugging, we want developers to have just read access to the DB for a short time.

Acceptance Criteria

  • GIVEN an existing RDS instance
    WHEN entering cf create-service-key
    THEN we should have an option to pass READONLY attribute
    AND service key with READONLY db credentials should be provisioned
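For PostgreSQL instances, the broker could provision such credentials roughly as follows (a sketch; the role name, database name, and password are placeholders, and MySQL would need the equivalent GRANT statements):

```sql
-- Create a login role limited to SELECT (placeholder names/password)
CREATE ROLE readonly_user WITH LOGIN PASSWORD 'generated-password';
GRANT CONNECT ON DATABASE mydb TO readonly_user;
GRANT USAGE ON SCHEMA public TO readonly_user;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO readonly_user;
-- Cover tables created after the key is provisioned
ALTER DEFAULT PRIVILEGES IN SCHEMA public
  GRANT SELECT ON TABLES TO readonly_user;
```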

Security considerations

[note any potential changes to security boundaries, practices, documentation, risk that arise directly from this story]

Implementation sketch

[links to background notes, sketches, and/or relevant documentation]

  • [first thing to do]
  • [another thing to do]

Add support for creating MySQL DB instances with specific versions

In order to provide full version management support for MySQL and the rest of the RDS broker, we need to shift the version information into the catalog and remove the checks for PostgreSQL instances.

You would not be able to go back to an older version (AWS doesn't allow this anyway), but it would allow customers to manage a database update themselves instead of having to make a support request.

This would also allow us to shift our MySQL plans to offer the latest version available instead of one that is pinned several versions behind.

Acceptance Criteria

  • GIVEN a new brokered MySQL DB service instance
    WHEN a user calls cf create-service -c '{"version": <version>}' to create the instance
    THEN the broker will verify the version is valid and compatible
    AND perform the database creation

Security considerations

  • Helps our customers stay secure and compliant by providing the ability to create newer versions of supported MySQL instances
  • Helps platform operators manage RDS instances more easily

Implementation sketch

  • Shift version information (minimum and maximum allowed) to the catalog plans
  • Modify the version checks to account for the information from the plans
  • Remove the PostgreSQL specific checks
  • Adjust MySQL plans to always provision the latest default version available in AWS just like our PostgreSQL plans

Incorrect status returned during provisioning

The broker seems to be returning succeeded immediately, instead of in progress in entity.last_operation.state, as required by the spec. This causes an issue for automation that depends on this value to determine whether the DB is ready for use (such as in nulldriver/cf-cli-resource#77)

$ cf cs aws-rds medium-psql test && cf curl "/v2/service_instances/$(cf service test --guid)"   
Creating service instance test in org REDACTED / space REDACTED as REDACTED...
OK

{
   "metadata": {
      "guid": "REDACTED",
      "url": "/v2/service_instances/REDACTED",
      "created_at": "2020-04-02T15:20:30Z",
      "updated_at": "2020-04-02T15:20:30Z"
   },
   "entity": {
      "name": "test",
      "credentials": {},
      "service_plan_guid": "REDACTED",
      "space_guid": "REDACTED",
      "gateway_data": null,
      "dashboard_url": null,
      "type": "managed_service_instance",
      "last_operation": {
         "type": "create",
         "state": "succeeded",
         "description": "The instance was created",
         "updated_at": "2020-04-02T15:20:30Z",
         "created_at": "2020-04-02T15:20:30Z"
      },
      "tags": [],
      "maintenance_info": {},
      "service_guid": "REDACTED",
      "space_url": "/v2/spaces/REDACTED",
      "service_plan_url": "/v2/service_plans/REDACTED",
      "service_bindings_url": "/v2/service_instances/REDACTED/service_bindings",
      "service_keys_url": "/v2/service_instances/REDACTED/service_keys",
      "routes_url": "/v2/service_instances/REDACTED/routes",
      "service_url": "/v2/services/REDACTED",
      "shared_from_url": "/v2/service_instances/REDACTED/shared_from",
      "shared_to_url": "/v2/service_instances/REDACTED/shared_to",
      "service_instance_parameters_url": "/v2/service_instances/REDACTED/parameters"
   }
}

Improve the names and descriptions of the database plan offerings so that they are clear and make sense

The current names and descriptions for the database plans that are offered by our RDS broker don't make much sense to our customers, nor do they adequately reflect the actual service instance details they represent. For instance, medium-gp-psql doesn't mean much to anyone (what does the gp mean, anyway? It stands for general purpose, but that's beside the point), and the description of Dedicated higher workload medium RDS PostgreSQL DB instance isn't that helpful, either.

We ought to update these names and descriptions so they are clearer, use plain language, and accurately represent their respective service details.

Acceptance criteria:

  • When a user runs cf marketplace -e aws-rds they should see a useful listing of all database plans we currently support that have clear names and useful descriptions.

Security considerations:

  • None; any details we share are already publicly visible in the catalog-template.yml file.

Implementation sketch:

  • Update the catalog-template.yml names and descriptions once we know what we want to put there

Elasticsearch Dev Plan (es-dev) instances take multiple hours to provision

In order to avoid the long provisioning times that development instances of Elasticsearch (currently T2 instances) seem to experience, we want to change the Elasticsearch broker plan for the development type es-dev to use the T3 generation.

Acceptance Criteria

  • A customer specifies the use of an es-dev service instance; the plan specifies the T3 instance type, and the broker requests the provisioning of a T3 instance instead of the current T2 type.
  • es-dev instances provision in a time comparable to other plan types and instances.

Security considerations

No obvious impacts

Implementation sketch

  • update the catalog and plans for the elasticsearch broker to use t3 instead of t2.

Fix usernames within Postgres RDS instances to be lowercase

We discovered an issue where usernames for PostgreSQL RDS instances were being created with both lowercase and uppercase characters, which causes issues for users trying to connect directly to a PostgreSQL instance (the psql client needs them to be lowercase). We've rolled out a fix for this, but need to adjust existing usernames that contain mixed case characters to be just lower case and ensure their permissions and access still work properly.

Acceptance Criteria

  • GIVEN all affected Postgres users in the CCDB
    WHEN the usernames are changed to be all lowercase
    AND permissions are verified
    THEN users should still be able to connect properly
    AND app access should not break

Security considerations

  • None; this is just swapping affected usernames to become all lowercase

Implementation sketch

Switch to go mod

In order to use current best-practices for go, we want to change from godep to go mod

Security considerations

None

Implementation sketch

start here, maybe

Update MySQL RDS catalog plans to have parity with PostgreSQL and reflect recent AWS instance class changes

In addition to the updates with PostgreSQL, our MySQL offerings should be updated to also account for the new instance class generations offered by AWS. This also includes the ability to now provision db.t3.* instances for MySQL, which means we can now offer a micro-mysql plan that supports encryption at rest (db.t2.micro does not).

Acceptance criteria:

  • All MySQL plans are updated to support the new AWS RDS instance classes that are offered
  • A new micro-mysql plan (or whatever it ends up being called in the future) is now available

Security considerations:

  • Helps keep our database offerings up to date within the platform

Implementation sketch:

  • Update catalog-template.yml with new instance classes
  • Add a new micro-mysql plan to catalog-template.yml

Allow for configurable custom backup options with RDS instances

In order to provide enhanced backup services for our customer database instances, we want to explore what it would take to allow for configurable backups so that we can determine if this is feasible within our RDS broker.

This should not have any adverse impact on the existing backup settings we currently have (snapshots taken daily and stored for 14 days). Any such option should be provided in addition to the existing backup configuration and policy.

Ideally, this would be exposed and function exactly the same way as what we do with the storage option currently. The backup days are set in the catalog by default, but we ought to be able to override it with an option that is passed in.

Acceptance Criteria:

  • GIVEN a means of using the AWS API to configure backup policies
    AND a way of integrating this into the service broker
    THEN we can formulate a plan for how to approach this
    AND what (if any) pricing should look like for the option/capability
  • Identify a min/max for what this can be set at - 90 days maximum, 14 days minimum seems appropriate
  • This option is available for both creating a new instance and updating an existing one

Security Considerations:

  • This shouldn't change anything regarding our security boundaries or have any impact on compliance requirements; if anything, it may provide added benefits by allowing more fine-grained control, especially with the 14-day daily backup window as the baseline regardless.

Implementation Sketch:

Allow advanced options on Elasticsearch service instances

In order to support more complex elasticsearch use cases, we want to allow users to specify advanced options for elasticsearch

Acceptance Criteria

  • WHEN a user requests a new elasticsearch service instance
    AND the user supplies "advanced options" in the configuration body
    THEN the broker should validate the keys against an allowlist
    AND the broker should validate the data type of the values
    AND the broker should create the service instance with the specified advanced options
  • GIVEN an update request
    WHEN a user calls update
    AND the user supplies "advanced options" in the configuration body
    THEN the broker should validate the keys against an allowlist
    AND the broker should validate the data type of the values
    AND the broker should update the service instance to have the specified advanced options

Optional/consider

We might validate the values of the advanced options to give faster feedback for invalid options, instead of waiting on AWS.
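As a sketch of what the configuration body might look like (the "advanced_options" key is an assumption; the two option names are examples of advanced options the AWS Elasticsearch API accepts, with values passed as strings):

```json
{
  "advanced_options": {
    "rest.action.multi.allow_explicit_index": "true",
    "indices.fielddata.cache.size": "40"
  }
}
```

This body would be passed via the -c flag on cf create-service or cf update-service, matching how other broker options are supplied.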


Security considerations

None

Trigger prod deployment if no Terraform changes

We have thorough acceptance tests on this app, so we are safe to deploy code-only changes to prod without human intervention.

Acceptance Criteria

  • GIVEN a new version of the broker code
    WHEN the version has passed staging acceptance tests
    AND the version does not trigger any changes in terraform
    THEN the version should trigger a production build
  • GIVEN a new version of the broker code
    WHEN the version has passed staging acceptance tests
    AND the version triggers changes in terraform
    THEN the version should require human intervention to trigger a production deployment

Security considerations

None

Add support for brokering earlier Postgres versions in AWS

In order to allow customers to set which version of postgres they are using when instantiating an RDS instance, we want to expand the support for use of custom parameter groups and allow users to set this parameter directly when brokering a new RDS instance, e.g., cf create-service aws-rds $PLAN $SERVICE_NAME -c '{"version": $value}'.

Note: we want to make sure that we are only allowing the customer to dictate the major version of Postgres; the broker then pulls the most stable subversion of that specified version.

Example: if the user specifies postgres version 12, we then set the broker to grab version 12.x -- where .x is the most stable or newest release of version 12.

Acceptance Criteria

  • GIVEN custom parameter provided by the -c argument in cf create-service
    WHEN the broker processes the argument
    AND confirms that the version number is valid
    THEN a new custom parameter group is created with the version set
    AND associated with instantiated RDS instance

  • GIVEN an existing brokered RDS instance with custom parameters is deleted
    WHEN the broker processes the removal of the instance
    THEN the associated custom parameter group is also deleted


Security considerations

By specifying the most stable release of the version specified, I think we are mitigating any potential security or compliance considerations.

Implementation sketch

  • Expand the custom parameter group functionality in createDB to allow for setting the postgres version specified by the end user
  • Verify that the removal of an RDS instance also deletes its associated parameter group (handled by cleanupCustomParameterGroups)
  • Update the RDS documentation in cg-site
  • Write unit tests to cover the postgres version specification
  • Write smoke tests to cover the postgres version specification

Investigate: Scoping RDS logs to a customer

In order to ingest RDS logs, we need a strategy to ensure RDS logs are scoped to a customer.

Questions:

  • Can the org setting be applied on a "per file basis"?
  • Do we have to modify each log entry to include an "org-id"?

Security considerations

This is a research task to define the correct method to ensure the security of customer data.

S3 service should have a plan for provisioning FIPS compliant URI

In order to have applications follow FIPS compliance standards when connecting to the AWS S3 REST API, FDIC Security wants the URI that is provisioned as part of the S3 service key to be FIPS compliant. The workaround at this point is for the application to ignore the URI in the service key and manually use the FIPS-compliant URI (provided by AWS). A better idea is to update the URI to be FIPS compliant or include a plan which can provide that.

Acceptance Criteria

WHEN creating an AWS service instance
AND creating a service key or binding the instance to an application
THEN provide an option for the URI to be FIPS compliant


Security considerations

[note any potential changes to security boundaries, practices, documentation, risk that arise directly from this story]

Implementation sketch

[links to background notes, sketches, and/or relevant documentation]

  • [first thing to do]
  • [another thing to do]

Investigate adding platform tags to Redis instances

In order to maintain consistency between service instances and be able to view information quickly, we would like to investigate if we can add our own custom tags to the Redis instances to match the behavior we have for RDS and Elasticsearch instances.

Acceptance Criteria

  • Given a new Redis instance provisioned by the broker, when a platform operator views the instance in the AWS console or via the AWS CLI, custom tags containing things like the org GUID, space GUID, service instance GUID, etc., are present.

Security considerations

  • A variety of platform entity GUIDs will be present on the instance; this is consistent with our other brokered services, and only accessible by platform operators

Implementation sketch

  • See AWS ElastiCache API documentation for Go

Add versions when viewing AWS RDS psql service instances

Expose brokered service software versions to user programmatically (through the API)

Needs include:

  • EPA: determining which database versions are being used across all orgs and spaces.
  • FEC: so that they can more easily audit their systems, developers at the FEC want to be able to view the Postgres versions of AWS RDS psql service instances.

Proposition:
Add support to the aws-broker to GET service instance parameters, which is part of the API but we do not implement. However, this would only give the major DB version (like Postgres 13, 14, 15…), not the full version, so it wouldn’t meet their needs. As of 9/12/2023 there is no widely-accepted format for delivering service version numbers like there is for software libraries.

Acceptance Criteria

  • GIVEN an existing brokered service instance (such as an RDS instance)
    WHEN I run the command cf service <service_name>
    THEN I'd like to be able to see the version of the brokered service

Security considerations

No known security considerations other than exposure of service instance versions.

Implementation sketch

[links to background notes, sketches, and/or relevant documentation]

  • [first thing to do]
  • [another thing to do]

Support opt-in of shipping RDS logs to Kibana

In order to support customers needing RDS logs in external (non-cloud.gov) systems, customers with standalone RDS instances need to be able to opt in to log shipping to S3.

The scope of this story is only to opt in to the configuration. It does not encompass shipping logs.

Acceptance Criteria

  • GIVEN a customer with a dedicated RDS instance (not shared)
    WHEN the customer needs logs shipped to S3
    THEN the customer can opt in

Security considerations

None. This should just be a configuration setting in the broker DB. The scope of this story is only to opt-in to the configuration. It does not encompass shipping logs.

Implementation sketch

  • Enable a configuration setting to be passed to the broker on service instance creation or update

Add to DB credentials block for greater compatibility

In order to deploy and use existing code that works with Pivotal and AWS' RDS brokers, cloud.gov customers want the DB name to be provided in an additional "name" field in VCAP_SERVICES.

Context: A couple of times previously, I've run into a problem where code written by people using other CF deployments doesn't work with bound cloud.gov RDS instances. It happened again today.
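As a sketch of the proposal, a bound RDS credentials block in VCAP_SERVICES could duplicate the existing database-name value into an additional "name" field for compatibility with code written against other brokers. All values below are placeholders, and the exact field names in the current credentials block should be confirmed against the broker's bind response:

```json
{
  "aws-rds": [
    {
      "credentials": {
        "db_name": "cgawsbrokerexampledb",
        "name": "cgawsbrokerexampledb",
        "host": "example.rds.amazonaws.com",
        "port": "5432",
        "username": "exampleuser",
        "password": "examplepassword",
        "uri": "postgres://exampleuser:examplepassword@example.rds.amazonaws.com:5432/cgawsbrokerexampledb"
      }
    }
  ]
}
```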

(Leaving everything below for the team to fill out.)


Acceptance Criteria

  • GIVEN [a precondition]
    AND [another precondition]
    WHEN [test step]
    AND [test step]
    THEN [verification step]
    AND [verification step]

Security considerations

[note any potential changes to security boundaries, practices, documentation, risk that arise directly from this story]

Implementation sketch

[links to background notes, sketches, and/or relevant documentation

  • [first thing to do]
  • [another thing to do]

RDS broker no longer able to create instances of PostgreSQL without specifying a version

The RDS broker will no longer create a new RDS instance without a version number being specified, because AWS now defaults to version 13.x, which is only compatible with the db.m5.* and db.m6g.* instance classes. We need to update our service plan offerings to account for this change and make sure that the older versions can upgrade cleanly to the new release.

This is a bit more complicated because our existing service offerings are paired with db.t3.* or db.m4.* instance classes. A bit of preliminary research showed that moving to the current-generation instance classes costs about the same (and in one case, slightly less), so we ought to consider modifying our catalog to update each of these.

As a part of this work, the catalog entry for the micro-psql plan also needs to be updated to have the correct parameters:

  • Remove minVersion and maxVersion parameters
  • Add approvedMajorVersions parameter with the list of acceptable versions

Acceptance criteria:

  • When a user runs cf create-service... to create a new PostgreSQL instance without specifying a version, the command runs successfully and a new instance is created with AWS' defaults
  • When a user creates a micro-psql service instance, they are able to correctly specify a major version that is approved

Security considerations:

  • We need to make sure we're able to offer PostgreSQL 13.x
  • This work will help us and our customers stay up to date with the latest RDS database offerings

Implementation sketch:

  • Update catalog-template.yml with the changes needed to support the new version
  • Update catalog-template.yml with the changes needed to support the new instance classes
  • Update catalog-template.yml to fix the micro-psql plan
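The approvedMajorVersions check described above could be a small validation step during provisioning. The sketch below is a minimal, self-contained illustration; the approved list and the shape of the version string a customer supplies are assumptions, and the real implementation would read the list from the catalog plan:

```go
package main

import (
	"fmt"
	"strings"
)

// approvedMajorVersions mirrors the proposed catalog parameter for the
// micro-psql plan; the exact list here is illustrative, not authoritative.
var approvedMajorVersions = []string{"12", "13", "14", "15"}

// majorVersion extracts the major version from an engine version string
// such as "13.4" or "15".
func majorVersion(v string) string {
	if i := strings.Index(v, "."); i >= 0 {
		return v[:i]
	}
	return v
}

// isApproved reports whether the requested version's major release appears
// in the plan's approved list.
func isApproved(requested string, approved []string) bool {
	m := majorVersion(requested)
	for _, a := range approved {
		if a == m {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(isApproved("13.4", approvedMajorVersions))
	fmt.Println(isApproved("9.6.24", approvedMajorVersions))
}
```

If the check fails, the broker would reject the create-service request with an error listing the approved major versions, rather than passing an unsupported version through to AWS.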

Retain Redis backups for 14 days after service deletion (1/18)

In order to maintain backups as documented, we want to keep backups of ElastiCache instances for 14 days after service deletion.

Acceptance Criteria

  • GIVEN a aws-redis service instance
    WHEN a user calls cf delete-service on the instance
    AND the broker finishes deleting the service instance
    THEN existing, automated backups are still available for 14 days
    AND existing automated backups are deleted after 15 days

Security considerations

Protects data in case of accident or malicious deletion

Implementation sketch

  • check if this is possible automatically using current AWS API calls
  • reach out to AWS rep if not
  • consider alternatives if AWS rep doesn't provide timeline for this to be possible with current API

Add support for modifying custom parameters of an existing RDS instance

In order to allow customers to modify custom parameters that are configured on their existing brokered RDS instances, we want to expand support for modifying existing instances so that users can update the custom parameters, e.g., cf update-service aws-rds $PLAN $SERVICE_NAME -c '{"custom_param": $value, "custom_param": $value}'.

Unlike the current createDB method, which includes support for custom parameter groups, the modifyDB method does not currently support them at all.

Acceptance Criteria

  • GIVEN custom parameters provided by the -c argument in cf update-service
    WHEN the broker processes the arguments
    AND confirms that they are valid parameters with acceptable values
    THEN a new custom parameter group is created with the new parameters set
    AND associated with instantiated RDS instance

Security considerations

We'll want to make sure the parameters that we allow customers set do not add any risk to the platform and that they will be supported by the automated provisioning. This could mean we only allow a subset of custom parameters to be configured, and may vary between database engines (e.g., MySQL and PostgreSQL).

Implementation sketch

  • Research which custom parameters would make sense to enable in the platform - check with our compliance lead, especially if this involves having to file an SCR; these should also match what we enable with cf create-service
  • Expand the custom parameter group functionality in modifyDB to enable custom parameter support
  • Update the RDS documentation in cg-site
  • Write unit tests to cover the new functionality
  • Write smoke tests to cover the new functionality
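The "confirms that they are valid parameters with acceptable values" step could start with an allowlist filter on the decoded `-c` JSON. The sketch below is illustrative only: the allowlist contents are assumptions pending the compliance review described above, and the real broker would likely vary the list per database engine:

```go
package main

import (
	"encoding/json"
	"fmt"
	"sort"
)

// allowedParams is a hypothetical allowlist of RDS parameter-group settings
// the broker might permit customers to change; the real list would come from
// a compliance review and may differ between MySQL and PostgreSQL.
var allowedParams = map[string]bool{
	"log_min_duration_statement": true,
	"shared_preload_libraries":   true,
}

// filterParams decodes the -c JSON from cf update-service and splits the
// keys into accepted values and rejected parameter names.
func filterParams(raw []byte, allowed map[string]bool) (accepted map[string]interface{}, rejected []string, err error) {
	var params map[string]interface{}
	if err := json.Unmarshal(raw, &params); err != nil {
		return nil, nil, err
	}
	accepted = map[string]interface{}{}
	for k, v := range params {
		if allowed[k] {
			accepted[k] = v
		} else {
			rejected = append(rejected, k)
		}
	}
	sort.Strings(rejected) // deterministic error reporting
	return accepted, rejected, nil
}

func main() {
	acc, rej, err := filterParams([]byte(`{"log_min_duration_statement": 500, "bogus": 1}`), allowedParams)
	if err != nil {
		panic(err)
	}
	fmt.Println(len(acc), rej)
}
```

Rejected names would be returned to the user in the update-service error, and only the accepted set would be written into a new custom parameter group.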

Allow users to restore RDS snapshots from backup

We currently store database snapshots, created at the default interval, but don't give users a way to restore from snapshots. If a user requested a restore from a snapshot, we'd have to restore manually and do some extra work to make the restored database available via the broker. To fix, we can expose snapshot ID or timestamp as an optional parameter to provision, and/or allow users to restore to a point in time during update.

Can be changed on this repo or https://github.com/cf-platform-eng/rds-broker.

Research a means of restoring a database from a snapshot within the RDS broker

After helping a customer regain access to a restored copy of their database, @apburnes and I talked about what support for this process within the broker itself would look like. As of now we have the steps documented in our public runbooks, but after doing a bit of research we saw that there are a few API calls we might be able to use within the RDS broker to help facilitate this process.

Ideally, it'd be nice if the available snapshots (and some other metadata, e.g., DB engine, version, and available storage) were visible when getting information about a database service instance (cf service <db-service-instance-name>), and we created a means of supporting the restore operation, though any design would need to stay within the constraints of the Open Service Broker API.

At the very least, it might be possible to construct a script to accomplish what we do manually in the runbook today via aws cli calls in an aws-vault session. Once we have that, we might be able to evaluate if we can take it a step further and expose such functionality to customers directly.


Security considerations

  • Automating the backup and restore process as it currently exists does not change anything about our current security posture; the script would only be run by platform admins who have access to the system
  • Adding the capability to see existing snapshots and restoring from them could have security and/or system boundary implications, especially if we consider exposing any of this information and/or functionality to customers.

Implementation sketch

  • What would automating the existing backup and restore process for RDS look like?
  • Could we add this support to the broker?
  • Is there another option or path here, such as automatic backups taken to an S3 bucket that the customer owns and restoring those?

Investigate: What is the impact on indexing if we ingest RDS logs?

In order to determine the feasibility of ingesting RDS logs in Kibana, we need to understand the impact of RDS logs on indexes.

The purpose of this task is to investigate and document any potential impacts.


Security considerations

None

Implementation sketch

Try it?

Improve documentation for local setup and testing

In order to make it easier for folks to work on our AWS broker, we want to update and improve our documentation around the local setup and testing of the broker so that it is easy to get up and running quickly to perform maintenance or add new features.

Acceptance Criteria

  • GIVEN updated documentation on local setup and testing instructions
    WHEN someone checks out the repo
    AND starts to work on the broker
    THEN they should be able to get the broker working easily locally
    AND be able to run existing tests easily
    AND be able to add new tests easily

Security considerations

  • None; this is just improving our documentation on how to work with an existing open source, publicly available project

Implementation sketch

  • Updated documentation on how to configure the project for local development: catalog and secret management, installing Go and Go dependencies, shell configuration, etc.
  • Updated documentation on how to configure and run tests locally: how to set up testing with Go, how to run the current tests, how to add a new test, etc.
  • Updated documentation on how to test broker changes within our platform environment(s) when performing integration or smoke tests
  • Consider adding a dev script that we've been including in other recent projects (e.g., here, here, and here)

There have been a few revisions and updates to Go since this documentation was last looked at. We may want to make sure this works with recent versions, especially given changes to how Go testing works. If this requires more work than updating Go dependencies and changing a couple of references to modules/commands to run, we should write a separate ticket for that work.

Alternatively, Docker-izing this repo may be a better approach to this to help keep things up to date and easy to run locally.

Log exporter: Support downloading of RDS logs from AWS

In order to ship RDS logs to customers, we need a log exporter application that can download a customer's RDS logs from AWS using the AWS CLI.

Acceptance Criteria

The application needs to:

  • support downloading RDS log archive files via the AWS CLI
  • only download logs for specific RDS instances
  • track the last downloaded log archive for each RDS instance
  • leverage the minimum needed RDS credentials to achieve downloading
  • support downloading on a configurable schedule (e.g., every 2 hours)

Security considerations

The application should leverage the least privileged credentials required to download logs.
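The "track the last downloaded log archive" requirement can be sketched as a per-instance checkpoint against the LastWritten timestamps that RDS reports for each log file. The field names below mirror the DescribeDBLogFiles response, but the checkpoint handling is a hypothetical sketch; the real exporter would persist the checkpoint somewhere durable rather than keep it in memory:

```go
package main

import (
	"fmt"
)

// logFile holds the fields we care about from an RDS DescribeDBLogFiles
// entry: the file name and LastWritten (milliseconds since epoch).
type logFile struct {
	Name        string
	LastWritten int64
}

// newFilesSince returns the log files written after the stored checkpoint
// for an instance, plus the new checkpoint value to persist afterward.
func newFilesSince(checkpoint int64, files []logFile) (toDownload []logFile, next int64) {
	next = checkpoint
	for _, f := range files {
		if f.LastWritten > checkpoint {
			toDownload = append(toDownload, f)
			if f.LastWritten > next {
				next = f.LastWritten
			}
		}
	}
	return toDownload, next
}

func main() {
	files := []logFile{
		{Name: "error/postgresql.log.2024-01-01-00", LastWritten: 1000},
		{Name: "error/postgresql.log.2024-01-01-02", LastWritten: 2000},
	}
	dl, next := newFilesSince(1000, files)
	fmt.Println(len(dl), next)
}
```

Each scheduled run would list the instance's log files, download only those newer than the checkpoint, and then advance the checkpoint, so a missed run simply catches up on the next cycle.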

Support opt-in of shipping RDS logs to an S3 bucket

In order to support customers needing RDS logs in external (non cloud.gov) systems, customers with standalone RDS instances need to be able to opt-in to log shipping to S3.

The scope of this story is only to opt in to the configuration. It does not encompass shipping logs.

Acceptance Criteria

  • GIVEN a customer with a dedicated RDS instance (not shared)
    WHEN the customer needs logs shipped to S3
    THEN the customer can opt in

Security considerations

None. This should just be a configuration setting in the broker DB. The scope of this story is only to opt in to the configuration; it does not encompass shipping logs.

Implementation sketch

  • Enable a configuration setting to be passed to the broker on service instance creation or update
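The opt-in setting could arrive as a provision or update parameter on the `cf create-service`/`cf update-service` `-c` JSON. The sketch below shows the decoding step only; the parameter name `enable_log_shipping` is an assumption for illustration, and the chosen default (opted out when absent) would be a design decision:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// provisionParameters models the JSON passed via `cf create-service ... -c`.
// The "enable_log_shipping" field name is hypothetical.
type provisionParameters struct {
	EnableLogShipping bool `json:"enable_log_shipping"`
}

// parseOptIn decodes the -c JSON and reports whether the customer opted in
// to log shipping; with no parameters, the customer stays opted out.
func parseOptIn(raw []byte) (bool, error) {
	if len(raw) == 0 {
		return false, nil
	}
	var p provisionParameters
	if err := json.Unmarshal(raw, &p); err != nil {
		return false, err
	}
	return p.EnableLogShipping, nil
}

func main() {
	optIn, err := parseOptIn([]byte(`{"enable_log_shipping": true}`))
	if err != nil {
		panic(err)
	}
	fmt.Println(optIn)
}
```

The resulting boolean would then be stored on the instance record in the broker DB, which is all this story covers.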

Encrypt RDS instances

In order for RDS instances created by the broker to be compliant they need to be encrypted.

  • Shared RDS instances are encrypted
  • Dedicated RDS instances are encrypted

Retain automated backups for RDS on service delete

In order to maintain backups as documented, we want to retain automated snapshots when deleting RDS instances

Acceptance Criteria

  • GIVEN a non-shared RDS service instance
    WHEN a user calls cf delete-service on the instance
    AND the broker finishes deleting the service instance
    THEN existing, automated backups are still available for 14 days
    AND existing automated backups are deleted after 15 days

Security considerations

Protects data in case of accident or malicious deletion

Implementation sketch

https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DeleteDBInstance.html

  • set DeleteAutomatedBackups to false when deleting instances
  • validate recoverability
  • document recovery process
