
Snapshot Tool for Amazon RDS

The Snapshot Tool for RDS automates the task of creating manual snapshots, copying them into a different account and a different region, and deleting them after a specified number of days. It also allows you to specify the backup schedule (at what times and how often) and a retention period in days. This version will work with all Amazon RDS instances except Amazon Aurora. For a version that works with Amazon Aurora, please visit the Snapshot Tool for Amazon Aurora.

IMPORTANT Run the CloudFormation templates in the same region where your RDS instances run (both in the source and destination accounts). If that is not possible because AWS Step Functions is not available there, you will need to use the SourceRegionOverride parameter explained below.

Getting Started

Building From Source and Deploying

You will need to build from source and deploy to your own bucket in your own account. To build, you need to be on a unix-like system (e.g., macOS or some flavour of Linux) and you need to have make and zip.

  1. Create an S3 bucket to hold the Lambda function zip files. The bucket must be in the same region where the Lambda functions will run, and the Lambda functions must run in the same region as the RDS instances.

  2. Clone the repository

  3. Edit the Makefile and set S3DEST to the name of the bucket where you want the functions to go. Set the AWSARGS, AWSCMD and ZIPCMD variables as well.

  4. Type make at the command line. It will call zip to build the zip files, and then it will call aws s3 cp to copy the zip files to the bucket you named (a boto3 equivalent of this upload step is sketched after this list).

  5. Be sure to use the correct bucket name in the CodeBucket parameter when launching the stack in both accounts.
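
If you prefer to script the upload step yourself, here is a minimal boto3 sketch; the bucket name is a placeholder and the zip names are the ones the Makefile produces, so adapt both to your own build output.

import boto3

CODE_BUCKET = "my-snapshot-tool-code-bucket"  # placeholder: use your own bucket name
ZIP_FILES = [
    "take_snapshots_rds.zip",
    "share_snapshots_rds.zip",
    "delete_old_snapshots_rds.zip",
    "copy_snapshots_dest_rds.zip",
    "copy_snapshots_no_x_account_rds.zip",
    "delete_old_snapshots_dest_rds.zip",
    "delete_old_snapshots_no_x_account_rds.zip",
]

s3 = boto3.client("s3")
for zip_name in ZIP_FILES:
    # The CloudFormation templates expect the zip files at the root of the bucket.
    s3.upload_file(zip_name, CODE_BUCKET, zip_name)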

To deploy to your accounts, use the CloudFormation templates provided (a boto3 deployment sketch follows the list below).

  • Deploy snapshot_tool_rds_source.json in the source account (the account that runs the RDS instances)
  • Deploy snapshot_tool_rds_dest.json in the destination account (the account where you'd like to keep your snapshots)
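
If you would rather script the stack creation than use the console, here is a minimal boto3 sketch, assuming the template file is in your working directory; the profile name, region, stack name and parameter values are placeholders.

import boto3

# Placeholder profile for the source account; repeat the same call from the
# destination account with snapshot_tool_rds_dest.json and its own parameters.
session = boto3.Session(profile_name="source-account")
cfn = session.client("cloudformation", region_name="us-east-1")

with open("snapshot_tool_rds_source.json") as f:
    template_body = f.read()

cfn.create_stack(
    StackName="snapshot-tool-rds-source",
    TemplateBody=template_body,
    Parameters=[
        {"ParameterKey": "CodeBucket", "ParameterValue": "my-snapshot-tool-code-bucket"},
        {"ParameterKey": "DestinationAccount", "ParameterValue": "111111111111"},
        {"ParameterKey": "ShareSnapshots", "ParameterValue": "TRUE"},
    ],
    Capabilities=["CAPABILITY_IAM"],  # the templates create IAM roles
)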

Source Account

Components

The following components will be created in the source account:

  • 3 Lambda functions (TakeSnapshotsRDS, ShareSnapshotsRDS, DeleteOldSnapshotsRDS)
  • 3 State Machines (AWS Step Functions) to trigger execution of each Lambda function (stateMachineTakeSnapshotRDS, stateMachineShareSnapshotRDS, stateMachineDeleteOldSnapshotsRDS)
  • 3 CloudWatch Events rules to trigger the state machines
  • 3 CloudWatch Alarms and associated SNS Topics to alert on state machine failures
  • A CloudFormation stack containing all these resources

Installing in the source account

Run snapshot_tool_rds_source.json in the CloudFormation console. You will need to specify the different parameters. The default values will back up all RDS instances in the region at 1AM UTC, once a day. If your instances are encrypted, you will need to give the destination account access to the KMS key. You can read more on how to do that here: https://aws.amazon.com/premiumsupport/knowledge-center/share-cmk-account/

Here is a breakdown of each parameter for the source template:

  • BackupInterval - how many hours between backups

  • BackupSchedule - at what times and how often to run backups. Set it in accordance with BackupInterval. For example, set BackupInterval to 8 hours and BackupSchedule to 0 0,8,16 * * ? * if you want backups to run at 0, 8 and 16 UTC. If your schedule runs more often than BackupInterval, snapshots will only be created when the latest snapshot is older than BackupInterval. If you want backups more than once a day, make sure to adjust BackupSchedule accordingly, or backups will only be taken at the times specified in the CRON expression.

  • InstanceNamePattern - set to the names of the instances you want this tool to back up. You can use a Python regex that will be searched within the instance identifier. For example, if your instances are named prod-01, prod-02, etc., you can set InstanceNamePattern to prod. The string you specify is searched anywhere in the name unless you use an anchor such as ^ or $. In most cases, a simple name like "prod" or "dev" will suffice. More information on Python regular expressions here: https://docs.python.org/2/howto/regex.html (a short matching sketch follows this parameter list)

  • DestinationAccount - the account where you want snapshots to be copied to

  • LogLevel - the log level for the Lambda functions' output. ERROR is usually enough. You can increase it to INFO or DEBUG.

  • RetentionDays - the number of days you want your snapshots to be kept. Snapshots created more than RetentionDays ago will be automatically deleted (only if they contain a tag with Key: CreatedBy, Value: Snapshot Tool for RDS)

  • ShareSnapshots - Set to TRUE if you are sharing snapshots with a different account. If you set to FALSE, StateMachine, Lambda functions and associated Cloudwatch Alarms related to sharing across accounts will not be created. It is useful if you only want to take backups and manage the retention, but do not need to copy them across accounts or regions.

  • SourceRegionOverride - if you are running RDS on a region where Step Functions is not available, this parameter will allow you to override the source region. For example, at the time of this writing, you may be running RDS in Northern California (us-west-1) and would like to copy your snapshots to Montreal (ca-central-1). Neither region supports Step Functions at the time of this writing so deploying this tool there will not work. The solution is to run this template in a region that supports Step Functions (such as North Virginia or Ohio) and set SourceRegionOverride to us-west-1. IMPORTANT: deploy to the closest regions for best results.

  • CodeBucket - the bucket where the code for the Lambda functions is located. The Lambda function code lives in the lambda directory in zip format. These files need to be at the root of the bucket or the CloudFormation templates will fail. Please follow the build instructions earlier in this README.

  • DeleteOldSnapshots - Set to TRUE to enable functionality that will delete snapshots after RetentionDays. Set to FALSE if you want to disable this functionality completely (the associated Lambda and State Machine resources will not be created in the account). WARNING: if you decide to enable this functionality later on, bear in mind it will delete all snapshots created by this tool that are older than RetentionDays, not just the ones created after DeleteOldSnapshots is set to TRUE.

  • TaggedInstance - Set to TRUE to enable functionality that will only take snapshots for RDS Instances with tag CopyDBSnapshot set to True. The settings in InstanceNamePattern and TaggedInstance both need to evaluate successfully for a snapshot to be created (logical AND).
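
To make the InstanceNamePattern behaviour concrete, here is a small sketch of how a Python regex is searched within instance identifiers; the names and patterns are illustrative only, not the tool's actual code.

import re

instance_ids = ["prod-01", "prod-02", "dev-01", "reporting-prod-replica"]

def matching_instances(pattern, identifiers):
    # "ALL_INSTANCES" is the catch-all value; otherwise re.search matches the
    # pattern anywhere in the identifier unless it is anchored with ^ or $.
    if pattern == "ALL_INSTANCES":
        return list(identifiers)
    return [i for i in identifiers if re.search(pattern, i)]

print(matching_instances("prod", instance_ids))    # every name containing "prod"
print(matching_instances("^prod-", instance_ids))  # only prod-01 and prod-02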

Destination Account

Components

The following components will be created in the destination account:

  • 2 Lambda functions (CopySnapshotsDestRDS, DeleteOldSnapshotsDestRDS)
  • 2 State Machines (AWS Step Functions) to trigger execution of each Lambda function (stateMachineCopySnapshotsDestRDS, stateMachineDeleteOldSnapshotsDestRDS)
  • 2 CloudWatch Events rules to trigger the state machines
  • 2 CloudWatch Alarms and associated SNS Topics to alert on state machine failures
  • A CloudFormation stack containing all these resources

In your destination account, run snapshot_tool_rds_dest.json in the CloudFormation console. As before, you will need to run it in a region where Step Functions is available. The following parameters are available (an illustrative parameter set follows this list):

  • DestinationRegion - the region where you want your snapshots to be copied. If you set it to the same as the source region, the snapshots will be copied from the source account but will be kept in the source region. This is useful if you would like to keep a copy of your snapshots in a different account but would prefer not to copy them to a different region.
  • SnapshotPattern - similar to InstanceNamePattern. See above
  • DeleteOldSnapshots - Set to TRUE to enable functionality that will delete snapshots after RetentionDays. Set to FALSE if you want to disable this functionality completely (the associated Lambda and State Machine resources will not be created in the account). WARNING: if you decide to enable this functionality later on, bear in mind it will delete ALL SNAPSHOTS created by this tool that are older than RetentionDays, not just the ones created after DeleteOldSnapshots is set to TRUE.
  • CrossAccountCopy - if you only need to copy snapshots across regions and not to a different account, set this to FALSE. When set to false, the no-x-account version of the Lambda functions will be deployed and will expect snapshots to be in the same account as they run.
  • KmsKeySource - the KMS key to be used for copying encrypted snapshots in the source region. If you are copying to a different region, you will also need to provide a second key in the destination region.
  • KmsKeyDestination - the KMS key to be used for copying encrypted snapshots to the destination region. If you are not copying to a different region, this parameter is not necessary.
  • RetentionDays - as in the source account, the number of days you want your snapshots to be kept. Do not set this to a value lower than in the source account. Snapshots created more than RetentionDays ago will be automatically deleted (only if they contain a tag with Key: CopiedBy, Value: Snapshot Tool for RDS)
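
As a rough illustration of how these fit together, here is one possible parameter set for the destination stack; every value is a placeholder and the KMS key ARNs are deliberately fake, so substitute your own accounts, regions and keys.

# Illustrative parameters for snapshot_tool_rds_dest.json; they can be passed
# to cloudformation.create_stack() exactly as in the source-account sketch above.
dest_parameters = [
    {"ParameterKey": "CodeBucket", "ParameterValue": "my-snapshot-tool-code-bucket"},
    {"ParameterKey": "DestinationRegion", "ParameterValue": "us-east-2"},
    {"ParameterKey": "SnapshotPattern", "ParameterValue": "prod"},
    {"ParameterKey": "CrossAccountCopy", "ParameterValue": "TRUE"},
    {"ParameterKey": "KmsKeySource", "ParameterValue": "arn:aws:kms:us-east-1:111111111111:key/EXAMPLE-SOURCE-KEY"},
    {"ParameterKey": "KmsKeyDestination", "ParameterValue": "arn:aws:kms:us-east-2:111111111111:key/EXAMPLE-DEST-KEY"},
    {"ParameterKey": "DeleteOldSnapshots", "ParameterValue": "TRUE"},
    {"ParameterKey": "RetentionDays", "ParameterValue": "7"},
]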

How it Works

There are two sets of Lambda functions, orchestrated by Step Functions state machines, that take regular snapshots and copy them across accounts and regions. Snapshots can take time to complete, and RDS does not signal when they are done. Snapshots are scheduled to begin at a certain time using CloudWatch Events. Other state machines then run periodically to look for new snapshots; when they find them, they perform the sharing and the copying.

In the Source Account

A CloudWatch Events rule is scheduled to trigger a Step Functions state machine named stateMachineTakeSnapshotsRDS. That state machine invokes a function named lambdaTakeSnapshotsRDS, which triggers a snapshot and applies some standard tags. It matches RDS instances using a regular expression on their names.
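
In boto3 terms, that step boils down to something like the following sketch (not the tool's actual source); the instance identifier is a placeholder, and the tag keys mirror the ones this tool applies to its snapshots.

from datetime import datetime

import boto3

rds = boto3.client("rds")
instance_id = "prod-01"  # placeholder instance identifier
timestamp = datetime.utcnow().strftime("%Y-%m-%d-%H-%M")

# Take a manual snapshot named <instance>-<timestamp> and tag it so the
# share/delete functions can recognise it later.
rds.create_db_snapshot(
    DBInstanceIdentifier=instance_id,
    DBSnapshotIdentifier=f"{instance_id}-{timestamp}",
    Tags=[
        {"Key": "CreatedBy", "Value": "Snapshot Tool for RDS"},
        {"Key": "CreatedOn", "Value": timestamp},
        {"Key": "shareAndCopy", "Value": "YES"},
    ],
)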

There are two other state machines and Lambda functions. The statemachineShareSnapshotsRDS looks for new snapshots created by the lambdaTakeSnapshotsRDS function and, when it finds one that is intended to be shared, shares it with the destination account. This state machine runs every 10 minutes by default. (To change that, modify the ScheduleExpression property of the cwEventShareSnapshotsRDS resource in snapshots_tool_rds_source.json.)
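
Sharing a manual snapshot with another account is a single API call; here is a minimal sketch with a placeholder snapshot identifier and account ID, shown only to illustrate the mechanism.

import boto3

rds = boto3.client("rds")

# Add the destination account to the snapshot's "restore" attribute, which is
# how RDS manual snapshots are shared across accounts.
rds.modify_db_snapshot_attribute(
    DBSnapshotIdentifier="prod-01-2019-02-07-12-10",  # placeholder snapshot name
    AttributeName="restore",
    ValuesToAdd=["111111111111"],  # placeholder destination account ID
)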

The other state machine is statemachineDeleteOldSnapshotsRDS, which calls lambdaDeleteOldSnapshotsRDS to delete snapshots according to the RetentionDays parameter set when the stack was launched. This state machine runs once each hour by default. (To change that, modify the ScheduleExpression property of the cwEventDeleteOldSnapshotsRDS resource in snapshots_tool_rds_source.json.) If it finds a snapshot older than the retention period, it deletes it.
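 
A rough sketch of that deletion logic (not the tool's actual function); the retention value is a placeholder, and the CreatedBy tag check matches the tag documented above.

from datetime import datetime, timedelta

import boto3

RETENTION_DAYS = 7  # placeholder, corresponds to the RetentionDays parameter
rds = boto3.client("rds")
cutoff = datetime.utcnow() - timedelta(days=RETENTION_DAYS)

paginator = rds.get_paginator("describe_db_snapshots")
for page in paginator.paginate(SnapshotType="manual"):
    for snapshot in page["DBSnapshots"]:
        if snapshot.get("Status") != "available":
            continue  # in-progress snapshots have no SnapshotCreateTime yet
        created = snapshot["SnapshotCreateTime"].replace(tzinfo=None)
        tags = rds.list_tags_for_resource(ResourceName=snapshot["DBSnapshotArn"])["TagList"]
        created_by_tool = any(
            t["Key"] == "CreatedBy" and t["Value"] == "Snapshot Tool for RDS" for t in tags
        )
        if created_by_tool and created < cutoff:
            rds.delete_db_snapshot(DBSnapshotIdentifier=snapshot["DBSnapshotIdentifier"])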

In the Destination Account

There are two state machines and corresponding Lambda functions. The statemachineCopySnapshotsDestRDS looks for new snapshots that have been shared but not yet copied. When it finds one, it creates a copy in the destination account, encrypted with the KMS key you stipulated. This state machine runs every 10 minutes by default. (To change that, modify the ScheduleExpression property of the cwEventCopySnapshotsRDS resource in snapshots_tool_rds_dest.json.)
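
The copy itself comes down to copy_db_snapshot with the destination KMS key; here is a minimal sketch with placeholder ARNs and regions, not the tool's actual code.

import boto3

# Client in the destination region; SourceRegion tells boto3 to presign the
# request so the snapshot can be copied across regions.
rds = boto3.client("rds", region_name="us-east-2")

rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier=(
        "arn:aws:rds:us-east-1:222222222222:snapshot:prod-01-2019-02-07-12-10"  # placeholder shared snapshot ARN
    ),
    TargetDBSnapshotIdentifier="prod-01-2019-02-07-12-10",
    KmsKeyId="arn:aws:kms:us-east-2:111111111111:key/EXAMPLE-DEST-KEY",  # KmsKeyDestination, placeholder
    CopyTags=True,
    SourceRegion="us-east-1",
)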

The other state machine is just like the corresponding state machine and function in the source account: statemachineDeleteOldSnapshotsDestRDS calls lambdaDeleteOldSnapshotsDestRDS to delete snapshots according to the RetentionDays parameter set when the stack was launched. This state machine runs once each hour by default. (To change that, modify the ScheduleExpression property of the cwEventDeleteOldSnapshotsRDS resource in snapshots_tool_rds_dest.json.) If it finds a snapshot older than the retention period, it deletes it.

Updating

This tool is fundamentally stateless. The state is mainly in the tags on the snapshots themselves and the parameters to the CloudFormation stack. If you make changes to the parameters or make changes to the Lambda function code, it is best to delete the stack and then launch the stack again.
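
A minimal boto3 sketch of that delete-and-relaunch cycle, with a placeholder stack name and parameter list:

import boto3

cfn = boto3.client("cloudformation")
STACK_NAME = "snapshot-tool-rds-source"  # placeholder stack name

# Delete the existing stack and wait until it is gone before relaunching.
cfn.delete_stack(StackName=STACK_NAME)
cfn.get_waiter("stack_delete_complete").wait(StackName=STACK_NAME)

with open("snapshot_tool_rds_source.json") as f:
    cfn.create_stack(
        StackName=STACK_NAME,
        TemplateBody=f.read(),
        Parameters=[{"ParameterKey": "CodeBucket", "ParameterValue": "my-snapshot-tool-code-bucket"}],
        Capabilities=["CAPABILITY_IAM"],
    )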

Authors

License

This project is licensed under the Apache License - see the LICENSE.txt file for details

Contributors

edubxb, falmotlag, ha-king, hyandell, ismaelfernandezscmspain, james-tr, jwr0, mrcoronel, pacohope, pall-valmundsson, rguillebert, rts-rob, rwarren99, spingel, tensho, troydieter


rds-snapshot-tool's Issues

Share function hard to debug

We are trying to figure out why the share function does not seem to be working. It is hard to debug because the CloudWatch logs do not contain any errors or information pointing us in the right direction. The create-snapshot function seems to be working fine and creates the appropriate AWS tags on the snapshots. The parameters are set to TRUE for sharing and the account ID matches.

(We've tested sharing manually through the console, sharing the created snapshot to the other account with no problems.)

Example setup:
RDS instance: companyname-prod
Created snapshot: companyname-prod-2019-02-07-12-10
AWS tags on the snapshot:
CreatedBy: Snapshot Tool for RDS
CreatedOn: 2019-02-07-12-10
shareAndCopy: YES

We've tried to delete the stack and recreate to no avail. Snapshots are created but no snapshots are being programmatically shared to the other account.

The Step Functions alarms are not showing errors, nor is the actual Lambda output.

We also deleted the dest stack and recreated that for sanity reasons.

Example output from lambda logs:

START RequestId:
Found credentials in environment variables.
Starting new HTTPS connection (1): rds.amazonaws.com
END RequestId:

At this point, after checking the params and recreating the stack, I'm reaching out to see if there was something I missed.

oracle se2 support

Hi guys,

Can Oracle SE2 be added to the list of supported engines, or is there a technical reason the snapshot share/copy model won't work here?

Ability to specify S3Key for lambda functions

I would like to use the AWS CDK to deploy this tool so that teams can have it as part of their CDK application rather than piecemeal CDK and CloudFormation templates. The easiest way to deploy custom Lambdas with the CDK is to wrap them in an aws-s3-assets.Asset construct. That construct uploads to a generated key, not to the root of a particular bucket. I would like the ability to specify the key of the Lambda files so that we can do this. It could be generally useful for other deploy systems.

I can make a pull request for this if there is interest.

Question re docs and INTERVAL and CRON entries

Hi,

I think it would be helpful to document that if you make your backup interval less than every 24 hours, you also need to increase the frequency of the "TakeSnapshots" job via the cron definition.

Is that understanding correct?

Curious why copy_remote() would fail.

Hi, quite often copy_snapshots_dest_rds fails for me while running copy_remote(), but with no visible exceptions. So I looked into the boto3 docs for client.copy_db_snapshot() and found none documented there either.

My backups still work. The eventual retries pick up the pending snapshots and it always works out. But my logs are very noisy and my monitoring system is going a bit berserk :) I could turn down monitoring but I kind of want to know when I don't have backups.

Thoughts?

S3 not existing or restricted

Resource handler returned message: "Your access has been denied by S3, please make sure your request credentials have permission to GetObject for snapshots-tool-rds-us-east-1/copy_snapshots_dest_rds.zip. S3 Error Code: AccessDenied. S3 Error Message: Access Denied (Service: Lambda, Status Code: 403, Request ID: 2379f595-084c-40a1-8398-3162ec6b2efd, Extended Request ID: null)" (RequestToken: a8ec598b-f405-5cfe-8001-11b2d31fbfd4, HandlerErrorCode: AccessDenied)

It seems the bucket is no longer accessible when using the DEFAULT_BUCKET option.

KeyError in get_own_snapshots_source() if a snapshot is in progress

Similar to #38, it appears that we include in-progress snapshots when parsing snapshots, which causes errors like the one below (since an in-progress snapshot does not have a SnapshotCreateTime yet):

Traceback (most recent call last):
  File "/var/task/lambda_function.py", line 49, in lambda_handler
    filtered_snapshots = get_own_snapshots_source(PATTERN, paginate_api_call(client, 'describe_db_snapshots', 'DBSnapshots'), BACKUP_INTERVAL)
  File "/var/task/snapshots_tool_utils.py", line 206, in get_own_snapshots_source
    if backup_interval and snapshot['SnapshotCreateTime'].replace(tzinfo=None) < datetime.utcnow().replace(tzinfo=None) - timedelta(hours=backup_interval):
KeyError: 'SnapshotCreateTime'

I plan to submit a PR against this issue soon.
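
A minimal sketch of the kind of guard that would avoid this KeyError (not the actual PR): treat snapshots without a SnapshotCreateTime as still in progress and skip them.

from datetime import datetime, timedelta

def older_than_interval(snapshot, backup_interval_hours):
    # In-progress snapshots have no SnapshotCreateTime yet; treat them as
    # recent so the tool neither crashes nor takes a duplicate snapshot.
    create_time = snapshot.get("SnapshotCreateTime")
    if create_time is None:
        return False
    cutoff = datetime.utcnow() - timedelta(hours=backup_interval_hours)
    return create_time.replace(tzinfo=None) < cutoff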

Template error: Unable to get mapping for Buckets::us-west-1::Bucket

Even if I specify my own bucket in the CodeBucket parameter, I still get the error "Template validation error: Template error: Unable to get mapping for Buckets::us-west-1::Bucket" when I create resources using the CloudFormation templates.

I tried to create resources in us-west-1 region.

ListTagsForResources issuing 700 requests per 1 minute

We are running into API rate limit exceeded errors and, per AWS Support, the culprit is as below. We need to understand how to correct this issue.

The service team has investigated and has found that the primary API call rate being exceeded is "ListTagsForResource", on 2019-01-09 during a one minute period (at 10:25 UTC), over 700 requests were made, from the following sources:

Boto3/1.7.74 Python/3.6.1 Linux/4.14.88-72.73.amzn1.x86_64 exec-env/AWS_Lambda_python3.6 Botocore/1.10.74
Boto3/1.7.74 Python/3.6.1 Linux/4.14.88-72.73.amzn1.x86_64 exec-env/AWS_Lambda_python3.6 Botocore/1.10.74

Boto3/1.7.74 Python/3.6.1 Linux/4.14.88-72.73.amzn1.x86_64 exec-env/AWS_Lambda_python3.6 Botocore/1.10.74
aws-internal/3 aws-sdk-java/1.11.432 Linux/4.9.124-0.1.ac.198.73.329.metal1.x86_64 OpenJDK_64-Bit_Server_VM/25.192-b12 java/1.8.0_192

The vast majority of these are likely to be the Lambda functions mentioned previously.

Python code for deleting RDS

Hi all, I am new to Python. I am trying to delete RDS snapshots based on some conditions. It should not be based only on time, since we have daily, weekly and monthly backups, and deleting backups purely by schedule is not something the AWS RDS API supports directly. If anyone has done this, deleting snapshots based on name and date, could you help me here?

This is my code
import boto3
import datetime

def lambda_handler(event, context):
    print("Connecting to RDS")
    client = boto3.client('rds')

    # List manual snapshots for the instance, then delete the ones older
    # than 7 days.
    response = client.describe_db_snapshots(
        DBInstanceIdentifier='dbname',
        SnapshotType='manual',
        MaxRecords=50
    )
    for snapshot in response['DBSnapshots']:
        # SnapshotCreateTime is in UTC, so compare against utcnow().
        create_ts = snapshot['SnapshotCreateTime'].replace(tzinfo=None)
        if create_ts < datetime.datetime.utcnow() - datetime.timedelta(days=7):
            print("Deleting snapshot id:", snapshot['DBSnapshotIdentifier'])
            client.delete_db_snapshot(
                DBSnapshotIdentifier=snapshot['DBSnapshotIdentifier']
            )

Deployment question

I am trying to have two copies of the backup in my secondary account: one in the same region as the primary account (say, us-west-2), and the other in a different region (say, us-east-1). In my secondary/dest account, I deployed two copies of snapshots_tool_rds_dest.json in us-west-2: one with destination_region set to us-west-2 and the other set to us-east-1. However, in us-west-2 the backup gets deleted every 30 minutes, because copy_remote from the other Lambda thinks they are the intermediate copies, as shown here.

Did I misunderstand anything here? Is the setup correct if I want to have both same region and across region backup in the same secondary account?

Encrypted snapshots are not correctly copied

Hey,

I ran into an issue when trying to copy encrypted snapshots to the secondary account. The snapshots in the primary account are created using the default KMS key, which AWS does not allow to be shared with the secondary account for security reasons. Any idea how to fix that?

Shared snapshot copy uses source KMS not destination KMS

I've just deployed the stacks in Source and Destination AWS accounts (separate accounts) and configured with KmsKeyDestination and KmsKeySource CMKs. Region in both accounts is eu-west-1.

I am surprised to see that the resultant local snapshot "copies" in the external (destination) account are encrypted with the KmsKeySource and haven't been re-encrypted with my specified KmsKeyDestination.

If I manually copy a shared snapshot I am able to specify the local CMK instead and the copy successfully uses it.

Anything I am missing? What should I look for? Anything I can try?

Thanks!

Karl

Filter by tags

Hi. I am about to begin work on a PR that will allow this tool to filter which RDS instances to backup via tags on said instance. This will make it easier for us to configure instances to be backed up within our CloudFormation and at the same time will make it easier to ignore read replicas.

Currently our RDS instance's names are given by CloudFormation so using regular expressions just doesn't work for us.

Before I begin: has anyone else already started work on this ? and/or - do you think there are any gotchas to my plan? All discussion is welcome. Tks.

Question about InstanceNamePattern

How should I define the names of different instances in the InstanceNamePattern field? It only works when I leave the value ALL_INSTANCES

Can't copy snapshot having "option group"

Trying to copy a snapshot having an "option group" to a different region results in error: "The snapshot requires a target option group..."

Can the tool be modified to retrieve the option group for a snapshot and pass that to the copy function?

Zip files on S3 are not most recent

I elected not to define my own S3 buckets and to rely on downloading the code from the default buckets.

The problem there is that the code from the following commit (specifically line 204) has not made it to your S3 buckets, so folks won't be able to use this tool until that is done.

65f5bae

Curious if there is a timeline to upload those? The spot where it specifically breaks is during the ShareSnapshot step.

Side question: have you been able to find any interesting way to reduce duplication and not have to define that file in each directory?

Updates to AWS Lambda Runtime

Is this project still maintained and recommended?

Python 3.7 is EoL as of June 27, 2023 and the python3.7 AWS Lambda Runtime enters stage 1 of deprecation starting November 27, 2023. Is there any plan to update accordingly?

states is not a supported service for a target

The following resource(s) failed to create: [cwEventBackupRDS, cwEventDeleteOldSnapshotsRDS]. . Rollback requested by user.

cwEventBackupRDS
cwEventDeleteOldSnapshotsRDS

Error: states is not a supported service for a target

Error with default KMS key aws/rds

We weren't able to copy a snapshot encrypted with the AWS-managed default key (aws/rds) into another destination account. The CopySnapshot Lambda in the destination account fails to copy the shared snapshot, as it has no access to the other account's default KMS key.

We thought this was a common case and covered by the script. After some deep diving, I think we need to copy the snapshot once more within the source account and attach another CMK to the copy that is shared with the destination account. Would love to hear some opinions about this issue.

Patterns match against snapshot instead of instance

Pattern matching in the share function matches against the DBSnapshotIdentifier instead of the DBInstanceIdentifier. This results in orphaned snapshots which are not shared, and therefore not copied to the destination account.

An example:

The pattern .+(-production)$ matches all instances that end in -production. Given an RDS instance django-production and an RDS instance django-production-reporting, we want to copy the first but not the second.

The pattern matches correctly when taking snapshots, but appends YYYY-MM-DD-hh-mm to the DBSnapshotIdentifier. When the share function executes, the DBSnapshotIdentifier does not match the pattern, so the snapshot is not shared.
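
For illustration, the behaviour the reporter expects would look roughly like the sketch below, which applies the pattern to the DBInstanceIdentifier recorded on each snapshot rather than to the DBSnapshotIdentifier; the pattern is the example from the report and the helper name is made up.

import re

PATTERN = r".+(-production)$"  # example pattern from the report

def snapshot_matches(snapshot):
    # snapshot is one entry from describe_db_snapshots()["DBSnapshots"];
    # match the instance name, not the timestamp-suffixed snapshot name.
    return re.search(PATTERN, snapshot["DBInstanceIdentifier"]) is not None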

Error when creating the stack

I am getting this error when creating the stack in CloudFormation.

Parameter ScheduleExpression is not valid. (Service: AmazonCloudWatchEvents; Status Code: 400; Error Code: ValidationException; Request ID: d50ae7df-e029-4be2-8755-e7ccf3701bdf)

CloudFormation doesn't deploy changes to the lambda code

If I make a change to the lambda code, upload a new zip file into S3 with the new code (overwriting the old zip file), and then update my CloudFormation stack, CloudFormation will not detect that the code has changed, and thus will not deploy the new code.

A solution could be to use the aws cloudformation package command, which will give each version of the zip file a unique name in S3, and will produce a CloudFormation template file that refers to the objects in S3 by their unique name. Then a new version of the Lambda code will result in a change to the CloudFormation template so CloudFormation will update the Lambda correctly.

upload snapshot monthly/yearly to glacier

Hi,
I would like to use your tool to back up snapshots and automatically upload them to S3 Glacier, monthly or yearly, for HIPAA compliance. How is it possible to achieve this? Is it possible to put a policy on the S3 bucket where the snapshots are saved in the destination region?

Incomplete pagination prevents sharing

Problem

When an account has more than 25 Aurora cluster snapshots of any type, snapshots are no longer copied over to the target account.

Root Cause

snapshots_tool_utils.py uses custom pagination code. Somehow this pagination is limited to 25 rows. Once an account has more than 25 manual backups, snapshots are created but not shared, and thus are not copied over.

Proposed Solution

Modify snapshots_tool_utils.py to use the built-in Boto paginator. A PR will be submitted against this issue.
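
A minimal sketch of the proposed approach using the built-in boto3 paginator (shown here for describe_db_snapshots; the same idea applies to the cluster-snapshot call mentioned above):

import boto3

rds = boto3.client("rds")
snapshots = []

# The paginator walks every page, so more than 25 snapshots is not a problem.
for page in rds.get_paginator("describe_db_snapshots").paginate(SnapshotType="manual"):
    snapshots.extend(page["DBSnapshots"])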

Process is not taking snapshot

Hi, everyone

I deployed the CloudFormation templates to set up DR for an Oracle SE1 RDS instance, but it is not taking the manual snapshot: no error, no message.

The RDS instance has automated backups and also has the AWS Backup service configured. Can this configuration cause a conflict?

(Console screenshots were attached to the original issue.)

Provided region_name '[us-east-1]' doesn't match a supported format.

When executing make I get this error :

Provided region_name '[us-east-1]' doesn't match a supported format.
make: *** [._copy_snapshots_dest_rds] Error 255
rm copy_snapshots_dest_rds.zip

configuration:
AWSARGS=--region [us-east-1] --profile [lz-lab]
AWSCMD=aws
ZIPCMD=zip

I can't get past this stage; I'd really appreciate any help.

Different Account | Different Regions

I've followed the steps and configured the Lambda functions for cross-account snapshot sharing as well as cross-region copying. I've specified a different destination region via DEST_REGION (us-east-2) than the source region (us-east-1). However, the shared snapshots continue to show up in the same region (us-east-1) as the source in the destination account.

I made sure that both source and destination CloudFormation stacks were run on the us-east-1 region.

When I investigate the logs I see the following error.

Local copy pending: experience-builder-development-2021-05-10-15-00 (An error occurred (KMSKeyNotAccessibleFault) when calling the CopyDBSnapshot operation: The target KMS key [None] does not exist, is not enabled or you do not have permissions to access it.)
[ERROR] SnapshotToolException: Copies pending: 7. Needs retrying
Traceback (most recent call last):
  File "/var/task/lambda_function.py", line 112, in lambda_handler
    raise SnapshotToolException(log_message)

None of these databases are encrypted. So I'm not sure why this error is being thrown.

I need the snapshots on the source account in us-east-1 to be shared to the destination account in us-east-2.

DeleteOldSnapshots equals false

When setting the DeleteOldSnapshots parameter to False, I receive this error:
Template format error: Unresolved resource dependencies [lambdaDeleteOldSnapshotsRDS] in the Resources block of the template.

Setting the RetentionDays to 0 or blank does not work. Only setting DeleteOldSnapshots to True allows the template to be deployed.

Wrong Makefile rules

On Ubuntu 18.04 LTS I used master branch for creating .zip files and uploading them to my S3 bucket.

From the README:

Type make at the command line. It will call zip to make the zip files, and then it will call aws s3 cp to copy the zip files to the bucket you named.

Indeed, it calls zip. But before creating the zip files it builds the dependencies for this target, i.e. it tries to upload the unzipped folder to the S3 bucket.

Here is my fix:

-# if you define "._foo.zip" as a file on this line, then it will look for
+# if you define "._foo" as a file on this line, then it will look for
# a folder called foo, copy the standard files into it, and then make foo.zip.
-all:   ._copy_snapshots_dest_rds.zip \
-       ._copy_snapshots_no_x_account_rds.zip \
-       ._delete_old_snapshots_dest_rds.zip \
-       ._delete_old_snapshots_no_x_account_rds.zip \
-       ._delete_old_snapshots_rds.zip \
-       ._share_snapshots_rds.zip \
-       ._take_snapshots_rds.zip
+all:   ._copy_snapshots_dest_rds \
+       ._copy_snapshots_no_x_account_rds \
+       ._delete_old_snapshots_dest_rds \
+       ._delete_old_snapshots_no_x_account_rds \
+       ._delete_old_snapshots_rds \
+       ._share_snapshots_rds \
+       ._take_snapshots_rds

 clean:
        rm -f ._*

-._%: %
+._%: %.zip
        "$(AWSCMD)" $(AWSARGS) s3 cp "$<" "s3://$(S3DEST)" \
                --grants read=uri=http://acs.amazonaws.com/groups/global/AllUsers
        cp "$<" "$@"

Same-accounts - different regions.

Hello, I'm slightly confused about setting up the rds-snapshot-tool.
I'm interested in it creating snapshots, and copying them to a different region, within the same account.

I noted and set the CrossAccountCopy variable to FALSE; however here's where I'm lost.

My source region is us-east-1, so I've deployed my CloudFormation source stack in us-east-1.
Do I deploy the destination stack in us-east-1 as well, or in us-west-1?

Sorry for the n00b question.
Thank you.

Including automated snapshots increases run time and cost.

Problem

Automated snapshots are parsed by the tool, even though only manual snapshots can be shared and copied. This results in unnecessary pagination and additional runtime, which leads to additional cost.

Root Cause

Calls to describe_db_cluster_snapshots are not filtered by snapshot type.

Proposed Solution

Modify snapshot pagination calls to only apply to manually-created snapshots via the SnapshotType parameter.
