cloud-nuke's Introduction

gruntwork.io website

This is the code for the Gruntwork website.

Gruntwork can help you get your entire infrastructure, defined as code, in about a day. You focus on your product. We'll take care of the Gruntwork.

Docker quick start

The fastest way to launch this site is to use Docker.

  1. git clone this repo
  2. docker compose up
  3. Go to http://localhost:4000 to test
  4. If you are going to be testing the checkout flow, you must login to Aperture at: https://aperture.dogfood-stage.com/.

The default Docker compose configuration supports hot-reloading of your local environment: as you edit files (markup, text, images, etc.), the local development server picks up the changes and reloads the latest version of the site for you. This makes it quick and convenient to develop on the site locally.

Manual quick start

  1. git clone this repo
  2. Install Jekyll
  3. Just the first time: bundle install
  4. Start Jekyll server: bundle exec jekyll serve --livereload
  5. Go to http://localhost:4000
  6. If you are going to be testing the checkout flow, you must login to Aperture at: https://aperture.dogfood-stage.com/.

Deploying

To deploy the site:

  1. Create a PR with your code changes
  2. After the PR has been approved, merge it into master
  3. Create a new tag; you can do this manually via git, or in the next step on the releases page. Be sure to increment the version number using semantic versioning
  4. Go to the releases page and create a draft release with the relevant information (use the "Generate Release Notes" button to make your life easier)
  5. Release it
  6. The CI/CD pipeline will deploy it automatically

Technologies

  1. Built with Jekyll. This website is completely static and we use basic HTML or Markdown for everything.
  2. Preview environments are built with Netlify.
  3. Hosted on Amazon S3, with CloudFront as a CDN. Using s3_website to automatically upload static content to S3.
  4. We use Bootstrap and Less.
  5. We're using UptimeRobot, Google Analytics, and HubSpot Traffic Analytics for monitoring and metrics.

Troubleshooting

Disabling the Jekyll Feed gem

The Gruntwork website uses a Ruby Gem called Jekyll Feed which generates a structured RSS feed of "posts" on the site. Unfortunately, in development this can significantly slow down the hot-reloading of the site, forcing you to wait upwards of a minute at a time to see minor text changes locally.

You'll know this is happening when the STDOUT of your docker-compose process shows that Generating feed for posts took more than about 5 seconds:

web_1  |       Regenerating: 1 file(s) changed at 2021-07-21 14:31:08
web_1  |                     _data/website-terms.yml
web_1  |        Jekyll Feed: Generating feed for posts
web_1  |                     ...done in 58.507850014 seconds.

As a temporary workaround, you can open the Gemfile in the root of the project directory and temporarily comment out the line that pulls in the Jekyll Feed dependency:

source 'https://rubygems.org'
gem 'jekyll', '~> 4.1'
gem 's3_website', '3.3.0'
group :jekyll_plugins do
  gem 'jekyll-redirect-from', '0.16.0'
  gem 'jekyll-sitemap', '1.4.0'
  gem 'jekyll-paginate', '1.1.0'
  gem 'therubyracer', '0.12.3'
  gem 'less', '2.6.0'
  gem 'jekyll-asciidoc'
  gem 'jekyll-toc'
  gem 'nokogiri', '1.11.0.rc4' # Addressing security issue in earlier versions of this library
#  gem 'jekyll-feed'
end

Important: be sure you don't end up committing this change, because we do want the Jekyll Feed plugin to run in production!

I made changes locally but they're not being reflected in my hot-reloaded development environment

This can happen especially if you add or remove files from the website's working directory. When this occurs, terminate your docker-compose process and restart it to see your changes reflected.

License

See LICENSE.txt.

cloud-nuke's People

Contributors

andreybleme, arsci, autero1, brandonstrohmeyer, brikis98, bwhaley, chenrui333, denis256, dependabot[bot], eak12913, ellisonc, etiene, hongil0316, hposca, ina-stoyanova, jennas-lee, josh-padnick, letubert, marinalimeira, moonmoon1919, oredavids, rafaelleonardocruz, rhoboat, robmorgan, robpickerill, saurabh-hirani, sbocinec, tonerdo, yorinasub17, zackproser

cloud-nuke's Issues

cloud-nuke gcp

Hi,

I'm interested in using cloud-nuke for google cloud platform. I see it on the roadmap in the README. Are you accepting contributions for this feature?

Invalid response on prompt leads to immediate exit.

cloud-nuke takes a while to identify all the resources it plans to delete in AWS and then finally displays this prompt:

Are you sure you want to nuke all listed resources? Enter 'nuke' to confirm: 

I accidentally entered yes instead of nuke, and the program immediately exited, which means I have to re-scan all AWS resources. Instead, catch the invalid input and give the user another chance to enter nuke. Also consider advising the user to use CTRL+C to cancel.

Nuke VPCs and related resources

A nice to have (but not critical) for the future would be the ability to nuke VPCs and all related resources:

  • The VPC itself
  • Subnets
  • Route tables
  • NACLs
  • VPC endpoints
  • NAT Gateways
  • ENIs / EIPs

Aws-nuke fails on Deleting EBS Volumes

While doing a fresh aws-nuke run, I received the following error:

INFO[2018-02-18T12:07:48-07:00] No Auto Scaling Groups to nuke in region eu-central-1
INFO[2018-02-18T12:07:48-07:00] No Elastic Load Balancers to nuke in region eu-central-1
INFO[2018-02-18T12:07:48-07:00] No V2 Elastic Load Balancers to nuke in region eu-central-1
INFO[2018-02-18T12:07:48-07:00] Terminating all EC2 instances in region eu-central-1
INFO[2018-02-18T12:07:49-07:00] Terminated EC2 Instance: i-0c61c9e0e5878f441
	INFO[2018-02-18T12:08:54-07:00] [OK] 1 instance(s) terminated in eu-central-1
INFO[2018-02-18T12:08:54-07:00] Deleting all EBS volumes in region eu-central-1
ERRO[2018-02-18T12:08:55-07:00] [Failed] VolumeInUse: Volume vol-0b43429069414b331 is currently attached to i-05333ad8fe9a89158
	status code: 400, request id: eeea65fe-4c56-414d-89b0-1878293db105

This makes me realize that part of deleting an EBS Volume means repeatedly checking that it has in fact been detached before initiating its termination.
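That repeated checking can be sketched as a generic polling loop; in the real tool the injected check would wrap an EC2 DescribeVolumes call (this is an illustrative sketch with assumed names, not cloud-nuke's implementation):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForState polls check() until it reports the desired state or the
// retry budget runs out. In cloud-nuke, check would call DescribeVolumes
// and return the volume's state; here it is injected so the loop can be
// exercised offline.
func waitForState(check func() (string, error), want string, attempts int, delay time.Duration) error {
	for i := 0; i < attempts; i++ {
		state, err := check()
		if err != nil {
			return err
		}
		if state == want {
			return nil
		}
		time.Sleep(delay)
	}
	return errors.New("timed out waiting for state " + want)
}

func main() {
	// Simulated volume that finishes detaching on the third poll.
	polls := 0
	check := func() (string, error) {
		polls++
		if polls >= 3 {
			return "available", nil
		}
		return "in-use", nil
	}
	err := waitForState(check, "available", 10, time.Millisecond)
	fmt.Println(err == nil, polls) // true 3
}
```

Only once the volume reports "available" (detached) would the delete call be issued.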

Ability to delete specific resources instead of all

cloud-nuke is great for cleaning up stale resources and saving $$. I wanted to run an idea by the team to see if it makes sense.

The current cli for cloud-nuke is

cloud-nuke aws

which, when run, gets all active AWS resources and then deletes them. This could be made explicit by making the user specify an "all" argument, e.g.

cloud-nuke aws all 

will delete all the supported resources. This means that the user can delete either all of the supported resources or specific ones by specifying

cloud-nuke aws ec2

which will delete only ec2 instances. The advantages of doing so are:

  1. Reducing search time.
  2. Targeting specific resources as someone might not want entire account deletion but only for specific components.
  3. Giving the user an ability to list out the currently supported resources from the cmdline instead of referring to the README.
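A sketch of how such a positional resource-type argument could be resolved (hypothetical names illustrating the proposal above, not the shipped CLI):

```go
package main

import "fmt"

// selectTypes maps a CLI argument like "all" or "ec2" to the resource
// types to nuke. Hypothetical sketch of the proposed UX; the names and
// supported-type list are assumptions, not cloud-nuke's actual API.
func selectTypes(arg string, supported []string) []string {
	if arg == "" || arg == "all" {
		return supported
	}
	for _, t := range supported {
		if t == arg {
			return []string{t}
		}
	}
	return nil // unknown type; caller can print the supported list
}

func main() {
	supported := []string{"asg", "ebs", "ec2", "elb", "s3"}
	fmt.Println(selectTypes("ec2", supported))
	fmt.Println(selectTypes("all", supported))
}
```

Returning nil for an unknown type gives the CLI a natural hook for point 3: print the supported-resource list instead of sending the user to the README.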

Add dry-run mode

Hi,

As you mentioned in #30, you should add a --dry-run mode.

I must confess I switched to AWSweeper because dry-run mode is missing.

Regards

General UX Enhancement Ideas

Currently aws-nuke outputs its prompt as follows:

INFO[2018-02-18T12:02:57-07:00] Retrieving all active AWS resources
INFO[2018-02-18T12:04:05-07:00] The following AWS resources are going to be nuked:
INFO[2018-02-18T12:04:05-07:00] * ec2-i-023397578dd4e6940-eu-west-2

INFO[2018-02-18T12:04:05-07:00] * ec2-i-0d13286852fecf36f-eu-west-2

INFO[2018-02-18T12:04:05-07:00] * ebs-vol-095e6aabf8f382f20-eu-west-2

INFO[2018-02-18T12:04:05-07:00] * ebs-vol-0f6d2742b48945347-eu-west-2

INFO[2018-02-18T12:04:05-07:00] * ebs-vol-02af799dda2ba7e18-eu-west-2

INFO[2018-02-18T12:04:05-07:00] * asg-confluent-tools-OYr86p-0-sa-east-1

INFO[2018-02-18T12:04:05-07:00] * asg-confluent-tools-OYr86p-1-sa-east-1

INFO[2018-02-18T12:04:05-07:00] * asg-confluent-tools-OYr86p-2-sa-east-1
...

There are a few opportunities for improvement here:

  1. It would be helpful to see this output summarized by region.
  2. There's an extra blank line between each line of output, which causes more scrolling.
  3. The INFO[2018-02-18T12:04:05-07:00] prefix doesn't add much value in this context. It'd be nice to omit it for the interactive prompt, and perhaps keep it for automated runs.
  4. Some summary stats (ASGs: 5, EC2 Instances: 5) by region and overall would be helpful.

None of these are critical, just suggestions for future improvements.

On the positive side, I really like the manual prompt the tool requires; however, the prompt should include the AWS account number. Better yet, it'd be nice if there were some way to get the AWS account name (e.g. phxdevops).

aws profiles not supported?

Hi,
according to the documentation all "standard AWS CLI credential mechanisms" are supported by cloud-nuke. However, when I try to execute

cloud-nuke aws --profile my-profile-name

I get the following error: "flag provided but not defined: -profile"

Am I missing something?

Adding support for non-dedicated accounts

Functional Description

Is there any way to support nuking environments that do not have fully dedicated accounts? Similar to #65 which added support for specific resource types, I'd like to delete resources:
(A) when the resource has a matching tag, and/or
(B) when the resource has an ID (an ARN on AWS) that is contained in a specific list of tracked resource IDs

The latter case (B) would require logging the IDs of created resources somewhere (not yet solved for), and would be needed to provide an option for resources that do not support the tags required to implement (A). (For example: ECS clusters have an ARN but do not yet support tagging.)

Has this been considered or is there a way to implement this today? And if not, is it something you would consider in the roadmap and/or accept a PR for?

Sample Workflow

One option for an adapted workflow would look like:

  1. During the Terraform design process, we generate a unique tag and apply it to all (taggable) resources.
  2. For resources known to not be taggable (such as ECS) we log the ARNs into a text file or into state variables (or both).
  3. cloud-nuke is called twice, once with an arg like --nuke_tag=terraform-autotag-8fsCd and once with an arg like --resource_list=created_resources.txt.

Additional Background

The specific use case is that (mostly for training labs and POCs), we don't always have the luxury of being able to create new AWS accounts, but we still want to have confidence that we can ALWAYS successfully destroy an environment after we are finished with it.

Thanks!

Remove Default DHCP Options Set

I'm trying this tool and can cleanly remove default VPCs in all regions, but the default DHCP options are not removed. Is there any reason not to remove the default DHCP options set? Thank you!

Support wait for dependent resources

Example: we use Terraform to create an EC2 instance and three EIPs and connect them.
cloud-nuke now runs into an issue: the EIPs cannot be deleted while the instance still exists, because EIPs cannot be deleted while they are assigned.
Even running cloud-nuke twice in a row (we are trying to use it to automatically reset an account regularly) does not solve this problem, because the instance takes more time to shut down than two consecutive cloud-nuke runs.

Would it be possible for cloud-nuke to consider such dependencies in the future?
Since instance removal is already supported by cloud-nuke, how did you deal with such cases?

What IAM permissions are required for cloud-nuke ?

Great tool! I'm wondering: is there any documentation on the permissions required to run cloud-nuke?

Currently I get

...is not authorized to perform: ecs:ListClusters on resource: *\n\tstatus code: 400,

which I can fix, but would be cool to know the prerequisite permissions in any case.

aws-nuke should pause when the --force flag is enabled

To give users a chance to hit CTRL+C in case of error, aws-nuke should pause for 10 seconds with the log output "Pausing for 10 seconds to give you a last chance to hit CTRL+C" whenever the --force flag is enabled.

Nuke VPC endpoint services

VPC endpoint services, not VPC endpoints directly, as they may be fronting NLBs spun up during testing. As part of deleting the VPC endpoint services, any existing endpoint connections would need to be rejected.

My reasoning for not targeting VPC endpoints is that, in my case, they are potentially tied into how the VPC is designed - such as allowing communication to AWS APIs without going onto the big-wide internet

Test roles are leaking

IAM roles created by the automated tests for this repo are not getting properly deleted and are accumulating in our test AWS account.

New feature: ability to delete IAM roles

To help with hitting the IAM role limit in AWS, we could use the ability to delete IAM roles, along with a way to provide a list of roles that should be preserved and not deleted.

String could not find any enabled regions

Hi,
I'm trying to run cloud-nuke aws --resource-type ec2 --dry-run.
I have configured the env variables:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_REGION
I do not have the AWS CLI installed.

When running I get the error message:
*errors.errorString could not find any enabled regions
/go/src/github.com/gruntwork-io/cloud-nuke/aws/aws.go:64 (0x85fd15e)
/go/src/github.com/gruntwork-io/cloud-nuke/aws/aws.go:77 (0x85fd1ae)
/go/src/github.com/gruntwork-io/cloud-nuke/commands/cli.go:128 (0x860fd5f)
/go/src/github.com/gruntwork-io/cloud-nuke/vendor/github.com/gruntwork-io/gruntwork-cli/errors/errors.go:93 (0x84d8343)
/go/src/github.com/gruntwork-io/cloud-nuke/vendor/github.com/urfave/cli/app.go:490 (0x84cabc2)
/go/src/github.com/gruntwork-io/cloud-nuke/vendor/github.com/urfave/cli/command.go:210 (0x84cbb48)
/go/src/github.com/gruntwork-io/cloud-nuke/vendor/github.com/urfave/cli/app.go:255 (0x84c93db)
/go/src/github.com/gruntwork-io/cloud-nuke/vendor/github.com/gruntwork-io/gruntwork-cli/entrypoint/entrypoint.go:21 (0x8611971)
/go/src/github.com/gruntwork-io/cloud-nuke/main.go:13 (0x8611b99)
/usr/local/go/src/runtime/proc.go:195 (0x807010d)
/usr/local/go/src/runtime/asm_386.s:1635 (0x8096351)
error="could not find any enabled regions"

I tried to follow #49
but I can't get the repo mentioned.
Could anyone help me with this?
Thanks, Agata

Allow specifying exclusion tags

The new S3 nuking capability has a built-in mechanism to ignore any buckets that have the tag cloud-nuke-excluded. We should consider extending this to other resources.

See #101 (comment) for more context and suggestions.

Feature Request: protect resources from deletion based on a tag

Example use case:
Let's say you have a sandbox/dev account and want to use cloud-nuke to keep it clean of old artifacts, but on the other hand have resources that you're actively using and that may also be even older than your time threshold.

Solution:
Tag those resources with something like 'cloud-nuke'='protect'
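A sketch of such a tag-based protection filter (types and names are illustrative assumptions, not cloud-nuke's API):

```go
package main

import "fmt"

// Resource is a minimal stand-in for a discovered AWS resource.
type Resource struct {
	ID   string
	Tags map[string]string
}

// filterProtected drops resources carrying the protection tag,
// e.g. cloud-nuke=protect, so they survive the nuke.
func filterProtected(in []Resource, key, value string) []Resource {
	var out []Resource
	for _, r := range in {
		if r.Tags[key] == value {
			continue // protected: skip it
		}
		out = append(out, r)
	}
	return out
}

func main() {
	resources := []Resource{
		{ID: "i-old", Tags: map[string]string{}},
		{ID: "i-keep", Tags: map[string]string{"cloud-nuke": "protect"}},
	}
	for _, r := range filterProtected(resources, "cloud-nuke", "protect") {
		fmt.Println(r.ID) // i-old
	}
}
```

The filter runs after discovery and before deletion, so protected resources never even appear in the confirmation list.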

Receiving an error="NoCredentialProviders:

I am trying this out for the first time and I was stopped in my tracks and don't know how to troubleshoot further. I use an AWS profile configuration to assume a role in an account. Is this supported? Am I overlooking something obvious?

guzzi:~ $ cloud-nuke --version
cloud-nuke version v0.1.13

guzzi:~ $ env | grep -i aws
AWS_PROFILE=secret-profile-name

guzzi:~ $ aws sts get-caller-identity
{
"UserId": "AROAI3SSNQ5VVVVMPHPQA:botocore-session-1579734394",
"Account": "xxxxxxxxxxxxx",
"Arn": "arn:aws:sts::xxxxxxxxxxxx:assumed-role/TeamRole/botocore-session-1579734394"
}

guzzi:~ $ cloud-nuke aws --region us-west-2 --dry-run
INFO[2020-01-22T15:14:22-08:00] Retrieving active AWS resources in [us-west-2]
INFO[2020-01-22T15:14:22-08:00] Checking region [1/1]: us-west-2
ERRO[2020-01-22T15:14:42-08:00] *awserr.baseError NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
/go/src/github.com/gruntwork-io/cloud-nuke/aws/asg.go:18 (0x16d48ea)
/go/src/github.com/gruntwork-io/cloud-nuke/aws/aws.go:204 (0x16d8468)
/go/src/github.com/gruntwork-io/cloud-nuke/commands/cli.go:149 (0x16eac46)
/go/src/github.com/gruntwork-io/cloud-nuke/vendor/github.com/gruntwork-io/gruntwork-cli/errors/errors.go:93 (0x15ef2cb)
/go/src/github.com/gruntwork-io/cloud-nuke/vendor/github.com/urfave/cli/app.go:490 (0x15ddfa2)
/go/src/github.com/gruntwork-io/cloud-nuke/vendor/github.com/urfave/cli/command.go:210 (0x15df315)
/go/src/github.com/gruntwork-io/cloud-nuke/vendor/github.com/urfave/cli/app.go:255 (0x15dc108)
/go/src/github.com/gruntwork-io/cloud-nuke/vendor/github.com/gruntwork-io/gruntwork-cli/entrypoint/entrypoint.go:21 (0x16ecc37)
/go/src/github.com/gruntwork-io/cloud-nuke/main.go:13 (0x16ecec7)
/usr/local/go/src/runtime/proc.go:195 (0x102b626)
/usr/local/go/src/runtime/asm_amd64.s:2337 (0x10577e1)
error="NoCredentialProviders: no valid providers in chain. Deprecated.\n\tFor verbose messaging see aws.Config.CredentialsChainVerboseErrors"

Incomplete Execution Due to Missing Exception Handling

Please add exception handling to continue execution when an existing control prevents successful deletion of a resource.

$ ./cloud-nuke aws --log-level debug
INFO[2020-05-04T14:29:33-05:00] The following resources types will be nuked:
INFO[2020-05-04T14:29:33-05:00] - ami
INFO[2020-05-04T14:29:33-05:00] - asg
INFO[2020-05-04T14:29:33-05:00] - ebs
INFO[2020-05-04T14:29:33-05:00] - ec2
INFO[2020-05-04T14:29:33-05:00] - ecsserv
INFO[2020-05-04T14:29:33-05:00] - eip
INFO[2020-05-04T14:29:33-05:00] - ekscluster
INFO[2020-05-04T14:29:33-05:00] - elb
INFO[2020-05-04T14:29:33-05:00] - elbv2
INFO[2020-05-04T14:29:33-05:00] - lc
INFO[2020-05-04T14:29:33-05:00] - rds
INFO[2020-05-04T14:29:33-05:00] - s3
INFO[2020-05-04T14:29:33-05:00] - snap
INFO[2020-05-04T14:29:37-05:00] Retrieving active AWS resources in [eu-north-1, ap-south-1, eu-west-3, eu-west-2, eu-west-1, ap-northeast-2, ap-northeast-1, sa-east-1, ca-central-1, ap-southeast-1, ap-southeast-2, eu-central-1, us-east-1, us-east-2, us-west-1, us-west-2]
INFO[2020-05-04T14:29:37-05:00] Checking region [1/16]: eu-north-1
ERRO[2020-05-04T14:29:37-05:00] *awserr.requestError AccessDenied: User: arn:aws:sts::111122223333:role/admin/cloud-nuke-test is not authorized to perform: autoscaling:DescribeAutoScalingGroups with an explicit deny
	status code: 403, request id: 103e6933-9797-474d-b391-55ebfeb1f88d
/go/src/github.com/gruntwork-io/cloud-nuke/aws/asg.go:18 (0x18138da)
/go/src/github.com/gruntwork-io/cloud-nuke/aws/aws.go:207 (0x1817f83)
/go/src/github.com/gruntwork-io/cloud-nuke/commands/cli.go:204 (0x1831eaa)
/go/src/github.com/gruntwork-io/cloud-nuke/vendor/github.com/gruntwork-io/gruntwork-cli/errors/errors.go:93 (0x15f9d1b)
/go/src/github.com/gruntwork-io/cloud-nuke/vendor/github.com/urfave/cli/app.go:490 (0x15e89f2)
/go/src/github.com/gruntwork-io/cloud-nuke/vendor/github.com/urfave/cli/command.go:210 (0x15e9d65)
/go/src/github.com/gruntwork-io/cloud-nuke/vendor/github.com/urfave/cli/app.go:255 (0x15e6b58)
/go/src/github.com/gruntwork-io/cloud-nuke/vendor/github.com/gruntwork-io/gruntwork-cli/entrypoint/entrypoint.go:21 (0x1834167)
/go/src/github.com/gruntwork-io/cloud-nuke/main.go:13 (0x18343f7)
/usr/local/go/src/runtime/proc.go:195 (0x102b756)
	main: // A program compiled with -buildmode=c-archive or c-shared
/usr/local/go/src/runtime/asm_amd64.s:2337 (0x1057911)
	goexit: ???
  error="AccessDenied: User: arn:aws:sts::111122223333:role/admin/cloud-nuke-test is not authorized to perform: autoscaling:DescribeAutoScalingGroups with an explicit deny\n\tstatus code: 403, request id: 103e6933-9797-474d-b391-55ebfeb1f88d"

Documentation for minimum IAM policy

There ought to be documentation for what the minimum IAM policy is to be able to run cloud nuke. This documentation would be valuable so that users can run cloud-nuke with a cron job and not worry about the user being overly permissive.

This was brought up in #106 but prematurely closed.

I know the README lists which services are targeted, but it does not list which specific permissions are needed. For example, does EC2 Auto Scaling need to be able to list everything, or just DescribeAutoScalingGroups?

Limit defaults-aws to security groups only

In the case of default VPCs that are still in use, a user should still be able to remediate for CIS by blanking the default security groups. An option on defaults-aws to exclude VPC destruction would allow this. Currently I do not see any AWS CIS requirement to remove all default VPCs, only that the default security group restricts all traffic and that other groups do not openly allow RDP or SSH.

Add support for deleting RDS snapshots

We are hitting limits in our testing account with having too many snapshots...

Of course, RDS snapshots are very sensitive things, so we should make sure to support careful filtering by tag, date, name, etc.

Move to go modules

Go is now at 1.13.6, and many repos have moved to Go modules, which are the future of the Go toolchain. I suggest moving this repo to Go modules as well and deprecating dep.

Cloud-nuke should leave you with a 100% fresh account.

We recently received the following customer request:

So, I wanted a quick destruction of the environment created by Terraform. cloud-nuke is great, but it leaves a lot of resources: RDS, Elastic IPs, VPCs, buckets, SQS, etc. I'd like to just go back to a state as if it was a fresh account.

For example I get a warning that elastic ips can't be deleted because they're attached somewhere. Specifically network interface which is not deleted by cloud nuke.

This is more of a meta-issue because updating cloud-nuke to actually get you to a 100% fresh account would mean having support for every available AWS resource, which is an aspirational goal but one that would be very difficult to achieve. Nevertheless, I wanted to record the feedback here so we have it written down.

Introducing you to AWSweeper (a similar tool)

Hi there,

it looks like the problem of trying to clean out cloud accounts is present everywhere :-) -- I also started working on it last year, so I wanted to show you what I've got (a tool named AWSweeper).

Maybe it's helpful for you to reuse some code, fork it and drive it, or contribute (not sure if it's smart for me to continue AWSweeper as a one-man show, which it currently is). Anyway, here are some thoughts behind the tool I wanted to share with you:

  • It currently supports deletion of 29 resource types (but there are so many more). Therefore, I followed a generic approach (via reflection) to easily support deletion of more types out of the box, where only some API information about the aws-sdk-go routines that list and delete resources needs to be added to a config array (pointer to code).

  • It is built upon the existing delete methods of the AWS terraform provider (pointer to some code). I thought this might be helpful to get retries, detaching of some policies etc. from IAM resources, "forcing" of deletion where dependencies exist, and other stuff, for free.

  • Integration tests for each resource type using the Terraform testing framework (pointer to the tests).

  • I also started with an all-or-nothing-wipe-out approach, but then realised that it's sometimes handy to keep some resources (e.g. an IAM user + credentials to access the account). So, with AWSweeper one can filter resources by type, tags, or IDs described in a YAML config (see here). I also have the idea to make filtering more generic, i.e., to allow filtering on all attributes available about a resource (such as creation date, etc.) returned via the output struct of the Go API.

Thanks for reading & cheers,
Jan

Add region excludes to defaults-aws

In the case that 90% of the regions are unused but need to be CIS-remediated before being disabled completely, it would help greatly to be able to run defaults-aws --exclude-region X in similar fashion to the standard aws command. Then at most the in-use region has to be manually remediated for CIS afterwards.

Contribution guideline should highlight dependency on dep, or update to go modules to simplify onboarding

While debugging #49 with @sugandh-pasricha, setting up a local environment to fetch the cloud-nuke code gave the following error:

go get -v github.com/gruntwork-io/cloud-nuke
github.com/gruntwork-io/cloud-nuke/commands
# github.com/gruntwork-io/cloud-nuke/commands
commands/cli.go:22:5: app.Author undefined (type *cli.App has no field or method Author)
commands/cli.go:25:15: cannot use []cli.Command literal (type []cli.Command) as type []*cli.Command in assignment
commands/cli.go:31:24: cannot use cli.StringSliceFlag literal (type cli.StringSliceFlag) as type cli.Flag in array or slice literal:
        cli.StringSliceFlag does not implement cli.Flag (Apply method has pointer receiver)
commands/cli.go:35:24: cannot use cli.StringSliceFlag literal (type cli.StringSliceFlag) as type cli.Flag in array or slice literal:
        cli.StringSliceFlag does not implement cli.Flag (Apply method has pointer receiver)
commands/cli.go:39:24: cannot use cli.StringSliceFlag literal (type cli.StringSliceFlag) as type cli.Flag in array or slice literal:
        cli.StringSliceFlag does not implement cli.Flag (Apply method has pointer receiver)
commands/cli.go:43:17: cannot use cli.BoolFlag literal (type cli.BoolFlag) as type cli.Flag in array or slice literal:
        cli.BoolFlag does not implement cli.Flag (Apply method has pointer receiver)
commands/cli.go:47:19: cannot use cli.StringFlag literal (type cli.StringFlag) as type cli.Flag in array or slice literal:
        cli.StringFlag does not implement cli.Flag (Apply method has pointer receiver)
commands/cli.go:52:17: cannot use cli.BoolFlag literal (type cli.BoolFlag) as type cli.Flag in array or slice literal:
        cli.BoolFlag does not implement cli.Flag (Apply method has pointer receiver)
commands/cli.go:62:17: cannot use cli.BoolFlag literal (type cli.BoolFlag) as type cli.Flag in array or slice literal:
        cli.BoolFlag does not implement cli.Flag (Apply method has pointer receiver)

I also tried and can confirm that go get github.com/gruntwork-io/cloud-nuke gives this error. The failure happens because go get pulls github.com/urfave/cli, which has a breaking change

~/go/src/github.com/urfave/cli# git describe --tags
v1.22.1-335-g0587424

because the pinned version is v1.20.0 as per https://github.com/gruntwork-io/cloud-nuke/blob/master/Gopkg.toml#L30

So if one wants to do local development, they should ignore the above error, go to the downloaded folder, and run the following:

cd gruntwork-io/cloud-nuke
dep ensure -v
go build

which creates a local vendor folder.

Is there a way to ensure that go get github.com/gruntwork-io/cloud-nuke does not give the above error and gets the right versions of modules?

If not then we should have a "How to contribute" section in the README calling this out.

Feature Request - Removing AWS Config and AWS Config Rules

Just wondering if you are looking to add support for removing all AWS Config rules in the future.

The use case: if anyone has been playing with Config rules and forgot about rules they configured, those rules will keep being triggered and cause additional charges to their account.

It would be good if the tool could also clear these out, so that someone who wants to start their account from scratch can do so.

Thoughts anyone?

cloud-nuke aws --exclude-resource-type s3 --dry-run NOT SUPPORTED

Hi,
When running:

cloud-nuke aws --exclude-resource-type s3 --dry-run
Incorrect Usage: flag provided but not defined: -exclude-resource-type

NAME:
cloud-nuke aws - BEWARE: DESTRUCTIVE OPERATION! Nukes AWS resources (ASG, ELB, ELBv2, EBS, EC2, AMI, Snapshots, Elastic IP, RDS).

USAGE:
cloud-nuke aws [command options] [arguments...]

OPTIONS:
--region value regions to include
--exclude-region value regions to exclude
--resource-type value Resource types to nuke
--list-resource-types List available resource types
--older-than value Only delete resources older than this specified value. Can be any valid Go duration, such as 10m or 8h. (default: "0s")
--dry-run Dry run without taking any action.
--force Skip nuke confirmation prompt. WARNING: this will automatically delete all resources without any confirmation

ERRO[2020-04-06T21:50:27Z] flag provided but not defined: -exclude-resource-type error="flag provided but not defined: -exclude-resource-type"

As in the title: unfortunately, this is not working as described in the wiki.
Thanks,
Agata

EC2 Instances from Automated Tests Are Left Around

I'm pretty consistently seeing EC2 Instances in every region with names like:

  • aws-nuke-test-3nOZAN
  • aws-nuke-test-iBLgDR

It looks like aws-nuke-test-3nOZAN was launched on February 20, 2018 at 7:49:30 AM UTC-7, so I'm pretty confident that the aws-nuke tests aren't cleaning up after themselves. Either that, or tests are continually failing, but it doesn't look like that's the case, so the test cleanup is probably the culprit.

To see for yourself, check out the PhxDevOps AWS account in any region.

Add support for more complex matching rules

We have many clean-up use cases where we want to nuke all resources that match a specific pattern: e.g., nuke S3 buckets, IAM roles, and IAM profiles where the name matches one of a long list of specific regular expressions (i.e., those that come from our automated tests).

We should extend cloud-nuke with a way to filter resources using these match rules. Specifying them via CLI arguments is likely going to be inconvenient, so we may need some config format for this.
