
fl-aws (Flaws at AWS)

Use this repo to read, and contribute to, war stories from the AWS front line.

This Repo is NOT affiliated with AWS or Amazon.

Stories from the AWS front line

I Have No Idea What I'm Doing Here But It Sounds Fascinating

In May 2016 I changed jobs and ended up working with AWS. Exciting, yes. Greenfield, yes. Powerful, yes. Smooth, no. Logical, no.

So here I am, half a year later. I haven't hit rock bottom, and it doesn't look like I will anytime soon, so I thought I'd better document some of the findings, the workarounds and some of the struggles that I label flaws in the hype wagon called AWS.

FWIW I'm a well-above-average reader of technical documentation, and I have sound experience in software development, Unix, networking, protocols, virtualization, infrastructure, devops, etc. That is why I choose to call these flaws rather than a personal lack of motivation or understanding, and not because "the customer is always right".

I know that there's a bunch of "smartness" in and at AWS, but that's beside the point. This repo is not about throwing rotten tomatoes at AWS, even though the number of open issues over time may point in that direction.

Overheard: AWS tagline "Friends don't let friends build data centers."

-- Tony Baer (@TonyBaer) July 21, 2016

There's a discrepancy between what I see AWS delivering in "the not-just-iron department" and what AWS promises overall: the quality that one expects in these times, not just from an AAA-class company but from the IT sector in general, even when something is marked "beta", MVP, etc.

Customers on the Basic support plan can only ask for help and file issues on the AWS public forums and general public sites like StackOverflow, while those paying for Technical Support can do so in a closed environment called "Support Center". NOTE: on top of that, most paying customers still won't get information about AWS decisions, only that behaviour/limitation X is "by design" (o.O). I can't help but think that the split (forums vs Support Center) is strategic: it keeps the noise levels up in order to disperse any concentration of negative perception.

Maybe that's just me and my conspiracy devil sitting on my left shoulder talking, but that's what I'm addressing with this repo: concentrating in one place the facts behind that negative perception of the AWS offerings.

I'd like this repo to keep AWS transparently accountable for the shortcomings in their offering.

We will not manage to change their reply style of "I have +1'd you on our internal backlog, but I can't give you any ETA on a fix" (what year is this, that I need a proxy to +1 for me?! and +1 on top of which number?!), and we will surely not get to peek into their development process, as is already possible with Atlassian (creators of JIRA, Bitbucket, etc.), who keep their issues public, making it trivial to see and judge the quality of their products and of their development process. But at least the next time someone asks about the quality level of AWS and the pain points of a specific service, they can read some of these war stories from the AWS front line.

And I'd like to hear yours too!

If you too are working on the AWS platform, feel free to submit a new issue or submit a PR with a BCP proposal.

There are plenty of people, and there will be even more, who want to get their hands dirty with AWS. Help them make an informed decision.

NOTE: https://github.com/open-guides/og-aws is great, but it fills a different void.

License

Apache 2.0


fl-aws's Issues

naming inconsistencies

AWS::KinesisFirehose::DeliveryStream talks about BucketARN and RoleARN,
while AWS::Lambda::Permission talks about SourceArn and AWS::Lambda::Function talks about Role (referring to an ARN)
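
A minimal sketch of the three spellings side by side (trimmed to the properties in question; the ARNs are placeholders and other required properties are omitted):

    "Firehose": {
      "Type": "AWS::KinesisFirehose::DeliveryStream",
      "Properties": {
        "S3DestinationConfiguration": {
          "BucketARN": "arn:aws:s3:::my-bucket",
          "RoleARN": "arn:aws:iam::123456789012:role/my-role"
        }
      }
    },
    "Permission": {
      "Type": "AWS::Lambda::Permission",
      "Properties": {
        "Action": "lambda:InvokeFunction",
        "FunctionName": {"Ref": "Function"},
        "Principal": "s3.amazonaws.com",
        "SourceArn": "arn:aws:s3:::my-bucket"
      }
    },
    "Function": {
      "Type": "AWS::Lambda::Function",
      "Properties": {
        "Role": "arn:aws:iam::123456789012:role/my-role"
      }
    }

Same kind of value, three different property names.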

there's no fn:getatt that returns s3 bucket arn

although there is at least one resource, AWS::KinesisFirehose::DeliveryStream (not to mention policy statements), that requires not a bucket name, but a bucket ARN (?!), there's no cloudformation function to get the ARN of a bucket
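
the usual workaround is to construct the ARN by hand, e.g. with Fn::Join (a sketch, assuming a bucket declared with the logical ID MyBucket in the same template):

    "BucketARN": {
      "Fn::Join": ["", ["arn:aws:s3:::", {"Ref": "MyBucket"}]]
    }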

partition with firehose prefix

athena (hive) expects partitions to be part of the object key path in a key=value pattern (year=YYYY/month=MM/day=DD/hour=HH), but firehose cannot be configured to do that.
given that, partitions cannot be picked up automatically and have to be registered manually.

aws-sdk-js cannot understand aws-cli assumed roles

Given that we are all grown ups and agree that the best way to secure AWS access is via Roles with permissions, and Users with one permission (i.e. to assume a role) - ref https://cloudonaut.io/improve-aws-security-protect-your-keys-with-ease/ - I have an easy shell alias aws_switch that allows me to switch between profiles with different roles.

Problem is that neither aws-sdk-js nor the other SDKs (e.g. aws-sdk-go) understand that my current shell is an AWS environment with an assumed role.

linkify all resources

across the entire Console there are intertwined resources, yet in only very few places can one actually get links to jump from one resource to another, e.g. Kinesis Firehose links to the bucket's view in the Console.

cannot redeploy api gateway with no methods

one api gateway was about to reach end-of-life, but we wanted a gradual deprecation, so we removed one resource at a time.

removing all resources was fine, until we wanted to make other changes in that stack.
cloudformation started complaining that the rest api has no methods, and thus it cannot create an api deployment anymore... (it managed the first time around)

cloudformation: renaming an apigateway recordsetgroup resource ends up in deletion

imagine you have a diff in a cloudformation stack template similar to

-    "RouteGroup": {
+    "ApiRouteGroup": {
       "Properties": {
         "HostedZoneName": "example.com.",
         "RecordSets": [
           {
             "Name": "foo.example.com",
             "ResourceRecords": [
               "bar.example.com"
             ],
             "TTL": 300,
             "Type": "CNAME"
           }
         ]
       },
       "Type": "AWS::Route53::RecordSetGroup"
     },

basically a rename.

Don't be confused when you see this in your cloudformation event log:

18:17:12 UTC+0100	UPDATE_COMPLETE	AWS::CloudFormation::Stack	ci	
18:17:12 UTC+0100	DELETE_COMPLETE	AWS::Route53::RecordSetGroup	RouteGroup	
18:16:39 UTC+0100	DELETE_IN_PROGRESS	AWS::Route53::RecordSetGroup	RouteGroup	
18:16:36 UTC+0100	UPDATE_COMPLETE_CLEANUP_IN_PROGRESS	AWS::CloudFormation::Stack	ci	
18:16:33 UTC+0100	CREATE_COMPLETE	AWS::Route53::RecordSetGroup	ApiRouteGroup	
18:15:27 UTC+0100	CREATE_IN_PROGRESS	AWS::Route53::RecordSetGroup	ApiRouteGroup	Resource creation Initiated
18:15:24 UTC+0100	CREATE_IN_PROGRESS	AWS::Route53::RecordSetGroup	ApiRouteGroup	
18:15:20 UTC+0100	UPDATE_IN_PROGRESS	AWS::CloudFormation::Stack	ci	User Initiated

CloudFormation DELETES the DNS record! Presumably the logical-ID rename is treated as a replacement: the new resource upserts the same record, and the cleanup phase then deletes the "old" resource, taking the live record with it.

cannot delete item in item view, only in list

several services do not offer the possibility to delete items in the "item view", only in the "list view", and the list view has no multi-selection, i.e. you can only delete items one by one

example: IAM Roles, S3 Buckets, etc

cloudformation says Internal Failure

Executing a cloudformation change-set would result in "Internal Failure" next to the stack itself (not next to a resource). A very clear error message, no doubt.

Upon trial-and-error investigation, I figured out it's an IAM permission issue (giving AdministratorAccess solves it).

It's impossible to detect which permissions are missing; support is yet to reply back.
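
For reference, "giving AdministratorAccess" amounts to letting the executing user/role do everything, i.e. a policy equivalent to the sketch below; narrowing it down from there is pure guesswork, since the failure never names the missing permission:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": "*",
          "Resource": "*"
        }
      ]
    }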

cannot change s3 prefix

for unknown reasons, s3-firehose prefixes the dumps with a YYYY/MM/DD/HH prefix and cannot be configured to do otherwise

failure to delete newly created firehose resource

When a stack encounters a failure, if it had created a new Firehose resource, it may fail to delete that resource with a message like
"Firehose FOO under account BAR cannot be deleted in CREATING state".

inconsistent and unusable table UI: selection

  • Console > S3: no radio button, and different click behaviour depending on whether you click the bucket name or the row
  • Console > CloudFormation: the checkbox behaves like a radio button
  • a radio button in a table UI? I can understand actions being limited when multiple items are selected, but I do want to be able to delete multiple items at once

s3 limitations

unlike other AWS services, S3 buckets are global and not even scoped by an AWS account, meaning the names need to be unique.

if you think that later on you may want one bucket per region, save yourself the trouble and name them with a region suffix from the start.

on top of that, bucket contents can be accessed via HTTP, but via HTTPS (virtual-hosted style) only if the bucket name doesn't contain dots, because dots break the wildcard certificate. so once again, save yourself the trouble and don't use any dots.

the simplest advice that I have is to treat bucket names like domain names: all lowercase, only alphanumerics and hyphens, e.g. <my-own-bucket>-<at-my-company-tld>-<in-this-region> -> builds-example-com-eu-west-1

it is beyond me why there isn't any naming limitation and/or automatic translation. KISS should prevail!

DeletionPolicy is not respected for non-empty S3 Buckets

as per the docs, "Only Amazon S3 buckets that are empty can be deleted. Deletion will fail for buckets that have contents." (even when DeletionPolicy is set to Delete).
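
For reference, even an explicit declaration like the following (a sketch, with a made-up bucket name) will fail to delete as long as the bucket has any contents:

    "Bucket": {
      "Type": "AWS::S3::Bucket",
      "DeletionPolicy": "Delete",
      "Properties": {
        "BucketName": "builds-example-com-eu-west-1"
      }
    }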

why is it like that? I doubt that even AWS knows why. As the creator of that resource, good design says that I should be able to do anything I want with it. Can you imagine your OS implementing a similar rule? "oh yeah, you created this folder, you have r/w permissions, but hey, first delete every file in this folder one by one, and then we'll let you delete the folder itself"

PS: the ridiculous situation gets even worse when the bucket is created as part of a cloudformation stack-create process - if the stack fails, then the bucket doesn't get deleted.

elastic kinesis

one benefit that you buy into when switching to a "serverless" cloud architecture is elasticity, i.e. just show us the money, and we scale your infrastructure any way you need. Wrong!

Kinesis shards can only be scaled manually, and you pay for the idle capacity. The mismatch should be corrected by treating the provisioned capacity as a maximum (i.e. I don't want to pay more than X USD for this stream this month), with billing based only on the used capacity.
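
For reference, the shard count is a fixed number baked into the stream definition (a sketch, with a made-up value), and changing it means resharding by hand:

    "Stream": {
      "Type": "AWS::Kinesis::Stream",
      "Properties": {
        "ShardCount": 4
      }
    }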

elastic dynamodb

one benefit that you buy into when switching to a "serverless" cloud architecture is elasticity, i.e. just show us the money, and we scale your infrastructure any way you need. Wrong!

DynamoDB read/write capacity can only be scaled manually, and you pay for the idle capacity. The mismatch should be corrected by treating the provisioned capacity as a maximum (i.e. I don't want to pay more than X USD for this dynamodb table this month), with billing based only on the used capacity.
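
Same story in a template: the read/write capacity is a pair of fixed numbers provisioned up front (a sketch, with made-up values), not a budget ceiling:

    "Table": {
      "Type": "AWS::DynamoDB::Table",
      "Properties": {
        "AttributeDefinitions": [
          {"AttributeName": "id", "AttributeType": "S"}
        ],
        "KeySchema": [
          {"AttributeName": "id", "KeyType": "HASH"}
        ],
        "ProvisionedThroughput": {
          "ReadCapacityUnits": 100,
          "WriteCapacityUnits": 100
        }
      }
    }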

http proxy needs one fake integration response

When creating an AWS::ApiGateway::Method resource in cloudformation to act as a simple HTTP reverse proxy, it is still necessary to set Properties.Integration.IntegrationResponses to the illogical [{"StatusCode": 200}].

On top of that, the origin server can still reply with any HTTP status code, not just 200 (thank goodness, but that only makes the requirement more illogical).
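
A sketch of such a proxy method (the REST API, resource and origin URI are placeholders), including the integration response that has to be there just to please CloudFormation:

    "ProxyMethod": {
      "Type": "AWS::ApiGateway::Method",
      "Properties": {
        "RestApiId": {"Ref": "RestApi"},
        "ResourceId": {"Ref": "ProxyResource"},
        "HttpMethod": "GET",
        "AuthorizationType": "NONE",
        "Integration": {
          "Type": "HTTP_PROXY",
          "IntegrationHttpMethod": "GET",
          "Uri": "https://origin.example.com/",
          "IntegrationResponses": [
            {"StatusCode": 200}
          ]
        }
      }
    }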

inconsistent delete dialogs

deleting a bucket or a firehose requires you to type/paste the resource's name in a confirmation dialog,
but deleting a kinesis stream does not...

inconsistent and unusable table UI: sorting

not all columns can be sorted, and while it's ok not to be able to sort on 100% of the columns, the selection of sortable columns seems random and doesn't consider basic user intent (like sorting by date)

e.g. Console > IAM > Users

inconsistent Sid definitions

AWS::KMS::Key policy statements handle a Sid with spaces,
but AWS::Lambda::Function policy statements cannot, and CloudFormation fails with:

Statement IDs (SID) must be alpha-numeric. Check that your input satisfies the regular expression [0-9A-Za-z]*
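
As an illustration, a statement like the following (a made-up example) is accepted in a KMS key policy, while elsewhere the same Sid has to be collapsed into something like "AllowUseOfTheKey" to satisfy that regular expression:

    {
      "Sid": "Allow use of the key",
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::123456789012:root"},
      "Action": "kms:*",
      "Resource": "*"
    }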

mishandling of content-encodings and file extensions

Firehose can be configured to dump raw or compressed records onto an S3 bucket.

When choosing compression, Firehose will write to an object like "foo.gz" (as opposed to the uncompressed "foo"), but it will also set the metadata "Content-Encoding: gzip", while other metadata stays the same, e.g. "Content-Type: application/octet-stream".

This is not just confusing, it is a plain breakdown of logic, as any HTTP-knowledgeable person will tell you. Either skip the ".gz" extension and stick to "Content-Encoding: gzip", or use the extension, set "Content-Type: application/gzip" and forget about the Content-Encoding header.

default region

IAM > Encryption Keys: if my active region (upper-right corner) is eu-west-1, then default to it instead of just picking "the first region" (us-east-1)

column sorting fails

Console > IAM > Encryption Keys

change the status of a key (enable or disable it), and then sort by Status. You'll notice that the sorting happens based on the old status of the key, not the new one.
