
Comments (5)

tmitchel2 commented on June 22, 2024

This is coming very soon... the code has been getting a little more complex than I think it needs to be, so I'm refactoring it; part of that change will satisfy this request. A preview of what the config is going to look like:

{
  "readCapacity": {
    "min": 1,
    "max": 10,
    "increment": {
      "when": {
        "utilisationIsAbovePercent": 90
      },
      "by": {
        "units": 3
      },
      "to": {
        "consumedPercent": 110
      }
    },
    "decrement": {
      "when": {
        "utilisationIsBelowPercent": 30,
        "afterLastIncrementMinutes": 60,
        "afterLastDecrementMinutes": 60
      },
      "to": {
        "consumedPercent": 100
      }
    }
  }
}

Full explanation to come soon. However, this should enable you to scale instantly up to the consumed value, or, if you would prefer, you can still increment up by a percentage or unit amount.
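To make the config above concrete, here is a minimal sketch of how it might drive a scaling decision. All names here are illustrative assumptions, not the project's actual API, and the cooldown checks (afterLastIncrementMinutes etc.) are omitted:

```javascript
// Sketch: decide a new read capacity from a config shaped like the preview above.
// Illustrative only — not the project's actual API. Cooldown checks omitted.
function nextReadCapacity(config, provisioned, consumed) {
  const rc = config.readCapacity;
  const utilisationPercent = (consumed / provisioned) * 100;

  // Increment: jump straight 'to' a percentage of the consumed value.
  if (utilisationPercent > rc.increment.when.utilisationIsAbovePercent) {
    const target = Math.ceil(consumed * (rc.increment.to.consumedPercent / 100));
    return Math.min(rc.max, Math.max(rc.min, target));
  }

  // Decrement: drop down 'to' a percentage of the consumed value.
  if (utilisationPercent < rc.decrement.when.utilisationIsBelowPercent) {
    const target = Math.ceil(consumed * (rc.decrement.to.consumedPercent / 100));
    return Math.min(rc.max, Math.max(rc.min, target));
  }

  return provisioned; // within the band: leave capacity alone
}
```

With the preview config (increment above 90% to 110% of consumed, decrement below 30% to 100% of consumed, clamped to [1, 10]), a table at 5 provisioned / 5 consumed units would jump to 6, while one at 10 provisioned / 1 consumed would drop to 1.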

from dynamodb-lambda-autoscale.

gump59 commented on June 22, 2024

Nice. I am testing it with the provisioner set to increase to consumed * percentage, and that was working well. I did run into another issue with AWS CloudWatch itself: it was as if CloudWatch got stuck and kept reporting the same metric when we hit the API for it. Our logs, as well as the CloudWatch view in the AWS console, reported the same erroneous datapoints. The metrics shown on the DynamoDB console, however, were actually accurate and, oddly enough, matched the metrics showing up in DataDog (its metrics / monitoring AWS integration). I assume DataDog must have some VIP arrangement with AWS and that something was going wrong for those hitting the general-populace endpoints. According to AWS support, we may have "exhausted the API limits". This could make a dependence on CloudWatch API calls risky if there is no way to effectively manage and monitor API call utilisation. In our test case, we got stuck in a scaled-down state.

I'm going to go ahead and throw it in production with my modifications soon, and just set it so that it won't actually decrement.


gump59 commented on June 22, 2024

I picked up some information from AWS; it looks like the cause is something completely different from the first indication. We may need to change how the datapoint is calculated. From AWS support:

"All in all, average, min and maximum statistic don't show the Consumed Reads per second. They show the consumed reads per request. The Sum Statistic shows the total consumed reads in a given time period which is the most useful statistic when pulling Reads consumed per second."

So, Sum = the number of reads over the specified time period, and Average = the average reads per request over that period. I would say that is less than intuitive and not clearly documented. This is also why the metrics differ between the DynamoDB console and the CloudWatch console:

"There seems to be a slight difference in the presentation of the metric on the DynamoDB Console and in CloudWatch. The Cloudwatch Metric ConsumedReadCapacityUnits can be viewed in multiple statistic modes such as (Average, Minimum, Maximum, Sum) whereas the same metric when viewed in the Dynamodb console (as Consumed Reads) shows the same metric with the Sum Statistic. Basically what you are seeing in the dynamodb console metrics tab is Consumed Reads = Sum of ConsumedReadCapacityUnits/(Time Period - 60 seconds, 300 seconds, etc) which gives the consumed reads per second. The ConsumedReadCapacityUnits when viewed in average shows the consumed reads per request and each request can have multiple reads. "

I'm going to play around with the piece that takes the 5 × 1-minute datapoints and switch it to sum / 60 to see if that addresses it.
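The sum-over-period arithmetic from the AWS support note can be sketched as follows. The function name and datapoint shape are assumptions for illustration; `Sum` is the statistic field CloudWatch returns for `ConsumedReadCapacityUnits`:

```javascript
// Convert CloudWatch 'Sum' datapoints for ConsumedReadCapacityUnits into an
// average consumed-capacity-per-second figure, per the AWS support note:
// consumed/sec = total units summed over the window / window length in seconds.
// Illustrative helper, e.g. five 1-minute datapoints with periodSeconds = 60.
function averageConsumedPerSecond(datapoints, periodSeconds) {
  if (datapoints.length === 0) {
    return 0; // no traffic reported in the window
  }
  const totalUnits = datapoints.reduce((acc, dp) => acc + dp.Sum, 0);
  // Total consumed units divided by the total seconds the datapoints cover.
  return totalUnits / (datapoints.length * periodSeconds);
}
```

For example, five 1-minute datapoints summing to 600 units works out to 600 / 300 = 2 consumed read units per second, which is the figure the DynamoDB console's "Consumed Reads" graph shows.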


tmitchel2 commented on June 22, 2024

Thanks for investigating that. I had seen the note in the API re: using Sum, but I think in my recent tests Average did seem to give the correct value over 1-minute periods; however, I could be wrong here.

Anyway, I've pushed my latest to github. The changes are quite extensive which is good and bad.

The new provisioner configuration now supports different forms of increments / decrements without altering code. You can increment by a fixed unit or percentage amount, but more importantly you can increment / decrement 'to' a unit or percentage amount, which means you can jump immediately up to the required value rather than progressively 'climb' up. I've made this strategy the default.
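The three increment styles described above could be sketched like this (again, the function and field names are illustrative assumptions rather than the project's actual API):

```javascript
// Sketch of the increment styles described above. Illustrative names only.
// 'by units' and 'by percent' climb progressively from the current value;
// 'to consumedPercent' jumps straight to a target based on consumption.
function applyIncrement(strategy, provisioned, consumed) {
  if (strategy.by && strategy.by.units !== undefined) {
    return provisioned + strategy.by.units;                       // climb by fixed units
  }
  if (strategy.by && strategy.by.percent !== undefined) {
    return Math.ceil(provisioned * (1 + strategy.by.percent / 100)); // climb by a percentage
  }
  if (strategy.to && strategy.to.consumedPercent !== undefined) {
    // jump immediately to a percentage of current consumption
    return Math.ceil(consumed * (strategy.to.consumedPercent / 100));
  }
  return provisioned;
}
```

The 'to' form is what removes the progressive climb: a table provisioned at 10 units but consuming 20 goes straight to 22 with `{ to: { consumedPercent: 110 } }`, instead of stepping up 3 units per invocation.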

As well as the main 'Provisioner' being configurable, the 'CapacityCalculator' can now very easily be changed too. It should be really simple to change the 'Average' request to a 'Sum' request and add the extra bit of division logic to get a reasonable average. You could even go further and fit a regression line through the datapoints to get a more accurate projected value, but that's probably a little OTT.

Take a look and let me know what you think.


tmitchel2 commented on June 22, 2024

This work is complete now; please check the latest. I've also updated the main readme.md, which explains the new style of configuration.

