Comments (13)
I thought this might be because I've not tested the write side as much as the read side, but I just did a local test myself and it seems to be scaling fine for me.
Ensure you are running it every minute; it only looks one minute into the past to gather the statistics. Perhaps the statistics aren't available for some reason and it's giving a false value. I might need to look something like 5-10 minutes into the past instead, but that would slow down the provisioning correction unless I applied some smarter math to the statistic data points.
Also, maybe put the following debug logging into Config.js on line 110:
log(JSON.stringify({data, isAboveThreshold, isBelowMin}, null, 2));
And update the import on line 4 to:
import { log, invariant } from '../src/Global';
That should log all the data from CloudWatch which it uses to determine the write increment.
Please give it a go and let me know what the results are.
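For reference, the one-minute lookback described above could be sketched as a small helper that builds the time window passed to a CloudWatch GetMetricStatistics call. This is a hypothetical sketch; the function name and shape are illustrative, not the project's actual code.

```javascript
// Hypothetical sketch of computing the statistics window for a
// CloudWatch GetMetricStatistics request, assuming a lookback of
// `windowMinutes` ending at `now`. Names are illustrative only.
function statisticsWindow(now, windowMinutes) {
  const end = new Date(now.getTime());
  const start = new Date(now.getTime() - windowMinutes * 60 * 1000);
  return {
    StartTime: start,
    EndTime: end,
    Period: 60, // one-minute datapoint granularity
  };
}

// Example: a 1-minute window ending at 12:05
const w = statisticsWindow(new Date('2016-07-01T12:05:00Z'), 1);
// w.StartTime is 12:04:00Z, w.EndTime is 12:05:00Z
```

A very short window like this is exactly what can come back empty: if CloudWatch has not yet aggregated a datapoint for that minute, the Datapoints array is empty and consumption reads as zero.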
from dynamodb-lambda-autoscale.
I was running it every minute (the logs said so), and even when I disabled the trigger and ran it locally, it still didn't work.
Judging from my hardcoded logging, I have the impression that it didn't get any datapoints.
I'll try it again with your change asap.
Thanks 😄
I can only try it in 12 hours or so.
Could it be related to the fact that I have multiple tables?
Ok, I got to do it earlier.
dynamic-dynamodb is working and, for example, reports 82.77% write usage right now, but dynamodb-lambda-autoscale reports:
"ConsumedThroughput": {
  "ReadCapacityUnits": 0,
  "WriteCapacityUnits": 0
}
It definitely handles multiple tables, but there's obviously something not right. I'll take another look in a few hours.
Ok, so I determined that taking a throughput average for the last minute in CloudWatch was a little too aggressive and would often give a 0 reading. I really want to maintain the incrementation agility as a feature, so I've altered the algorithm to take 5 one-minute averages and pick the max value. Along with that change I've made various other improvements, including 100% Flow coverage and better logging. I've only done some quick, minimal testing, so please let me know if it doesn't work quite how you would expect.
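The max-of-five-averages idea can be sketched as a tiny pure function. This is an illustrative sketch, not the project's actual code; `datapoints` mimics the Datapoints array returned by CloudWatch GetMetricStatistics.

```javascript
// Hypothetical sketch of "take 5 one-minute averages and pick the max".
// An empty Datapoints array (e.g. an idle table, or statistics not yet
// aggregated) falls back to 0 consumed capacity.
function maxConsumedAverage(datapoints) {
  if (!datapoints || datapoints.length === 0) {
    return 0;
  }
  return datapoints.reduce((max, dp) => Math.max(max, dp.Average), 0);
}

// Example: five one-minute averages, one spike
const consumed = maxConsumedAverage([
  { Average: 12.1 }, { Average: 0 }, { Average: 82.7 },
  { Average: 3.4 }, { Average: 0 },
]);
// consumed is 82.7, so a brief spike still drives the increment
```

Taking the max over a five-minute span keeps the fast scale-up response while making a single missing or zero datapoint harmless.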
I can't currently test it for increases, but I can say that it is not decreasing as it should.
I've not had any reads or writes to any of the tables for quite a few hours; maybe the lack of datapoints is a problem in itself?
The processing time also increased quite significantly (local test, I didn't actually run it on AWS); maybe the extra computation for the averages is not such a good idea when the whole point may be to fit into the Lambda free tier?
I'll test the increases Monday.
Thanks 👍
The decreases have to work in a different way: you are limited to only 4 per calendar day, so it may have been holding off on purpose. I'll put some specific logging in to make it easy to understand why it might not have decreased.
There shouldn't be any additional processing time at all; it's literally taking the max of 5 numbers instead of 1. If you are running locally, a change in latency between calls might cause a slowdown. I'll take a look at it though.
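The decrease guard described above can be sketched as follows. This is a hypothetical sketch: DynamoDB's DescribeTable response exposes the daily count as ProvisionedThroughput.NumberOfDecreasesToday, but the helper name and structure here are illustrative, not the project's code.

```javascript
// DynamoDB historically allowed only 4 provisioned-throughput decreases
// per table per calendar day, so an autoscaler must hold back once the
// budget is spent. Hypothetical sketch; names are illustrative.
const MAX_DECREASES_PER_DAY = 4;

function canDecrease(numberOfDecreasesToday) {
  return numberOfDecreasesToday < MAX_DECREASES_PER_DAY;
}

// Example: a table that has already decreased 4 times today must wait
// canDecrease(3) -> true, canDecrease(4) -> false
```

This is why an idle table may legitimately sit at a higher provisioned value for hours: the decreases are being rationed, not skipped by mistake.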
Yes, you must be right; the latency should have a large impact on the total processing time.
The longer duration must be because I was testing it on a different network than the one I normally test from.
Actually, the stats logged out at the end should let you know where all the time is being spent. I've just made a check-in which improves the logging.
I was attempting to use the code before your latest updates and was having the same issue with it not scaling up. I updated to the latest and it is working now. BTW, I think your work on this is awesome; it's the coolest Lambda for DynamoDB I have seen so far.
I added a parameter to config.env for filtering table names based on a string, such that a particular Lambda will only operate on tables that match the string. It is pretty basic, and the first JavaScript I have ever written, but if it is a feature you are interested in, I can put it into a public fork for you to see.
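A substring-based table filter like the one described might look roughly like this. This is a sketch under assumptions: the function name and the idea of an empty filter meaning "all tables" are made up for illustration and are not the actual fork's code.

```javascript
// Hypothetical sketch of filtering DynamoDB table names by a substring
// taken from configuration. An empty/missing filter matches everything,
// so the default behavior of operating on all tables is preserved.
function filterTableNames(tableNames, filter) {
  if (!filter) {
    return tableNames;
  }
  return tableNames.filter(name => name.indexOf(filter) !== -1);
}

// Example usage: one Lambda per environment prefix
const tables = filterTableNames(
  ['prod-users', 'prod-orders', 'staging-users'], 'prod-');
// tables is ['prod-users', 'prod-orders']
```

Scoping each Lambda to a name prefix like this lets several deployments of the autoscaler coexist in one account without fighting over the same tables.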
As just mentioned in another ticket, I'm refactoring the code to make these potential changes a little more obvious. I think commented-out options are what I'll go for. Please hold on for my next commit, and then let's see if your changes can fit in. Thanks...
This should be resolved now; closing.