Comments (10)
I created a pull request to address the above: #16583
---
Today the ASG terminated an instance whose lifecycle hook notification never arrived, even though there were no issues with the instance when it joined the Warmpool.
The nodeup code added this line in the version where we started having problems:
```go
loader.Builders = append(loader.Builders, &networking.AmazonVPCRoutedENIBuilder{NodeupModelContext: modelContext})
```
And that step makes the change to set `MACAddressPolicy=none` and to run `systemctl restart systemd-networkd`.
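To make those two steps concrete, here is a minimal sketch of what they amount to (the drop-in file path is a guess, not necessarily the file kops actually writes, and the real builder code is more involved):

```go
// Illustrative sketch of the two steps described above: pin
// MACAddressPolicy=none via a systemd .link file, then restart
// systemd-networkd. The path below is hypothetical.
package main

import (
	"os"
	"os/exec"
)

func main() {
	link := "[Match]\nOriginalName=*\n\n[Link]\nMACAddressPolicy=none\n"
	if err := os.WriteFile("/etc/systemd/network/99-default.link", []byte(link), 0o644); err != nil {
		panic(err)
	}
	// This is the restart that momentarily takes networking down.
	if err := exec.Command("systemctl", "restart", "systemd-networkd").Run(); err != nil {
		panic(err)
	}
}
```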
Is there a chance this network restart sometimes happens at a bad time and interferes with the ASG lifecycle hook notification?
Are these `loader.Builders` run in order, or in parallel? The `networking.AmazonVPCRoutedENIBuilder` is called towards the end of the list, a long time after `model.HookBuilder` for example...
---
Based on your PR, I think you've figured this out, but...
> Are these `loader.Builders` run in order, or in parallel? Since the `networking.AmazonVPCRoutedENIBuilder` is called towards the end of the list and a long time after `model.HookBuilder` for example...
The builders are run in order, but they set up a list of Tasks which run in parallel. The Tasks don't run fully parallelized; rather, they run in "waves": the first wave has all the tasks that have no dependencies, and when all of those complete we move on to the next wave, which comprises the tasks that are newly unblocked.
Dependencies are either because we see that an object has a reference to a Task (and there's some "resolving" that happens based on the Task ID so we don't need to pass around Tasks everywhere), or because there are explicit Dependencies.
(We could change it to run each task as soon as possible, "waves" are just how it works today. I don't think it will make a huge difference in execution time, and it might be a little less deterministic i.e. more bugs, so that's why we haven't prioritized it)
At least that's my recollection :-)
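To make the "waves" model concrete, here is a minimal sketch of that style of scheduler (illustrative only, not kops' actual executor; the Task type and names are invented):

```go
// Wave-based scheduling sketch: each wave runs, in parallel, every task
// whose dependencies have all completed; the next wave starts only after
// the whole previous wave finishes.
package main

import (
	"fmt"
	"sync"
)

type Task struct {
	Name string
	Deps []string
}

func runWaves(tasks []Task) {
	done := map[string]bool{}
	for len(done) < len(tasks) {
		// Collect the tasks that are newly unblocked.
		var wave []Task
		for _, t := range tasks {
			if done[t.Name] {
				continue
			}
			ready := true
			for _, d := range t.Deps {
				if !done[d] {
					ready = false
					break
				}
			}
			if ready {
				wave = append(wave, t)
			}
		}
		if len(wave) == 0 {
			panic("dependency cycle")
		}
		// Run the whole wave in parallel, then wait for all of it.
		var wg sync.WaitGroup
		for _, t := range wave {
			wg.Add(1)
			go func(t Task) {
				defer wg.Done()
				fmt.Println("running", t.Name)
			}(t)
		}
		wg.Wait()
		for _, t := range wave {
			done[t.Name] = true
		}
	}
}

func main() {
	runWaves([]Task{
		{Name: "files"},
		{Name: "network", Deps: []string{"files"}},
		{Name: "kubelet", Deps: []string{"files", "network"}},
	})
}
```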
---
Looking at how the lifecycle hook should be called from nodeup:
kops/upup/pkg/fi/nodeup/command.go, lines 375 to 382 at 0e70556
That is actually called after all the nodeup tasks. So if the node isn't delivering that hook, then I don't think it's about task execution.
A few things to check:
- Is the node getting `enableLifecycleHook: true` in the nodeup config (e.g. from one of the tests)?
- Is nodeup printing `klog.Info("No ASG lifecycle hook found")` (kops/upup/pkg/fi/nodeup/command.go, line 434 at 0e70556)?
- Did it print `Lifecycle action completed`? If so... very curious, because that suggests that everything worked.
Based on the CloudTrail events, I'm assuming that the `No active Lifecycle Action found with instance ID i-xxxxxxxxxxxxxxxxx` error is coming from the CompleteLifecycleAction call here, but that's surprising, I think, given we just fetched it. Maybe the key word is "active" there, like maybe there's a delay in the lifecycle action becoming ready?
One thing though, you mention model.HookBuilder - IIUC that is for custom hooks, not for the built-in warmpool hook I've linked above. Are you using a custom hook for the lifecycle action?
On the ENI task, I did a quick spot check and the error handling / retry logic looked correct, so I'd be surprised if that was interfering with the CompleteLifecycleAction call. But ... something is plainly going wrong.
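For readers following along, the rough shape of that fetch-then-complete flow looks like this (a hedged sketch against the aws-sdk-go v1 autoscaling API, not the actual nodeup code; the ASG name and instance ID are invented):

```go
// Sketch of completing an ASG lifecycle action from an instance.
// Illustrative only; names are placeholders.
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/autoscaling"
)

func main() {
	svc := autoscaling.New(session.Must(session.NewSession()))

	// Find the hook on the ASG (this is the "we just fetched it" step).
	hooks, err := svc.DescribeLifecycleHooks(&autoscaling.DescribeLifecycleHooksInput{
		AutoScalingGroupName: aws.String("my-asg"),
	})
	if err != nil {
		log.Fatal(err)
	}
	if len(hooks.LifecycleHooks) == 0 {
		log.Print("No ASG lifecycle hook found")
		return
	}

	// Tell the ASG the instance is ready; this is the call that can
	// fail with "No active Lifecycle Action found ...".
	_, err = svc.CompleteLifecycleAction(&autoscaling.CompleteLifecycleActionInput{
		AutoScalingGroupName:  aws.String("my-asg"),
		LifecycleHookName:     hooks.LifecycleHooks[0].LifecycleHookName,
		InstanceId:            aws.String("i-0123456789abcdef0"),
		LifecycleActionResult: aws.String("CONTINUE"),
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Print("Lifecycle action completed")
}
```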
---
Hi @justinsb
Thanks for looking into this.
This issue has been overloaded a little bit in that it covers two problems, and perhaps you're mixing the two in your responses above?
The first (and the main thing this issue was about), was the instances in the ASG starting before the ASG Lifecycle Hook was in place.
This occurs as soon as the ASG is created, and it happens both to the Warmpool instances and to the instances that skip the Warmpool and join the ASG to be put directly into service.
It's these instances that encounter the `No active Lifecycle Action found with instance ID i-xxxxxxxxxxxxxxxxx` error.
In the case of the instances that skipped the Warmpool and were put into service at this time, it isn't an issue, since the ASG won't kill them, because the lifecycle hook wasn't in place at the time they joined.
But when the Warmpool instances go into service at a later time due to a scale-up or node replacement, they don't attempt to perform the Lifecycle notification, and so will be killed by the ASG causing an outage.
The Pull Request I raised fixes the problem above, but only for the instances that join the Warmpool. They no longer encounter the `No active Lifecycle Action found with instance ID i-xxxxxxxxxxxxxxxxx` error.
However the instances that skip the Warmpool to go into service immediately will still throw these messages, but it's not an issue and they are safe from being reaped by the ASG because they joined before the hook was in effect.
The above problem has nothing to do with the `loader.Builders` as far as I can tell; it occurs purely because the instances started before the lifecycle hook was active on the ASG.
With kOps 1.28.4 and earlier (where we were using Ubuntu 20.04) we never saw the above issue.
It all started at kOps 1.28.5 (where we also switched to Ubuntu 22.04) and persists in kOps 1.29.0.
As to why we had never encountered it before I cannot say.
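One hypothetical way a node could defend against that startup race (a sketch only, not what the PR actually does) is to poll until the ASG's lifecycle hook exists before relying on the hook workflow:

```go
// Hypothetical mitigation sketch for the startup race described above:
// wait until the lifecycle hook appears on the ASG. Names are invented.
package main

import (
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/autoscaling"
)

func waitForHook(svc *autoscaling.AutoScaling, asg string, timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := svc.DescribeLifecycleHooks(&autoscaling.DescribeLifecycleHooksInput{
			AutoScalingGroupName: aws.String(asg),
		})
		if err == nil && len(out.LifecycleHooks) > 0 {
			return true
		}
		time.Sleep(10 * time.Second)
	}
	return false
}

func main() {
	svc := autoscaling.New(session.Must(session.NewSession()))
	if !waitForHook(svc, "my-asg", 5*time.Minute) {
		log.Print("no lifecycle hook appeared; proceeding without it")
	}
}
```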
The 2nd issue is what I posted about on May 31, and this is the one where I was wondering if the `loader.Builders` came into play.
In this case these instances joined the Warmpool a long time after the ASG and its Lifecycle hook was established and had no errors at all.
CloudTrail shows no issues with the instances as they joined the Warmpool, and shows that they properly performed the Lifecycle hook notification at the time they joined the Warmpool.
But later when they leave the Warmpool to go into service, there are no logs in CloudTrail showing them attempting to perform the Lifecycle hook notification (successfully or not); and after 10 mins the ASG kills them for having not completed the Lifecycle notification, causing an outage.
So either they didn't attempt the Lifecycle notification at all (which seems unlikely) OR they did, but it got lost.
If it got lost, I was wondering if it was because it happened at the moment the task created by the `loader.Builder` executed the `systemctl restart systemd-networkd` command, causing networking to go offline for a moment.
Note that what I failed to mention at the time I wrote about this 2nd problem on May 31 was that it was encountered on a Kubernetes 1.29.5 cluster that was created with kOps 1.29.0.
But this problem doesn't happen often, and we didn't run the Kubernetes 1.28.9 cluster built with kOps 1.28.5 for very long (about 2 weeks), so for all I know the problem existed there as well, but not necessarily.
Still talking about the 2nd issue but addressing your questions...
> One thing though, you mention `model.HookBuilder` - IIUC that is for custom hooks, not for the built-in warmpool hook I've linked above. Are you using a custom hook for the lifecycle action?
No, that was me making a bad assumption that it might have had something to do with the creation of the lifecycle hooks.
The point I was trying to make is that there are all sorts of tasks being created there, where the `networking.AmazonVPCRoutedENIBuilder`, the one that restarts networking, is towards the end.
I was thinking this probably needed to be one of the first tasks performed, and anything that requires the use of the network should probably depend on it having run `systemctl restart systemd-networkd` before attempting anything network related.
However you're saying the lifecycle hook is called after all the node tasks are completed, so I guess my theory above about the network restart possibly interrupting things can't be right...
> A few things to check
Unfortunately, with this problem any node-specific configuration and the logs produced by nodeup or captured by the systemd journal are lost, since we aren't aware there is a problem with the instance until the ASG has already terminated it.
The vast majority of the instances do not encounter the problem at all, and it has only happened a couple of times.
We've since switched from deploying our own node-termination-handler as a DaemonSet in IMDS mode (to catch spot interruptions) to having kOps deploy it in queue-processor mode, so that it also handles the case where the ASG terminates instances that fail lifecycle hooks.
This makes it less obvious whether we are still having nodes terminated due to this problem, since we'll no longer have outages caused by a node suddenly going away while still serving requests; I'd have to go back to CloudTrail again to try to find out how frequently this is happening.
> Is the node getting `enableLifecycleHook: true` in the nodeup config
It should be getting the same configuration as all the other instances on the same ASG that didn't encounter the issue.
But with the instance having been terminated I have no way of checking.
> Is nodeup printing `klog.Info("No ASG lifecycle hook found")`
Due to the ASG terminating the instances, I have no way of knowing, as far as I can tell.
With my theory about `systemctl restart systemd-networkd` being incorrect (because the lifecycle hook call is made after all tasks have completed their run), I don't know why, in these few cases, the instances appear not to have performed the lifecycle hook call, yet still managed to be put into service in the cluster for ~10 minutes before the ASG terminated them for failing to perform it.
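If a transient network blip were ever in play, one defensive pattern (a hedged sketch, not something nodeup is confirmed to do or need) is to retry the completion call with backoff so a single failed attempt isn't lost:

```go
// Hedged sketch: retry CompleteLifecycleAction with simple backoff so a
// transient network outage (e.g. a systemd-networkd restart) doesn't
// drop the notification. Not kops' actual code; names are illustrative.
package main

import (
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/autoscaling"
)

func completeWithRetry(svc *autoscaling.AutoScaling, asg, hook, instanceID string) error {
	var err error
	for attempt := 0; attempt < 5; attempt++ {
		_, err = svc.CompleteLifecycleAction(&autoscaling.CompleteLifecycleActionInput{
			AutoScalingGroupName:  aws.String(asg),
			LifecycleHookName:     aws.String(hook),
			InstanceId:            aws.String(instanceID),
			LifecycleActionResult: aws.String("CONTINUE"),
		})
		if err == nil {
			return nil
		}
		log.Printf("CompleteLifecycleAction attempt %d failed: %v", attempt+1, err)
		time.Sleep(time.Duration(attempt+1) * 10 * time.Second)
	}
	return err
}

func main() {
	svc := autoscaling.New(session.Must(session.NewSession()))
	if err := completeWithRetry(svc, "my-asg", "my-warmpool-hook", "i-0123456789abcdef0"); err != nil {
		log.Fatal(err)
	}
}
```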
---
Hi @justinsb
This issue was closed due to the 1st issue being addressed.
Do I need to open a new issue covering the 2nd one, where random hosts that joined the Warmpool much later in the ASG's life had no problem then, but were later terminated by the ASG for not having called the lifecycle hook?
---
Hopefully this will do it:
/reopen
Just to confirm, you are using the AWS VPC CNI? The systemd-networkd change does look like a plausible related change, but it should only be in play if you're using the AWS VPC CNI
---
@justinsb: Reopened this issue.
In response to this:
> Hopefully this will do it:
> /reopen
> Just to confirm, you are using the AWS VPC CNI? The systemd-networkd change does look like a plausible related change, but it should only be in play if you're using the AWS VPC CNI
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
---
Yes. We are using the AWS VPC CNI.
---
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale