
eks-cluster-upgrade's People

Contributors

amazon-auto, balajiv603, bryantbiggs, dependabot[bot], hkavalip, jagarapu-roshini, kcaws, kushaggarwal, mbeacom, nayanen, nitishsaik, quixoticmonk, sanusatyadarshi

eks-cluster-upgrade's Issues

Bug: Deprecated API usage is incorrect and misleading

Expected Behaviour

Running eksupgrade should provide accurate results

Current Behaviour

Currently, based on the logic used by eksupgrade, the results of checking for deprecated APIs will hardly ever be accurate - they will vary based on the version of eksupgrade and its Python client.

More details and links to the upstream Kubernetes project can be found here: https://github.com/clowdhaus/r8s#r8s

Code snippet

N/A

Possible Solution

For now, it should simply be removed in order to ensure users are not misled into believing their cluster does not contain any resources with deprecated APIs.

Steps to Reproduce

N/A

Amazon EKS one click upgrade version

latest

Python runtime version

3.8

Packaging format used

Git clone, PyPi

Debugging logs

N/A

OneClickUpdate Script Stuck

I tried to update my EKS cluster using the given script. Below are the steps that I did on my Cloud9 Instance.


git clone https://github.com/aws-samples/amazon-eks-one-click-cluster-upgrade.git
cd amazon-eks-one-click-cluster-upgrade/
python installer.py 
python eks_updater.py <name-of-cluster> <Updated Version> <region>

I can see that the Control Plane is updated to version 1.21. However, the eks_updater script is still running. I initiated the process some 2 hours ago, and the Control Plane is now running the specified version.

I could see that the script is stuck at the add-on update stage. Below is the last message in the CloudWatch logs:

The Addons Found = ['aws-load-balancer-controller-6db9694d6b-58gcd', 'aws-load-balancer-controller-6db9694d6b-pmpsb', 'aws-node-5t9x9', 'aws-node-rlkdt', 'aws-node-vr4df', 'coredns-765545c8b8-7zhsn', 'coredns-765545c8b8-jlr7w', 'kube-proxy-b9gtd', 'kube-proxy-kbjvf', 'kube-proxy-x4gzt', 'metrics-server-9f459d97b-wqrcn']

Here is what I see on my command line, from where I ran the Python updater script:

The Cluster eksworkshop-eksctl is Still Updating to 1.21 ..... 00:11:54.12
The eksworkshop-eksctl Updated to 1.21
The Time Taken For the Cluster to Upgrade  00:12:20.41
 The add-ons Update has been initiated.... 
The Addons Found =  aws-load-balancer-controller-6db9694d6b-58gcd aws-load-balancer-controller-6db9694d6b-pmpsb aws-node-5t9x9 aws-node-rlkdt aws-node-vr4df coredns-765545c8b8-7zhsn coredns-765545c8b8-jlr7w kube-proxy-b9gtd kube-proxy-kbjvf kube-proxy-x4gzt metrics-server-9f459d97b-wqrcn
aws-node-5t9x9 Current Version =  v1.7.5-eksbuild.1 Updating To =  v1.9.3
Total Pods With aws-node = 3
old vpc cni Pod aws-node-5t9x9   new vpc cni aws-node-xq957
aws-node-rlkdt Current Version =  v1.7.5-eksbuild.1 Updating To =  v1.9.3
Total Pods With aws-node = 3
old vpc cni Pod aws-node-rlkdt   new vpc cni aws-node-xq957
aws-node-vr4df Current Version =  v1.7.5-eksbuild.1 Updating To =  v1.9.3
Total Pods With aws-node = 3
old vpc cni Pod aws-node-vr4df   new vpc cni aws-node-5v8dr
coredns-765545c8b8-7zhsn Current Version =  v1.8.3-eksbuild.1 Updating To =  v1.8.4-eksbuild.1
Total Pods With kube-dns = 3
old CoreDNs Pod coredns-765545c8b8-7zhsn         new CoreDnsPod coredns-59d47d99dc-xxvs5
coredns-765545c8b8-jlr7w Current Version =  v1.8.3-eksbuild.1 Updating To =  v1.8.4-eksbuild.1
Exception in thread Thread-4:
Traceback (most recent call last):
  File "/usr/lib64/python3.7/threading.py", line 926, in _bootstrap_inner
    self.run()
  File "/home/ec2-user/environment/K8s-Manifest/EKS-Playground/UpdateCluster/amazon-eks-one-click-cluster-upgrade/eksupdate/src/k8s_client.py", line 26, in run
    x=addon_status(cluster_name=cluster_name,new_pod_name=new_pod_name,podName=podName,regionName=regionName,nameSpace=nameSpace)
  File "/home/ec2-user/environment/K8s-Manifest/EKS-Playground/UpdateCluster/amazon-eks-one-click-cluster-upgrade/eksupdate/src/k8s_client.py", line 219, in addon_status
    if response.status.container_statuses[0].ready and response.status.container_statuses[0].started:
TypeError: 'NoneType' object is not subscriptable

Total Pods With kube-dns = 3
old CoreDNs Pod coredns-765545c8b8-jlr7w         new CoreDnsPod coredns-59d47d99dc-xxvs5
kube-proxy-b9gtd Current Version =  v1.20.7-eksbuild.1 Updating To =  v1.21.2-eksbuild.2
Exception in thread Thread-5:
Traceback (most recent call last):
  File "/usr/lib64/python3.7/threading.py", line 926, in _bootstrap_inner
    self.run()
  File "/home/ec2-user/environment/K8s-Manifest/EKS-Playground/UpdateCluster/amazon-eks-one-click-cluster-upgrade/eksupdate/src/k8s_client.py", line 26, in run
    x=addon_status(cluster_name=cluster_name,new_pod_name=new_pod_name,podName=podName,regionName=regionName,nameSpace=nameSpace)
  File "/home/ec2-user/environment/K8s-Manifest/EKS-Playground/UpdateCluster/amazon-eks-one-click-cluster-upgrade/eksupdate/src/k8s_client.py", line 219, in addon_status
    if response.status.container_statuses[0].ready and response.status.container_statuses[0].started:
TypeError: 'NoneType' object is not subscriptable

Total Pods With kube-proxy = 3
old KubProxy Pod kube-proxy-b9gtd        new KubeProxyPod kube-proxy-jh955
kube-proxy-kbjvf Current Version =  v1.20.7-eksbuild.1 Updating To =  v1.21.2-eksbuild.2
Total Pods With kube-proxy = 3
old KubProxy Pod kube-proxy-kbjvf        new KubeProxyPod kube-proxy-tkd98
kube-proxy-x4gzt Current Version =  v1.20.7-eksbuild.1 Updating To =  v1.21.2-eksbuild.2
Total Pods With kube-proxy = 3
old KubProxy Pod kube-proxy-x4gzt        new KubeProxyPod kube-proxy-tkd98

I took the strace and lsof output of this process, and it seems like it is waiting on the following:

Admin:environment $ ps -eFL | grep python
ec2-user  3605 25204  3605  0    1 29855   920   1 04:19 pts/4    00:00:00 grep --color=auto python
root     17476 17472 17476  0    1 91216 29912   1 03:37 ?        00:00:00 /usr/bin/python -tt /usr/sbin/yum-cron
ec2-user 22850 26543 22850  0   19 385452 136508 0 02:37 pts/3    00:00:05 python3 eks_updater.py eksworkshop-eksctl 1.21 us-west-2
ec2-user 22850 26543 28775  0   19 385452 136508 1 02:50 pts/3    00:00:09 python3 eks_updater.py eksworkshop-eksctl 1.21 us-west-2
ec2-user 22850 26543 28776  0   19 385452 136508 1 02:50 pts/3    00:00:00 python3 eks_updater.py eksworkshop-eksctl 1.21 us-west-2
ec2-user 22850 26543 28777  0   19 385452 136508 1 02:50 pts/3    00:00:19 python3 eks_updater.py eksworkshop-eksctl 1.21 us-west-2
ec2-user 22850 26543 28780  0   19 385452 136508 0 02:50 pts/3    00:00:00 python3 eks_updater.py eksworkshop-eksctl 1.21 us-west-2
ec2-user 22850 26543 28781  0   19 385452 136508 1 02:50 pts/3    00:00:00 python3 eks_updater.py eksworkshop-eksctl 1.21 us-west-2
ec2-user 22850 26543 28782  0   19 385452 136508 1 02:50 pts/3    00:00:00 python3 eks_updater.py eksworkshop-eksctl 1.21 us-west-2
ec2-user 22850 26543 28783  0   19 385452 136508 1 02:50 pts/3    00:00:00 python3 eks_updater.py eksworkshop-eksctl 1.21 us-west-2
ec2-user 22850 26543 28784  0   19 385452 136508 1 02:50 pts/3    00:00:00 python3 eks_updater.py eksworkshop-eksctl 1.21 us-west-2
ec2-user 22850 26543 28785  0   19 385452 136508 1 02:50 pts/3    00:00:00 python3 eks_updater.py eksworkshop-eksctl 1.21 us-west-2
ec2-user 22850 26543 28786  0   19 385452 136508 1 02:50 pts/3    00:00:00 python3 eks_updater.py eksworkshop-eksctl 1.21 us-west-2
ec2-user 22850 26543 28787  0   19 385452 136508 1 02:50 pts/3    00:00:00 python3 eks_updater.py eksworkshop-eksctl 1.21 us-west-2
ec2-user 22850 26543 28788  0   19 385452 136508 1 02:50 pts/3    00:00:00 python3 eks_updater.py eksworkshop-eksctl 1.21 us-west-2
ec2-user 22850 26543 28789  0   19 385452 136508 1 02:50 pts/3    00:00:00 python3 eks_updater.py eksworkshop-eksctl 1.21 us-west-2
ec2-user 22850 26543 28790  0   19 385452 136508 1 02:50 pts/3    00:00:00 python3 eks_updater.py eksworkshop-eksctl 1.21 us-west-2
ec2-user 22850 26543 28791  0   19 385452 136508 1 02:50 pts/3    00:00:00 python3 eks_updater.py eksworkshop-eksctl 1.21 us-west-2
ec2-user 22850 26543 28792  0   19 385452 136508 1 02:50 pts/3    00:00:00 python3 eks_updater.py eksworkshop-eksctl 1.21 us-west-2
ec2-user 22850 26543 28793  0   19 385452 136508 1 02:50 pts/3    00:00:00 python3 eks_updater.py eksworkshop-eksctl 1.21 us-west-2
ec2-user 22850 26543 28794  0   19 385452 136508 1 02:50 pts/3    00:00:00 python3 eks_updater.py eksworkshop-eksctl 1.21 us-west-2
Admin:environment $ 
Admin:environment $ lsof -p 26543
COMMAND   PID     USER   FD   TYPE DEVICE  SIZE/OFF     NODE NAME
bash    26543 ec2-user  cwd    DIR  259,1       190   913808 /home/ec2-user/environment/K8s-Manifest/EKS-Playground/UpdateCluster/amazon-eks-one-click-cluster-upgrade
bash    26543 ec2-user  rtd    DIR  259,1       257       96 /
bash    26543 ec2-user  txt    REG  259,1    935976  4195692 /usr/bin/bash
bash    26543 ec2-user  mem    REG  259,1     71160  8410842 /usr/lib64/libnss_files-2.26.so
bash    26543 ec2-user  mem    REG  259,1     37032  8789607 /usr/lib64/libnss_sss.so.2
bash    26543 ec2-user  mem    REG  259,1 113049440 13072173 /usr/lib/locale/locale-archive
bash    26543 ec2-user  mem    REG  259,1   2021864  8410826 /usr/lib64/libc-2.26.so
bash    26543 ec2-user  mem    REG  259,1     19208  8410830 /usr/lib64/libdl-2.26.so
bash    26543 ec2-user  mem    REG  259,1    179264  8410886 /usr/lib64/libtinfo.so.6.0
bash    26543 ec2-user  mem    REG  259,1    174280  8410819 /usr/lib64/ld-2.26.so
bash    26543 ec2-user  mem    REG  259,1     26370     1393 /usr/lib64/gconv/gconv-modules.cache
bash    26543 ec2-user    0u   CHR  136,3       0t0        6 /dev/pts/3
bash    26543 ec2-user    1u   CHR  136,3       0t0        6 /dev/pts/3
bash    26543 ec2-user    2u   CHR  136,3       0t0        6 /dev/pts/3
bash    26543 ec2-user    6u   CHR  136,3       0t0        6 /dev/pts/3
bash    26543 ec2-user  255u   CHR  136,3       0t0        6 /dev/pts/3
Admin:environment $ 
Admin:environment $ lsof -p 22850
COMMAND   PID     USER   FD   TYPE   DEVICE  SIZE/OFF     NODE NAME
python3 22850 ec2-user  cwd    DIR    259,1       190   913808 /home/ec2-user/environment/K8s-Manifest/EKS-Playground/UpdateCluster/amazon-eks-one-click-cluster-upgrade
python3 22850 ec2-user  rtd    DIR    259,1       257       96 /
python3 22850 ec2-user  txt    REG    259,1      7048  4249794 /usr/bin/python3.7
python3 22850 ec2-user  mem    REG    259,1     88640  8410801 /usr/lib64/libgcc_s-7-20180712.so.1
python3 22850 ec2-user  mem    REG    259,1     17008 12709186 /usr/lib64/python3.7/lib-dynload/_multiprocessing.cpython-37m-x86_64-linux-gnu.so
python3 22850 ec2-user  mem    REG    259,1     26976  8410840 /usr/lib64/libnss_dns-2.26.so
python3 22850 ec2-user  mem    REG    259,1     71160  8410842 /usr/lib64/libnss_files-2.26.so
python3 22850 ec2-user  mem    REG    259,1   1832344  8711818 /home/ec2-user/.local/lib/python3.7/site-packages/yaml/_yaml.cpython-37m-x86_64-linux-gnu.so
python3 22850 ec2-user  mem    REG    259,1     43504    95974 /usr/lib64/python3.7/site-packages/simplejson/_speedups.cpython-37m-x86_64-linux-gnu.so
python3 22850 ec2-user  mem    REG    259,1     49984 12709185 /usr/lib64/python3.7/lib-dynload/_multibytecodec.cpython-37m-x86_64-linux-gnu.so
python3 22850 ec2-user  mem    REG    259,1   1072304 12709220 /usr/lib64/python3.7/lib-dynload/unicodedata.cpython-37m-x86_64-linux-gnu.so
python3 22850 ec2-user  mem    REG    259,1     66640 12709202 /usr/lib64/python3.7/lib-dynload/array.cpython-37m-x86_64-linux-gnu.so
python3 22850 ec2-user  mem    REG    259,1    128248 12709188 /usr/lib64/python3.7/lib-dynload/_pickle.cpython-37m-x86_64-linux-gnu.so
python3 22850 ec2-user  mem    REG    259,1     25944 12709219 /usr/lib64/python3.7/lib-dynload/termios.cpython-37m-x86_64-linux-gnu.so
python3 22850 ec2-user  mem    REG    259,1     20056  8411057 /usr/lib64/libuuid.so.1.3.0
python3 22850 ec2-user  mem    REG    259,1      7216 12709201 /usr/lib64/python3.7/lib-dynload/_uuid.cpython-37m-x86_64-linux-gnu.so
python3 22850 ec2-user  mem    REG    259,1    320048 12668088 /usr/lib64/python3.7/lib-dynload/_decimal.cpython-37m-x86_64-linux-gnu.so
python3 22850 ec2-user  mem    REG    259,1     13208 12709207 /usr/lib64/python3.7/lib-dynload/grp.cpython-37m-x86_64-linux-gnu.so
python3 22850 ec2-user  mem    REG    259,1    153248  8411025 /usr/lib64/liblzma.so.5.2.2
python3 22850 ec2-user  mem    REG    259,1     39024 12668095 /usr/lib64/python3.7/lib-dynload/_lzma.cpython-37m-x86_64-linux-gnu.so
python3 22850 ec2-user  mem    REG    259,1     68128  8411050 /usr/lib64/libbz2.so.1.0.6
python3 22850 ec2-user  mem    REG    259,1     23080 12668073 /usr/lib64/python3.7/lib-dynload/_bz2.cpython-37m-x86_64-linux-gnu.so
python3 22850 ec2-user  mem    REG    259,1     63360 12709213 /usr/lib64/python3.7/lib-dynload/pyexpat.cpython-37m-x86_64-linux-gnu.so
python3 22850 ec2-user  mem    REG    259,1    197960  8461231 /usr/lib64/libexpat.so.1.6.0
python3 22850 ec2-user  mem    REG    259,1     72512 12668089 /usr/lib64/python3.7/lib-dynload/_elementtree.cpython-37m-x86_64-linux-gnu.so
python3 22850 ec2-user  mem    REG    259,1     38712 12709222 /usr/lib64/python3.7/lib-dynload/zlib.cpython-37m-x86_64-linux-gnu.so
python3 22850 ec2-user  mem    REG    259,1     18016 12709190 /usr/lib64/python3.7/lib-dynload/_queue.cpython-37m-x86_64-linux-gnu.so
python3 22850 ec2-user  mem    REG    259,1    131696 12709198 /usr/lib64/python3.7/lib-dynload/_ssl.cpython-37m-x86_64-linux-gnu.so
python3 22850 ec2-user  mem    REG    259,1     30992 12709204 /usr/lib64/python3.7/lib-dynload/binascii.cpython-37m-x86_64-linux-gnu.so
python3 22850 ec2-user  mem    REG    259,1    120040 12668086 /usr/lib64/python3.7/lib-dynload/_datetime.cpython-37m-x86_64-linux-gnu.so
python3 22850 ec2-user  mem    REG    259,1      7304 12709187 /usr/lib64/python3.7/lib-dynload/_opcode.cpython-37m-x86_64-linux-gnu.so
python3 22850 ec2-user  mem    REG    259,1     76176 12668093 /usr/lib64/python3.7/lib-dynload/_json.cpython-37m-x86_64-linux-gnu.so
python3 22850 ec2-user  mem    REG    259,1     20552 12709191 /usr/lib64/python3.7/lib-dynload/_random.cpython-37m-x86_64-linux-gnu.so
python3 22850 ec2-user  mem    REG    259,1     13432 12668071 /usr/lib64/python3.7/lib-dynload/_bisect.cpython-37m-x86_64-linux-gnu.so
python3 22850 ec2-user  mem    REG    259,1     98352 12709194 /usr/lib64/python3.7/lib-dynload/_sha3.cpython-37m-x86_64-linux-gnu.so
python3 22850 ec2-user  mem    REG    259,1     47504 12668072 /usr/lib64/python3.7/lib-dynload/_blake2.cpython-37m-x86_64-linux-gnu.so
python3 22850 ec2-user  mem    REG    259,1    410400  8410985 /usr/lib64/libpcre.so.1.2.0
python3 22850 ec2-user  mem    REG    259,1    155680  8410984 /usr/lib64/libselinux.so.1
python3 22850 ec2-user  mem    REG    259,1     94200  8410846 /usr/lib64/libresolv-2.26.so
python3 22850 ec2-user  mem    REG    259,1     15616  8461240 /usr/lib64/libkeyutils.so.1.5
python3 22850 ec2-user  mem    REG    259,1     62880  8509861 /usr/lib64/libkrb5support.so.0.1
python3 22850 ec2-user  mem    REG    259,1     85984  8410997 /usr/lib64/libz.so.1.2.7
python3 22850 ec2-user  mem    REG    259,1    202472  8509853 /usr/lib64/libk5crypto.so.3.1
python3 22850 ec2-user  mem    REG    259,1     15768  8411005 /usr/lib64/libcom_err.so.2.1
python3 22850 ec2-user  mem    REG    259,1    947152  8509859 /usr/lib64/libkrb5.so.3.3
python3 22850 ec2-user  mem    REG    259,1    315672  8509849 /usr/lib64/libgssapi_krb5.so.2.2
python3 22850 ec2-user  mem    REG    259,1   2467296  8463015 /usr/lib64/libcrypto.so.1.0.2k
python3 22850 ec2-user  mem    REG    259,1    457928  8463017 /usr/lib64/libssl.so.1.0.2k
python3 22850 ec2-user  mem    REG    259,1     30936 12668091 /usr/lib64/python3.7/lib-dynload/_hashlib.cpython-37m-x86_64-linux-gnu.so
python3 22850 ec2-user  mem    REG    259,1     53160 12709199 /usr/lib64/python3.7/lib-dynload/_struct.cpython-37m-x86_64-linux-gnu.so
python3 22850 ec2-user  mem    REG    259,1    132176 12709196 /usr/lib64/python3.7/lib-dynload/_socket.cpython-37m-x86_64-linux-gnu.so
python3 22850 ec2-user  mem    REG    259,1     52536 12709208 /usr/lib64/python3.7/lib-dynload/math.cpython-37m-x86_64-linux-gnu.so
python3 22850 ec2-user  mem    REG    259,1     37600 12709216 /usr/lib64/python3.7/lib-dynload/select.cpython-37m-x86_64-linux-gnu.so
python3 22850 ec2-user  mem    REG    259,1     16680 12709189 /usr/lib64/python3.7/lib-dynload/_posixsubprocess.cpython-37m-x86_64-linux-gnu.so
python3 22850 ec2-user  mem    REG    259,1     22896 12668092 /usr/lib64/python3.7/lib-dynload/_heapq.cpython-37m-x86_64-linux-gnu.so
python3 22850 ec2-user  mem    REG    259,1 113049440 13072173 /usr/lib/locale/locale-archive
python3 22850 ec2-user  mem    REG    259,1   2021864  8410826 /usr/lib64/libc-2.26.so
python3 22850 ec2-user  mem    REG    259,1   1414728  8410832 /usr/lib64/libm-2.26.so
python3 22850 ec2-user  mem    REG    259,1     14304  8410852 /usr/lib64/libutil-2.26.so
python3 22850 ec2-user  mem    REG    259,1     19208  8410830 /usr/lib64/libdl-2.26.so
python3 22850 ec2-user  mem    REG    259,1    149416  8410844 /usr/lib64/libpthread-2.26.so
python3 22850 ec2-user  mem    REG    259,1     41032  8410998 /usr/lib64/libcrypt-2.26.so
python3 22850 ec2-user  mem    REG    259,1   3551352  8510815 /usr/lib64/libpython3.7m.so.1.0
python3 22850 ec2-user  mem    REG    259,1    174280  8410819 /usr/lib64/ld-2.26.so
python3 22850 ec2-user  mem    REG    259,1     26370     1393 /usr/lib64/gconv/gconv-modules.cache
python3 22850 ec2-user    0u   CHR    136,3       0t0        6 /dev/pts/3
python3 22850 ec2-user    1u   CHR    136,3       0t0        6 /dev/pts/3
python3 22850 ec2-user    2u   CHR    136,3       0t0        6 /dev/pts/3
python3 22850 ec2-user    3u  IPv4 30677781       0t0      TCP ip-192-168-78-154.us-west-2.compute.internal:38320->ip-192-168-137-239.us-west-2.compute.internal:https (CLOSE_WAIT)
python3 22850 ec2-user    4r   REG    259,1      5754 13110333 /home/ec2-user/environment/K8s-Manifest/EKS-Playground/UpdateCluster/amazon-eks-one-click-cluster-upgrade/eksupdate/src/S3Files/vpc-cni.yaml
python3 22850 ec2-user    5u  IPv4 30677789       0t0      TCP ip-192-168-78-154.us-west-2.compute.internal:54562->ip-192-168-129-144.us-west-2.compute.internal:https (CLOSE_WAIT)
python3 22850 ec2-user    6u   CHR    136,3       0t0        6 /dev/pts/3



Admin:environment $ strace -p 22850
strace: Process 22850 attached
futex(0x1878590, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 0, NULL, FUTEX_BITSET_MATCH_ANY


^Cstrace: Process 22850 detached
 <detached ...>

Admin:environment $ strace -p 26543                                                                                                
strace: Process 26543 attached
wait4(-1, 


^Cstrace: Process 26543 detached
 <detached ...>


Why is the update script stuck?
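
A note on the TypeError in the tracebacks above: pod.status.container_statuses is None while a pod is still Pending, which is exactly what makes the [0] subscript fail and kill the watcher thread. Below is a hedged sketch of a defensive readiness wait, assuming the kubernetes Python client and a kubeconfig already pointed at the cluster; it is illustrative, not the project's actual code.

import time
from kubernetes import client, config

def wait_for_pod_ready(pod_name: str, namespace: str, timeout_seconds: int = 300) -> bool:
    """Poll a pod until its first container reports ready, guarding against missing statuses."""
    config.load_kube_config()  # assumes kubeconfig already targets the cluster
    core_v1 = client.CoreV1Api()
    deadline = time.time() + timeout_seconds
    while time.time() < deadline:
        pod = core_v1.read_namespaced_pod(name=pod_name, namespace=namespace)
        statuses = pod.status.container_statuses  # None until containers are created
        if statuses and statuses[0].ready and statuses[0].started:
            return True
        time.sleep(5)
    return False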

Feature: Allow users to opt in to selecting the latest addon version when upgrading

Discussed in #67

Originally posted by onabison February 23, 2023
Hi,

I have been trying to use the provided AWS EKS Upgrade tool:
https://github.com/aws-samples/eks-cluster-upgrade

While using it, I have noticed a few things and I'm not sure why they are occurring:

  1. The add-on seems to be downgrading rather than upgrading; here's the output:
    INFO:eksupgrade.src.k8s_client:coredns-79fffc5cf7-rtpnt Current Version = v1.8.7-eksbuild.3 Updating to = v1.8.7-eksbuild.2

  2. I am curious how the tool determines the latest available version for each add-on. There seems to be a much newer version for each of the add-ons I have on my cluster, yet it manages to downgrade. Even when it does find a newer version, it only makes a small (usually minor-version) upgrade. Here's an image that shows the current version the upgrade tool selected during the upgrade process versus the latest version available for that particular add-on.

image

Feature request: Update upgrade execution logic

Use case

The process for updating an EKS managed nodegroup is defined here https://docs.aws.amazon.com/eks/latest/userguide/managed-node-update-behavior.html

In addition, there are 4 upgrade scenarios for the compute constructs available within a data plane:

  1. Update Fargate profiles via replacement once the control plane has been updated (eksupgrade calls the eviction API on behalf of the user requesting the upgrade)
  2. DONE: #78 - 0.7.0 Update EKS managed nodegroups using the built-in functionality and controlled via the update config (eksupgrade simply updates the nodegroup to the desired settings and EKS managed nodegroup handles the update process)
  3. Update self-managed nodegroups using the graceful termination of the node-termination-handler in coordination with the instance refresh functionality of an autoscaling group. This process should be used if the node-termination-handler is deployed within the cluster, the autoscaling group is monitored by the NTH for lifecycle events, and an instance refresh configuration is present on the launch template used by the autoscaling group (see the sketch after this list)
  4. Update self-managed nodegroups using logic similar to that of the EKS managed node group (like this)
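
As a rough illustration of scenario 3, here is a minimal boto3 sketch that triggers a rolling instance refresh on an autoscaling group; the preference values are placeholders and this is not eksupgrade's implementation.

import boto3

def refresh_self_managed_nodegroup(asg_name: str, region: str) -> str:
    """Start a rolling instance refresh; the NTH drains nodes gracefully as instances are replaced."""
    autoscaling = boto3.client("autoscaling", region_name=region)
    response = autoscaling.start_instance_refresh(
        AutoScalingGroupName=asg_name,
        Strategy="Rolling",
        Preferences={"MinHealthyPercentage": 90, "InstanceWarmup": 300},  # placeholder values
    )
    return response["InstanceRefreshId"]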

Solution/User Experience

Users simply use the logic provided by eksupgrade to gracefully update their data plane compute while avoiding disruptions and/or downtime

Alternative solutions

None

Maintenance: Remove region list

Summary

We currently maintain a list of AWS regions that are eligible for interacting with EKS clusters. This is unnecessary per the discussion: #26 (comment)

Why is this needed?

We shouldn't be maintaining a list. This should be simply enforced by the AWS API.

Which area does this relate to?

Other

Solution

Remove the AWS region list from ./eksupgrade/cli.py

Maintenance: Add initial test workflow

Summary

There are currently no unit tests for any of the code in this repository. Introduce a base implementation for executing pytest and various standards enforcement.

Why is this needed?

Continuous integration

Which area does this relate to?

No response

Solution

No response

Bug: Upgrading to 1.25 results in erroneous PDB error output

Expected Behaviour

No Pod Disruption Budget error when upgrading from 1.24 to 1.25 on post-flight check.

Current Behaviour

The tool outputs an erroneous error message stating that the PDB check fails in post-flight (after upgrade) because the API is no longer served and the resource request results in a 404.

Code snippet

pod_disruption_budget(errors, cluster_name, region, report, customer_report, force_upgrade)

Possible Solution

Explicitly bypass the PDB check (see the sketch after this list) when:

  • 1.25 is the target version and the post-flight check is running
  • the target version is greater than 1.25
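
A hypothetical version gate along those lines (names and structure are illustrative, not the project's actual code):

def should_skip_pdb_check(target_version: str, is_post_flight: bool) -> bool:
    """Skip the policy/v1beta1 PodDisruptionBudget check once the target version no longer serves that API."""
    major, minor = (int(part) for part in target_version.split(".")[:2])
    if (major, minor) > (1, 25):
        return True
    # On 1.25 itself, the check only makes sense before the upgrade.
    return (major, minor) == (1, 25) and is_post_flight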

Steps to Reproduce

eksupgrade test-cluster-x-1 1.25 us-east-1 --force

Amazon EKS upgrade version

latest

Python runtime version

3.11

Packaging format used

PyPi

Debugging logs

INFO:eksupgrade.src.preflight_module:Fetching Pod Disruption Budget Details....
ERROR:eksupgrade.src.preflight_module:Error occurred while checking for pod disruption budget - Error: (404)
Reason: Not Found
HTTP response headers: HTTPHeaderDict({...})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"the server could not find the requested resource","reason":"NotFound","details":{},"code":404}
INFO:eksupgrade.src.preflight_module:Fetching Horizontal Autoscaler Details....
INFO:eksupgrade.src.preflight_module:No Horizontal Auto Scaler exists in cluster
INFO:eksupgrade.src.preflight_module:Fetching Cluster Auto Scaler Details....
INFO:eksupgrade.src.preflight_module:Cluster Autoscaler doesn't exist
ERROR:eksupgrade.src.preflight_module:Post flight unsuccessful because of the following errors: ['Error occurred while checking for pod disruption budget (404)\nReason: Not Found\nHTTP response headers: HTTPHeaderDict({\'Audit-Id\': \'c2d91398-7101-4cba-9b29-c9635a4fea2b\', \'Cache-Control\': \'no-cache, private\', \'Content-Type\': \'application/json\', \'X-Kubernetes-Pf-Flowschema-Uid\': \'a73648a5-9f74-4036-a6f3-024db2b88f93\', \'X-Kubernetes-Pf-Prioritylevel-Uid\': \'3216cf38-bd74-47cc-bdfc-92494c48a536\', \'Date\': \'Mon, 27 Feb 2023 20:36:50 GMT\', \'Content-Length\': \'174\'})\nHTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"the server could n

Maintenance: update the regions_list using boto3

Summary

The regions_list is a list of strings maintained in cli.py. With newer regions becoming available, this could be updated to use the boto3 session to pull the current region list rather than having to update our version.

Why is this needed?

Maintaining the regions_list would be easier in the long run with an API call, assuming the CLI run would have access to the necessary credentials with the session.

Which area does this relate to?

Automation

Solution

The partition list could be updated to include aws-cn if needed and supported.

from boto3.session import Session

session = Session()
# Add "aws-cn" here if/when the China partition needs to be supported.
partition_list, regions_list = ["aws", "aws-us-gov"], []
for partition in partition_list:
    # Regions where the EKS service is available in each partition.
    regions_list.extend(session.get_available_regions("eks", partition))
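
Note that get_available_regions reads botocore's bundled endpoint data, so picking up a newly launched region still depends on a reasonably current botocore release, even though no code change would be required.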

Docs: Update documentation for python entry/usage

What were you searching in the docs?

The documentation on using the module now that it is a Python module versus simply a script.
This should include the new mechanism to execute the CLI and installation details.

Is this related to an existing documentation section?

No response

How can we improve?

Add relevant details on how to execute the modified package and instructions on how to install the package.

Got a suggestion in mind?

Once available, add a reference to the PyPI package listing and installation instructions. Additionally, the README.md could be adjusted with markdownlint to conform to standardized Markdown.

Acknowledgment

  • I understand the final update might be different from my proposed suggestion, or refused.

Node Release Version is not updated or found

Expected Behaviour

Hello @mbeacom, I believe you helped me with this tool before...

When running the eksupgrade command:
eksupgrade {cluster_name} 1.29 us-east-1 --force --latest-addons
all available updates/versions should be found and applied. When I ran it just now, it did not detect the latest on-demand Release Version update, as the nodegroup seems to still have available updates.

image

Current Behaviour

eksupgrade staging-gcs-eks-cluster 1.29 us-east-1 --force --latest-addons
Upgrading cluster: staging-gcs-eks-cluster from version: 1.29 to 1.29...
Are you sure you want to proceed with the upgrade process against: staging-gcs-eks-cluster? [y/N]: y
The current version of the cluster was detected as: 1.29
Cluster: staging-gcs-eks-cluster already on version: 1.29! Skipping cluster upgrade!
Found the following Managed Nodegroups
        * on-demand
        * spot
Getting cluster managed nodegroup details...
Managed Node Group: on-demand - Version: 1.29 - Release Version: 1.29.0-20240415 - Cluster: staging-gcs-eks-cluster
Getting cluster managed nodegroup details...
Managed Node Group: spot - Version: 1.29 - Release Version: 1.29.0-20240415 - Cluster: staging-gcs-eks-cluster
Getting cluster autoscaling group details...
Autoscaling Group: eks-on-demand-6cc1fdfd-d8c1-f92c-e2b5-63706644407e - Cluster: staging-gcs-eks-cluster
Healthy Instances:
         * i-096aadfd7318e8f61
         * i-0b44cd39c6d258ea8
         * i-0da7ebaff0a4671a4
Getting cluster autoscaling group details...
Autoscaling Group: eks-spot-8cc1fdfb-f7c2-057a-031b-4ed7a29e8bbf - Cluster: staging-gcs-eks-cluster
The add-ons update has been initiated...
Fetching Cluster Addons...
Getting the list of current cluster addons for cluster: staging-gcs-eks-cluster...
No Cluster AutoScaler is Found
No outdated managed nodegroups found!
Found the following Self-managed Nodegroups:
EKS Cluster staging-gcs-eks-cluster UPDATED TO 1.29

Code snippet

I'm not sure there's anything I can provide to reproduce this unless you build your own cluster and try to run the same command with the same versions I currently have.

Possible Solution

No response

Steps to Reproduce

I'm not sure there's anything I can provide to reproduce this unless you build your own cluster and try to run the same command with the same versions I currently have.

I am not sure this is a bug, and not all of the required fields are relevant to me.

Amazon EKS upgrade version

latest

Python runtime version

3.9

Packaging format used

PyPi

Debugging logs

No response

Bug: k8s.gcr.io (registry) hardcoded for cluster-autoscaler

Expected Behaviour

No exception thrown, both old and new registry handled.

Current Behaviour

Snippet of the output when running eksupgrade drmaciej-cluster 1.26 ap-southeast-2 --preflight

Cluster Autoscaler exists
cluster-autoscaler pod is running
Error occurred while checking for the cluster autoscaler - Error: list index out of range
Pre flight unsuccessful because of the following errors: ['To upgrade please run the code with --force flag ', 'Error occurred while checking for the cluster autoscaler list index out of range']
Pre-flight check for cluster drmaciej-cluster targeting version: 1.26 failed!

It appears that the exception is thrown in

                version = (
                    i.spec.template.spec.containers[0]
                    .image.split("k8s.gcr.io/autoscaling/cluster-autoscaler:v")[1]
                    .split("-")[0]
                )

because my image is set to registry.k8s.io/autoscaling/cluster-autoscaler:v1.25.0.

k8s.gcr.io is about to be sunset and new images are not published there. For instance, the latest published cluster-autoscaler images are already on the new registry (see https://github.com/kubernetes/autoscaler/releases).
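
A hedged sketch of registry-agnostic parsing (illustrative, not the project's code), splitting on the image tag rather than on a hardcoded registry host:

def cluster_autoscaler_version(image: str) -> str:
    """Extract the version from either k8s.gcr.io or registry.k8s.io cluster-autoscaler images."""
    # e.g. "registry.k8s.io/autoscaling/cluster-autoscaler:v1.25.0" -> "1.25.0"
    #      "k8s.gcr.io/autoscaling/cluster-autoscaler:v1.21.0-..."  -> "1.21.0"
    tag = image.rsplit(":v", 1)[1]
    return tag.split("-")[0]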

Code snippet

NA

Possible Solution

No response

Steps to Reproduce

Detailed in "Current Behaviour"

Amazon EKS upgrade version

1.25 to 1.26

Python runtime version

3.9

Packaging format used

PyPi

Debugging logs

No response

Maintenance: Consolidate cluster auth functionality

Summary

Currently, there are multiple cluster authentication mechanisms throughout the codebase:

  1. https://github.com/aws-samples/amazon-eks-one-click-cluster-upgrade/blob/cd88b2f10ec0be05d18c44026695767324dc4fd8/eksupgrade/src/k8s_client.py#L59-L96
  2. https://github.com/aws-samples/amazon-eks-one-click-cluster-upgrade/blob/cd88b2f10ec0be05d18c44026695767324dc4fd8/eksupgrade/src/eksctlfinal.py#L26-L66

This logic should be consolidated and mirror/use what is defined by the awscli https://github.com/aws/aws-cli/blob/develop/awscli/customizations/eks/update_kubeconfig.py

Why is this needed?

  1. Ensure logic is not repeated
  2. Ensure that users are authenticated to the cluster correctly - this also removes the need for users to run aws eks update-kubeconfig --name <name> ... as defined in the README.md getting-started section

Which area does this relate to?

Other

Solution

Logic is defined only once and aligns with the awscli functionality (it does not need to write a kubeconfig file, but the authn/authz logic should be matched)
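
A rough sketch of one consolidated approach, under the assumption that obtaining the bearer token by shelling out to aws eks get-token (the same code path the awscli exposes) is acceptable; this is illustrative, not the project's implementation.

import base64
import json
import subprocess
import tempfile

import boto3
from kubernetes import client

def kubernetes_api_client(cluster_name: str, region: str) -> client.ApiClient:
    """Build an authenticated Kubernetes ApiClient without writing a kubeconfig file."""
    eks = boto3.client("eks", region_name=region)
    cluster = eks.describe_cluster(name=cluster_name)["cluster"]

    # Persist the cluster CA so the kubernetes client can verify TLS.
    ca_file = tempfile.NamedTemporaryFile(delete=False, suffix=".crt")
    ca_file.write(base64.b64decode(cluster["certificateAuthority"]["data"]))
    ca_file.close()

    # Bearer token via the awscli, matching the authn logic referenced above.
    token_json = subprocess.run(
        ["aws", "eks", "get-token", "--cluster-name", cluster_name, "--region", region],
        check=True,
        capture_output=True,
        text=True,
    ).stdout
    token = json.loads(token_json)["status"]["token"]

    configuration = client.Configuration()
    configuration.host = cluster["endpoint"]
    configuration.ssl_ca_cert = ca_file.name
    configuration.api_key = {"authorization": f"Bearer {token}"}
    return client.ApiClient(configuration)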

Feature: Remove usage of `eksctl`

Summary

eksctl already provides support for upgrading the control plane of a cluster created by its CLI https://eksctl.io/usage/cluster-upgrade/

This is mainly done just to keep the generated CloudFormation in sync with any changes. It currently does not support upgrading any components of the data plane; these are handled in the same way that upgrades are handled here. Therefore, the data plane and addon upgrade logic can be used on clusters created by eksctl, and a separate, unique process does not need to exist.

Why is this needed?

If users are using eksctl today, they should continue using it to perform upgrades of the control plane as per eksctl. Since eksctl does not support upgrading any components of the data plane, we can still support that process with eksupgrade

Which area does this relate to?

Automation

Solution

  • We can keep this wrapped call to eksctl or we can replace it and instruct users to upgrade the control plane first.

  • Functionality will need to be added to support performing data plane only upgrades. This process will inspect the control plane version and update the data plane components to match (update nodegroups, Fargate profiles, addon default versions, etc.). This is useful both for the eksctl scenario as well as for the guidance provided to users to ensure the data plane components are aligned with the control plane prior to upgrading. Therefore, users can also use this functionality to align their data plane with the control plane prior to performing an upgrade

Feature request: Run post-flight checks only

Use case

I like the pre-flight check, but I do not want to use this tool for the actual upgrade - I use my existing codebase for that. I'd like to run post-flight checks in isolation, without the upgrade.

Solution/User Experience

A --postflight CLI toggle which leads to invocation of pre_flight_checks with preflight=False; no other action is performed.
I am aware of #103, but there seems to be no ETA on it, so I wonder if --postflight might be a good interim solution.

Alternative solutions

No response

Bug: Launch template details can't be null for Custom ami type node group

Expected Behaviour

I'm running into an issue similar to this, whose solution was implemented in eksctl with this PR.

Current Behaviour

eksupgrade command's relevant output:

Getting cluster managed nodegroup details...
Managed Node Group: <EXAMPLE_MNG> - Version: 1.23 - Release Version: ami-0c099795affa953de - Cluster: <CLUSTER_NAME>

Updating nodegroup: k8s-qa-eu-cluster-xl-coredns-v1 from version: 1.23 to version: 1.24
Exception encountered! Error: An error occurred (InvalidParameterException) when calling the UpdateNodegroupVersion operation: Launch template details can't be null for Custom ami type node group

Code snippet

N/A

Possible Solution

eksctl-io/eksctl#6318

Steps to Reproduce

N/A

Amazon EKS upgrade version

1.24

Python runtime version

3.11

Packaging format used

PyPi

Debugging logs

No response

Maintenance: Integration tests for code changes

Summary

We should consider including an Integration test workflow for the project. This would help catch some of the issues/failures with new code changes. A simplified workflow could look like:

  • Create an EKS cluster (or a matrix of supported versions)
  • Perform the upgrade operation using the version with the PR or one merged to main
  • Use the successful upgrade as a gating process
  • Tear down the cluster

Why is this needed?

This would help reduce the manual effort in testing the upgrade process against changes.

Which area does this relate to?

Automation, Tests

Solution

No response

upgrading EKS cluster hosting Kafka

Use case

I am trying to update an EKS cluster hosting Kafka. The script upgrades nodes one by one. Is there any provision where the script can wait a specified time before the next node upgrade, or wait until the Kafka partitions sync up?

Solution/User Experience

None

Alternative solutions

No response

Pre-flight check fails with no clear reason or log of the failure details

ubuntu@ip-xxxxx:~/amazon-eks-one-click-cluster-upgrade$ python3 eks_updater.py myCluster 1.22 ap-southeast-1 --preflight
/home/ubuntu/.local/lib/python3.6/site-packages/boto3/compat.py:88: PythonDeprecationWarning: Boto3 will no longer support Python 3.6 starting May 30, 2022. To continue receiving service updates, bug fixes, and security updates please upgrade to Python 3.7 or later. More information can be found here: https://aws.amazon.com/blogs/developer/python-support-policy-updates-for-aws-sdks-and-tools/
warnings.warn(warning, PythonDeprecationWarning)
Check logs in cloud watch group cluster-myCluster-ap-southeast-1 for more informationin stream preflight-checks-16xxxxx

Verifying User IAM Role....
IAM role for user verified

Fetching cluster details .....
Cluster control plane version 1.21
Cluster with verison 1.21 cannot be updated to target version 1.22

Preflight unsuccessful because of following errors
Cluster with verison 1.21 cannot be updated to target version 1.22

Pre flight check for cluster myCluster failed

Bug: Error occurred while upgrading Node Group in EC2 instance created through Karpenter

Expected Behaviour

I tried to upgrade from 1.25 to 1.26 via the eksupgrade CLI.

Unfortunately, I got an error at the node group checking stage.
I created the EKS cluster via Terraform and there are no issues with it.

The instance where the error occurred is an instance created through Karpenter.

As far as I know, Karpenter doesn't create nodes via an ASG, so do I need an ASG name to use the eksupgrade CLI?

Please let me know the best way to upgrade an EKS cluster that is using Karpenter via the eksupgrade CLI.

Current Behaviour

Screenshot 2023-05-31 17 47 42

Code snippet

eksupgrade --force <CLUSTER_NAME> 1.26 ap-northeast-2

Possible Solution

No response

Steps to Reproduce

eksupgrade --force <CLUSTER_NAME> 1.26 ap-northeast-2

Amazon EKS upgrade version

1.26

Python runtime version

3.11

Packaging format used

PyPi

Debugging logs

i-0f8bafc14f3d4bc9c cannot be upgraded because the cluster version is not compatible with the node version
Error occurred while checking node group details - Error: cannot access local variable 'autoscale_group_name' where it is not associated with a value

Post flight unsuccessful because of the following errors: ["Error occurred while checking node group details cannot access local variable 'autoscale_group_name' where it is not associated with a value"]
Post flight check for cluster <CLUSTER_NAME> failed after it upgraded
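
A hedged sketch of a guard for this case (not the project's actual code): only treat an instance as ASG-backed when the AWS-provided autoscaling group tag is present, so Karpenter-launched instances are skipped instead of tripping the unbound autoscale_group_name error above.

from typing import Optional

import boto3

def autoscaling_group_for_instance(instance_id: str, region: str) -> Optional[str]:
    """Return the ASG name for an instance, or None for instances (e.g. Karpenter nodes) not in an ASG."""
    ec2 = boto3.client("ec2", region_name=region)
    reservations = ec2.describe_instances(InstanceIds=[instance_id])["Reservations"]
    tags = reservations[0]["Instances"][0].get("Tags", [])
    tag_map = {tag["Key"]: tag["Value"] for tag in tags}
    # ASG-managed instances always carry this AWS-provided tag; Karpenter nodes do not.
    return tag_map.get("aws:autoscaling:groupName")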

Bug: Owner ID does not look correct for Windows AMI images

Expected Behaviour

We are getting a message saying we have custom AMIs when we are using the latest Amazon Windows AMIs for the EKS version specified.

Our instances are using the following AMI:
ami-087cc060ba1de1b6d https://us-east-1.console.aws.amazon.com/ec2/home?region=us-east-1#ImageDetails:imageId=ami-087cc060ba1de1b6d
Windows_Server-2019-English-Full-EKS_Optimized-1.21-2023.02.14
The owner of this AMI is 957547624766, but the preflight code is looking for an owner of 801119661308.

I don't find any EKS_Optimized Windows images under that owner.

Current Behaviour

We are getting the following messages in the pre-flight check:

i-01c9ece320961ce5c cannot be upgraded as it uses a custom AMI!
i-06af7621a4c1c99d2 cannot be upgraded as it uses a custom AMI!

Code snippet

try to upgrade a cluster with windows nodes.

Possible Solution

Change the owner ID to 957547624766 in the iscustomami method for Windows instances.
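
A minimal sketch of that kind of owner check (illustrative, not the actual iscustomami implementation), using only the two owner accounts mentioned in this issue:

import boto3

# 801119661308 is the owner the preflight code currently expects; 957547624766 is the
# owner of the Windows_Server-*-EKS_Optimized image referenced above.
EKS_OPTIMIZED_AMI_OWNERS = {"801119661308", "957547624766"}

def is_custom_ami(image_id: str, region: str) -> bool:
    """Treat an AMI as custom unless it is owned by one of the known EKS-optimized AMI accounts."""
    ec2 = boto3.client("ec2", region_name=region)
    images = ec2.describe_images(ImageIds=[image_id]).get("Images", [])
    if not images:
        return True  # image not visible to the caller; treat it as custom
    return images[0]["OwnerId"] not in EKS_OPTIMIZED_AMI_OWNERS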

Steps to Reproduce

try to upgrade a cluster with windows nodes.

Amazon EKS upgrade version

1.22

Python runtime version

3.9

Packaging format used

Git clone

Debugging logs

No response

Maintenance: Remove CloudWatch Logger

Summary

The CLI currently expects access to push logs to CloudWatch. Additionally, the CLI prints out statements directly as well.

We should standardize on using a regular logger and, for example, allow the end user to either pipe to a log file or run in another environment and offload to CloudWatch there.

Why is this needed?

Inconsistent and confusing log workflow

Which area does this relate to?

No response

Solution

Implement a standard Python logger workflow using the logging module.

Bug: Pre-flight failure on eksupgrade

Expected Behaviour

eksupgrade would complete the pre-flight checks and complete the upgrade.

The site-packages/eksupgrade does include the following files
Screenshot 2023-02-15 at 9 45 50 PM

Current Behaviour

Upgrade pre-flight check fails with the following errors.

INFO:eksupgrade.src.preflight_module:Available IPs for Subnet verified
INFO:botocore.credentials:Found credentials in environment variables.
ERROR:eksupgrade.src.preflight_module:Some error occurred during preflight check process - Error: [Errno 2] No such file or directory: 'eksupgrade/src/S3Files/cluster_roles.json'
ERROR:eksupgrade.src.preflight_module:Pre flight unsuccessful because of the following errors: ["Some error occurred during preflight check process [Errno 2] No such file or directory: 'eksupgrade/src/S3Files/cluster_roles.json'"]
ERROR:eksupgrade.starter:Pre-flight check for cluster eksup-cluster failed!

Code snippet

eksupgrade eksup-cluster 1.22 us-east-1

Possible Solution

No response

Steps to Reproduce

  1. Created a cluster using eksctl: eksctl create cluster -f cluster.yaml. Config YAML below:
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: eksup-cluster
  region: us-east-1
  version: "1.21"
nodeGroups:
  - name: ng-1
    instanceType: t2.micro
    desiredCapacity: 2
addons: 
- name: vpc-cni
- name: coredns
- name: kube-proxy
  2. Installed eksupgrade using pip install eksupgrade
  3. Ran the eksupgrade command to upgrade the version to 1.22: eksupgrade eksup-cluster 1.22 us-east-1

Amazon EKS upgrade version

0.4.0

Python runtime version

3.9

Packaging format used

PyPi

Debugging logs

INFO:eksupgrade.src.preflight_module:Available IPs for Subnet verified
INFO:botocore.credentials:Found credentials in environment variables.
ERROR:eksupgrade.src.preflight_module:Some error occurred during preflight check process - Error: [Errno 2] No such file or directory: 'eksupgrade/src/S3Files/cluster_roles.json'
ERROR:eksupgrade.src.preflight_module:Pre flight unsuccessful because of the following errors: ["Some error occurred during preflight check process [Errno 2] No such file or directory: 'eksupgrade/src/S3Files/cluster_roles.json'"]
ERROR:eksupgrade.starter:Pre-flight check for cluster eksup-cluster failed!

Bug: Running the tool with AWS_PROFILE & role and MFA causes repeated MFA prompts

Expected Behaviour

Consistent with the usual usage of AWS CLI with roles and MFA. One should be prompted for the MFA code only once per duration of the session.

Current Behaviour

I want to utilise my AWS CLI profiles so I run the tool with AWS_PROFILE=drmaciej eksupgrade drmaciej-cluster 1.26 ap-southeast-2 --preflight. My profiles are configured to assume roles and MFA is required.

The tool then seems to ask me for an MFA code for multiple interactions with the AWS API - a new MFA code is ready every 30 seconds, so this slows down the whole experience significantly. For instance:

The pre-flight checks will be deprecated in the next minor release in favor of cluster summaries: #103
Running validation checks against cluster: drmaciej-cluster...
Enter MFA code for arn:aws:iam::123456789012:mfa/drmaciej
Enter MFA code for arn:aws:iam::123456789012:mfa/drmaciej
Verifying User IAM Role...
IAM role for user verified!
Enter MFA code for arn:aws:iam::123456789012:mfa/drmaciej
Fetching cluster details...
Cluster control plane version 1.25
Cluster with version 1.25 can be updated to target version 1.26
Enter MFA code for arn:aws:iam::123456789012:mfa/drmaciej

I cancelled execution after these prompts.

Code snippet

NA

Possible Solution

No response

Steps to Reproduce

Detailed in "Current Behaviour"

Amazon EKS upgrade version

latest

Python runtime version

3.9

Packaging format used

PyPi

Debugging logs

No response

Feature request: Infer AMI from current configuration or allow user to pass custom AMI

Use case

(Instead of trying to re-interpret the current logic, I'll defer to describing the intended logic)

--custom-ami-id argument should be added for users to pass a custom AMI ID.

When performing an upgrade of EKS and/or self-managed nodegroups, the process for determining the appropriate AMI should follow:

Self-managed nodegroup

If --custom-ami-id has been supplied with a value, simply proceed to grab each ASG and its LT and create a new LT version with the AMI ID provided. If --custom-ami-id has NOT been supplied:

  1. Get the launch template of each self-managed autoscaling group (we are ignoring support for launch configuration since its EOL)
  2. From the LTs, extract the AMI ID
  3. Perform a describe call on the AMI to determine if this is an Amazon EKS optimized AMI, and of which variant (AL2, Bottlerocket, Windows variants) and arch (x86/arm64)
  4. If this is NOT an Amazon EKS optimized AMI, and --custom-ami-id has not been supplied, halt progress and return details to user that a custom AMI ID is required when an Amazon EKS optimized AMI is not currently in use
  5. If this is an Amazon EKS optimized AMI, retrieve the AMI ID of the next incremental Kubernetes version from the associated SSM parameter (get the AMI ID for the next version of K8s for the given AMI variant and arch; see the sketch after this list)
  6. Create a new LT version with the new AMI ID, update the ASG, etc. (roll out changes)
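
A minimal sketch of the SSM lookup in step 5, assuming the Amazon Linux 2 variant (other variants and architectures use different parameter paths):

import boto3

def eks_optimized_al2_ami_id(kubernetes_version: str, region: str) -> str:
    """Resolve the recommended EKS-optimized Amazon Linux 2 AMI for a Kubernetes version, e.g. "1.25"."""
    ssm = boto3.client("ssm", region_name=region)
    name = f"/aws/service/eks/optimized-ami/{kubernetes_version}/amazon-linux-2/recommended/image_id"
    return ssm.get_parameter(Name=name)["Parameter"]["Value"]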

EKS managed nodegroup

Default launch template

  1. No-op from an AMI perspective; once the nodegroup Kubernetes version has been updated, the managed nodegroup will pull the appropriate AMI and deploy

Custom launch template

  1. Get the ASGs from each of the EKS managed nodegroups
  2. If there is NOT an AMI ID specified on the nodegroup, then the AMIs used are the EKS optimized AMIs - update the nodegroup version to the next incremental Kubernetes version and roll out changes
  3. If there is an AMI ID specified on the nodegroup, this is a custom AMI and the user should have provided a --custom-ami-id, otherwise abort and report (nice rhyme!)

Solution/User Experience

The logic used for determining an AMI ID should be consistent and reliable without assumptions

Alternative solutions

No response

Feature request: New Pre/post-flight checks and summary reports

Use case

The user executes:

eksupgrade --preflight
# with report generation enabled
eksupgrade --preflight --report
# or maybe:
eksupgrade eval(uate)
# with report generation enabled
eksupgrade eval --report

The cluster summary will be output to the terminal in the form of text summaries and table data.

Solution/User Experience

Desired future state:

The cluster details will be pulled from AWS and Kubernetes APIs, resulting in a populated Cluster object. The details derived from the cluster and child objects will be included in a report summary and tables annotating cluster version, target version, cluster add-on versions (current vs target), managed nodegroups, self-managed nodegroups, and current pre-flight checks.

Current User Experience: The tool is currently using the old preflight_module.py to check the upgrade eligibility and current state of the target cluster, resulting in some dated/irrelevant details that don't align with the upgrade outcomes.

Alternative solutions

Update the current preflight_module.py to pull all of the same details already present in Cluster.* and add missing add-on details to each manual block.

Bug: Upgrading fails and does not have a recovery or pickup-where-left-off option

Discussed in #73

Originally posted by onabison February 24, 2023
Hi,

I noticed a behavior that I believe others will probably be facing as well.

I upgraded my cluster from 1.23 to 1.24 (I don't believe the versions are relevant, but figured I'd mention them anyway). If the upgrade fails for whatever reason and you re-run the command, you will be facing something like this:

INFO:eksupgrade.src.self_managed:The cluster lab-poc-eks-cluster in region us-east-1 ASGs of the self-managed nodegroups: ['eks-on-demand-96c071ab-6ec9-2249-37e0-9c8e4975b944', 'eks-spot-a2c071ab-6ecf-329a-8ec4-72654fffbe40']
INFO:eksupgrade.starter:The Manged Node Groups Found are eks-on-demand-96c071ab-6ec9-2249-37e0-9c8e4975b944,eks-spot-a2c071ab-6ecf-329a-8ec4-72654fffbe40
INFO:eksupgrade.src.boto_aws:ASG Matched = eks-on-demand-96c071ab-6ec9-2249-37e0-9c8e4975b944 ,eks-spot-a2c071ab-6ecf-329a-8ec4-72654fffbe40
INFO:eksupgrade.starter:The ASGs Found Are eks-on-demand-96c071ab-6ec9-2249-37e0-9c8e4975b944,eks-spot-a2c071ab-6ecf-329a-8ec4-72654fffbe40
INFO:eksupgrade.starter:The add-ons Update has been initiated...
INFO:eksupgrade.starter:The Addons Upgrade Started At 1677258083.007861
INFO:botocore.credentials:Found credentials in shared credentials file: ~/.aws/credentials
INFO:eksupgrade.src.k8s_client:The Addons Found = ['aws-load-balancer-controller-68f585b748-4cq7z', 'aws-load-balancer-controller-68f585b748-lx4fb', 'aws-node-9h8pn', 'aws-node-n75pl', 'aws-node-n8vm5', 'aws-node-x5vwb', 'coredns-79fffc5cf7-8kgkd', 'coredns-79fffc5cf7-8r6nz', 'csi-secrets-store-provider-aws-2ftmh', 'csi-secrets-store-provider-aws-4dc4p', 'csi-secrets-store-provider-aws-cr5ff', 'csi-secrets-store-provider-aws-d9lsc', 'csi-secrets-store-secrets-store-csi-driver-5gw4c', 'csi-secrets-store-secrets-store-csi-driver-btvs5', 'csi-secrets-store-secrets-store-csi-driver-pphpw', 'csi-secrets-store-secrets-store-csi-driver-w788s', 'ebs-csi-controller-78b4fdb7f5-bnm6d', 'ebs-csi-controller-78b4fdb7f5-drms5', 'ebs-csi-node-2q4rq', 'ebs-csi-node-6cvrq', 'ebs-csi-node-n4fcf', 'ebs-csi-node-srxr9', 'kube-proxy-8rfsb', 'kube-proxy-b5jfv', 'kube-proxy-j5cx6', 'kube-proxy-q9dbj']
INFO:eksupgrade.src.k8s_client:kube-proxy-8rfsb Current version: v1.24.7-minimal-eksbuild.2 Updating to: v1.24.7-eksbuild.2
INFO:eksupgrade.src.k8s_client:Updating the EKS cluster's kube-proxy add-on version via the EKS API...
INFO:botocore.credentials:Found credentials in shared credentials file: ~/.aws/credentials
INFO:eksupgrade.src.k8s_client:Total Pods With kube-proxy = 4
INFO:eksupgrade.src.k8s_client:Old kube-proxy pod: kube-proxy-8rfsb - New kube-proxy pod: kube-proxy-8rfsb
INFO:eksupgrade.src.k8s_client:kube-proxy-b5jfv Current version: v1.24.7-minimal-eksbuild.2 Updating to: v1.24.7-eksbuild.2
INFO:botocore.credentials:Found credentials in shared credentials file: ~/.aws/credentials
INFO:botocore.credentials:Found credentials in shared credentials file: ~/.aws/credentials
INFO:eksupgrade.src.k8s_client:Total Pods With kube-proxy = 4
INFO:botocore.credentials:Found credentials in shared credentials file: ~/.aws/credentials
INFO:eksupgrade.src.k8s_client:Total Pods With kube-proxy = 4
INFO:botocore.credentials:Found credentials in shared credentials file: ~/.aws/credentials
INFO:eksupgrade.src.k8s_client:Total Pods With kube-proxy = 4
INFO:botocore.credentials:Found credentials in shared credentials file: ~/.aws/credentials
INFO:eksupgrade.src.k8s_client:Total Pods With kube-proxy = 4
INFO:botocore.credentials:Found credentials in shared credentials file: ~/.aws/credentials
INFO:eksupgrade.src.k8s_client:Total Pods With kube-proxy = 4
INFO:botocore.credentials:Found credentials in shared credentials file: ~/.aws/credentials
INFO:eksupgrade.src.k8s_client:Total Pods With kube-proxy = 4
INFO:botocore.credentials:Found credentials in shared credentials file: ~/.aws/credentials

In my case, my spot instances failed to upgrade, and re-running the command to try and upgrade the spot instances ends up in the endless loop above. It seems the tool already went over the NodeGroup but failed to detect that there is still a component that has not been upgraded, and moved on to the add-ons.

image

Maintenance: Extend automated test cases to a secondary region

Summary

Extend the automated test cases to a secondary AWS region to identify any region-specific lockdown issues with the code base.

Why is this needed?

This would help identify any issues that are occurring in a region outside of us-east-1 which is the currently set region for automated tests.

Which area does this relate to?

Tests

Solution

No response

Bug: Node upgrade of Amazon Linux from 1.21 to 1.22

Expected Behaviour

When upgrading from 1.21 to 1.22, node groups are created but are not attached to the cluster.
The expectation is that the eksctl nodegroup command file needs to have overrideBootstrapCommand with the detail below; only then will the nodes attach to the cluster.

As per the announcement https://eksctl.io/announcements/nodegroup-override-announcement/

Current Behaviour

Maybe overrideBootstrapCommand is missing from the eksctl nodegroup command file, so the nodes are not attaching to the cluster.

Code snippet

overrideBootstrapCommand: |
  #!/bin/bash
  source /var/lib/cloud/scripts/eksctl/bootstrap.helper.sh
  /etc/eks/bootstrap.sh CLUSTER_NAME --kubelet-extra-args "--node-labels=${NODE_LABELS}"

Possible Solution

No response

Steps to Reproduce

Run eks-cluster-upgrade against a 1.21 cluster with Amazon Linux 2 instances.

Amazon EKS upgrade version

latest

Python runtime version

3.10

Packaging format used

Git clone

Debugging logs

NA

Maintenance: Re-write self-managed nodegroups logic

Summary

The self-managed nodegroup logic is currently resulting in nodes that are unable to join the target cluster.
Reproduction steps will be backfilled here when available.

Per #36:

  1. Update self-managed nodegroups using the graceful termination of the node-termination-handler in coordination with the instance refresh functionality of an autoscaling group. This process should be used if the node-termination-handler is deployed within the cluster, the autoscaling group is monitored by the NTH for lifecycle events, and an instance refresh configuration is present on the launch template used by the autoscaling group
  2. Update self-managed nodegroups using logic similar to that of the EKS managed node group (like this)

Why is this needed?

Current workflow is erroneous.

Which area does this relate to?

No response

Solution

Rewrite the logic for self-managed nodegroups into the new core.

Bug: Windows instances error with unsupported type

Expected Behaviour

I would expect the Windows nodes to be upgraded.

Current Behaviour

We get the following messages in the log:

i-00936987c5fa16273
Node type: windows server 2019 datacenter is unsupported - Image ID: ami-04ff2af787d4f4ed3
i-04cef8a13573d6b29
Node type: windows server 2019 datacenter is unsupported - Image ID: ami-04ff2af787d4f4ed3

Code snippet

na

Possible Solution

I think line 29 in eksupgrade/src/get_image_type.py needs to be changed from:
elif node_type == "windows":
to
elif "windows" in node_type.lower():

This is what is used in eksupgrade/src/preflight_module.py on line 841, so we should be consistent.

I've modified this locally and was able to test this and it works.

Steps to Reproduce

upgrade a cluster with windows nodes.

Amazon EKS upgrade version

latest

Python runtime version

3.10

Packaging format used

Git clone

Debugging logs

No response

Feature request: Support Kubernetes V1.25

Use case

Currently, the maximum allowed version to upgrade to is 1.24. EKS has announced support for version 1.25.

  • Defined in def get_cluster_version in the pre-flight checks and in the version_dicts.

Validate whether this is merely a version-number update or whether additional changes need to be incorporated to support this version on the API and resource side. Notably, PodSecurityPolicy is removed in favor of PSA (Pod Security Admission).

Call-outs from the documentation: https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html#kubernetes-1.25

Solution/User Experience

Allow the user to select 1.25 as a possible upgrade target.

Alternative solutions

No response

Maintenance: Update logger to use timestamps

Summary

Add timestamp to the logging output across the eksupgrade codebase.

Why is this needed?

This would help in debugging timing-related issues, like the initial delay in the cluster status changing from ACTIVE to UPDATING, when making decisions about when to update nodegroups and addons.

Which area does this relate to?

Other

Solution

Create a logging utility which has a formatter attached to it.

import logging

logger = logging.getLogger(__name__)
handler = logging.StreamHandler()
formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)

Maintenance: Unused region arg in eksctlfinal

Summary

The methods below have a region argument that is already implicitly set on the boto3 client instance bclient.

  • eksctlfinal.py
    • upgrade_cluster
    • get_old_smg_node_groups
    • create_managed_nodegroup
    • create_unmanaged_nodegroup

Why is this needed?

Unused argument in the method definition.

Which area does this relate to?

Other

Solution

Remove the unused region argument from the above methods and update their invocations to drop the region reference.

Bug: Upgrade failing on Kubernetes client model attribute

Expected Behaviour

Cluster is upgraded and the nodes are updated as required.

Current Behaviour

The upgrade fails in the node eviction stage, after the pre-flight checks and the cluster control plane upgrade complete.

ERROR:eksupgrade.src.k8s_client:Exception encountered while attempting to drain nodes! Node: ip-*-*-*-*.ec2.internal Cluster: eksup-cluster - Error: module 'kubernetes.client.models' has no attribute 'v1beta1_eviction'
ERROR:eksupgrade.starter:Error encountered during actual update! Exception: Unable to Delete the Node
ERROR:eksupgrade.starter:Exception encountered in main method - Error: Unable to Delete the Node
  • For reference, the kubernetes module version installed with the project:
 pip show kubernetes
Name: kubernetes
Version: 24.2.0

Code snippet

eksupgrade eksup-cluster 1.22 us-east-1

Possible Solution

Update the reference from v1beta1_eviction to v1_eviction, the policy/v1 Eviction model in current kubernetes client releases.
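
A minimal sketch of the policy/v1 eviction call with recent kubernetes Python client releases (the helper name here is illustrative, not the project's actual function):

# Sketch: evict a pod using the policy/v1 Eviction model (V1Eviction replaces the
# removed V1beta1Eviction in recent kubernetes client releases).
from kubernetes import client


def evict_pod(core_v1: client.CoreV1Api, pod_name: str, namespace: str) -> None:
    body = client.V1Eviction(
        metadata=client.V1ObjectMeta(name=pod_name, namespace=namespace),
        delete_options=client.V1DeleteOptions(grace_period_seconds=30),
    )
    core_v1.create_namespaced_pod_eviction(name=pod_name, namespace=namespace, body=body)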

Steps to Reproduce

  1. Created a cluster using eksctl: eksctl create cluster -f cluster.yaml. Config YAML below:
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: eksup-cluster
  region: us-east-1
  version: "1.21"
nodeGroups:
  - name: ng-1
    instanceType: t2.micro
    desiredCapacity: 2
addons: 
- name: vpc-cni
- name: coredns
- name: kube-proxy
  2. Installed eksupgrade using pip install eksupgrade
  3. Ran the eksupgrade command to upgrade the version to 1.22 eksupgrade eksup-cluster 1.22 us-east-1

Amazon EKS upgrade version

0.4.0

Python runtime version

3.9

Packaging format used

PyPi

Debugging logs

Debug logs attached in an earlier section.

Bug: Manual patching of addon removal

Expected Behaviour

Executing eksupgrade against a cluster should engage the EKS API to upgrade supported addons, rather than attempting to patch resources with an arbitrary manifest derived from this codebase.

Current Behaviour

  • Attempts to patch the deployment or daemonset directly
  • Subsequently calls the EKS API to upgrade the addon

Code snippet

N/A

Possible Solution

Remove the manual patching of deployments/daemonsets for supported addons (coredns, vpc-cni, kube-proxy) and rely on the EKS addon update API instead.
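
A minimal sketch of driving the upgrade through the EKS API instead (the target version is assumed to be resolved beforehand, e.g. via describe_addon_versions):

# Sketch: upgrade a managed addon via the EKS API rather than patching the
# deployment/daemonset directly.
import boto3


def upgrade_addon(cluster_name: str, addon_name: str, addon_version: str, region: str) -> str:
    eks = boto3.client("eks", region_name=region)
    response = eks.update_addon(
        clusterName=cluster_name,
        addonName=addon_name,
        addonVersion=addon_version,
        # Let the EKS-managed configuration win if local edits conflict.
        resolveConflicts="OVERWRITE",
    )
    return response["update"]["id"]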

Steps to Reproduce

Execute eksupgrade against a cluster.

Amazon EKS upgrade version

latest

Python runtime version

3.8

Packaging format used

PyPi

Debugging logs

No response

Feature request: Cluster state reports

Use case

When executing eksupgrade, the pre/post-flight checks are misleading and can cause confusion as to the actual intended outcomes and results of a cluster upgrade.

Due to the many factors that can play into upgrade eligibility, eksupgrade doesn't intend to be a tool for checking the compatibility or eligibility of a cluster, in favor of letting other existing tools with better scenario checks, such as eksup, handle this.

The existing pre/post checks should be removed and replaced with checks specific to the upgrade itself (based on a prior determination that the cluster is eligible for such an upgrade).

Solution/User Experience

A cluster upgrade summary will be displayed to show the user the current state versus the target state (plus some minimal health information, though nothing intended to replace a proper evaluation of the target cluster), minimally:

  • cluster: current vs target version
  • cluster nodegroups: current vs target versions
  • cluster addons: current vs target versions

This may be expanded in the future to include other comparisons or checks.
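
A rough sketch of gathering the current-state data for such a summary with boto3 (pagination and output formatting omitted):

# Sketch: collect the current versions that an upgrade summary would compare
# against the requested target version.
import boto3


def current_cluster_state(cluster_name: str, region: str) -> dict:
    eks = boto3.client("eks", region_name=region)
    cluster_version = eks.describe_cluster(name=cluster_name)["cluster"]["version"]

    nodegroups = {}
    for nodegroup_name in eks.list_nodegroups(clusterName=cluster_name)["nodegroups"]:
        nodegroup = eks.describe_nodegroup(clusterName=cluster_name, nodegroupName=nodegroup_name)
        nodegroups[nodegroup_name] = nodegroup["nodegroup"]["version"]

    addons = {}
    for addon_name in eks.list_addons(clusterName=cluster_name)["addons"]:
        addon = eks.describe_addon(clusterName=cluster_name, addonName=addon_name)
        addons[addon_name] = addon["addon"]["addonVersion"]

    return {"cluster": cluster_version, "nodegroups": nodegroups, "addons": addons}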

Alternative solutions

No response

Bug: Cluster upgrade from 1.24 to 1.25 Createnodegroup nodes are in not ready status

Expected Behaviour

On an EKS Kubernetes cluster upgrade from 1.24 to 1.25, the new nodes (amazon-eks-node-1.25) are created, but their status shows as "Not Ready".

For 1.22 we followed the steps in the announcement: https://eksctl.io/announcements/nodegroup-override-announcement/

The announcement covered upgrades for 1.22 -> 1.23 -> 1.24.

Is there similarly any additional configuration to be done for 1.25?

Current Behaviour

After upgrading from 1.24 to 1.25, the Amazon Linux 2 nodes are still in Not Ready status 30 minutes after node creation.

Code snippet

overrideBootstrapCommand: '#!/bin/bash

    source /var/lib/cloud/scripts/eksctl/bootstrap.helper.sh

    /etc/eks/bootstrap.sh CLUSTER_NAME --kubelet-extra-args "--node-labels=${NODE_LABELS} "'

Possible Solution

No response

Steps to Reproduce

Upgrade an EKS cluster from 1.24 to 1.25 using the eks-cluster-upgrade script.

Amazon EKS upgrade version

latest

Python runtime version

3.10

Packaging format used

Git clone

Debugging logs

No response

Bug: Upgrade failing with Invalid format specifier on nodegroup updates

Expected Behaviour

eksupgrade would complete the pre-flight checks and the upgrade. I am currently running eksupgrade from site-packages to avoid my local issue #48.

  • Cluster is upgraded to version 1.22 from 1.21
  • Addons are updated
  • The image type is identified, and then the run fails with the message shown below under Current Behaviour

Current Behaviour

The upgrade fails with:

INFO:eksupgrade.starter:The Image Type Detected = Amazon Linux 2
ERROR:eksupgrade.starter:Exception encountered in main method - Error: Invalid format specifier

Code snippet

eksupgrade eksup-cluster 1.22 us-east-1

Possible Solution

No response

Steps to Reproduce

  1. Created a cluster using eksctl: eksctl create cluster -f cluster.yaml. Config YAML below:
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: eksup-cluster
  region: us-east-1
  version: "1.21"
nodeGroups:
  - name: ng-1
    instanceType: t2.micro
    desiredCapacity: 2
addons: 
- name: vpc-cni
- name: coredns
- name: kube-proxy
  2. Installed eksupgrade using pip install eksupgrade
  3. Ran the eksupgrade command to upgrade the version to 1.22 eksupgrade eksup-cluster 1.22 us-east-1

Amazon EKS upgrade version

0.4.0

Python runtime version

3.9

Packaging format used

PyPi

Debugging logs

DEBUG:botocore.hooks:Event needs-retry.ec2.DescribeImages: calling handler <botocore.retryhandler.RetryHandler object at 0x1051279d0>
DEBUG:botocore.retryhandler:No retry needed.
INFO:eksupgrade.starter:The Image Type Detected = Amazon Linux 2
DEBUG:botocore.hooks:Event choose-service-name: calling handler <function handle_service_name_alias at 0x102aee4c0>
DEBUG:botocore.loaders:Loading JSON file: /Users/***/.pyenv/versions/3.9.10/lib/python3.9/site-packages/botocore/data/ssm/2014-11-06/service-2.json.gz
DEBUG:botocore.loaders:Loading JSON file: /Users/***/.pyenv/versions/3.9.10/lib/python3.9/site-packages/botocore/data/ssm/2014-11-06/endpoint-rule-set-1.json.gz
DEBUG:botocore.hooks:Event creating-client-class.ssm: calling handler <function add_generate_presigned_url at 0x102a47550>
DEBUG:botocore.endpoint:Setting ssm timeout as (60, 60)
DEBUG:botocore.client:Registering retry handlers for service: ssm
DEBUG:botocore.hooks:Event choose-service-name: calling handler <function handle_service_name_alias at 0x102aee4c0>
DEBUG:botocore.hooks:Event creating-client-class.ec2: calling handler <function add_generate_presigned_url at 0x102a47550>
DEBUG:botocore.endpoint:Setting ec2 timeout as (60, 60)
DEBUG:botocore.client:Registering retry handlers for service: ec2
ERROR:eksupgrade.starter:Exception encountered in main method - Error: Invalid format specifier

Feature request: Support updating Fargate profiles

Use case

Fargate nodes run the Kubernetes version that the control plane was on at the time of node creation. Once the control plane has been upgraded, Fargate nodes will not be updated until they have been "rolled" (removed and replaced with new nodes that use a version matching the control plane). The safest way to do this is to use the Kubernetes eviction API once the control plane has been updated.

Solution/User Experience

When users run eksupgrade, any Fargate profiles should also be updated gracefully. Fargate nodes are identified by node names prefixed with fargate-, such as fargate-ip-10-0-14-253.ec2.internal.
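
A minimal sketch of identifying the Fargate nodes (and the pods on them) that would need to be rolled after the control plane upgrade; evicting those pods would then follow, along the lines of the policy/v1 eviction API:

# Sketch: find Fargate nodes (name prefix "fargate-") and the pods scheduled on
# them; evicting these pods lets Fargate replace the nodes with ones matching
# the upgraded control plane version.
from kubernetes import client


def fargate_pods_by_node(core_v1: client.CoreV1Api) -> dict:
    pods_by_node = {}
    for node in core_v1.list_node().items:
        if not node.metadata.name.startswith("fargate-"):
            continue
        pods = core_v1.list_pod_for_all_namespaces(
            field_selector=f"spec.nodeName={node.metadata.name}"
        )
        pods_by_node[node.metadata.name] = [pod.metadata.name for pod in pods.items]
    return pods_by_node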

Alternative solutions

None

Bug: loading_config bad argument name regionName

Expected Behaviour

No argument error

Current Behaviour

An argument error is thrown by multiple invocations of loading_config due to the change in argument key from #29 (specifically: https://github.com/aws-samples/eks-cluster-upgrade/pull/29/files#diff-cee8afb0ae5b76bd171555d971a66fbe0d5566c24e0a7dfaa51e93878a1c289eR86)

There are a few areas where this method is called and regionName= is still explicitly specified, resulting in errors.

Code snippet

https://github.com/aws-samples/eks-cluster-upgrade/pull/29/files#diff-cee8afb0ae5b76bd171555d971a66fbe0d5566c24e0a7dfaa51e93878a1c289eR86

Possible Solution

Fix the arguments passed to the method at each call site.

Steps to Reproduce

Run anything using watcher, is_cluster_auto_scaler_present, or cluster_auto_enable_disable

Amazon EKS one click upgrade version

latest

Python runtime version

3.1

Packaging format used

PyPi

Debugging logs

No response

Maintenance: Setup stale bot on repository

Summary

Lean on stale bot to cull outdated or unresponsive issues.

Why is this needed?

Maintenance is hard.

Which area does this relate to?

No response

Solution

Set up the stale bot integration on the repository.

Bug: Pre-flight check fails in v0.6.0 for a 1.25 version upgrade

Expected Behaviour

eksupgrade would complete the pre-flight checks and confirm the upgrade path is available to users on EKS version 1.24.

Current Behaviour

eksupgrade fails during pre-flight, indicating that PolicyV1Api doesn't include a list_pod_security_policy attribute.

Code snippet

eksupgrade eksup-cluster 1.25 us-east-1

Possible Solution

Update the logic so that pod_security_policies is invoked only if the update_version is below 1.25.

preflight_module.py

        # PodSecurityPolicy was removed in Kubernetes 1.25, so only run this check for earlier target versions
        if float(update_version) < 1.25:
            pod_security_policies(errors, cluster_name, region, report, customer_report)
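
As a side note (an optional refinement, not part of the reported fix), comparing versions as floats works for the versions EKS currently supports, but parsing them into integer tuples avoids edge cases and reads more clearly:

# Sketch: compare Kubernetes minor versions as integer tuples instead of floats;
# this avoids edge cases such as "1.9" vs "1.10".
def version_tuple(version: str) -> tuple:
    return tuple(int(part) for part in version.split("."))

print(version_tuple("1.24") < version_tuple("1.25"))  # True
print(version_tuple("1.9") < version_tuple("1.10"))   # True (a float comparison would say False)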

Steps to Reproduce

  1. Created a cluster using eksctl: eksctl create cluster -f cluster.yaml. Config YAML below:
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: eksup-cluster
  region: us-east-1
  version: "1.24"
nodeGroups:
  - name: ng-1
    instanceType: t2.micro
    desiredCapacity: 2
addons: 
- name: vpc-cni
- name: coredns
- name: kube-proxy
  2. Installed eksupgrade using pip install eksupgrade
  3. Ran the eksupgrade command to upgrade the version to 1.25 eksupgrade eksup-cluster 1.25 us-east-1

Amazon EKS upgrade version

0.6.0

Python runtime version

3.9

Packaging format used

PyPi

Debugging logs

ERROR:eksupgrade.src.preflight_module:Pre flight unsuccessful because of the following errors: ["Some error occurred while checking for the policy security policies 'PolicyV1Api' object has no attribute 'list_pod_security_policy'"]
ERROR:eksupgrade.starter:Pre-flight check for cluster eksup-cluster failed!

Maintenance: Auto-style and format codebase

Summary

This codebase presently doesn't conform to PEP 8 or other styling guidelines common in the Python community.
There are a number of capable tools to automatically style and format this Python repository (e.g. black and isort) that could easily reduce the onus and guesswork associated with handling this formatting.

Why is this needed?

More easily maintain the codebase and make OSS contributions more approachable.

Which area does this relate to?

Governance, Other

Solution

Execute the styling/formatting tools already set up in the pyproject.toml dependencies.

e.g. execute poetry run black . and poetry run isort --profile=black ., and instrument pre-commit to continuously enforce the standard.

Feature request: Remove email reporting

Use case

Sending results to an email inbox is outside the normal feedback loop that is expected when working with a CLI

Solution/User Experience

Instead of sending results to an email inbox, format the results and send them to stdout for the user who is currently running eksupgrade
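
A minimal sketch of what emitting the results to stdout could look like (the report structure itself is illustrative):

# Sketch: print the upgrade report to stdout instead of emailing it; users can
# redirect or pipe the output as needed (e.g. `eksupgrade ... > report.json`).
import json
import sys


def emit_report(report: dict) -> None:
    json.dump(report, sys.stdout, indent=2, default=str)
    sys.stdout.write("\n")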

Alternative solutions

None; once the results are sent to stdout, users have the ability to use that data by redirecting the stream to a local file, etc.

Feature request: Allow user to continue or abort between running checks and performing upgrade

Use case

Users may or may not want to proceed with the upgrade depending on the results of the pre-flight checks. They should be afforded the opportunity to view those results and determine if they are ready to proceed with the upgrade or abort and make any additional changes prior to upgrading

Solution/User Experience

After a user runs:

eksupgrade <name> <version> <region>

The process should kick off starting with the pre-flight checks. Once the results have been collected and reported to stdout (see #30), the user should receive a prompt along the lines of Proceed with upgrade (Y/n)?. If the user hits Enter, Y, or y, the upgrade continues; otherwise the process is aborted.
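
A minimal sketch of such a prompt using a plain input() based confirmation (the project may prefer its CLI framework's own prompt helper):

# Sketch: prompt after the pre-flight results are printed; default to "yes" on a
# bare Enter, abort on anything other than y/yes.
def confirm_upgrade() -> bool:
    answer = input("Proceed with upgrade (Y/n)? ").strip().lower()
    return answer in ("", "y", "yes")


if not confirm_upgrade():
    raise SystemExit("Upgrade aborted by user.")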

Alternative solutions

Splitting out to separate commands such as:

eksupgrade --pre-flight <name> <version> <region> 
eksupgrade <name> <version> <region>

Package meta

The repository currently doesn't offer a mechanism for installing it as a Python module from PyPI, or the capability to invoke it as a proper Python script.

This could be accomplished by introducing a setup.py or pyproject.toml.
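
For example, a minimal setup.py along these lines would enable pip installation and a console entry point (the entry point module path and the pinned metadata are hypothetical):

# Sketch: minimal packaging metadata; the console_scripts target is a guess at
# where the CLI entry point would live, not the repository's actual layout.
from setuptools import find_packages, setup

setup(
    name="eksupgrade",
    version="0.1.0",
    packages=find_packages(),
    install_requires=["boto3", "kubernetes"],
    entry_points={"console_scripts": ["eksupgrade = eksupgrade.cli:main"]},
)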

Feature request: Data models

Use case

The current CLI is fairly specific and doesn't provide any layer of abstraction around the resources being interacted with.
We should create data structures around common resource types to be used across the CLI and Python module.

Solution/User Experience

Define data models for common resource types (e.g. Cluster and Cluster Addon), while providing abstract models with common/core utilities to be provided for use by current and future implementations.

Define an eksupgrade.models.* module and create specific data models (likely using dataclasses) to define the schemas.

This should also facilitate things like cluster auth method reuse (resolving #33).
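
A rough sketch of the kind of dataclasses this could introduce (field names are illustrative, not a final schema):

# Sketch: dataclass-based resource models under eksupgrade.models.
from dataclasses import dataclass, field
from typing import List


@dataclass
class ClusterAddon:
    name: str
    current_version: str
    target_version: str


@dataclass
class Cluster:
    name: str
    region: str
    current_version: str
    target_version: str
    addons: List[ClusterAddon] = field(default_factory=list)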

Alternative solutions

No response
