
antonputra / tutorials

2.2K 79.0 2.1K 17.15 MB

DevOps Tutorials

Home Page: https://youtube.com/antonputra

License: MIT License

HCL 61.23% Makefile 0.15% Dockerfile 1.46% Go 12.25% Python 3.78% JavaScript 6.69% TypeScript 0.66% HTML 7.12% CSS 0.28% Smarty 1.63% Shell 3.23% Io 0.02% Ruby 0.03% Rust 0.70% Java 0.75%
kubernetes terraform ansible devops sre packer aws gcp serverless

tutorials's People

Contributors

antonputra, aws-simple, bajix, git4example, hackcoderr, hellowin, riwajchalise2, shiningmosquito


tutorials's Issues

Letsencrypt WILDCARD Certificate - follow up questions

Thank you for the "How to Get Letsencrypt WILDCARD Certificate?" tutorial! I've got three questions; I hope you don't mind me asking here, because for unknown reasons YouTube keeps deleting my comments:

  • Why do we need the auth.devopsbyexample.io subdomain? What purpose does it serve? Does it need to be auth exactly? (See the example records after this list.)
  • From what I understand, you're reaching acme-dns only locally, from the same EC2 machine. What if I want to use it elsewhere? I've got several hosts that provide their own acme clients; is it safe to expose acme-dns to the internet?
  • You're still able to add any sub-domain from your Google panel, right? The presence of the acme and auth entries does not conflict with setting, e.g., an A record for home.devopsbyexample.io that would point to your home address?
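For context, the typical acme-dns record setup (a sketch based on the joohoi/acme-dns README; the placeholder values are assumptions, not necessarily the video's exact configuration) looks roughly like this:

  auth.devopsbyexample.io.              A      <public IP of the host running acme-dns>
  auth.devopsbyexample.io.              NS     auth.devopsbyexample.io.          ; delegate the auth zone to acme-dns itself
  _acme-challenge.devopsbyexample.io.   CNAME  <uuid>.auth.devopsbyexample.io.   ; value returned by acme-dns-client register

The auth name is arbitrary; it only has to match the zone acme-dns serves and be delegated to it with the NS record, and the CNAME is what lets acme-dns answer the DNS-01 challenge for the wildcard certificate.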

podAntiAffinity matchExpressions wrong Key

I noticed that the pods were allocated on the same node. After investigating, I found that the key for matchExpressions should be 'name' instead of 'app', because the name of the app is 'my-mongodb-svc', not 'my-mongodb'. I even tried with 'my-mongodb-svc' as the value for the 'app' key, but it didn't work. When I changed the key to 'name', it worked, because the ReplicaSet has the name 'my-mongodb'.

affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: app #This should be **name** instead of app
          operator: In
          values:
          - my-mongodb
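For comparison, a corrected block might look like this (a sketch; the name label and the topologyKey are assumptions, since required podAntiAffinity also needs a topologyKey such as kubernetes.io/hostname):

affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: name                          # the label that carries "my-mongodb" on the pods
          operator: In
          values:
          - my-mongodb
      topologyKey: kubernetes.io/hostname    # spread the matching pods across nodes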

Error to define assume_role_policy data

When I'm running terraform apply, I'm getting the following error:
"╷
│ Error: Incorrect attribute value type

│ on iam-controller.tf line 20, in resource "aws_iam_role" "aws_load_balancer_controller":
│ 20: assume_role_policy = data.aws_iam_policy_document.aws_load_balancer_controller_assume_role_policy
│ ├────────────────
│ │ data.aws_iam_policy_document.aws_load_balancer_controller_assume_role_policy is object with 9 attributes

│ Inappropriate value for attribute "assume_role_policy": string required.
"

I'm using:
aws = {
  source  = "hashicorp/aws"
  version = "~> 4.0"
}
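The error message itself hints at the fix: assume_role_policy expects a JSON string, while referencing the data source without an attribute yields the whole object. Pointing at its json attribute should work (a sketch built from the names in the error output; the role name here is hypothetical):

resource "aws_iam_role" "aws_load_balancer_controller" {
  name               = "aws-load-balancer-controller" # hypothetical name
  assume_role_policy = data.aws_iam_policy_document.aws_load_balancer_controller_assume_role_policy.json
}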

Horizontal Scaling not working

Auto-scaling is not working for me at the organization level; because of that, workflow jobs are not running in parallel and are taking more time.
In the horizontal-runner-autoscaler.yaml file I defined "organizationNames: MyOrg", and I even tried without it, but no luck.
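For comparison, a minimal organization-level autoscaler based on the actions-runner-controller docs might look roughly like this (a sketch; the resource names and thresholds are assumptions, and as far as I can tell the organization itself is set on the RunnerDeployment via spec.template.spec.organization rather than on the autoscaler):

apiVersion: actions.summerwind.dev/v1alpha1
kind: HorizontalRunnerAutoscaler
metadata:
  name: org-runner-autoscaler        # hypothetical name
spec:
  scaleTargetRef:
    name: org-runner-deployment      # hypothetical RunnerDeployment name
  minReplicas: 1
  maxReplicas: 6
  metrics:
  - type: PercentageRunnersBusy      # works at the org level without listing repositories
    scaleUpThreshold: '0.75'
    scaleDownThreshold: '0.25'
    scaleUpFactor: '2'
    scaleDownFactor: '0.5'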

Please let me know if I'm doing anything wrong here.

Regards,
Faheem

issue with lesson 125 eks cluster terraform in azure pipeline

We have been using this Terraform for almost 3 weeks in an Azure Pipeline to create the EKS cluster. The cluster creation was successful initially and everything was working fine. Since last week it has stopped working in the pipeline and started complaining with the error below; it looks like the token to connect to the Kubernetes cluster is lost. I tried a few suggestions from the internet, such as adding load_config_file = false under the kubernetes provider, but that didn't resolve the issue. Kindly help me understand this and unblock the issue.

On a side note, I'm able to access the kubernetes resources and AWS resources using the CLIs with the same credentials.

The error details are below.

2023-06-11T10:15:49Z ##[section]Starting: Terraform : plan
Task        : Terraform
Description : Execute terraform commands to manage resources on AzureRM, Amazon Web Services(AWS) and Google Cloud Platform(GCP)
Version     : 4.218.21
Author      : Microsoft Corporation
Help        : [Learn more about this task](https://aka.ms/AAf0uqr)

/opt/hostedtoolcache/terraform/1.4.6/x64/terraform providers

Providers required by configuration:
  provider[registry.terraform.io/hashicorp/helm]
  provider[registry.terraform.io/hashicorp/kubernetes]
  provider[registry.terraform.io/hashicorp/aws] ~> 4.0
  provider[registry.terraform.io/gavinbunney/kubectl] >= 1.14.0
  (the IAM, VPC and EKS modules additionally require hashicorp/aws, hashicorp/tls, hashicorp/kubernetes and hashicorp/cloudinit)

Providers required by state:
  provider[registry.terraform.io/hashicorp/tls]
  provider[registry.terraform.io/gavinbunney/kubectl]
  provider[registry.terraform.io/hashicorp/aws]
  provider[registry.terraform.io/hashicorp/helm]
  provider[registry.terraform.io/hashicorp/kubernetes]

/opt/hostedtoolcache/terraform/1.4.6/x64/terraform plan -var region=*** -var access_key=*** -var secret_key=*** -out demo.tfplan -detailed-exitcode

[... state refresh output for the VPC, IAM, and EKS resources omitted ...]

Planning failed. Terraform encountered an error while generating this plan.

Error: failed to create kubernetes rest client for read of resource: Get "https://25E8623403867E44DE6514ACA37E6193.gr7.***.eks.amazonaws.com/api?timeout=32s": getting credentials: exec: executable aws failed with exit code 255

Error: Invalid configuration for API client
  with kubernetes_manifest.service_account,
  on autoscaler-manifest.tf line 13, in resource "kubernetes_manifest" "service_account":
  Get "https://25E8623403867E44DE6514ACA37E6193.gr7.***.eks.amazonaws.com/apis": getting credentials: exec: executable aws failed with exit code 255

The same "getting credentials: exec: executable aws failed with exit code 255" failure is then repeated for kubectl_manifest.role (autoscaler-manifest.tf line 29), kubectl_manifest.role_binding (line 50), kubectl_manifest.cluster_role (line 71), kubectl_manifest.cluster_role_binding (line 131), and kubectl_manifest.deployment (line 151); helm_release.aws_load_balancer_controller (helm-load-balancer-controller.tf line 17) fails with "Kubernetes cluster unreachable", and module.eks.kubernetes_config_map_v1_data.aws_auth[0] (.terraform/modules/eks/main.tf line 470) fails reading the aws-auth ConfigMap with the same credentials error.

2023-06-11T10:16:11Z ##[warning]Can't find loc string for key: TerraformPlanFailed
2023-06-11T10:16:11Z ##[error]Error: TerraformPlanFailed 1
2023-06-11T10:16:11Z ##[section]Finishing: Terraform : plan
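Since every failure is the aws executable exiting with code 255 while fetching a token, one thing worth checking is whether the exec-based auth in the kubernetes/helm/kubectl provider blocks can actually see AWS credentials on the build agent. A hedged sketch of that kind of configuration (the variable names are assumptions, not the lesson's exact code):

provider "kubernetes" {
  host                   = data.aws_eks_cluster.default.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.default.certificate_authority[0].data)

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", data.aws_eks_cluster.default.name]
    # The aws CLI launched here does not see the -var access_key/secret_key values;
    # it needs credentials from env vars, a profile, or an instance role on the agent.
    env = {
      AWS_ACCESS_KEY_ID     = var.access_key
      AWS_SECRET_ACCESS_KEY = var.secret_key
    }
  }
}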

error on acme-dns-client register

When executing the following command, I'm getting an error:

acme-dns-client register -d domain.com -s http://localhost:8080

This is the error I'm getting:

[!] Caught an error while trying to query for CNAME record: read udp <local ip>:33520->46.101.179.64:53: i/o timeout

I must have changed something, because before your instructions worked fine. Hope you can point me in the right direction.
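The timeout suggests the CNAME lookup to the DNS server at 46.101.179.64 never got an answer (port 53 blocked, or the server not responding). A couple of hedged checks from the same machine (domain.com stands in for the real domain):

dig CNAME _acme-challenge.domain.com                   # should return the <something>.auth.<domain> target
dig CNAME _acme-challenge.domain.com @46.101.179.64    # query the exact server the client timed out on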

NodeGroup fails to create through terraform

Great Tutorials. On Native EKS Ingress: AWS Load Balancer Controller -112, I get the following error when deploying the terraform from Lesson 112:

Error: error waiting for EKS Node Group (eks-hackathon:private-nodes) to create: unexpected state 'CREATE_FAILED', wanted target 'ACTIVE'. last error: 1 error occurred:
* i-0399217097e46e322, i-09b603f126c569571: NodeCreationFailure: Instances failed to join the kubernetes cluster

I can verify in the AWS console that the NodeGroup is not healthy. Running the AWSSupport-TroubleshootEKSWorkerNode tool, everything comes back success.

Any ideas?

HPA unable to find metrics, 404

I followed the steps in the tutorial and configured the HPA to query http_requests_per_second.
The HPA shows:


Warning FailedGetPodsMetric 33s (x1936 over 18h) horizontal-pod-autoscaler unable to get metric http_requests_per_second: unable to fetch metrics from custom metrics API: the server could not find the metric http_requests_per_second for pods

Prometheus adapter logs shows:

I0721 23:40:12.332805 1 httplog.go:89] "HTTP" verb="GET" URI="/apis/custom.metrics.k8s.io/v1beta1/namespaces/demo/pods/%2A/http_requests_per_second?labelSelector=app%3Dexpress" latency="21.786184ms" userAgent="kube-controller-manager/v1.24.0 (linux/amd64) kubernetes/4ce5a89/system:serviceaccount:kube-system:horizontal-pod-autoscaler" srcIP="10.244.0.1:64743" resp=404

Can you help me ASAP? Thank you.
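A 404 from the custom metrics API usually means prometheus-adapter has no rule that exposes http_requests_per_second. A hedged sketch of the kind of rule (as values for the prometheus-adapter Helm chart) that derives it from an http_requests_total counter; the metric and label names are assumptions about the demo app:

rules:
  custom:
  - seriesQuery: 'http_requests_total{namespace!="",pod!=""}'
    resources:
      overrides:
        namespace: {resource: "namespace"}
        pod: {resource: "pod"}
    name:
      matches: "^(.*)_total$"
      as: "${1}_per_second"
    metricsQuery: 'sum(rate(<<.Series>>{<<.LabelMatchers>>}[2m])) by (<<.GroupBy>>)'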

Tutorial 102 Doesn't Work with Terraform AWS Provider 5+

Terraform v1.5.1
on darwin_arm64
+ provider registry.terraform.io/hashicorp/aws v5.5.0
+ provider registry.terraform.io/hashicorp/tls v4.0.4

Terraform doesn't create the proper security group for the nodes; as a result, the nodes can't join the cluster.
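Until the lesson is updated for provider v5, pinning the AWS provider back to the 4.x series (which the lesson was presumably written against) should be a workable stopgap; a minimal sketch:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"   # stay on the 4.x series until the lesson supports v5
    }
  }
}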

tutorials/lesson 072 - kube-prometheus-stack hostNetwork

First of all thank you for the great tutorial.

Inspecting the default values I've found this comment:

$ helm show values prometheus-community/kube-prometheus-stack | fgrep -A3 "AWS EKS"
  # Required for use in managed kubernetes clusters (such as AWS EKS) with custom CNI (such as calico),
  # because control-plane managed by AWS cannot communicate with pods' IP CIDR and admission webhooks are not working
  ##
  hostNetwork: false

In a discussion about this topic on k8s slack I got this response:

[...] the issue with EKS and a custom CNI is that the pod serving the webhook operation is effectively in a different network than the control plane. The control plane operates within the network served by the VPC. The pod, however, instead of being in the VPC network (as it would be with the default CNI), is served in your custom plugin network (which has its own addresses and routing). The control plane does not know how to route a request to the webhook container.
As for what will not work if this is not enabled: validating and mutating webhooks. You can find the particular pods handling those by describing the webhook and the svc it points to. Without hostNetwork, those will not get requests from the control plane.
~ ref

However I still don't quite understand the implications of the above mentioned facts.

  1. Is this irrelevant for your scenario, since you've disabled monitoring of the managed components (such as etcd)?
  2. If it is relevant, can you please do a short follow-up video elaborating on this?
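For what it's worth, on EKS with a custom CNI the override appears to be a single values change for the operator (a sketch; whether it matters in the lesson depends on whether the chart's admission webhooks are actually used there):

prometheusOperator:
  hostNetwork: true   # run the operator and its admission webhook in the node's network so the control plane can reach it

Applied with something like helm upgrade <release> prometheus-community/kube-prometheus-stack -f values.yaml.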

tutorials/lesson 069 - using google_project_iam_binding in 6-shared-vpc.tf

Hi Anton,

First of all, thanks for your great videos on YouTube and for sharing the code examples here.
This really helped us out in setting up a shared-VPC-based GKE cluster in our org.

I only have one short remark regarding the usage of google_project_iam_binding in 6-shared-vpc.tf.
Using this resource works fine as long as you only have one GKE cluster in one service project.
But as soon as you have a second GKE cluster in another service project, which of course uses the same host project, your permissions on the host project will be reset when doing a for_each with google_project_iam_binding, so only one SA will be visible; see hashicorp/terraform-provider-google#5760 for details.
Thus, we had to rebuild your code example and used google_project_iam_member instead.
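For anyone hitting the same thing, the non-authoritative variant we ended up with looks roughly like this (a sketch; the variables and the exact role are placeholders rather than the lesson's code):

resource "google_project_iam_member" "gke_host_service_agent" {
  project = var.host_project_id   # hypothetical variable
  role    = "roles/container.hostServiceAgentUser"
  member  = "serviceAccount:service-${var.service_project_number}@container-engine-robot.iam.gserviceaccount.com"
}

Unlike google_project_iam_binding, google_project_iam_member only adds the single member, so bindings created for other service projects are not overwritten.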

Maybe you could rebuild your code example.

Best Regards,
Tim

Error in implementing the AWS EKS & Secrets Manager

I am trying to use this URL for the implementation:

https://github.com/antonputra/tutorials/tree/main/lessons/079

If we use a secret from AWS Secrets Manager as an env variable, we need to implement the CRD secretproviderclasses.secrets-store.csi instead of using the Helm chart.

I'm facing an issue:

ubuntu@****:~/antonputra/079/secrets-store-csi-driver$ kubectl apply -f 1-secretproviderclasspodstatuses-crd.yaml
The CustomResourceDefinition "secretproviderclasspodstatuses.secrets-store.csi.x-k8s.io" is invalid: status.storedVersions[0]: Invalid value: "v1": must appear in spec.versions

Failure to deploy MongoDB on EKS

When trying to deploy MongoDB on the EKS cluster, I'm getting the following errors:

Warning FailedCreate 39s (x12 over 49s) statefulset-controller create Pod mongodb-0 in StatefulSet mongodb failed error: failed to create PVC datadir-mongodb-0: persistentvolumeclaims "datadir-mongodb-0" is forbidden: Internal error occurred: 2 default StorageClasses were found
Warning FailedCreate 28s (x13 over 49s) statefulset-controller create Claim datadir-mongodb-0 for Pod mongodb-0 in StatefulSet mongodb failed error: persistentvolumeclaims "datadir-mongodb-0" is forbidden: Internal error occurred: 2 default StorageClasses were found

Should I try to create the storage class and PVC manually?
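Before creating the StorageClass and PVC manually, it may be enough to unmark one of the two default StorageClasses; a hedged example (gp2 is an assumption, so check which classes are marked default first):

kubectl get storageclass
kubectl patch storageclass gp2 -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'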

Tut 130: Is there an alternative to use aws_iam_openid_connect_provider?

I have the following error because my policies do not allow me to use OpenIDConnectProvider.
Is there an alternative?

Error: creating IAM OIDC Provider: AccessDenied: User: arn:aws:sts::XXXXXX:assumed-role/AWSReservedSSO_PowerUserAccess_7cXXXX0/[email protected] is not authorized to perform: iam:CreateOpenIDConnectProvider on resource: arn:aws:iam::XXXXXXXX:oidc-provider/oidc.eks.eu-central-1.amazonaws.com/id/XXXXX because no identity-based policy allows the iam:CreateOpenIDConnectProvider action
│ status code: 403, request id: XXXXX

│ with aws_iam_openid_connect_provider.eks,
│ on 8-iam-oidc.tf line 5, in resource "aws_iam_openid_connect_provider" "eks":
│ 5: resource "aws_iam_openid_connect_provider" "eks" {

Certbot SSL with Cloudflare as DNS

I followed your tutorial https://github.com/antonputra/tutorials/tree/main/lessons/078

Great one, but I used Cloudflare as my DNS and ran into an issue that caused a redirection loop.

This got resolved by enabling Full (strict) under SSL/TLS > Overview.

Can I create a pull request to make this tutorial page better and address the issue, since Cloudflare is a widely used DNS provider?

Here is a reference for you https://serverfault.com/questions/1109368/cretbot-ssl-certificate-not-working-properly/1109370#1109370

issue with cfssl

Hey, I used your video tutorial but got stuck on the first part because of this error: -bash: /usr/local/bin/cfssl: cannot execute binary file: Exec format error. Can you help me?
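"Exec format error" usually means the binary was built for a different CPU architecture than the machine it runs on (for example an amd64 build on an arm64 host). Two quick checks:

uname -m                    # the machine's architecture, e.g. x86_64 or aarch64
file /usr/local/bin/cfssl   # shows which architecture the downloaded binary targets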

112 - Error Deploying ALB ingress

Great Tutorials. On Native EKS Ingress: AWS Load Balancer Controller -112, I get the following error when deploying the ingress:

Error from server (InternalError): error when creating "deployments/canary/ingress.yaml": Internal error occurred: failed calling webhook "vingress.elbv2.k8s.aws": Post "https://aws-load-balancer-webhook-service.kube-system.svc:443/validate-networking-v1-ingress?timeout=10s": context deadline exceeded

Any idea why this might be happening?
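A common cause of that timeout is the EKS control plane not being able to reach the controller's webhook on port 9443 on the nodes. A hedged Terraform sketch of the kind of rule that opens it (the security group references are assumptions, not necessarily the lesson's resource names):

resource "aws_security_group_rule" "allow_lb_controller_webhook" {
  description              = "Allow the EKS control plane to reach the AWS Load Balancer Controller webhook"
  type                     = "ingress"
  from_port                = 9443
  to_port                  = 9443
  protocol                 = "tcp"
  security_group_id        = module.eks.node_security_group_id    # hypothetical node SG reference
  source_security_group_id = module.eks.cluster_security_group_id # hypothetical cluster SG reference
}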

Is it possible to send an alert from AMP to Microsoft Teams with your lambda code?

Hi, is it possible to send an alert from Amazon Prometheus to Microsoft Teams with your Lambda code? I used your Slack configuration, but I set the Teams URL instead of the Slack URL. When I try to send your example message in the Lambda, I receive an error. What could be the reason for the error? Or maybe I need to change the payload message or the code in the Lambda.

lesson 155 - istio - image error

Hello,

First, thank you for your lessons; they are great.

In ex1, once the image is pulled OK, I describe the pods because their status is "Error":
Back-off restarting failed container first-app in pod first-app-v1-75f898c6bb-xrxjw_staging(266bde88-bce9-4e59-ae2d-df1ce4fa3ca2)

image: aputra/myapp-lesson155:latest

It also happens with the second app.
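The logs of the crashed container usually say more than the event message; a hedged check using the pod name from the message above:

kubectl -n staging logs first-app-v1-75f898c6bb-xrxjw -c first-app --previous
kubectl -n staging describe pod first-app-v1-75f898c6bb-xrxjw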

Architecture diagram for lesson 125

Hi @antonputra

I found lesson 125 by searching for the "terraform eks" keyword on Google.
It's a good tutorial for beginners studying how to use Terraform to build an EKS cluster.

I made a HIGH-level AWS architecture diagram for lesson 125.
Are you happy for me to create a PR to add this diagram to the lesson 125 folder?


"lcoals" vs "local" - just need more information around it.

Thanks for making it very easy for me to learn how to deploy AKS using Terraform and how to use workload identity.
I am quite new to using Terraform. I have one small doubt regarding the 0-locals.tf file, where you defined a "locals" block, but in other TF files, e.g. 2-resource-group, you reference it as "local", not "locals". Can you please let me know why you have not used "locals" instead of "local"?
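That is simply how the Terraform language works: values are declared inside a locals block (plural), but each value is referenced with the singular local. prefix. A small illustration (the value names here are made up, not the lesson's exact ones):

# 0-locals.tf
locals {
  resource_group_name = "rg-demo"
  location            = "westeurope"
}

# 2-resource-group.tf
resource "azurerm_resource_group" "this" {
  name     = local.resource_group_name   # referenced with the singular "local."
  location = local.location
}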

service "ingress-nginx-controller-admission" not found

Hi
Doing a kubectl apply results in this error:
namespace/staging unchanged
deployment.apps/foo created
service/foo created
deployment.apps/bar created
service/bar created
Error from server (InternalError): error when creating "5-simple-fanout-ing.yaml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": failed to call webhook: Post "https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1/ingresses?timeout=10s": service "ingress-nginx-controller-admission" not found
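That error usually means the ingress-nginx controller (and its admission service) is not installed in the cluster, or a leftover ValidatingWebhookConfiguration points at a service that no longer exists. Some hedged checks (the names assume the stock ingress-nginx install):

kubectl get pods,svc -n ingress-nginx
kubectl get validatingwebhookconfiguration ingress-nginx-admission
# only if the controller is really gone and the webhook config is stale:
kubectl delete validatingwebhookconfiguration ingress-nginx-admission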

Value defined in disk_size not being reflected

Hi @antonputra, I am trying to set up an EKS cluster by following your tutorial on how to create an EKS cluster using the Terraform module. I set up the cluster and defined the disk_size value as 50; however, only a 20 GB EBS volume is being created and mounted to the root directory. I suppose it should be 50 GB. I am using the t3.2xlarge instance type.
It would be great if you could provide some guidance on what's happening or what I am doing wrong.
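If this is the terraform-aws-eks module, disk_size is (as far as I know) only honored when the node group uses the EKS-provided launch template; with the module's own launch template the root volume has to be set through block_device_mappings instead. A hedged sketch inside an eks_managed_node_groups entry (the device name and volume type are assumptions):

eks_managed_node_groups = {
  general = {
    instance_types = ["t3.2xlarge"]

    block_device_mappings = {
      xvda = {
        device_name = "/dev/xvda"
        ebs = {
          volume_size = 50
          volume_type = "gp3"
        }
      }
    }
  }
}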

running dig *.mydomain.com returns error

After following the whole process and getting to the spot where I should run dig *.mydomain.com, it returns an error. What do you think could have caused it, and how do you think I can solve it?

issue with metrics-server from 113 lesson

Error: could not download chart: no cached repo found. (try 'helm repo update'): open D:\Users\patha\AppData\Local\Temp\helm\repository\bitnami-index.yaml: The system cannot find the file specified.

│ with helm_release.metrics-server,
│ on 9-metrics-server.tf line 13, in resource "helm_release" "metrics-server":
│ 13: resource "helm_release" "metrics-server" {
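The message points at Helm's local repository cache on the machine running Terraform having no entry for the chart's repo; refreshing it manually often clears this up (a hedged workaround, assuming the metrics-server chart comes from the Bitnami repo, as the cached file name suggests):

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update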

k8s-single-runner pod is not dying after build.

Hello,

I have an issue with k8s-single-runner: this pod is not dying after the build is done.
The docker:dind container is in a running state after summerwind/actions-runner:latest is in a completed state.

logs:

$ kubectl -n actions logs k8s-single-runner
Defaulted container "runner" out of: runner, docker
2022-12-05 21:26:54.856  NOTICE --- Runner init started with pid 7
2022-12-05 21:26:54.859  DEBUG --- Github endpoint URL https://github.com/
2022-12-05 21:26:55.387  DEBUG --- Passing --ephemeral to config.sh to enable the ephemeral runner.
2022-12-05 21:26:55.390  DEBUG --- Configuring the runner.

--------------------------------------------------------------------------------
|        ____ _ _   _   _       _          _        _   _                      |
|       / ___(_) |_| | | |_   _| |__      / \   ___| |_(_) ___  _ __  ___      |
|      | |  _| | __| |_| | | | | '_ \    / _ \ / __| __| |/ _ \| '_ \/ __|     |
|      | |_| | | |_|  _  | |_| | |_) |  / ___ \ (__| |_| | (_) | | | \__ \     |
|       \____|_|\__|_| |_|\__,_|_.__/  /_/   \_\___|\__|_|\___/|_| |_|___/     |
|                                                                              |
|                       Self-hosted runner registration                        |
|                                                                              |
--------------------------------------------------------------------------------

# Authentication


√ Connected to GitHub

# Runner Registration




√ Runner successfully added
√ Runner connection is good

# Runner settings


√ Settings Saved.

2022-12-05 21:27:01.147  DEBUG --- Runner successfully configured.
{
  "agentId": 29,
  "agentName": "k8s-single-runner",
  "poolId": 1,
  "poolName": "Default",
  "ephemeral": true,
  "serverUrl": "https://pipelines.actions.githubusercontent.com/UKWAqnXsFDNFiTtCJkPNKVIQivszaSHTNc5B1ZJ6o5NTCgjJTZ",
  "gitHubUrl": "https://github.com/ServicePattern/ops-action-demo",
  "workFolder": "/runner/_work"
2022-12-05 21:27:01.151  DEBUG --- Docker enabled runner detected and Docker daemon wait is enabled
2022-12-05 21:27:01.153  DEBUG --- Waiting until Docker is available or the timeout of 120 seconds is reached
unable to resolve docker endpoint: open /certs/client/ca.pem: no such file or directory
unable to resolve docker endpoint: open /certs/client/ca.pem: no such file or directory
unable to resolve docker endpoint: open /certs/client/ca.pem: no such file or directory
}CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
2022-12-05 21:27:04.247  NOTICE --- WARNING LATEST TAG HAS BEEN DEPRECATED. SEE GITHUB ISSUE FOR DETAILS:
2022-12-05 21:27:04.248  NOTICE --- https://github.com/actions-runner-controller/actions-runner-controller/issues/2056

√ Connected to GitHub

Current runner version: '2.299.1'
2022-12-05 21:27:06Z: Listening for Jobs
2022-12-05 21:47:49Z: Running job: Build
2022-12-05 21:47:56Z: Job Build completed with result: Succeeded
√ Removed .credentials
√ Removed .runner
Runner listener exit with 0 return code, stop the service, no retry needed.
Exiting runner...
2022-12-05 21:47:56.781  NOTICE --- Runner init exited. Exiting this process with code 0 so that the container and the pod is GC'ed Kubernetes soon.

logs of docker:dind

Certificate request self-signature ok
subject=CN = docker:dind server
/certs/server/cert.pem: OK
Certificate request self-signature ok
subject=CN = docker:dind client
 /certs/client/cert.pem: OK
time="2022-12-05T21:27:03.749858432Z" level=info msg="Starting up"
time="2022-12-05T21:27:03.751156084Z" level=warning msg="could not change group /var/run/docker.sock to docker: group docker not found"
time="2022-12-05T21:27:03.751919241Z" level=info msg="libcontainerd: started new containerd process" pid=469
time="2022-12-05T21:27:03.751959339Z" level=info msg="parsed scheme: \"unix\"" module=grpc
time="2022-12-05T21:27:03.751972834Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
time="2022-12-05T21:27:03.751997480Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
time="2022-12-05T21:27:03.752012666Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
time="2022-12-05T21:27:03Z" level=warning msg="containerd config version `1` has been deprecated and will be removed in containerd v2.0, please switch to version `2`, see https://github.com/contai
time="2022-12-05T21:27:03.765967976Z" level=info msg="starting containerd" revision=1c90a442489720eec95342e1789ee8a5e1b9536f version=v1.6.9
time="2022-12-05T21:27:03.778421099Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
time="2022-12-05T21:27:03.778562545Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
time="2022-12-05T21:27:03.783041127Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"ip: can't fi
time="2022-12-05T21:27:03.783070076Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
time="2022-12-05T21:27:03.783272695Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrf
time="2022-12-05T21:27:03.783293712Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
time="2022-12-05T21:27:03.783325585Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
time="2022-12-05T21:27:03.783342884Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
time="2022-12-05T21:27:03.783449989Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
time="2022-12-05T21:27:03.783703306Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
time="2022-12-05T21:27:03.783830080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs mu
time="2022-12-05T21:27:03.783857338Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
time="2022-12-05T21:27:03.783914686Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
time="2022-12-05T21:27:03.783940215Z" level=info msg="metadata content store policy set" policy=shared
Stream closed EOF for actions/k8s-single-runner (docker)
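For reference, the "unable to resolve docker endpoint: open /certs/client/ca.pem" lines are just the runner polling Docker before the dind sidecar has finished generating its TLS certificates; the logs above show this resolves on its own. The lingering container is the docker:dind sidecar that actions-runner-controller injects next to the runner. One workaround sometimes used is to run dockerd inside the runner container itself, so there is no sidecar left once the ephemeral runner exits. A minimal sketch, written with the Terraform kubernetes provider to stay consistent with the rest of the repo (the field names follow the summerwind actions-runner-controller Runner CRD; the repository, namespace, and resource names here are assumptions, and the same fields apply to a plain YAML manifest):

resource "kubernetes_manifest" "single_runner" {
  manifest = {
    apiVersion = "actions.summerwind.dev/v1alpha1"
    kind       = "Runner"
    metadata = {
      name      = "k8s-single-runner"
      namespace = "actions"
    }
    spec = {
      repository = "my-org/my-repo" # assumed; use your own repository
      ephemeral  = true

      # Run dockerd inside the runner container instead of a docker:dind
      # sidecar, so the pod has a single container that exits with the runner.
      dockerdWithinRunnerContainer = true
      image                        = "summerwind/actions-runner-dind:latest"
    }
  }
}

This is a sketch, not a confirmed fix for the garbage-collection behaviour above; whether the pod is cleaned up also depends on the controller version in use.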

tutorial 118 without custom domain

Hi, thank you so much for such a great tutorial.
I followed example 2 step by step, using the AWS console as well as Terraform. However, when I call my API with curl -i https://x1rjhowt93.execute-api.us-east-1.amazonaws.com/staging/health, I get the following response.

HTTP/2 404 
date: Wed, 19 Jul 2023 16:54:14 GMT
content-type: text/html; charset=utf-8
content-length: 153
x-powered-by: Express
content-security-policy: default-src 'none'
x-content-type-options: nosniff
apigw-requestid: IUhAiierIAMEJfQ=

<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Error</title>
</head>
<body>
<pre>Cannot GET /staging/health</pre>
</body>
</html>

I've SSH'd into the instance using Session Manager; my-app is running and the instance is healthy.
I've been looking at various sources to debug the issue for almost a day, but I haven't found a solution.
To reproduce, I followed all the steps in example-2 up to creating the certificate and adding the custom domain. Using Terraform, I removed the .tf files related to the certificate and custom domain and ran terraform apply.

ls
10-ec2-example-2.tf  12-api-gw-example-2.tf  2-igw.tf      4-nat.tf     9-sg-example-2.tf  main.tf
11-nlb-example-2.tf  1-vpc.tf                3-subnets.tf  5-routes.tf
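For reference, the 404 body above shows that Express received the path /staging/health, i.e. the stage name is being forwarded to the backend as part of the path, while the app only serves /health. One approach sometimes used with HTTP APIs is the special $default stage, which keeps the stage name out of the forwarded path. A minimal sketch (resource names here are assumptions, not the tutorial's exact ones):

resource "aws_apigatewayv2_stage" "default" {
  api_id      = aws_apigatewayv2_api.main.id
  name        = "$default"
  auto_deploy = true
}

resource "aws_apigatewayv2_route" "health" {
  api_id    = aws_apigatewayv2_api.main.id
  route_key = "GET /health"
  target    = "integrations/${aws_apigatewayv2_integration.backend.id}"
}

With the $default stage the invoke URL becomes https://<api-id>.execute-api.us-east-1.amazonaws.com/health. Another option is to keep the staging stage and have the backend serve /staging/health as well.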

Thank you very much for the help

Lesson 102: Autoscaler in 'Not Ready' State for AWS EKS 1.25

Hi Anton,

I have created the AWS EKS resources based on this, and the resulting cluster has version 1.25.

I'm also able to apply all files from this directory after updating the role ARN, the container image (registry.k8s.io/autoscaling/cluster-autoscaler:v1.25.1), and the cluster name.

But when I run kubectl get all -A -n kube-system, I get:

NAMESPACE     NAME                           READY   STATUS    RESTARTS   AGE
default       pod/nginx-7f85bb5c99-77kdl     1/1     Running   0          153m
kube-system   pod/aws-node-drj5z             1/1     Running   0          21h
kube-system   pod/coredns-7975d6fb9b-b9nrw   1/1     Running   0          21h
kube-system   pod/coredns-7975d6fb9b-qdk4r   1/1     Running   0          21h
kube-system   pod/kube-proxy-86tw4           1/1     Running   0          21h

NAMESPACE     NAME                 TYPE           CLUSTER-IP       EXTERNAL-IP                                                                     PORT(S)         AGE
default       service/kubernetes   ClusterIP      172.20.0.1       <none>                                                                          443/TCP         21h
default       service/private-lb   LoadBalancer   1.2.3.4          long-text.elb.us-east-1.amazonaws.com   80:30528/TCP    137m
default       service/public-lb    LoadBalancer   1.2.3.4          long-text.elb.us-east-1.amazonaws.com   80:32750/TCP    137m
kube-system   service/kube-dns     ClusterIP      172.20.0.10      <none>                                                                          53/UDP,53/TCP   21h

NAMESPACE     NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
kube-system   daemonset.apps/aws-node     1         1         1       1            1           <none>          21h
kube-system   daemonset.apps/kube-proxy   1         1         1       1            1           <none>          21h

NAMESPACE     NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
default       deployment.apps/nginx                1/1     1            1           153m
kube-system   deployment.apps/cluster-autoscaler   0/1     0            0           8m58s
kube-system   deployment.apps/coredns              2/2     2            2           21h

NAMESPACE     NAME                                            DESIRED   CURRENT   READY   AGE
default       replicaset.apps/nginx-7f85bb5c99                1         1         1       153m
kube-system   replicaset.apps/cluster-autoscaler-6cf6d855c5   1         0         0       8m58s
kube-system   replicaset.apps/coredns-7975d6fb9b              2         2         2       21h

And when I try to tail the logs with kubectl -n kube-system logs -f deployment.apps/cluster-autoscaler, it times out with error: timed out waiting for the condition.

How can I fix this error?

Grafana Latency dashboard not working

Hello,
I've been following your tutorial on how to set up an NGINX-Prometheus-Grafana monitoring system.

Everything works well until it's time to import the dashboard via the JSON file.

It throws this error:

(screenshot of the dashboard import error)

Thank you

Tutorial 102 Doesn't Work with Latest Terraform AWS Provider

Using the latest AWS provider, terraform apply fails with the errors below after 25 minutes, during the creation of the private worker nodes:

╷
│ Warning: Argument is deprecated
│ 
│   with aws_route_table.private,
│   on network.tf line 97, in resource "aws_route_table" "private":
│   97:   route = [
│   98:     {
│   99:       cidr_block                 = "0.0.0.0/0"
│  100:       nat_gateway_id             = aws_nat_gateway.devops-research.id
│  101:       carrier_gateway_id         = ""
│  102:       destination_prefix_list_id = ""
│  103:       egress_only_gateway_id     = ""
│  104:       gateway_id                 = ""
│  105:       instance_id                = ""
│  106:       ipv6_cidr_block            = ""
│  107:       local_gateway_id           = ""
│  108:       network_interface_id       = ""
│  109:       transit_gateway_id         = ""
│  110:       vpc_endpoint_id            = ""
│  111:       vpc_peering_connection_id  = ""
│  112:     },
│  113:   ]
│ 
│ Use network_interface_id instead
│ 
│ (and one more similar warning elsewhere)
╵
╷
│ Error: "" is not a valid CIDR block: invalid CIDR address: 
│ 
│   with aws_route_table.private,
│   on network.tf line 97, in resource "aws_route_table" "private":
│   97:   route = [
│   98:     {
│   99:       cidr_block                 = "0.0.0.0/0"
│  100:       nat_gateway_id             = aws_nat_gateway.devops-research.id
│  101:       carrier_gateway_id         = ""
│  102:       destination_prefix_list_id = ""
│  103:       egress_only_gateway_id     = ""
│  104:       gateway_id                 = ""
│  105:       instance_id                = ""
│  106:       ipv6_cidr_block            = ""
│  107:       local_gateway_id           = ""
│  108:       network_interface_id       = ""
│  109:       transit_gateway_id         = ""
│  110:       vpc_endpoint_id            = ""
│  111:       vpc_peering_connection_id  = ""
│  112:     },
│  113:   ]
│ 
╵
╷
│ Error: "" is not a valid CIDR block: invalid CIDR address: 
│ 
│   with aws_route_table.public,
│   on network.tf line 123, in resource "aws_route_table" "public":
│  123:   route = [
│  124:     {
│  125:       cidr_block                 = "0.0.0.0/0"
│  126:       gateway_id                 = aws_internet_gateway.devops-research.id
│  127:       nat_gateway_id             = ""
│  128:       carrier_gateway_id         = ""
│  129:       destination_prefix_list_id = ""
│  130:       egress_only_gateway_id     = ""
│  131:       instance_id                = ""
│  132:       ipv6_cidr_block            = ""
│  133:       local_gateway_id           = ""
│  134:       network_interface_id       = ""
│  135:       transit_gateway_id         = ""
│  136:       vpc_endpoint_id            = ""
│  137:       vpc_peering_connection_id  = ""
│  138:     },
│  139:   ]
│ 
╵
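For reference, the failure appears to come from the inline route blocks: newer versions of the AWS provider validate every attribute in the route list, so the placeholder empty strings ("" for ipv6_cidr_block, gateway IDs, and so on) are rejected as invalid values. A minimal sketch of the same routes written as standalone aws_route resources, which only need the attributes that are actually used (the VPC reference is an assumption; the gateway references match the config above):

resource "aws_route_table" "private" {
  vpc_id = aws_vpc.devops-research.id # assumed VPC resource name
}

resource "aws_route" "private_default" {
  route_table_id         = aws_route_table.private.id
  destination_cidr_block = "0.0.0.0/0"
  nat_gateway_id         = aws_nat_gateway.devops-research.id
}

resource "aws_route_table" "public" {
  vpc_id = aws_vpc.devops-research.id
}

resource "aws_route" "public_default" {
  route_table_id         = aws_route_table.public.id
  destination_cidr_block = "0.0.0.0/0"
  gateway_id             = aws_internet_gateway.devops-research.id
}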
