aws-samples / hardeneks

Runs checks to see if an EKS cluster follows EKS Best Practices.

Home Page: https://aws-samples.github.io/hardeneks/

License: MIT No Attribution

Python 98.20% Shell 1.80%
aws best-practices eks k8s

hardeneks's Introduction

Hardeneks


Runs checks to see if an EKS cluster follows EKS Best Practices.

Quick Start:

python3 -m venv /tmp/.venv
source /tmp/.venv/bin/activate
pip install hardeneks
hardeneks


Usage:

hardeneks [OPTIONS]

Options:

  • --region TEXT: AWS region of the cluster. Ex: us-east-1
  • --context TEXT: K8s context
  • --cluster TEXT: EKS Cluster name
  • --namespace TEXT: Namespace to be checked (default is all namespaces)
  • --config TEXT: Path to a hardeneks config file
  • --export-txt TEXT: Export the report in txt format
  • --export-html TEXT: Export the report in html format
  • --export-json TEXT: Export the report in json format
  • --insecure-skip-tls-verify: Skip TLS verification
  • --width: Width of the output (defaults to terminal size)
  • --height: Height of the output (defaults to terminal size)
  • --help: Show this message and exit.
  • K8S_CONTEXT

    You can get the contexts by running:

    kubectl config get-contexts
    

    or get the current context by running:

    kubectl config current-context
    
  • CLUSTER_NAME

    You can get the cluster names by running:

    aws eks list-clusters --region us-east-1
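
For example, a run that scans a single namespace and writes an HTML report could look like the following (the context, cluster name, namespace, and output path are placeholders):

hardeneks \
  --region us-east-1 \
  --context my-context \
  --cluster my-cluster \
  --namespace default \
  --export-html /tmp/hardeneks-report.html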
    

Configuration File:

The default behavior is to run all the checks. If you want to provide your own config file to specify the list of rules to run, use the --config flag. You can also add namespaces to be skipped.

Following is a sample config file:

---
ignore-namespaces:
  - kube-node-lease
  - kube-public
  - kube-system
  - kube-apiserver
  - karpenter
  - kubecost
  - external-dns
  - argocd
  - aws-for-fluent-bit
  - amazon-cloudwatch
  - vpa
rules: 
  cluster_wide:
    security:
      iam:
        - disable_anonymous_access_for_cluster_roles
        - check_endpoint_public_access
        - check_aws_node_daemonset_service_account
        - check_access_to_instance_profile
        - restrict_wildcard_for_cluster_roles
      multi_tenancy:
        - ensure_namespace_quotas_exist
      detective_controls:
        - check_logs_are_enabled
      network_security:
        - check_vpc_flow_logs
        - check_awspca_exists
        - check_default_deny_policy_exists
      encryption_secrets:
        - use_encryption_with_ebs
        - use_encryption_with_efs
        - use_efs_access_points
      infrastructure_security:
        - deploy_workers_onto_private_subnets
        - make_sure_inspector_is_enabled
      pod_security:
        - ensure_namespace_psa_exist
      image_security:
        - use_immutable_tags_with_ecr
    reliability:
      applications:
        - check_metrics_server_is_running
        - check_vertical_pod_autoscaler_exists
  namespace_based:
    security: 
      iam:
        - disable_anonymous_access_for_roles
        - restrict_wildcard_for_roles
        - disable_service_account_token_mounts
        - disable_run_as_root_user
        - use_dedicated_service_accounts_for_each_deployment
        - use_dedicated_service_accounts_for_each_stateful_set
        - use_dedicated_service_accounts_for_each_daemon_set
      pod_security:
        - disallow_container_socket_mount
        - disallow_host_path_or_make_it_read_only
        - set_requests_limits_for_containers
        - disallow_privilege_escalation
        - check_read_only_root_file_system
      network_security:
        - use_encryption_with_aws_load_balancers
      encryption_secrets:
        - disallow_secrets_from_env_vars    
      runtime_security:
        - disallow_linux_capabilities
    reliability:
      applications:
        - check_horizontal_pod_autoscaling_exists
        - schedule_replicas_across_nodes
        - run_multiple_replicas
        - avoid_running_singleton_pods
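
Assuming the sample above is saved as my-config.yaml (a hypothetical path), it can be passed to the CLI like this:

hardeneks --region us-east-1 --cluster my-cluster --config my-config.yaml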

RBAC:

To run hardeneks, you need certain permissions on both the AWS side and the Kubernetes side.

Minimal IAM role policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "eks:ListClusters",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": "eks:DescribeCluster",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": "ecr:DescribeRepositories",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": "inspector2:BatchGetAccountStatus",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": "ec2:DescribeFlowLogs",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": "ec2:DescribeInstances",
            "Resource": "*"
        }
    ]
}
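
As a sketch of how this policy could be put in place (the policy name, role name, and file path below are hypothetical), you could create it and attach it to the IAM role that runs hardeneks:

aws iam create-policy \
  --policy-name hardeneks-minimal \
  --policy-document file://hardeneks-policy.json
aws iam attach-role-policy \
  --role-name my-hardeneks-role \
  --policy-arn arn:aws:iam::123456789012:policy/hardeneks-minimal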

Minimal ClusterRole:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: hardeneks-runner
rules:
- apiGroups: [""]
  resources: ["namespaces", "resourcequotas", "persistentvolumes", "pods", "services", "nodes"]
  verbs: ["list"]
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["clusterroles", "clusterrolebindings", "roles", "rolebindings"]
  verbs: ["list"]
- apiGroups: ["networking.k8s.io"]
  resources: ["networkpolicies"]
  verbs: ["list"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["list"]
- apiGroups: ["apps"]
  resources: ["deployments", "daemonsets", "statefulsets"]
  verbs: ["list", "get"]
- apiGroups: ["autoscaling"]
  resources: ["horizontalpodautoscalers"]
  verbs: ["list"]

For Developers

Prerequisites:

  • This CLI uses Poetry. Follow the instructions outlined here to install Poetry.

Installation:

git clone git@github.com:dorukozturk/hardeneks.git
cd hardeneks
poetry install

Running Tests:

poetry shell
pytest --cov=hardeneks tests/ --cov-report term-missing

hardeneks's People

Contributors

amazon-auto, bryan-rhm, caruccio, dependabot[bot], dorukozturk, michael-mcclelland, ninedongsu, ssup2


hardeneks's Issues

JSON Output Overwrites Results for Different Namespaces

Problem:

The JSON output for namespace-based results overwrites previous entries, leading to loss of data for different namespaces. This occurs because the namespace is not included in the JSON path.

Expected Behavior:

Each namespace-based result should be independently added to the JSON output without overwriting others.

Actual Behavior:

Only the last namespace-based result is retained in the JSON output.

Steps to Reproduce:

  1. Generate a report with multiple namespace-based results.
  2. Observe the JSON output.

Affected Version:

<=v0.10.4

Text Output:

│ applications │ argo           │ Deploy horizontal pod autoscaler for deployments. │ argo-cd-argocd-applicationset-controller │ Deployment    │ Link       │ 
│ applications │ dynatrace      │ Deploy horizontal pod autoscaler for deployments. │ dynatrace-operator                       │ Deployment    │ Link       │ 
│ applications │ port           │ Deploy horizontal pod autoscaler for deployments. │                                          │ Deployment    │ Link       │ 

JSON Output:

{
  "namespace_based": {
    "reliability": {
      "applications": {
        "Deploy horizontal pod autoscaler for deployments.": {
          "status": true,
          "resources": [
            ""
          ],
          "resource_type": "Deployment",
          "namespace": "port",
          "resolution": "https://aws.github.io/aws-eks-best-practices/reliability/docs/application/#horizontal-pod-autoscaler-hpa"
        }
      }
    }
  }
}

Proposed Solution:

Include the namespace in the JSON path to ensure unique addressing for each result. The modified code snippet is as follows:

json_blob[rule._type][rule.pillar][rule.section][rule.message] = result

-        json_blob[rule._type][rule.pillar][rule.section][rule.message] = result
+        if rule._type == "namespace_based":
+            json_blob[rule._type][rule.pillar][rule.section][rule.result.namespace][rule.message] = result
+        else:
+            json_blob[rule._type][rule.pillar][rule.section][rule.message] = result

bc6a1d5
This change ensures that results for different namespaces are stored under their respective namespace keys, preventing data overwrites.

Check for runAsUser and runAsGroup at container level

Currently, the script checks for securityContext and its runAsGroup and runAsUser values only at the pod level.

We have some charts that define it at the container level since the chart provider decided to implement it that way. In all of our cases, the pod has only one container, so it would work the same as if it were defined at the pod level.

Would it be possible to check the security context at the container level as well?

security_context = pod.spec.security_context
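
For reference, container-level security contexts can be inspected directly with kubectl and jq; a sketch (pod and namespace names are placeholders):

kubectl get pod my-pod -n my-namespace -o json \
  | jq -r '.spec.containers[] | "\(.name): runAsUser=\(.securityContext.runAsUser), runAsGroup=\(.securityContext.runAsGroup)"'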

ApiException: (403)

HTTP response headers: HTTPHeaderDict({'Audit-Id': '<REDACTED>, 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'X-Content-Type-Options': 'nosniff', 'X-Kubernetes-Pf-Flowschema-Uid':
'<REDACTED>, 'X-Kubernetes-Pf-Prioritylevel-Uid': '<REDACTED>', 'Date': 'Tue, 13 Dec 2022 13:53:35 GMT', 'Content-Length': '271'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"namespaces is forbidden: User \"system:anonymous\" cannot list resource \"namespaces\" in API group \"\" at the cluster
scope","reason":"Forbidden","details":{"kind":"namespaces"},"code":403}

The following commands work, so my access to my EKS cluster is fine:

kubectl config get-contexts
aws eks list-clusters
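
One way to confirm whether the selected context actually sends credentials (rather than falling back to anonymous access) is to probe the cluster with the same context, for example (the context name is a placeholder):

kubectl --context my-context auth can-i list namespaces
kubectl --context my-context get namespaces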

Unique identifier for each rule

Thanks for the great tool to codify EKS best practices! It would be great to associate a unique identifier with each rule, as this would help track new findings and close findings related to a specific rule's assessment. For example, the IAM rule disable_anonymous_access_for_cluster_roles could be assigned an assessment id that follows the pattern --. If this pattern is agreeable, I can submit a PR to implement it.

False Positive with "Update the aws-node daemonset to use IRSA."

It seems there is a possible false positive with:
iam-->Cluster Wide-->Update the aws-node daemonset to use IRSA.
Resource: aws-node

I have "aws-node" as a service account, with a different role than the node, and its reporting as false/non-compliant:

% kubectl get daemonset aws-node -n kube-system -o json | jq ".spec.template.spec.serviceAccountName"
"aws-node"

Here is the IRSA role which I'm using:

kubectl get serviceaccount aws-node -nkube-system -o json | jq ".metadata.annotations" |grep arn
  "eks.amazonaws.com/role-arn": "arn:aws:iam::123456789012:role/eksctl-cluster-addon-iamserviceacc-Role1-JO4O8EGBK9J3",

If IRSA is not used, then the annotation "eks.amazonaws.com/role-arn" is not present.

Ergo, the compliance check could look for that annotation instead of checking the serviceAccountName.
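
A sketch of the proposed annotation-based check as a shell one-liner (purely illustrative):

kubectl -n kube-system get serviceaccount aws-node \
  -o jsonpath='{.metadata.annotations.eks\.amazonaws\.com/role-arn}' \
  | grep -q '^arn:aws:iam::' && echo "IRSA configured" || echo "IRSA missing"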

False Positive With "Don't bind clusterroles to anonymous/unauthenticated groups."

It seems there is a possible false positive with the following:
iam-->Cluster Wide-->Don't bind clusterroles to anonymous/unauthenticated groups.

It is flagging "system:public-info-viewer - ClusterRoleBinding".
EKS creates this CRB automatically.

If I remove this with the following command:

kubectl get ClusterRoleBinding -o json | jq -r '.items[] | select(.subjects[]?.name =="system:unauthenticated") | select(.metadata.name) | del(.subjects[] | select(.name =="system:unauthenticated"))'| kubectl apply -f -

It gets automatically re-added after a couple minutes:

# here its gone after I ran the above command
% ./rbac-lookup | grep -E 'system:(anonymous)|(unauthenticated)'

# then after a couple minutes, and run the same command again, its back:
% ./rbac-lookup | grep -E 'system:(anonymous)|(unauthenticated)'
system:unauthenticated                            cluster-wide   ClusterRole/system:public-info-viewer

Furthermore, according to the EKS best practice guide, it seems this particular role bound to the system:unauthenticated is OK:

Any role or ClusterRole other than system:public-info-viewer should not be bound to system:anonymous user or system:unauthenticated group.

Source:

Ergo, the check may need to exclude alerting for system:public-info-viewer bound to system:unauthenticated
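
A possible filter that lists only the non-exempt bindings, as a kubectl/jq sketch:

kubectl get clusterrolebindings -o json | jq -r '
  .items[]
  | select(any(.subjects[]?; .name == "system:unauthenticated" or .name == "system:anonymous"))
  | select(.metadata.name != "system:public-info-viewer")
  | .metadata.name'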

Link is truncated for long links

When using smaller terminal widths, long "Links" are truncated, so one cannot follow the recommendation because the HTTP path is invalid. One can still look up the Link in the corresponding py file. It would be great to either:
a) show the full path by simply wrapping the line, or
b) have an option which shows the active checks from the current config.yaml with the corresponding Links to the GitHub Best Practices sections.

HTML output is narrowed in linux runtime

Brief background:
When you run the tool, at the end it "renders" its output in the logs and then, based on this render, it creates an HTML report file for you.

The problem:
When you run it on a Mac (M1 Pro) as in the readme example, it renders the report with a perfectly fine column width. But when you run the same thing in a Linux runtime for automation (for example, in a docker container such as amazonlinux:latest or python:3), it renders the columns very narrow and the resource/rule names are unreadable, often ending in three dots. Is it possible to fix this somehow, so the output is always wide? It feels like the output is tied to the terminal width.
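
The --width option listed under Options above may help force a consistent width when the report is generated in a non-interactive runtime; for example (the width value and output path are placeholders):

hardeneks --export-html /tmp/report.html --width 220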

Add Date and Time to the Report Output

Reports should have a date and time in the header or footer. This would help keep track of when they were run so they can be monitored and tracked over time.

Not Able to Run On Windows

On Windows, I am able to install hardeneks, but when I run the command I get a 403 for every single Kubernetes command.

It looks like the Python client we are using is not sending the authorization bearer token alongside the request by default.
If I hardcode it in, I see the auth token being passed and everything works. If I remove the api_key and prefix, I don't see it. It is possibly something wrong in my kube config, but on Mac and Linux it works fine.

        kubernetes.config.load_kube_config(context=context)
        configuration = kubernetes.client.Configuration.get_default_copy()
        configuration.debug = True
        configuration.api_key = {"authorization": "foobar"}
        configuration.api_key_prefix = {"authorization": "bearer"}
        kubernetes.client.Configuration.set_default(configuration)

I don't see an issue on the Python kubernetes client repo saying that other users are having this problem, so it could be a version issue with Windows or with the client I am running. I need more assistance to help test on Windows to confirm.

add option to skip certificate verification

Example error:

you are using /tmp/.venv/lib/python3.8/site-packages/hardeneks/config.yaml as your config file

╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /tmp/.venv/lib/python3.8/site-packages/urllib3/connectionpool.py:703 in urlopen                  │
│                                                                                                  │
│    700 │   │   │   │   self._prepare_proxy(conn)                                                 │
│    701 │   │   │                                                                                 │
│    702 │   │   │   # Make the request on the httplib connection object.                          │
│ ❱  703 │   │   │   httplib_response = self._make_request(                                        │
│    704 │   │   │   │   conn,                                                                     │
│    705 │   │   │   │   method,                                                                   │
│    706 │   │   │   │   url,                                                                      │
│                                                                                                  │
│ ╭─────────────────────────────────────────── locals ───────────────────────────────────────────╮ │
│ │ _is_ssl_error_message_from_http_proxy = <function                                            │ │
│ │                                         HTTPConnectionPool.urlopen.<locals>._is_ssl_error_m… │ │
│ │                                         at 0x7fc817242280>                                   │ │
│ │                      assert_same_host = False                                                │ │
│ │                                  body = None                                                 │ │
│ │                              body_pos = None                                                 │ │
│ │                               chunked = False                                                │ │
│ │                            clean_exit = False                                                │ │
│ │                                  conn = None                                                 │ │
│ │                    destination_scheme = None                                                 │ │
│ │                                   err = None                                                 │ │
│ │                               headers = {                                                    │ │
│ │                                         │   'Accept': 'application/json',                    │ │
│ │                                         │   'User-Agent': 'OpenAPI-Generator/25.3.0/python', │ │
│ │                                         │   'authorization': 'Bearer                         │ │
│ │                                         XXXXXXXXXXXXXXXXXXXXXXXXXXX… │ │
│ │                                         │   'Content-Type': 'application/json'               │ │
│ │                                         }                                                    │ │
│ │                  http_tunnel_required = False                                                │ │
│ │                     is_new_proxy_conn = False                                                │ │
│ │                                method = 'GET'                                                │ │
│ │                            parsed_url = Url(                                                 │ │
│ │                                         │   scheme=None,                                     │ │
│ │                                         │   auth=None,                                       │ │
│ │                                         │   host=None,                                       │ │
│ │                                         │   port=None,                                       │ │
│ │                                         │   path='/api/v1/namespaces',                       │ │
│ │                                         │   query=None,                                      │ │
│ │                                         │   fragment=None                                    │ │
│ │                                         )                                                    │ │
│ │                          pool_timeout = None                                                 │ │
│ │                              redirect = False                                                │ │
│ │                          release_conn = True                                                 │ │
│ │                     release_this_conn = True                                                 │ │
│ │                           response_kw = {                                                    │ │
│ │                                         │   'preload_content': True,                         │ │
│ │                                         │   'request_url':                                   │ │
│ │                                         'https://eks-2-oidc-proxy.local/api/v1/… │ │
│ │                                         }                                                    │ │
│ │                               retries = Retry(total=0, connect=None, read=None,              │ │
│ │                                         redirect=None, status=None)                          │ │
│ │                                  self = <urllib3.connectionpool.HTTPSConnectionPool object   │ │
│ │                                         at 0x7fc8170d9790>                                   │ │
│ │                               timeout = None                                                 │ │
│ │                           timeout_obj = Timeout(connect=None, read=None, total=None)         │ │
│ │                                   url = '/api/v1/namespaces'                                 │ │
│ ╰──────────────────────────────────────────────────────────────────────────────────────────────╯ │
│                                                                                                  │



MaxRetryError: HTTPSConnectionPool(host='eks-2-oidc-proxy.local', port=443): Max retries exceeded with url: /api/v1/namespaces (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate 
verify failed: unable to get issuer certificate (_ssl.c:1131)')))
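
For reference, the --insecure-skip-tls-verify flag now listed under Options covers this case; a run against an endpoint with an untrusted certificate could look like this (names are placeholders):

hardeneks --region us-east-1 --cluster my-cluster --context my-context --insecure-skip-tls-verify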

Namespaces should have psa modes. - Not working

For the check:
pod_security │ Cluster Wide │ Namespaces should have psa modes. │ default

This does not seem to be working correctly, as I have a PSA mode enabled for the default namespace, specifically enforce=restricted.

% kubectl describe namespace default
Name:         default
Labels:       kubernetes.io/metadata.name=default
              pod-security.kubernetes.io/enforce=restricted
Annotations:  <none>
Status:       Active

And from the source code, it looks like it is only checking for enforce and warn (audit is missing).
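
For reference, the PSA labels across all namespaces can be listed directly with kubectl, for example:

kubectl get namespaces -L pod-security.kubernetes.io/enforce,pod-security.kubernetes.io/audit,pod-security.kubernetes.io/warn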

System ClusterRoles Have '*' in Verbs or Resources, and Maybe Others?

Should there be a way to ignore a set of ClusterRoles so they don't get flagged by this check? We may want to allow some system-level ClusterRoles to have * in them. This would allow clusters that are spun up with basic settings to pass. I don't think any EKS cluster would actually pass this unless you go in and modify these roles directly, which sounds worse than having the finding flagged.

If there is no option to filter ClusterRoles, then there should at least be examples of how to properly set these ClusterRoles without the user having to do the hard work themselves.

Here is an example of a fresh cluster that was just built.

──────────────────────────────────────────── ClusterRoles should not have '*' in Verbs or Resources ───────────────────────────────────────────────╮
│ ┏━━━━━━━━━━━━━┳━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓                                                                           │
│ ┃ Kind        ┃ Namespace ┃ Name                                        ┃                                                                           │
│ ┡━━━━━━━━━━━━━╇━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩                                                                           │
│ │ ClusterRole │           │ aws-node                                    │                                                                           │
│ │ ClusterRole │           │ cluster-admin                               │                                                                           │
│ │ ClusterRole │           │ cluster-admin                               │                                                                           │
│ │ ClusterRole │           │ cluster-admin                               │                                                                           │
│ │ ClusterRole │           │ eks:addon-manager                           │                                                                           │
│ │ ClusterRole │           │ eks:cloud-controller-manager                │                                                                           │
│ │ ClusterRole │           │ system:controller:generic-garbage-collector │                                                                           │
│ │ ClusterRole │           │ system:controller:horizontal-pod-autoscaler │                                                                           │
│ │ ClusterRole │           │ system:controller:namespace-controller      │                                                                           │
│ │ ClusterRole │           │ system:controller:resourcequota-controller  │                                                                           │
│ │ ClusterRole │           │ system:kube-controller-manager              │                                                                           │
│ │ ClusterRole │           │ system:kubelet-api-admin

Another benefit of adding an ignore option for ClusterRoles would be that users could pass in their own, for example when they are running a third-party ClusterRole that they can't modify.
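
Until such an ignore option exists, one way to preview which non-system ClusterRoles would be flagged is a kubectl/jq sketch like the following (illustrative only):

kubectl get clusterroles -o json | jq -r '
  .items[]
  | select(any(.rules[]?; ((.verbs // []) + (.resources // [])) | index("*")))
  | select(.metadata.name | startswith("system:") | not)
  | .metadata.name'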

doesn't work with sso

We are using SSO in our organisation, where hardeneks can't find credentials and fails:

hardeneks --region us-east-1 --cluster XXX --context XXXX

                    • HARDENEKS * * * * * * * * * * * *
                      You are operating at us-east-1
                      You context is XXXX
                      Your cluster name is XXXX
                      You are using /opt/homebrew/lib/python3.11/site-packages/hardeneks/config.yaml
                      as your config file

[bold][red]Unable to locate credentials
[bold][red]Unable to locate credentials
[bold][red]Unable to locate credentials
[bold][red]Unable to locate credentials
[bold][red]Unable to locate credentials
[bold][red]Unable to locate credentials
[bold][red]Unable to locate credentials
[bold][red]Unable to locate credentials
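
As a workaround sketch (the profile, region, and cluster names are placeholders), refreshing the SSO session and exporting the profile before running hardeneks may let the underlying AWS SDK locate credentials:

aws sso login --profile my-sso-profile
export AWS_PROFILE=my-sso-profile
aws eks update-kubeconfig --region us-east-1 --name my-cluster
hardeneks --region us-east-1 --cluster my-cluster --context my-context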

JSON report not similar to HTML or TXT

Brief Background:
In an HTML or TXT report, there are sections for the following pieces of data:
Section, Namespace, Rule, Resource, Resource Type, Resolution

This can be seen in the code snippet here in the function print_consolidated_results

Problem:
The JSON output is essentially the same (with a few changes, such as grouping resources under one object); however, the section for the resolution (the hardeneks best practice link) is not present, despite being available in the underlying object. The current formatting is as follows:

result = {
            "status": rule.result.status,
            "resources": rule.result.resources,
            "resource_type": rule.result.resource_type,
            "namespace": rule.result.namespace,
        }

Solution:
Simply add the following key-value pair to the result object:
"resolution": rule.url

Feature Request: Add a flag to generate a config file with all rules


The sample on GitHub doesn't contain all the rules; it would be better to start with all rules and then deselect the ones I don't want. Also, as new rules are added, the flag could be used to update the config file periodically.

Hardeneks doesn't work through the SSH tunnel

We are trying to use hardeneks to harden our cluster. We connect to the cluster through an SSH tunnel via a bastion host. Here is how we connect:

Connecting to the AWS account admin user via SSO:

export AWS_PROFILE=MainAdmin
export AWS_REGION=eu-west-1
export K8S_AUTH_PROXY=""
export NO_PROXY=""
export HTTP_PROXY=""
export HTTPS_PROXY=""
aws configure sso

Creating an SSH tunnel and connecting to the cluster:

ssh-add ~/keys/key/our-key
ssh -L 8888:localhost:8888 -q -o StrictHostKeyChecking=no -C -N [email protected] &
export K8S_AUTH_PROXY=http://localhost:8888
export NO_PROXY=*.okta.com
export HTTP_PROXY=http://localhost:8888
export HTTPS_PROXY=http://localhost:8888
aws eks --region eu-west-1 update-kubeconfig --name our_cluster

After this, we can run all kubectl-related commands and fully manage resources in our cluster.

But when we run hardeneks, it first gets stuck here:

*  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  * HARDENEKS *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *
You are operating at eu-west-1
You context is arn:aws:eks:eu-west-1:717343414241:cluster/our_cluster
Your cluster name is our_cluster
You are using /private/tmp/.venv/lib/python3.9/site-packages/hardeneks/config.yaml as your config file

And then it fails with this error (the host was changed intentionally to hide the real DNS):

MaxRetryError: HTTPSConnectionPool(host='a9276e4d543d078f345a64b343d23eb1.gr7.eu-west-1.eks.amazonaws.com', port=443): Max retries exceeded with url:
/api/v1/namespaces (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x1092de490>: Failed to establish a new connection: [Errno 60]
Operation timed out'))

How can I configure hardeneks to make requests through the SSH tunnel? I think this is a typical issue, since most clusters are not publicly exposed.
