workshops's Introduction

Gloo workshops in markdown format

Workshop default airgap gitops standalone ipv6 cilium openshift
Gloo Mesh Core stable beta
Gloo Mesh Enterprise stable beta stable beta stable beta stable beta stable beta stable beta
Gloo Mesh Gateway stable beta stable beta stable beta stable beta stable beta
Gloo Mesh Gateway Portal stable beta stable beta stable beta stable beta
Gloo Mesh Gateway Advanced  stable beta stable beta
Gloo Platform (Mesh + Gateway)  stable beta stable beta stable beta stable beta
Gloo Edge stable

Gloo workshops in Instruqt

Workshop default gitops standalone
Gloo Mesh Core stable beta
Gloo Mesh Enterprise stable beta
Gloo Mesh Gateway stable beta stable beta
Gloo Mesh Gateway Portal stable beta stable beta
Gloo Mesh Gateway Advanced  stable beta
Gloo Platform (Mesh + Gateway)  stable beta
Gloo Edge stable

workshops's People

Contributors

antonioberben, asayah, bcollard, boes-man, christian-posta, cmwylie19, dhawton, distributethe6ix, djannot, jameshbarton, jedwards-solo, jmunozro, lgadban, linsun, marcogschmidt, rachael-graham, rinormaloku, rvennam, soloio-bot, totallygreg, willowmck

workshops's Issues

[Gloo Edge - Instruqt - Lab 3] jwt.io instructions are misplaced

Lab 3 has a nice section where the JWT token returned from Keycloak is pasted into jwt.io and clearly shows the claims being returned. However, in the Instruqt version of this lab, there is no JWT available to the user at the point where it instructs the user to paste it into jwt.io. These steps need to be reordered so that the JWT can be viewed at the proper time.

Get started with Istio - add non mTLS working before Peerauth

Under "Securing Communication Within Istio" it might be worth showing that from sleep pod in default NS we are able to curl the web-api pod before we create the PeeerAuth

  • Moving the following up &
kubectl apply -n default -f sample-apps/sleep.yaml
  • Adding a before and after of the same command might be useful:
kubectl exec deploy/sleep -n default -- curl http://web-api.istioinaction:8080/
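
For reference, a strict-mTLS PeerAuthentication along the lines of what the lab applies might look like this (a minimal sketch; the namespace and the exact policy the lab uses are assumptions):

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istioinaction   # assumption: the lab scopes the policy to this namespace
spec:
  mtls:
    mode: STRICT             # once applied, the plain-text curl from the sleep pod should fail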

Apple M1 gloo-mesh-2-0-single-cluster-single-workspace Deploy Error

The deploy script generates an invalid service subnet on an M1 Mac:

sh ./scripts/deploy.sh 1 cluster1 us-west us-west-1                                                               21:28:29
Unable to find image 'registry:2' locally
2: Pulling from library/registry
b3c136eddcbf: Pull complete 
c0a3192eca97: Pull complete 
a78a32497cf3: Pull complete 
980c1fd5760c: Pull complete 
8c5c94d5e05d: Pull complete 
Digest: sha256:bedef0f1d248508fe0a16d2cacea1d2e68e899b2220e2258f1b604e1f327d475
Status: Downloaded newer image for registry:2
6220471f5c9cc1e3f60615fec06e237e86adc5e49ad15ad6fd49363c82923c36
9b9804d859c445a0012b5eb927cdb2f383c9b350acf595af859591ef79ef8bc2
13426eedd311c7090fb4162e539945df89cca304e066a8e839ce69173a7d6cb3
f627f7b8dd366dd05b958ca3e1de37d3229e83f3298bebd8d2b84713c67c425f
ERROR: failed to create cluster: invalid service subnet failed to parse cidr value:"10.001.0.0/16" with error: invalid CIDR address: 10.001.0.0/16
Error: No such object: kind1-control-plane
jq: error (at <stdin>:1): Cannot iterate over null (null)
Cluster "kind-kind1" set.
error: context "kind-kind1" does not exist
error: context "kind-kind1" does not exist
error: context "kind-kind1" does not exist
error: context "kind-kind1" does not exist
Error response from daemon: network kind not found
Error response from daemon: network kind not found
Error response from daemon: network kind not found
Error response from daemon: network kind not found
configmap/local-registry-hosting unchanged
error: cannot rename the context "kind-kind1", it's not in /Users/me/.kube/config
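
The failing value 10.001.0.0/16 suggests the cluster number gets zero-padded somewhere while the service subnet is composed. A possible fix, sketched only since the relevant part of deploy.sh is not shown here, is to force base-10 arithmetic on the number before building the CIDR:

number=1                          # first argument to deploy.sh
octet=$((10#${number}))           # strips any leading zeros, e.g. "001" -> 1
service_subnet="10.${octet}.0.0/16"
echo "${service_subnet}"          # 10.1.0.0/16 instead of the invalid 10.001.0.0/16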

Gloo Mesh 2 Workshop (EKS version) cluster registration check fails

The Gloo Mesh 2 Workshop (EKS version) cluster registration check fails with the following:

% pod=$(kubectl --context ${MGMT} -n gloo-mesh get pods -l app=gloo-mesh-mgmt-server -o jsonpath='{.items[0].metadata.name}')
kubectl --context ${MGMT} -n gloo-mesh debug -q -i ${pod} --image=curlimages/curl -- curl -s http://localhost:9091/metrics | grep relay_push_clients_connected
error: ephemeral containers are disabled for this cluster (error from server: "the server could not find the requested resource").

EKS Kube Server Version:

Server Version: version.Info{Major:"1", Minor:"22+", GitVersion:"v1.22.6-eks-14c7a48", GitCommit:"35f06c94ad99b78216a3d8e55e04734a85da3f7b", GitTreeState:"clean", BuildDate:"2022-04-01T03:18:05Z", GoVersion:"go1.16.12", Compiler:"gc", Platform:"linux/amd64"}
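
Since ephemeral containers are not available on this cluster, a possible workaround (a sketch; it assumes the management server's metrics port 9091 is reachable through the gloo-mesh-mgmt-server Service) is to run a throwaway curl pod instead of kubectl debug:

kubectl --context ${MGMT} -n gloo-mesh run relay-check --rm -i --restart=Never \
  --image=curlimages/curl -- \
  curl -s http://gloo-mesh-mgmt-server.gloo-mesh:9091/metrics | grep relay_push_clients_connected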

Gloo Edge workshop rate limiting typos

In the Gloo Edge workshop, the read-along text for the rate limiting section references 'per second' while the YAML references 'per minute'. Please adjust one or the other based on the desired outcome.

// Snippet
Users of organizations with the enterprise subscription have a rate limit of 8 requests per second
Users of organizations with the free subscription have a rate limit of 2 requests per second
We define those rate limits using the following RateLimitConfig definition:

// YAML

kubectl apply -f - << EOF
apiVersion: ratelimit.solo.io/v1alpha1
kind: RateLimitConfig
metadata:
  name: limit-users
  namespace: gloo-system
spec:
  raw:
    setDescriptors:
    - simpleDescriptors:
      - key: email-key
      - key: organization-key
      - key: subscription-key
        value: free
      rateLimit:
        requestsPerUnit: 2
        unit: MINUTE
    - simpleDescriptors:
      - key: email-key
      - key: organization-key
      - key: subscription-key
        value: enterprise
      rateLimit:
        requestsPerUnit: 8
        unit: MINUTE
    rateLimits:
    - setActions:
      - requestHeaders:
          headerName: x-email
          descriptorKey: email-key
      - requestHeaders:
          headerName: x-organization
          descriptorKey: organization-key
      - requestHeaders:
          headerName: x-subscription
          descriptorKey: subscription-key
EOF

[packer-workshop] - npm KeyError version when installing dependencies via packages.json

bash ./build-vagrant-and-gcp-images.sh fails with error:

2021-10-20T13:57:02+11:00:     googlecompute.workshop: fatal: [default]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n  File \"~danwessels/.ansible/tmp/ansible-tmp-1634698621.142865-20644-167448415754121/AnsiballZ_npm.py\", line 100, in <module>\n    _ansiballz_main()\n  File \"~danwessels/.ansible/tmp/ansible-tmp-1634698621.142865-20644-167448415754121/AnsiballZ_npm.py\", line 92, in _ansiballz_main\n    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n  File \"~danwessels/.ansible/tmp/ansible-tmp-1634698621.142865-20644-167448415754121/AnsiballZ_npm.py\", line 40, in invoke_module\n    runpy.run_module(mod_name='ansible_collections.community.general.plugins.modules.npm', init_globals=dict(_module_fqn='ansible_collections.community.general.plugins.modules.npm', _modlib_path=modlib_path),\n  File \"/usr/lib/python3.8/runpy.py\", line 207, in run_module\n    return _run_module_code(code, init_globals, run_name, mod_spec)\n  File \"/usr/lib/python3.8/runpy.py\", line 97, in _run_module_code\n    _run_code(code, mod_globals, init_globals,\n  File \"/usr/lib/python3.8/runpy.py\", line 87, in _run_code\n    exec(code, run_globals)\n  File \"/tmp/ansible_community.general.npm_payload_jcr58woz/ansible_community.general.npm_payload.zip/ansible_collections/community/general/plugins/modules/npm.py\", line 333, in <module>\n  File \"/tmp/ansible_community.general.npm_payload_jcr58woz/ansible_community.general.npm_payload.zip/ansible_collections/community/general/plugins/modules/npm.py\", line 310, in main\n  File \"/tmp/ansible_community.general.npm_payload_jcr58woz/ansible_community.general.npm_payload.zip/ansible_collections/community/general/plugins/modules/npm.py\", line 219, in list\nKeyError: 'version'\nShared connection to 127.0.0.1 closed.\r\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}

This was fixed in my case by upgrading the Ansible community.general collection with ansible-galaxy collection install community.general --upgrade --ignore-certs on macOS.

Get started with Istio - add comment to CRs to easily show the changes

istio-workshops> istio-basics/labs/05/purchase-history-vs-all-v1-header-v2.yaml

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: purchase-history-vs
spec:
  hosts:
  - purchase-history.istioinaction.svc.cluster.local
  http:
#--- If user: Tom is present in the incoming request header, go to purchase-history-v2 ---
  - match:
    - headers:
        user:
          exact: Tom
    route:
    - destination:
        host: purchase-history.istioinaction.svc.cluster.local
        subset: v2
        port:
          number: 8080
#--- Else, go to purchase-history-v1 ---
  - route:
    - destination:
        host: purchase-history.istioinaction.svc.cluster.local
        subset: v1
        port:
          number: 8080
      weight: 100
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: purchase-history-vs
spec:
  hosts:
  - purchase-history.istioinaction.svc.cluster.local
  http:
  - match:
    - headers:
        user:
          exact: Tom
    route:
    - destination:
        host: purchase-history.istioinaction.svc.cluster.local
        subset: v3
        port:
          number: 8080
# ---- adding retries with a per-try timeout ----
    retries:
      attempts: 3
      perTryTimeout: 3s
# ---- adding an overall timeout ----
    timeout: 6s
  - route:
    - destination:
        host: purchase-history.istioinaction.svc.cluster.local
        subset: v2
        port:
          number: 8080
      weight: 100

Get started with Istio - adding "pc secret" commands

istioctl pc secret \
    -n istioinaction deploy/web-api -o json | \
    jq '[.dynamicActiveSecrets[] | select(.name == "default")][0].secret.tlsCertificate.certificateChain.inlineBytes' -r | \
    base64 -d | \
    openssl x509 -noout -text

Add pc secret commands to show the certificate

Gloo edge improvements

  • Remove the troubleshooting tip

  • Lab2: Create a RateLimit for unauthenticated users

  • Lab2: Add another RateLimit for authenticated users based on a new claim like subscription (values: gold, platinum, etc.).

  • Lab6: The AuthN scenario needs to be moved to Lab2.

  • Lab6: Remove the extractor for the ID token since it is already extracted earlier.

  • Improve the WAF sample. Block a huge POST payload to bookinfo for unauthenticated users.

  • Improve the WAF sample. Add a client-agent check.

  • Add to response transformations: take a header from the request and add it into the response (a hedged sketch follows this list).
    TODO: The docs are wrong; the example is missing indentation and it does not work as written. Spike on it.

  • Improve response transformations. Take the 401 and transform the body in the response.

  • Add something with regex

transformationTemplate:
  passthrough: {}
  extractors:
    originalClientIpAddress:
      header: 'x-forwarded-for'
      regex: '([^,\n]*).*$'
  • Move delegation after Lab1. Delegate different routeTables to different teams: /secure with routetable1 to be managed by team1. Keep the route / at the VS level to show that we can also keep things in the VS.

  • Move RT to a specific namespace like team1 so they can see it can be in a different namespace.

  • Use label selectors for the RT so we can show that the name is not strictly required.

  • Lab6: Remove the extractor for the ID token since it is already extracted earlier. → use the extauth > authconfig > oidc > headers > idTokenHeader setting to forward it upstream.
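
For the request-header-to-response item above, a minimal sketch of what the route options could look like, assuming the request_header template function and an x-request-id header (the names are illustrative only):

options:
  transformations:
    responseTransformation:
      transformationTemplate:
        headers:
          x-original-request-id:
            text: '{{ request_header("x-request-id") }}'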

[Gloo Edge - Instruqt - Lab 3] Data transformation example never shows custom HTML from RouteTable

The Lab 3 data transformation example never shows the custom HTML from the RouteTable below. The rate limit rule is apparently never triggered; it just keeps showing the "successful" result.

# cat files/rt-transformation.yaml

apiVersion: gateway.solo.io/v1
kind: RouteTable
metadata:
  name: httpbin-routetable
  namespace: team1
  labels:
    application-owner: team1
spec:
  routes:
# -------- Rate limit at route level requires to give a name -------
    - name: "not-secured"
# ------------------------------------------------------------------
      matchers:
        - prefix: /not-secured
      options:
        prefixRewrite: '/'
# -------- Rate limit as you saw before ------------
        ratelimitBasic:
          anonymousLimits:
            requestsPerUnit: 5
            unit: MINUTE
# --------------------------------------------------
# ---------------- Transformation ------------------
        transformations:
          responseTransformation:
            transformationTemplate:
              parseBodyBehavior: DontParse
              body:
                text: '{% if header(":status") == "429" %}<html><body style="background-color:powderblue;"><h1>Too many Requests!</h1><p>Try again after 1 minute</p></body></html>{% else %}{{ body() }}{% endif %}'
#---------------------------------------------------
      routeAction:
          single:
            upstream:
              name: team1-httpbin-8000
              namespace: gloo-system
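
To reproduce, a possible check (a sketch; $GLOO_PROXY_URL is a placeholder for the lab's gateway URL) is to send more than 5 requests to the /not-secured route within a minute and watch for the 429 carrying the custom HTML body:

for i in {1..7}; do
  curl -sk -o /dev/null -w "%{http_code}\n" "$GLOO_PROXY_URL/not-secured/get"
done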

Deploy Istio for Production - Running Envoy

  • Would be good to add the envoy diagram on top of the section

  • Would be good to show the envoy manifest

cat labs/01/envoy-proxy.yaml
  • Would be good to split this into 2 separate code blocks:
kubectl rollout restart deploy/envoy
kubectl exec deploy/sleep -- curl -s http://envoy/headers

[Gloo Edge - Instruqt - Lab 2] VirtualService doesn't match diagram

The diagram for Lab 2 indicates that both v2 and v3 will be routed to. The VirtualService below doesn't match the diagram: it only routes to v2, which is confusing to users. (A possible fix is sketched after the manifest.)

Screen Shot 2021-11-02 at 5 44 45 PM

kubectl apply -f - <<EOF
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: demo
  namespace: gloo-system
spec:
  sslConfig:
    secretRef:
      name: upstream-tls
      namespace: gloo-system
  virtualHost:
    domains:
      - '*'
    routes:
      - matchers:
          - prefix: /not-secured
        delegateAction:
          selector:
            namespaces:
              - team1
            labels:
              application-owner: team1
      - matchers:
          - prefix: /
# ------------------- OIDC - Only applied to this matcher -------------------
        options:
          extauth:
            configRef:
              name: oauth
              namespace: gloo-system
# ---------------------------------------------------------------------------
        routeAction:
          single:
            upstream:
              name: bookinfo-productpage-9080
              namespace: gloo-system
EOF
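
Replacing the single routeAction on the / route with a weighted multi destination along these lines would match the diagram (a sketch only; the upstream names for the v2/v3 productpage deployments are assumptions and should be replaced with whatever the lab actually creates):

        routeAction:
          multi:
            destinations:
              - weight: 50
                destination:
                  upstream:
                    name: bookinfo-productpage-v2-9080   # assumed upstream name
                    namespace: gloo-system
              - weight: 50
                destination:
                  upstream:
                    name: bookinfo-productpage-v3-9080   # assumed upstream name
                    namespace: gloo-system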

TokenRequest feature gate no longer relevant in 1.21

Looks like the TokenRequest feature gate has been promoted, as mentioned in the changelog, so it is no longer relevant in 1.21.

This change impacts control planes created for new clusters and leads to this error:

May 20 04:22:08 kind1-control-plane kubelet[290]: E0520 04:22:08.186018     290 server.go:216] "Failed to set feature gates from initial flags-based config" err="unrecognized feature gate: TokenRequest"

Either we should document a prerequisite or fix deploy.sh to take this into account.
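
If deploy.sh sets the gate through a kind cluster config (an assumption), the fix is simply to drop the entry on Kubernetes 1.21+; a minimal sketch of the relevant part:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
# featureGates:
#   TokenRequest: true    # remove on 1.21+: the kubelet no longer recognizes this gate
nodes:
- role: control-plane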

Support for Openshift

I was trying to follow the tutorial https://github.com/solo-io/workshops/blob/master/gloo-mesh/README-openshift.md, creating the clusters on IBM Cloud, and discovered that using Kubernetes contexts does not work. When I create the clusters, they all use a single user, so when I try to log into a second cluster, the token in kubectl gets overwritten. The workaround for this is to explicitly specify a token in the kubectl command, something like:

kubectl --context $MGMT_CONTEXT --token="token" get nodes

Unfortunately meshctl does not allow specifying --token="token", so nothing works.

Any suggestions?
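
One possible workaround (a sketch, not verified with meshctl; the user and context names are placeholders) is to give each cluster its own kubeconfig user entry so the tokens no longer collide:

# create a distinct user per cluster and attach it to that cluster's context
kubectl config set-credentials mgmt-user --token="<mgmt-cluster-token>"
kubectl config set-context ${MGMT_CONTEXT} --user=mgmt-user

kubectl config set-credentials cluster1-user --token="<cluster1-token>"
kubectl config set-context ${CLUSTER1_CONTEXT} --user=cluster1-user

# kubectl and meshctl can then use the contexts without an explicit --token flag
kubectl --context ${MGMT_CONTEXT} get nodes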

The repository 'https://deb.nodesource.com/node_16.x focal Release' does not have a Release file.

Running into this error:

module.vm-image["workshop1"].google_compute_instance.vm (local-exec): failed: [35.235.70.50] (item=deb https://deb.nodesource.com/node_16.x focal main) => {"ansible_loop_var": "item", "changed": false, "item": "deb https://deb.nodesource.com/node_16.x focal main", "msg": "Failed to update apt cache: E:The repository 'https://deb.nodesource.com/node_16.x focal Release' does not have a Release file."}

KeyCloak readiness probe fails in default quickstart

When deploying the KeyCloak quickstart template to a cluster running with kind, the readiness probe may fail. I needed to edit the deployment with the following settings to get it to start successfully:

.spec.template.spec.containers[0].readinessProbe.initialDelaySeconds: 10 and
.spec.template.spec.containers[0].readinessProbe.timeoutSeconds: 10
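
For convenience, the same change can be applied with a patch (a sketch; the "keycloak" deployment name and namespace are assumptions, adjust them to the quickstart):

kubectl -n keycloak patch deployment keycloak --type=json -p='[
  {"op": "add", "path": "/spec/template/spec/containers/0/readinessProbe/initialDelaySeconds", "value": 10},
  {"op": "add", "path": "/spec/template/spec/containers/0/readinessProbe/timeoutSeconds", "value": 10}
]'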

This is probably highly dependent on how the cluster is built, so this is strictly a troubleshooting tip.

The git archive command fails on Vagrant

The git archive command needs to write its output to a different directory than the one it is archiving. Otherwise, you will get an error and the Ansible playbook will exit.
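
For example, writing the archive outside the repository avoids the failure (a sketch; the output path is illustrative only):

# write the archive to /tmp instead of into the directory being archived
git archive --format=tar HEAD -o /tmp/workshops.tar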

Deploy Istio for Production - Rollout mTLS to your services section

Current:
Port-forward Kiali in the second terminal and navigate to the dashboard tab

Proposed:
Port-forward Kiali in the second terminal, navigate to the Graph tab, and click "Hide" in the right panel titled Current Graph.

Without hiding - graph doesn't appear in the ideal form:
Screenshot 2023-03-28 at 5 31 48 PM

After hiding - graph appears:
Screenshot 2023-03-28 at 5 31 51 PM

Get started with Istio - trim log while retrying

After this section we could add a | tail -6

If you check the logs of the purchase-history service, you will see the retries:

kubectl logs deploy/purchase-history-v3 -n istioinaction | grep x-envoy-attempt-count | tail -6

In case someone retries many times, the number of log lines becomes very long if we don't tail the log.

Get started with Istio - multiline command for better readability

Make the for loop a multiline command to improve readability

Location:
instruqt > main> get-started-istio/05-control-traffic/assignment.md

for i in {1..10}
do
curl -s --cacert ./labs/02/certs/ca/root-ca.crt -H "Host: istioinaction.io" \
    https://istioinaction.io:$SECURE_INGRESS_PORT  \
    --resolve istioinaction.io:$SECURE_INGRESS_PORT:$GATEWAY_IP | \
    grep 'Hello From Purchase History'
done

404 Downloading VS Code

fatal: [35.224.134.84]: FAILED! => {"cache_update_time": 1623905482, "cache_updated": true, "changed": false, "msg": "'/usr/bin/apt-get -y -o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold" install 'code'' failed: E: Failed to fetch https://packages.microsoft.com/repos/vscode/pool/main/c/code/code_1.57.0-1623259737_amd64.deb 404 Not Found [IP: 13.66.21.183 443]\nE: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?\n", "rc": 100, "stderr": "E: Failed to fetch https://packages.microsoft.com/repos/vscode/pool/main/c/code/code_1.57.0-1623259737_amd64.deb 404 Not Found [IP: 13.66.21.183 443]\nE: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?\n", "stderr_lines": ["E: Failed to fetch https://packages.microsoft.com/repos/vscode/pool/main/c/code/code_1.57.0-1623259737_amd64.deb 404 Not Found [IP: 13.66.21.183 443]", "E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?"], "stdout": "Reading package lists...\nBuilding dependency tree...\nReading state information...\nThe following NEW packages will be installed:\n code\n0 upgraded, 1 newly installed, 0 to remove and 88 not upgraded.\nNeed to get 76.4 MB of archives.\nAfter this operation, 293 MB of additional disk space will be used.\nErr:1 https://packages.microsoft.com/repos/vscode stable/main amd64 code amd64 1.57.0-1623259737\n 404 Not Found [IP: 13.66.21.183 443]\n", "stdout_lines": ["Reading package lists...", "Building dependency tree...", "Reading state information...", "The following NEW packages will be installed:", " code", "0 upgraded, 1 newly installed, 0 to remove and 88 not upgraded.", "Need to get 76.4 MB of archives.", "After this operation, 293 MB of additional disk space will be used.", "Err:1 https://packages.microsoft.com/repos/vscode stable/main amd64 code amd64 1.57.0-1623259737", " 404 Not Found [IP: 13.66.21.183 443]"]}
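
A likely fix (an assumption, since the playbook task isn't shown here) is to refresh the apt cache right before installing, so apt resolves the currently published VS Code version instead of a stale one; in an Ansible apt task that would look roughly like:

- name: Install VS Code
  apt:
    name: code
    state: latest
    update_cache: yes   # refresh the package index so a stale version is not fetched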

[Gloo Edge - Instruqt - Lab 3] 1kb WAF payload size restriction is actually 1 byte

The WAF payload size example in Lab 3 says in multiple places that it is imposing a 1 KB restriction. It in fact imposes a 1-byte restriction. This should be cleaned up in the example and the surrounding text.

In addition, there should be both positive and negative examples showing this rule. There is currently only a negative example.
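
Something like the following could serve as the paired checks (a sketch; $GLOO_PROXY_URL and the /not-secured/post path are placeholders, and it assumes the intended limit really is 1 KB):

# negative: a body larger than 1 KB should be blocked by the WAF rule
curl -s -o /dev/null -w "%{http_code}\n" -X POST "$GLOO_PROXY_URL/not-secured/post" \
  --data "$(head -c 2048 /dev/zero | tr '\0' 'a')"

# positive: a small body should pass through
curl -s -o /dev/null -w "%{http_code}\n" -X POST "$GLOO_PROXY_URL/not-secured/post" \
  --data 'small payload'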

gloo-mesh-2-all-mgmt-ctrl track edit notes

After the step that says:

And also delete the different objects we've created:

kubectl --context ${MGMT} -n bookinfo-team delete virtualdestination productpage
kubectl --context ${MGMT} -n bookinfo-team delete outlierdetectionpolicy outlier-detection
  • We should also delete the failoverpolicy

  • Also, when we switch back to the original RT before the Zero Trust section of module 2, we should add a note on why we are switching back.

  • Making this command and others multiline would improve readability:

pod=$(kubectl --context ${CLUSTER1} -n httpbin get pods -l app=not-in-mesh -o jsonpath='{.items[0].metadata.name}')
kubectl --context ${CLUSTER1} -n httpbin debug -i -q ${pod} --image=curlimages/curl -- curl -s -o /dev/null -w "%{http_code}" http://reviews.bookinfo-backends.svc.cluster.local:9080/reviews/0
  • Rather than doing a kubectl debug, it might be easier to do this via a sleep pod
    Current:
pod=$(kubectl --context ${CLUSTER1} -n httpbin get pods -l app=in-mesh -o jsonpath='{.items[0].metadata.name}')
kubectl --context ${CLUSTER1} -n httpbin debug -i -q ${pod} --image=curlimages/curl -- curl -s -o /dev/null -w "%{http_code}" http://reviews.bookinfo-backends.svc.cluster.local:9080/reviews/0

Proposed:

kubectl --context $CLUSTER1 -n httpbin \
apply -f https://raw.githubusercontent.com/istio/istio/release-1.17/samples/sleep/sleep.yaml

kubectl --context $CLUSTER1 -n httpbin \
get pod -l app=sleep;

kubectl --context $CLUSTER1 -n httpbin \
exec -it deploy/sleep -- \
curl -s -o /dev/null -w "%{http_code}" http://reviews.bookinfo-backends.svc.cluster.local:9080/reviews/0

We should break the following section into 3 separate sections

pod=$(kubectl --context ${CLUSTER1} -n bookinfo-frontends get pods -l app=productpage -o jsonpath='{.items[0].metadata.name}')
echo "From productpage to details, should be allowed"
kubectl --context ${CLUSTER1} -n bookinfo-frontends debug -i -q ${pod} --image=curlimages/curl -- curl -s http://details.bookinfo-backends:9080/details/0 | jq

echo "From productpage to reviews, should be allowed"
kubectl --context ${CLUSTER1} -n bookinfo-frontends debug -i -q ${pod} --image=curlimages/curl -- curl -s http://reviews.bookinfo-backends:9080/reviews/0 | jq

echo "From productpage to ratings, should be denied"
kubectl --context ${CLUSTER1} -n bookinfo-frontends debug -i -q ${pod} --image=curlimages/curl -- curl -s http://ratings.bookinfo-backends:9080/ratings/0 -i

Module 3

If you refresh your browser, you should see that you get a response either from the local service or from the external service.

^ This currently fails:

# curl -k https://10.5.0.254/get

error:

upstream connect error or disconnect/reset before headers. retried and the latest reset reason: connection failure, transport failure reason: TLS error: 268435581:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED
istioctl --context $CLUSTER1 \
> pc secrets \
> -n istio-gateways svc/istio-ingressgateway
RESOURCE NAME               TYPE           STATUS     VALID CERT     SERIAL NUMBER                                        NOT AFTER                NOT BEFORE
kubernetes://tls-secret     CA             ACTIVE     true           417331438521364388010583664345130880179005978002     2024-03-31T13:24:41Z     2023-04-01T13:24:41Z
default                     Cert Chain     ACTIVE     true           138903141074340073272611178638063452375              2023-04-02T14:26:31Z     2023-04-01T14:24:31Z
ROOTCA                      CA             ACTIVE     true           268534852559328264948732062514656608914              2024-03-31T14:26:12Z     2023-04-01T14:26:12Z
istioctl --context $CLUSTER1 pc secrets -n httpbin deploy/in-mesh
RESOURCE NAME     TYPE           STATUS     VALID CERT     SERIAL NUMBER                               NOT AFTER                NOT BEFORE
default           Cert Chain     ACTIVE     true           205165220293220887344568674427115039589     2023-04-02T12:45:12Z     2023-04-01T12:43:12Z
ROOTCA            CA             ACTIVE     true           40700971172074859705572646854716953212      2033-03-29T12:43:20Z     2023-04-01T12:43:20Z

solution

  • Restart the deployment so that it picks up the new cert from Istio:
    kubectl --context $CLUSTER1 -n httpbin rollout restart deployment/in-mesh
