tetratelabs / getmesh
License: Apache License 2.0
The control plane version shows the open-source Istio version, whereas the other fields report the tetratefips version.
$ getmesh version
getmesh version: 1.1.4
active istioctl: 1.16.0-tetratefips-v0
client version: 1.16.0-tetratefips-v0
control plane version: 1.16.0
data plane version: 1.16.0-tetratefips-v0 (9 proxies)
The control plane is in fact running the tetratefips version as well; the container images in istio-system are shown below.
containers.istio.tetratelabs.com/pilot:1.16.0-tetratefips-v0
containers.istio.tetratelabs.com/proxyv2:1.16.0-tetratefips-v0
containers.istio.tetratelabs.com/proxyv2:1.16.0-tetratefips-v0
containers.istio.tetratelabs.com/proxyv2:1.16.0-tetratefips-v0
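For reference, one way to double-check which flavor each container in istio-system is actually running. The helper below is a hypothetical sketch (the expected-flavor substring and the kubectl query in the comment are assumptions, not part of getmesh):

```shell
#!/bin/sh
# Read one container image reference per line on stdin and flag any
# image whose tag does not carry the expected flavor substring.
check_flavor() {
  flavor="$1"   # expected flavor substring, e.g. tetratefips
  while read -r image; do
    case "$image" in
      *"$flavor"*) echo "ok   $image" ;;
      *)           echo "MISM $image" ;;
    esac
  done
}

# Usage against a live cluster (assumed invocation, requires kubectl access):
#   kubectl get pods -n istio-system \
#     -o jsonpath='{range .items[*].spec.containers[*]}{.image}{"\n"}{end}' \
#     | check_flavor tetratefips
```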
Only tested in the following environment:
AWS
Use IstioOperator with istioctl.
HUB=containers.istio.tetratelabs.com
DISTRO=1.16.0-tetratefips-v0
getmesh istioctl manifest install -f dev.yaml --set hub=${HUB} --set tag=${DISTRO}
The example fails due to an incorrect IOP example. The example should be updated to properly reference the "istio-ca-root-cert" ConfigMap.
As macOS now supports both Intel and Apple Silicon architectures, we need to fetch the correct istioctl binary for each.
After installing getmesh and running getmesh version, I got the following error:
getmesh version
getmesh version: 1.1.2
active istioctl: 1.10.3-tetrate-v0
error executing istioctl: fork/exec /Users/marcnavarro/.getmesh/istio/1.10.3-tetrate-v0/bin/istioctl: bad CPU type in executable
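One way to confirm the mismatch locally: inspect the fetched binary with `file` and compare against the host architecture. The path helper below is hypothetical and simply reconstructs the location reported in the error above:

```shell
#!/bin/sh
# Build the path where getmesh stores a fetched istioctl binary.
# "bad CPU type in executable" usually means the binary was built for a
# different architecture than the host (e.g. x86_64 on Apple Silicon).
istioctl_path() {
  distro="$1"   # e.g. 1.10.3-tetrate-v0
  echo "$HOME/.getmesh/istio/${distro}/bin/istioctl"
}

# Usage:
#   file "$(istioctl_path 1.10.3-tetrate-v0)"   # shows the binary's arch
#   uname -m                                    # shows the host's arch
```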
It seems that although we package a correct getmesh distribution for macOS ARM64, and we generate a correct istioctl binary for macOS ARM64 at https://dl.getistio.io/public/raw/files/istioctl-1.10.3-tetrate-v0-osx-arm64.tar.gz, when getmesh fetches istioctl it fetches the binary without the arch suffix:
getmesh/internal/istioctl/istioctl.go
Lines 233 to 246 in fd6d94a
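The gist of the fix, sketched as a shell helper rather than the actual Go in istioctl.go. The URL layout mirrors the working arm64 link above; the uname-to-suffix mapping is an assumption:

```shell
#!/bin/sh
# Build the per-arch download URL for an istioctl distribution, e.g.
#   istioctl-1.10.3-tetrate-v0-osx-arm64.tar.gz
istioctl_url() {
  distro="$1"   # e.g. 1.10.3-tetrate-v0
  os="$2"       # output of `uname -s`
  arch="$3"     # output of `uname -m`
  case "$os" in
    Darwin) plat=osx ;;
    Linux)  plat=linux ;;
    *)      echo "unsupported OS: $os" >&2; return 1 ;;
  esac
  case "$arch" in
    x86_64)        suffix=amd64 ;;
    arm64|aarch64) suffix=arm64 ;;
    *)             echo "unsupported arch: $arch" >&2; return 1 ;;
  esac
  echo "https://dl.getistio.io/public/raw/files/istioctl-${distro}-${plat}-${suffix}.tar.gz"
}
```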
This issue is related to issue #24.
I think that the fix is easy, and I am happy to do it. However I am wondering:
Across istioctl fetch
and istioctl switch,
at the very least, there are inconsistencies in how flags are set and which flags must be set. This is confusing and annoying to work with. Example:
$ getistio list
ISTIO VERSION  FLAVOR       FLAVOR VERSION  K8S VERSIONS
1.9.0          tetrate      0               1.17,1.18,1.19
*1.9.0         istio        0               1.17,1.18,1.19
1.8.3          tetrate      0               1.16,1.17,1.18
1.8.3          istio        0               1.16,1.17,1.18
1.8.2          tetrate      0               1.16,1.17,1.18
1.8.2          tetratefips  0               1.16,1.17,1.18
1.8.1          tetrate      0               1.16,1.17,1.18
1.8.0          tetrate      0               1.16,1.17,1.18
1.7.7          tetrate      0               1.16,1.17,1.18
1.7.6          tetrate      0               1.16,1.17,1.18
1.7.5          tetrate      0               1.16,1.17,1.18
1.7.4          tetrate      0               1.16,1.17,1.18
$ getistio fetch --flavor istio --version 1.9.0
fallback to the flavor 0 version which is the latest one in 1.9.0-istio
1.9.0-istio-v0 already fetched: download skipped
For more information about 1.9.0-istio-v0, please refer to the release notes:
- https://istio.io/latest/news/releases/1.9.x/announcing-1.9/
istioctl switched to 1.9.0-istio-v0 now
$ getistio fetch --flavor tetrate
1.9.0-tetrate-v0 already fetched: download skipped
For more information about 1.9.0-tetrate-v0, please refer to the release notes:
- https://istio.io/latest/news/releases/1.9.x/announcing-1.9/
istioctl switched to 1.9.0-tetrate-v0 now
$ getistio switch --flavor istio
required flag(s) "version", "flavor-version" not set
Further, I'd expect the following to work, but it doesn't today:
$ getistio show
1.8.2-tetrate-v0
1.8.3-istio-v0
1.9.0-istio-v0
1.9.0-tetrate-v0 (Active)
$ getistio switch 1.9.0-istio-v0
required flag(s) "version", "flavor", "flavor-version" not set
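The single-argument form could be supported by splitting the qualified name back into the three flags internally. A sketch in shell (the flag names are getistio's; the parsing helper itself is hypothetical):

```shell
#!/bin/sh
# Split a qualified distribution name like "1.9.0-istio-v0" into the
# --version, --flavor and --flavor-version values that `switch` expects.
parse_distro() {
  name="$1"
  version=${name%%-*}           # strip everything after the first "-"
  rest=${name#*-}               # e.g. "istio-v0"
  flavor=${rest%-v*}            # e.g. "istio"
  flavor_version=${rest##*-v}   # e.g. "0"
  echo "--version $version --flavor $flavor --flavor-version $flavor_version"
}

# e.g. getistio switch $(parse_distro 1.9.0-istio-v0)
```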
We are evaluating FIPS support and whilst I can see claims of FIPS compliance (https://www.tetrate.io/blog/tetrate-istio-distro-achieves-fips-certification/) the FIPS certificate details are not available, which calls into question the validity of the FIPS compliance.
Please can we see the FIPS certificate to determine if tetrateFIPS is in fact FIPS 140-2 compliant?
Thanks
https://istio.io/latest/news/releases/1.11.x/announcing-1.11/
Istio 1.11 is already GA, but not available in getmesh.
hi,
Is it already deprecated, so that we can no longer install Istio via getmesh?
getmesh list
error fetching manifest: error unmarshalling fetched manifest: invalid character '<' looking for beginning of value
Repro steps:
getmesh istioctl install --set profile=demo
Error: could not overlay user config over base: json merge error (unable to find api field in struct meshConfigExtensionProvider for the json field "envoyOtelAls") for base object: {
"apiVersion": "install.istio.io/v1alpha1",
"kind": "IstioOperator",
"metadata": {
"namespace": "istio-system"
},
"spec": {
"components": {
"base": {
"enabled": true
},
"cni": {
"enabled": false
},
"egressGateways": [
{
"enabled": true,
"k8s": {
"resources": {
"requests": {
"cpu": "10m",
"memory": "40Mi"
}
}
},
"name": "istio-egressgateway"
}
],
"ingressGateways": [
{
"enabled": true,
"k8s": {
"resources": {
"requests": {
"cpu": "10m",
"memory": "40Mi"
}
},
"service": {
"ports": [
{
"name": "status-port",
"port": 15021,
"targetPort": 15021
},
{
"name": "http2",
"port": 80,
"targetPort": 8080
},
{
"name": "https",
"port": 443,
"targetPort": 8443
},
{
"name": "tcp",
"port": 31400,
"targetPort": 31400
},
{
"name": "tls",
"port": 15443,
"targetPort": 15443
}
]
}
},
"name": "istio-ingressgateway"
}
],
"istiodRemote": {
"enabled": false
},
"pilot": {
"enabled": true,
"k8s": {
"env": [
{
"name": "PILOT_TRACE_SAMPLING",
"value": "100"
}
],
"resources": {
"requests": {
"cpu": "10m",
"memory": "100Mi"
}
}
}
}
},
"hub": "containers.istio.tetratelabs.com",
"meshConfig": {
"accessLogFile": "/dev/stdout",
"defaultConfig": {
"proxyMetadata": {}
},
"enablePrometheusMerge": true,
"extensionProviders": [
{
"envoyOtelAls": {
"port": 4317,
"service": "otel-collector.istio-system.svc.cluster.local"
},
"name": "otel"
}
]
},
"tag": "1.13.3-tetrate-v0",
"values": {
"base": {
"enableCRDTemplates": false,
"validationURL": ""
},
"defaultRevision": "",
"gateways": {
"istio-egressgateway": {
"autoscaleEnabled": false,
"env": {},
"name": "istio-egressgateway",
"secretVolumes": [
{
"mountPath": "/etc/istio/egressgateway-certs",
"name": "egressgateway-certs",
"secretName": "istio-egressgateway-certs"
},
{
"mountPath": "/etc/istio/egressgateway-ca-certs",
"name": "egressgateway-ca-certs",
"secretName": "istio-egressgateway-ca-certs"
}
],
"type": "ClusterIP"
},
"istio-ingressgateway": {
"autoscaleEnabled": false,
"env": {},
"name": "istio-ingressgateway",
"secretVolumes": [
{
"mountPath": "/etc/istio/ingressgateway-certs",
"name": "ingressgateway-certs",
"secretName": "istio-ingressgateway-certs"
},
{
"mountPath": "/etc/istio/ingressgateway-ca-certs",
"name": "ingressgateway-ca-certs",
"secretName": "istio-ingressgateway-ca-certs"
}
],
"type": "LoadBalancer"
}
},
"global": {
"configValidation": true,
"defaultNodeSelector": {},
"defaultPodDisruptionBudget": {
"enabled": true
},
"defaultResources": {
"requests": {
"cpu": "10m"
}
},
"imagePullPolicy": "",
"imagePullSecrets": [],
"istioNamespace": "istio-system",
"istiod": {
"enableAnalysis": false
},
"jwtPolicy": "third-party-jwt",
"logAsJson": false,
"logging": {
"level": "default:info"
},
"meshNetworks": {},
"mountMtlsCerts": false,
"multiCluster": {
"clusterName": "",
"enabled": false
},
"network": "",
"omitSidecarInjectorConfigMap": false,
"oneNamespace": false,
"operatorManageWebhooks": false,
"pilotCertProvider": "istiod",
"priorityClassName": "",
"proxy": {
"autoInject": "enabled",
"clusterDomain": "cluster.local",
"componentLogLevel": "misc:error",
"enableCoreDump": false,
"excludeIPRanges": "",
"excludeInboundPorts": "",
"excludeOutboundPorts": "",
"image": "proxyv2",
"includeIPRanges": "*",
"logLevel": "warning",
"privileged": false,
"readinessFailureThreshold": 30,
"readinessInitialDelaySeconds": 1,
"readinessPeriodSeconds": 2,
"resources": {
"limits": {
"cpu": "2000m",
"memory": "1024Mi"
},
"requests": {
"cpu": "10m",
"memory": "40Mi"
}
},
"statusPort": 15020,
"tracer": "zipkin"
},
"proxy_init": {
"image": "proxyv2",
"resources": {
"limits": {
"cpu": "2000m",
"memory": "1024Mi"
},
"requests": {
"cpu": "10m",
"memory": "10Mi"
}
}
},
"sds": {
"token": {
"aud": "istio-ca"
}
},
"sts": {
"servicePort": 0
},
"tracer": {
"datadog": {},
"lightstep": {},
"stackdriver": {},
"zipkin": {}
},
"useMCP": false
},
"istiodRemote": {
"injectionURL": ""
},
"pilot": {
"autoscaleEnabled": false,
"autoscaleMax": 5,
"autoscaleMin": 1,
"configMap": true,
"cpu": {
"targetAverageUtilization": 80
},
"deploymentLabels": null,
"enableProtocolSniffingForInbound": true,
"enableProtocolSniffingForOutbound": true,
"env": {
"ENABLE_LEGACY_FSGROUP_INJECTION": false
},
"image": "pilot",
"keepaliveMaxServerConnectionAge": "30m",
"nodeSelector": {},
"podLabels": {},
"replicaCount": 1,
"traceSampling": 1
},
"telemetry": {
"enabled": true,
"v2": {
"enabled": true,
"metadataExchange": {
"wasmEnabled": false
},
"prometheus": {
"enabled": true,
"wasmEnabled": false
},
"stackdriver": {
"configOverride": {},
"enabled": false,
"logging": false,
"monitoring": false,
"topology": false
}
}
}
}
}
}
override object: {
"apiVersion": "install.istio.io/v1alpha1",
"kind": "IstioOperator",
"metadata": {
"annotations": {
"install.istio.io/ignoreReconcile": "true",
"kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"install.istio.io/v1alpha1\",\"kind\":\"IstioOperator\",\"metadata\":{\"annotations\":{\"install.istio.io/ignoreReconcile\":\"true\"},\"creationTimestamp\":null,\"name\":\"installed-state\",\"namespace\":\"istio-system\"},\"spec\":{\"components\":{\"base\":{\"enabled\":true},\"cni\":{\"enabled\":false},\"egressGateways\":[{\"enabled\":true,\"k8s\":{\"resources\":{\"requests\":{\"cpu\":\"10m\",\"memory\":\"40Mi\"}}},\"name\":\"istio-egressgateway\"}],\"ingressGateways\":[{\"enabled\":true,\"k8s\":{\"resources\":{\"requests\":{\"cpu\":\"10m\",\"memory\":\"40Mi\"}},\"service\":{\"ports\":[{\"name\":\"status-port\",\"port\":15021,\"protocol\":\"TCP\",\"targetPort\":15021},{\"name\":\"http2\",\"port\":80,\"protocol\":\"TCP\",\"targetPort\":8080},{\"name\":\"https\",\"port\":443,\"protocol\":\"TCP\",\"targetPort\":8443},{\"name\":\"tcp\",\"port\":31400,\"protocol\":\"TCP\",\"targetPort\":31400},{\"name\":\"tls\",\"port\":15443,\"protocol\":\"TCP\",\"targetPort\":15443}]}},\"name\":\"istio-ingressgateway\"}],\"istiodRemote\":{\"enabled\":false},\"pilot\":{\"enabled\":true,\"k8s\":{\"env\":[{\"name\":\"PILOT_TRACE_SAMPLING\",\"value\":\"100\"}],\"resources\":{\"requests\":{\"cpu\":\"10m\",\"memory\":\"100Mi\"}}}}},\"hub\":\"containers.istio.tetratelabs.com\",\"meshConfig\":{\"accessLogFile\":\"/dev/stdout\",\"defaultConfig\":{\"proxyMetadata\":{}},\"enablePrometheusMerge\":true,\"extensionProviders\":[{\"envoyOtelAls\":{\"port\":4317,\"service\":\"otel-collector.istio-system.svc.cluster.local\"},\"name\":\"otel\"}]},\"profile\":\"demo\",\"tag\":\"1.13.3-tetrate-v0\",\"values\":{\"base\":{\"enableCRDTemplates\":false,\"validationURL\":\"\"},\"defaultRevision\":\"\",\"gateways\":{\"istio-egressgateway\":{\"autoscaleEnabled\":false,\"env\":{},\"name\":\"istio-egressgateway\",\"secretVolumes\":[{\"mountPath\":\"/etc/istio/egressgateway-certs\",\"name\":\"egressgateway-certs\",\"secretName\":\"istio-egressgate
way-certs\"},{\"mountPath\":\"/etc/istio/egressgateway-ca-certs\",\"name\":\"egressgateway-ca-certs\",\"secretName\":\"istio-egressgateway-ca-certs\"}],\"type\":\"ClusterIP\"},\"istio-ingressgateway\":{\"autoscaleEnabled\":false,\"env\":{},\"name\":\"istio-ingressgateway\",\"secretVolumes\":[{\"mountPath\":\"/etc/istio/ingressgateway-certs\",\"name\":\"ingressgateway-certs\",\"secretName\":\"istio-ingressgateway-certs\"},{\"mountPath\":\"/etc/istio/ingressgateway-ca-certs\",\"name\":\"ingressgateway-ca-certs\",\"secretName\":\"istio-ingressgateway-ca-certs\"}],\"type\":\"LoadBalancer\"}},\"global\":{\"configValidation\":true,\"defaultNodeSelector\":{},\"defaultPodDisruptionBudget\":{\"enabled\":true},\"defaultResources\":{\"requests\":{\"cpu\":\"10m\"}},\"imagePullPolicy\":\"\",\"imagePullSecrets\":[],\"istioNamespace\":\"istio-system\",\"istiod\":{\"enableAnalysis\":false},\"jwtPolicy\":\"third-party-jwt\",\"logAsJson\":false,\"logging\":{\"level\":\"default:info\"},\"meshNetworks\":{},\"mountMtlsCerts\":false,\"multiCluster\":{\"clusterName\":\"\",\"enabled\":false},\"network\":\"\",\"omitSidecarInjectorConfigMap\":false,\"oneNamespace\":false,\"operatorManageWebhooks\":false,\"pilotCertProvider\":\"istiod\",\"priorityClassName\":\"\",\"proxy\":{\"autoInject\":\"enabled\",\"clusterDomain\":\"cluster.local\",\"componentLogLevel\":\"misc:error\",\"enableCoreDump\":false,\"excludeIPRanges\":\"\",\"excludeInboundPorts\":\"\",\"excludeOutboundPorts\":\"\",\"image\":\"proxyv2\",\"includeIPRanges\":\"*\",\"logLevel\":\"warning\",\"privileged\":false,\"readinessFailureThreshold\":30,\"readinessInitialDelaySeconds\":1,\"readinessPeriodSeconds\":2,\"resources\":{\"limits\":{\"cpu\":\"2000m\",\"memory\":\"1024Mi\"},\"requests\":{\"cpu\":\"10m\",\"memory\":\"40Mi\"}},\"statusPort\":15020,\"tracer\":\"zipkin\"},\"proxy_init\":{\"image\":\"proxyv2\",\"resources\":{\"limits\":{\"cpu\":\"2000m\",\"memory\":\"1024Mi\"},\"requests\":{\"cpu\":\"10m\",\"memory\":\"10Mi\"}}},\"sds\":{
\"token\":{\"aud\":\"istio-ca\"}},\"sts\":{\"servicePort\":0},\"tracer\":{\"datadog\":{},\"lightstep\":{},\"stackdriver\":{},\"zipkin\":{}},\"useMCP\":false},\"istiodRemote\":{\"injectionURL\":\"\"},\"pilot\":{\"autoscaleEnabled\":false,\"autoscaleMax\":5,\"autoscaleMin\":1,\"configMap\":true,\"cpu\":{\"targetAverageUtilization\":80},\"deploymentLabels\":null,\"enableProtocolSniffingForInbound\":true,\"enableProtocolSniffingForOutbound\":true,\"env\":{\"ENABLE_LEGACY_FSGROUP_INJECTION\":false},\"image\":\"pilot\",\"keepaliveMaxServerConnectionAge\":\"30m\",\"nodeSelector\":{},\"podLabels\":{},\"replicaCount\":1,\"traceSampling\":1},\"telemetry\":{\"enabled\":true,\"v2\":{\"enabled\":true,\"metadataExchange\":{\"wasmEnabled\":false},\"prometheus\":{\"enabled\":true,\"wasmEnabled\":false},\"stackdriver\":{\"configOverride\":{},\"enabled\":false,\"logging\":false,\"monitoring\":false,\"topology\":false}}}}}}\n"
},
"creationTimestamp": null,
"generation": 1,
"name": "installed-state",
"namespace": "istio-system",
"resourceVersion": "8141",
"uid": "e4321802-3eba-4867-b651-6d52d89eb768"
},
"spec": {
"components": {
"base": {
"enabled": true
},
"cni": {
"enabled": false
},
"egressGateways": [
{
"enabled": true,
"k8s": {
"resources": {
"requests": {
"cpu": "10m",
"memory": "40Mi"
}
}
},
"name": "istio-egressgateway"
}
],
"ingressGateways": [
{
"enabled": true,
"k8s": {
"resources": {
"requests": {
"cpu": "10m",
"memory": "40Mi"
}
},
"service": {
"ports": [
{
"name": "status-port",
"port": 15021,
"protocol": "TCP",
"targetPort": 15021
},
{
"name": "http2",
"port": 80,
"protocol": "TCP",
"targetPort": 8080
},
{
"name": "https",
"port": 443,
"protocol": "TCP",
"targetPort": 8443
},
{
"name": "tcp",
"port": 31400,
"protocol": "TCP",
"targetPort": 31400
},
{
"name": "tls",
"port": 15443,
"protocol": "TCP",
"targetPort": 15443
}
]
}
},
"name": "istio-ingressgateway"
}
],
"istiodRemote": {
"enabled": false
},
"pilot": {
"enabled": true,
"k8s": {
"env": [
{
"name": "PILOT_TRACE_SAMPLING",
"value": "100"
},
{
"name": "ENABLE_LEGACY_FSGROUP_INJECTION",
"value": "false"
}
],
"nodeSelector": {},
"replicaCount": 1,
"resources": {
"requests": {
"cpu": "10m",
"memory": "100Mi"
}
}
}
}
},
"hub": "containers.istio.tetratelabs.com",
"meshConfig": {
"accessLogFile": "/dev/stdout",
"defaultConfig": {
"proxyMetadata": {}
},
"enablePrometheusMerge": true,
"extensionProviders": [
{
"envoyOtelAls": {
"port": 4317,
"service": "otel-collector.istio-system.svc.cluster.local"
},
"name": "otel"
}
]
},
"profile": "demo",
"tag": "1.13.3-tetrate-v0",
"values": {
"base": {
"enableCRDTemplates": false,
"validationURL": ""
},
"defaultRevision": "",
"gateways": {
"istio-egressgateway": {
"autoscaleEnabled": false,
"env": {},
"name": "istio-egressgateway",
"secretVolumes": [
{
"mountPath": "/etc/istio/egressgateway-certs",
"name": "egressgateway-certs",
"secretName": "istio-egressgateway-certs"
},
{
"mountPath": "/etc/istio/egressgateway-ca-certs",
"name": "egressgateway-ca-certs",
"secretName": "istio-egressgateway-ca-certs"
}
],
"type": "ClusterIP"
},
"istio-ingressgateway": {
"autoscaleEnabled": false,
"env": {},
"name": "istio-ingressgateway",
"secretVolumes": [
{
"mountPath": "/etc/istio/ingressgateway-certs",
"name": "ingressgateway-certs",
"secretName": "istio-ingressgateway-certs"
},
{
"mountPath": "/etc/istio/ingressgateway-ca-certs",
"name": "ingressgateway-ca-certs",
"secretName": "istio-ingressgateway-ca-certs"
}
],
"type": "LoadBalancer"
}
},
"global": {
"configValidation": true,
"defaultNodeSelector": {},
"defaultPodDisruptionBudget": {
"enabled": true
},
"defaultResources": {
"requests": {
"cpu": "10m"
}
},
"imagePullPolicy": "",
"imagePullSecrets": [],
"istioNamespace": "istio-system",
"istiod": {
"enableAnalysis": false
},
"jwtPolicy": "third-party-jwt",
"logAsJson": false,
"logging": {
"level": "default:info"
},
"meshNetworks": {},
"mountMtlsCerts": false,
"multiCluster": {
"clusterName": "",
"enabled": false
},
"network": "",
"omitSidecarInjectorConfigMap": false,
"oneNamespace": false,
"operatorManageWebhooks": false,
"pilotCertProvider": "istiod",
"priorityClassName": "",
"proxy": {
"autoInject": "enabled",
"clusterDomain": "cluster.local",
"componentLogLevel": "misc:error",
"enableCoreDump": false,
"excludeIPRanges": "",
"excludeInboundPorts": "",
"excludeOutboundPorts": "",
"image": "proxyv2",
"includeIPRanges": "*",
"logLevel": "warning",
"privileged": false,
"readinessFailureThreshold": 30,
"readinessInitialDelaySeconds": 1,
"readinessPeriodSeconds": 2,
"resources": {
"limits": {
"cpu": "2000m",
"memory": "1024Mi"
},
"requests": {
"cpu": "10m",
"memory": "40Mi"
}
},
"statusPort": 15020,
"tracer": "zipkin"
},
"proxy_init": {
"image": "proxyv2",
"resources": {
"limits": {
"cpu": "2000m",
"memory": "1024Mi"
},
"requests": {
"cpu": "10m",
"memory": "10Mi"
}
}
},
"sds": {
"token": {
"aud": "istio-ca"
}
},
"sts": {
"servicePort": 0
},
"tracer": {
"datadog": {},
"lightstep": {},
"stackdriver": {},
"zipkin": {}
},
"useMCP": false
},
"istiodRemote": {
"injectionURL": ""
},
"pilot": {
"autoscaleEnabled": false,
"autoscaleMax": 5,
"autoscaleMin": 1,
"configMap": true,
"cpu": {
"targetAverageUtilization": 80
},
"enableProtocolSniffingForInbound": true,
"enableProtocolSniffingForOutbound": true,
"env": {
"ENABLE_LEGACY_FSGROUP_INJECTION": false
},
"image": "pilot",
"keepaliveMaxServerConnectionAge": "30m",
"nodeSelector": {},
"podLabels": {},
"replicaCount": 1,
"traceSampling": 1
},
"telemetry": {
"enabled": true,
"v2": {
"enabled": true,
"metadataExchange": {
"wasmEnabled": false
},
"prometheus": {
"enabled": true,
"wasmEnabled": false
},
"stackdriver": {
"configOverride": {},
"enabled": false,
"logging": false,
"monitoring": false,
"topology": false
}
}
}
}
}
}
error executing istioctl: exit status 1
While fetching the latest FIPS version using the latest getmesh client
getmesh fetch --version 1.10.3 --flavor tetratefips
I get the following error
$ getmesh fetch --version 1.10.3 --flavor tetratefips fallback to the flavor 0 version which is the latest one in 1.10.3-tetratefips Downloading 1.10.3-tetratefips-v0 from https://istio.tetratelabs.io/getmesh/files/istio-1.10.3-tetratefips-v0-linux.tar.gz ...error while dowloading istio: exit status 22
From the logs I see that it requests a non-existent file, https://istio.tetratelabs.io/getmesh/files/istio-1.10.3-tetratefips-v0-linux.tar.gz. The correct one has amd64 at the end.
Seems like the issue happens only when I run it on FIPS enabled runner.
Probably related to this PR: #66
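A sketch of the expected URL construction, appending the architecture the way the working file is named. The uname-to-arch mapping is an assumption for illustration:

```shell
#!/bin/sh
# Append the machine architecture to the istio tarball name, so that
#   istio-1.10.3-tetratefips-v0-linux.tar.gz        (404)
# becomes
#   istio-1.10.3-tetratefips-v0-linux-amd64.tar.gz  (exists)
istio_tarball() {
  distro="$1"; machine="$2"   # machine: output of `uname -m`
  case "$machine" in
    x86_64)        arch=amd64 ;;
    aarch64|arm64) arch=arm64 ;;
    *)             arch="$machine" ;;
  esac
  echo "https://istio.tetratelabs.io/getmesh/files/istio-${distro}-linux-${arch}.tar.gz"
}
```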
To avoid the need to download Istio just to get the grafana|zipkin|prometheus examples, it could be useful to wrap the grafana|zipkin|prometheus commands so they download the YAML files and apply them directly.
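A sketch of such a wrapper, pulling the addon manifests straight from the upstream Istio samples directory. The raw.githubusercontent.com path is the upstream samples/addons location; the default release branch is an assumption, and some addons (e.g. zipkin) may live under an extras/ subdirectory in certain releases:

```shell
#!/bin/sh
# Build the URL of an istio addon manifest (grafana, prometheus, kiali, ...)
# so it can be applied without downloading the full Istio distribution.
addon_url() {
  addon="$1"                     # e.g. grafana | prometheus | kiali
  release="${2:-release-1.11}"   # istio release branch (assumed default)
  echo "https://raw.githubusercontent.com/istio/istio/${release}/samples/addons/${addon}.yaml"
}

# Usage (requires cluster access):
#   kubectl apply -f "$(addon_url prometheus)"
```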
Hello,
I installed istiod 1.13.2 with the open-source (non-FIPS) image successfully, but the pilot pod failed to start with the Tetrate FIPS image. The error logs are as follows; I couldn't find a clue to the cause. I'd appreciate it if you guys took a look.
Install istiod with helm chart
releases:
Image was pulled from IronBank and pushed to our AWS ECR.
pilot:
image: xxxxxxx.dkr.ecr.us-east-1.amazonaws.com/k8s-component/tetrate/istio/pilot:1.13.2-tetratefips-v0
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5m48s default-scheduler Successfully assigned istio-system/istiod-5475bd8f69-wfhvk to ip-10-190-1-37.ec2.internal
Warning Unhealthy 5m43s (x2 over 5m45s) kubelet Readiness probe failed: Get "http://10.190.1.204:8080/ready": dial tcp 10.190.1.204:8080: connect: connection refused
Normal Created 4m58s (x4 over 5m46s) kubelet Created container discovery
Normal Started 4m58s (x4 over 5m46s) kubelet Started container discovery
Normal Pulled 4m6s (x5 over 5m47s) kubelet Container image "xxxxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/k8s-component/tetrate/istio/pilot:1.13.2-tetratefips-v0@sha256:eaf8c4f4b9d200ef9a6bd7f95a750eef3c02173aa1b50cfb89dce5299d79ccf4" already present on machine
Warning BackOff 39s (x28 over 5m42s) kubelet Back-off restarting failed container
2022-06-01T08:20:55.318190Z info FLAG: --caCertFile=""
2022-06-01T08:20:55.318250Z info FLAG: --clusterAliases="[]"
2022-06-01T08:20:55.318261Z info FLAG: --clusterID="Kubernetes"
2022-06-01T08:20:55.318267Z info FLAG: --clusterRegistriesNamespace="istio-system"
2022-06-01T08:20:55.318274Z info FLAG: --configDir=""
2022-06-01T08:20:55.318280Z info FLAG: --ctrlz_address="localhost"
2022-06-01T08:20:55.318291Z info FLAG: --ctrlz_port="9876"
2022-06-01T08:20:55.318297Z info FLAG: --domain="cluster.local"
2022-06-01T08:20:55.318303Z info FLAG: --grpcAddr=":15010"
2022-06-01T08:20:55.318311Z info FLAG: --help="false"
2022-06-01T08:20:55.318317Z info FLAG: --httpAddr=":8080"
2022-06-01T08:20:55.318323Z info FLAG: --httpsAddr=":15017"
2022-06-01T08:20:55.318332Z info FLAG: --keepaliveInterval="30s"
2022-06-01T08:20:55.318339Z info FLAG: --keepaliveMaxServerConnectionAge="30m0s"
2022-06-01T08:20:55.318345Z info FLAG: --keepaliveTimeout="10s"
2022-06-01T08:20:55.318350Z info FLAG: --kubeconfig=""
2022-06-01T08:20:55.318358Z info FLAG: --kubernetesApiBurst="160"
2022-06-01T08:20:55.318366Z info FLAG: --kubernetesApiQPS="80"
2022-06-01T08:20:55.318372Z info FLAG: --log_as_json="false"
2022-06-01T08:20:55.318378Z info FLAG: --log_caller=""
2022-06-01T08:20:55.318384Z info FLAG: --log_output_level="default:info"
2022-06-01T08:20:55.318392Z info FLAG: --log_rotate=""
2022-06-01T08:20:55.318398Z info FLAG: --log_rotate_max_age="30"
2022-06-01T08:20:55.318404Z info FLAG: --log_rotate_max_backups="1000"
2022-06-01T08:20:55.318410Z info FLAG: --log_rotate_max_size="104857600"
2022-06-01T08:20:55.318416Z info FLAG: --log_stacktrace_level="default:none"
2022-06-01T08:20:55.318481Z info FLAG: --log_target="[stdout]"
2022-06-01T08:20:55.318512Z info FLAG: --meshConfig="./etc/istio/config/mesh"
2022-06-01T08:20:55.318520Z info FLAG: --monitoringAddr=":15014"
2022-06-01T08:20:55.318527Z info FLAG: --namespace="istio-system"
2022-06-01T08:20:55.318696Z info FLAG: --networksConfig="./etc/istio/config/meshNetworks"
2022-06-01T08:20:55.318715Z info FLAG: --plugins="[ext_authz,authn,authz]"
2022-06-01T08:20:55.318722Z info FLAG: --profile="true"
2022-06-01T08:20:55.318735Z info FLAG: --registries="[Kubernetes]"
2022-06-01T08:20:55.318741Z info FLAG: --resync="1m0s"
2022-06-01T08:20:55.318747Z info FLAG: --secureGRPCAddr=":15012"
2022-06-01T08:20:55.318753Z info FLAG: --shutdownDuration="10s"
2022-06-01T08:20:55.318759Z info FLAG: --tls-cipher-suites="[]"
2022-06-01T08:20:55.318765Z info FLAG: --tlsCertFile=""
2022-06-01T08:20:55.318770Z info FLAG: --tlsKeyFile=""
2022-06-01T08:20:55.318778Z info FLAG: --vklog="0"
2022-06-01T08:20:55.352095Z info klog Config not found: /var/run/secrets/remote/config
2022-06-01T08:20:55.360935Z info initializing mesh configuration ./etc/istio/config/mesh
2022-06-01T08:20:55.462159Z info controllers starting controller=configmap istio
2022-06-01T08:20:55.462576Z info Loaded MeshNetworks config from Kubernetes API server.
2022-06-01T08:20:55.462622Z info mesh networks configuration updated to: {
"networks": {
}
}
2022-06-01T08:20:55.464968Z info Loaded MeshConfig config from Kubernetes API server.
2022-06-01T08:20:55.465801Z info mesh configuration updated to: {
"proxyListenPort": 15001,
"connectTimeout": "10s",
"protocolDetectionTimeout": "0s",
"ingressClass": "istio",
"ingressService": "istio-ingressgateway",
"ingressControllerMode": "STRICT",
"enableTracing": true,
"defaultConfig": {
"configPath": "./etc/istio/proxy",
"binaryPath": "/usr/local/bin/envoy",
"serviceCluster": "istio-proxy",
"drainDuration": "45s",
"parentShutdownDuration": "60s",
"discoveryAddress": "istiod.istio-system.svc:15012",
"proxyAdminPort": 15000,
"controlPlaneAuthPolicy": "MUTUAL_TLS",
"statNameLength": 189,
"concurrency": 2,
"tracing": {
"zipkin": {
"address": "zipkin.istio-system:9411"
}
},
"statusPort": 15020,
"terminationDrainDuration": "5s"
},
"outboundTrafficPolicy": {
"mode": "ALLOW_ANY"
},
"enableAutoMtls": true,
"trustDomain": "cluster.local",
"trustDomainAliases": [
],
"defaultServiceExportTo": [
"*"
],
"defaultVirtualServiceExportTo": [
"*"
],
"defaultDestinationRuleExportTo": [
"*"
],
"rootNamespace": "istio-system",
"localityLbSetting": {
"enabled": true
},
"dnsRefreshRate": "5s",
"certificates": [
],
"thriftConfig": {
},
"serviceSettings": [
],
"enablePrometheusMerge": true,
"extensionProviders": [
{
"name": "prometheus",
"prometheus": {
}
},
{
"name": "stackdriver",
"stackdriver": {
}
},
{
"name": "envoy",
"envoyFileAccessLog": {
"path": "/dev/stdout"
}
}
],
"defaultProviders": {
}
}
2022-06-01T08:20:55.561936Z info initializing mesh networks from mesh config watcher
2022-06-01T08:20:55.562582Z info mesh configuration: {
"proxyListenPort": 15001,
"connectTimeout": "10s",
"protocolDetectionTimeout": "0s",
"ingressClass": "istio",
"ingressService": "istio-ingressgateway",
"ingressControllerMode": "STRICT",
"enableTracing": true,
"defaultConfig": {
"configPath": "./etc/istio/proxy",
"binaryPath": "/usr/local/bin/envoy",
"serviceCluster": "istio-proxy",
"drainDuration": "45s",
"parentShutdownDuration": "60s",
"discoveryAddress": "istiod.istio-system.svc:15012",
"proxyAdminPort": 15000,
"controlPlaneAuthPolicy": "MUTUAL_TLS",
"statNameLength": 189,
"concurrency": 2,
"tracing": {
"zipkin": {
"address": "zipkin.istio-system:9411"
}
},
"statusPort": 15020,
"terminationDrainDuration": "5s"
},
"outboundTrafficPolicy": {
"mode": "ALLOW_ANY"
},
"enableAutoMtls": true,
"trustDomain": "cluster.local",
"trustDomainAliases": [
],
"defaultServiceExportTo": [
"*"
],
"defaultVirtualServiceExportTo": [
"*"
],
"defaultDestinationRuleExportTo": [
"*"
],
"rootNamespace": "istio-system",
"localityLbSetting": {
"enabled": true
},
"dnsRefreshRate": "5s",
"certificates": [
],
"thriftConfig": {
},
"serviceSettings": [
],
"enablePrometheusMerge": true,
"extensionProviders": [
{
"name": "prometheus",
"prometheus": {
}
},
{
"name": "stackdriver",
"stackdriver": {
}
},
{
"name": "envoy",
"envoyFileAccessLog": {
"path": "/dev/stdout"
}
}
],
"defaultProviders": {
}
}
2022-06-01T08:20:55.562614Z info version: 1.13.2-tetratefips-v0-af687222b70be38751d8d0238045bc606f54f8ff-Clean
2022-06-01T08:20:55.562992Z info flags: {
"ServerOptions": {
"HTTPAddr": ":8080",
"HTTPSAddr": ":15017",
"GRPCAddr": ":15010",
"MonitoringAddr": ":15014",
"EnableProfiling": true,
"TLSOptions": {
"CaCertFile": "",
"CertFile": "",
"KeyFile": "",
"TLSCipherSuites": null,
"CipherSuits": null
},
"SecureGRPCAddr": ":15012"
},
"InjectionOptions": {
"InjectionDirectory": "./var/lib/istio/inject"
},
"PodName": "istiod-5475bd8f69-wfhvk",
"Namespace": "istio-system",
"Revision": "default",
"MeshConfigFile": "./etc/istio/config/mesh",
"NetworksConfigFile": "./etc/istio/config/meshNetworks",
"RegistryOptions": {
"FileDir": "",
"Registries": [
"Kubernetes"
],
"KubeOptions": {
"SystemNamespace": "",
"MeshServiceController": null,
"ResyncPeriod": 60000000000,
"DomainSuffix": "cluster.local",
"ClusterID": "Kubernetes",
"ClusterAliases": {},
"Metrics": null,
"XDSUpdater": null,
"NetworksWatcher": null,
"MeshWatcher": null,
"EndpointMode": 1,
"KubernetesAPIQPS": 80,
"KubernetesAPIBurst": 160,
"SyncInterval": 0,
"SyncTimeout": null,
"DiscoveryNamespacesFilter": null
},
"ClusterRegistriesNamespace": "istio-system",
"KubeConfig": "",
"DistributionCacheRetention": 60000000000,
"DistributionTrackingEnabled": true
},
"CtrlZOptions": {
"Port": 9876,
"Address": "localhost"
},
"Plugins": [
"ext_authz",
"authn",
"authz"
],
"KeepaliveOptions": {
"Time": 30000000000,
"Timeout": 10000000000,
"MaxServerConnectionAge": 1800000000000,
"MaxServerConnectionAgeGrace": 10000000000
},
"ShutdownDuration": 10000000000,
"JwtRule": ""
}
2022-06-01T08:20:55.563009Z info initializing mesh handlers
2022-06-01T08:20:55.563200Z info model reloading network gateways
2022-06-01T08:20:55.563218Z info creating CA and initializing public key
2022-06-01T08:20:55.563276Z info Use self-signed certificate as the CA certificate
2022-06-01T08:20:55.567335Z info pkica Load signing key and cert from existing secret istio-system:istio-ca-secret
2022-06-01T08:20:55.568141Z info pkica Using existing public key: -----BEGIN CERTIFICATE-----
xxxxxxxxxxxxxxxx......................
-----END CERTIFICATE-----
2022-06-01T08:20:55.568203Z info rootcertrotator Set up back off time 18m8s to start rotator.
2022-06-01T08:20:55.568227Z info initializing controllers
2022-06-01T08:20:55.568284Z info No certificates specified, skipping K8S DNS certificate controller
2022-06-01T08:20:55.568650Z info rootcertrotator Jitter is enabled, wait 18m8s before starting root cert rotator.
2022-06-01T08:20:55.759169Z warn kube Skipping CRD gateway.networking.k8s.io/v1alpha2/GatewayClass as it is not present
2022-06-01T08:20:55.759242Z warn kube Skipping CRD gateway.networking.k8s.io/v1alpha2/Gateway as it is not present
2022-06-01T08:20:55.759253Z warn kube Skipping CRD gateway.networking.k8s.io/v1alpha2/HTTPRoute as it is not present
2022-06-01T08:20:55.759260Z warn kube Skipping CRD gateway.networking.k8s.io/v1alpha2/ReferencePolicy as it is not present
2022-06-01T08:20:55.759268Z warn kube Skipping CRD gateway.networking.k8s.io/v1alpha2/TCPRoute as it is not present
2022-06-01T08:20:55.759278Z warn kube Skipping CRD gateway.networking.k8s.io/v1alpha2/TLSRoute as it is not present
2022-06-01T08:20:55.759677Z info Adding Kubernetes registry adapter
2022-06-01T08:20:55.759720Z info handling remote clusters in *controller.Multicluster
2022-06-01T08:20:55.759759Z info initializing Istiod DNS certificates host: istiod.istio-system.svc, custom host:
2022-06-01T08:20:56.081424Z info Generating istiod-signed cert for [istiod.istio-system.svc istiod-remote.istio-system.svc istio-pilot.istio-system.svc]:
-----BEGIN CERTIFICATE-----
xxxxxx.........................
-----END CERTIFICATE-----
2022-06-01T08:20:56.081633Z info No plugged-in cert at etc/cacerts/ca-key.pem; self-signed cert is used
2022-06-01T08:20:56.081992Z info x509 cert - Issuer: "O=cluster.local", Subject: "", SN: d010855a107d57c9aa300d69fa811358, NotBefore: "2022-06-01T08:18:56Z", NotAfter: "2032-05-29T08:20:56Z"
2022-06-01T08:20:56.082004Z info Istiod certificates are reloaded
2022-06-01T08:20:56.082094Z info spiffe Added 1 certs to trust domain cluster.local in peer cert verifier
2022-06-01T08:20:56.082104Z info initializing secure discovery service
2022-06-01T08:20:56.082150Z info initializing secure webhook server for istiod webhooks
2022-06-01T08:20:56.088222Z info initializing sidecar injector
2022-06-01T08:20:56.097142Z info initializing config validator
2022-06-01T08:20:56.097197Z info initializing Istiod admin server
2022-06-01T08:20:56.097378Z info initializing registry event handlers
2022-06-01T08:20:56.097470Z info starting discovery service
2022-06-01T08:20:56.097514Z info handling remote clusters in *kube.Multicluster
fatal error: unexpected signal during runtime execution
[signal SIGSEGV: segmentation violation code=0x1 addr=0x47 pc=0x7f963d0f4bd0]
runtime stack:
runtime.throw({0x3818d1a, 0x7f963e5a2880})
runtime/panic.go:1198 +0x71
runtime.sigpanic()
runtime/signal_unix.go:719 +0x396
goroutine 99 [syscall]:
runtime.cgocall(0x2bc6540, 0xc000093d90)
runtime/cgocall.go:156 +0x5c fp=0xc000093d68 sp=0xc000093d30 pc=0x40653c
net._C2func_getaddrinfo(0xc001665430, 0x0, 0xc001695e00, 0xc0008cb8c8)
_cgo_gotypes.go:91 +0x56 fp=0xc000093d90 sp=0xc000093d68 pc=0x55b0f6
net.cgoLookupIPCNAME.func1({0xc001665430, 0xc000093df8, 0xc000093f38}, 0xc0016652b0, 0x203000)
net/cgo_unix.go:163 +0x9f fp=0xc000093de8 sp=0xc000093d90 pc=0x55ce3f
net.cgoLookupIPCNAME({0x379df63, 0x3}, {0xc0016652b0, 0x589daa})
net/cgo_unix.go:163 +0x16d fp=0xc000093f38 sp=0xc000093de8 pc=0x55c68d
net.cgoIPLookup(0x5881e5, {0x379df63, 0xc000093fd0}, {0xc0016652b0, 0xc0005e1a80})
net/cgo_unix.go:220 +0x3b fp=0xc000093fa8 sp=0xc000093f38 pc=0x55cefb
net.cgoLookupIP·dwrap·25()
net/cgo_unix.go:230 +0x36 fp=0xc000093fe0 sp=0xc000093fa8 pc=0x55d376
runtime.goexit()
runtime/asm_amd64.s:1581 +0x1 fp=0xc000093fe8 sp=0xc000093fe0 pc=0x46cc81
created by net.cgoLookupIP
net/cgo_unix.go:230 +0x125
goroutine 1 [select]:
net.(*Resolver).lookupIPAddr(0x61af840, {0x3dc3a68, 0xc000078038}, {0x379df63, 0x20}, {0xc0016652b0, 0x9})
net/lookup.go:302 +0x5c7
net.(*Resolver).internetAddrList(0x3dc3a68, {0x3dc3a68, 0xc000078038}, {0x379df63, 0x3}, {0xc0016652b0, 0xe})
net/ipsock.go:288 +0x67a
net.(*Resolver).resolveAddrList(0x410065, {0x3dc3a68, 0xc000078038}, {0x37a26fc, 0x6}, {0x379df63, 0x7f963d312ad8}, {0xc0016652b0, 0xe}, {0x0, ...})
net/dial.go:221 +0x41b
net.(*ListenConfig).Listen(0xc001541698, {0x3dc3a68, 0xc000078038}, {0x379df63, 0xc0015416a8}, {0xc0016652b0, 0xe})
net/dial.go:626 +0x85
net.Listen({0x379df63, 0x5}, {0xc0016652b0, 0x2})
net/dial.go:712 +0x4b
istio.io/pkg/ctrlz.Run(0xc000c70810, {0x0, 0x0, 0x17})
istio.io/[email protected]/ctrlz/ctrlz.go:168 +0x66a
istio.io/istio/pilot/pkg/bootstrap.NewServer(0xc000afef00, {0x0, 0x0, 0x0})
istio.io/istio/pilot/pkg/bootstrap/server.go:350 +0x161f
istio.io/istio/pilot/cmd/pilot-discovery/app.newDiscoveryCommand.func2(0xc000afec80, {0xc00044ade0, 0x6, 0x6})
istio.io/istio/pilot/cmd/pilot-discovery/app/cmd.go:92 +0x4e
github.com/spf13/cobra.(*Command).execute(0xc000afec80, {0xc00044ad80, 0x6, 0x6})
github.com/spf13/[email protected]/command.go:856 +0x60e
github.com/spf13/cobra.(*Command).ExecuteC(0xc000afea00)
github.com/spf13/[email protected]/command.go:974 +0x3bc
github.com/spf13/cobra.(*Command).Execute(...)
github.com/spf13/[email protected]/command.go:902
main.main()
istio.io/istio/pilot/cmd/pilot-discovery/main.go:27 +0x25
goroutine 6 [chan receive]:
k8s.io/klog/v2.(*loggingT).flushDaemon(0x0)
k8s.io/klog/[email protected]/klog.go:1283 +0x6a
created by k8s.io/klog/v2.init.0
k8s.io/klog/[email protected]/klog.go:420 +0xfb
goroutine 7 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc000070500)
[email protected]/stats/view/worker.go:276 +0xb9
created by go.opencensus.io/stats/view.init.0
[email protected]/stats/view/worker.go:34 +0x92
goroutine 40 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001173a40)
k8s.io/[email protected]/util/workqueue/delaying_queue.go:231 +0x34e
created by k8s.io/client-go/util/workqueue.newDelayingQueue
k8s.io/[email protected]/util/workqueue/delaying_queue.go:68 +0x247
goroutine 23 [select]:
istio.io/pkg/cache.(*ttlCache).evicter(0xc000139f00, 0xc00054daa0)
istio.io/[email protected]/cache/ttlCache.go:123 +0xb2
created by istio.io/pkg/cache.NewTTLWithCallback
istio.io/[email protected]/cache/ttlCache.go:102 +0x165
goroutine 41 [select]:
istio.io/istio/pkg/kube/controllers.Queue.Run({{0x3e2e458, 0xc00055c460}, 0xc000c38e00, {0xc000c386b0, 0xf}, 0x0, 0xc000976230, 0xc000b70360}, 0xc0000c63c0)
istio.io/istio/pkg/kube/controllers/queue.go:107 +0x225
istio.io/istio/pkg/kube/configmapwatcher.(*Controller).Run(0xc0005e0200, 0xc0000c63c0)
istio.io/istio/pkg/kube/configmapwatcher/configmapwatcher.go:81 +0x1eb
created by istio.io/istio/pkg/config/mesh/kubemesh.NewConfigMapWatcher
istio.io/istio/pkg/config/mesh/kubemesh/watcher.go:59 +0x252
goroutine 24 [select]:
istio.io/istio/pilot/pkg/model.(*JwksResolver).refresher(0xc000b2c000)
istio.io/istio/pilot/pkg/model/jwks_resolver.go:385 +0xb2
created by istio.io/istio/pilot/pkg/model.newJwksResolverWithCABundlePaths
istio.io/istio/pilot/pkg/model/jwks_resolver.go:218 +0x313
goroutine 33 [select]:
k8s.io/client-go/tools/cache.(*processorListener).pop(0xc0005e0280)
k8s.io/[email protected]/tools/cache/shared_informer.go:752 +0x156
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
k8s.io/[email protected]/pkg/util/wait/wait.go:73 +0x5a
created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start
k8s.io/[email protected]/pkg/util/wait/wait.go:71 +0x88
goroutine 66 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc000c6aac8, 0x0)
runtime/sema.go:513 +0x13d
sync.(*Cond).Wait(0x379d901)
sync/cond.go:56 +0x8c
golang.org/x/net/http2.(*pipe).Read(0xc000c6aab0, {0xc0005b4e00, 0x200, 0x200})
golang.org/x/[email protected]/http2/pipe.go:76 +0xeb
golang.org/x/net/http2.transportResponseBody.Read({0x100000000000000}, {0xc0005b4e00, 0x0, 0xc00087dcb0})
golang.org/x/[email protected]/http2/transport.go:2384 +0x85
encoding/json.(*Decoder).refill(0xc00099cb40)
encoding/json/stream.go:165 +0x17f
encoding/json.(*Decoder).readValue(0xc00099cb40)
encoding/json/stream.go:140 +0xbb
encoding/json.(*Decoder).Decode(0xc00099cb40, {0x31cd580, 0xc000965248})
encoding/json/stream.go:63 +0x78
k8s.io/apimachinery/pkg/util/framer.(*jsonFrameReader).Read(0xc0005e4f00, {0xc0011db800, 0x400, 0x400})
k8s.io/[email protected]/pkg/util/framer/framer.go:152 +0x19c
k8s.io/apimachinery/pkg/runtime/serializer/streaming.(*decoder).Decode(0xc00080e820, 0xc0000c73e0, {0x3d87520, 0xc00115b500})
k8s.io/[email protected]/pkg/runtime/serializer/streaming/streaming.go:77 +0xa7
k8s.io/client-go/rest/watch.(*Decoder).Decode(0xc0000396e0)
k8s.io/[email protected]/rest/watch/decoder.go:49 +0x4f
k8s.io/apimachinery/pkg/watch.(*StreamWatcher).receive(0xc00115b4c0)
k8s.io/[email protected]/pkg/watch/streamwatcher.go:105 +0x11c
created by k8s.io/apimachinery/pkg/watch.NewStreamWatcher
k8s.io/[email protected]/pkg/watch/streamwatcher.go:76 +0x135
goroutine 30 [IO wait]:
internal/poll.runtime_pollWait(0x7f963fdf9118, 0x72)
runtime/netpoll.go:234 +0x89
internal/poll.(*pollDesc).wait(0xc000b2cd00, 0xc001410000, 0x0)
internal/poll/fd_poll_runtime.go:84 +0x32
internal/poll.(*pollDesc).waitRead(...)
internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000b2cd00, {0xc001410000, 0x8d42, 0x8d42})
internal/poll/fd_unix.go:167 +0x25a
net.(*netFD).Read(0xc000b2cd00, {0xc001410000, 0xc001414718, 0x1a})
net/fd_posix.go:56 +0x29
net.(*conn).Read(0xc0008ca0e8, {0xc001410000, 0x6e8919, 0xc0011f57f0})
net/net.go:183 +0x45
crypto/tls.(*atLeastReader).Read(0xc00139dea8, {0xc001410000, 0x0, 0x40cd6d})
crypto/tls/conn.go:777 +0x3d
bytes.(*Buffer).ReadFrom(0xc000b52278, {0x3d2a860, 0xc00139dea8})
bytes/buffer.go:204 +0x98
crypto/tls.(*Conn).readFromUntil(0xc000b52000, {0x3d43520, 0xc0008ca0e8}, 0x462f)
crypto/tls/conn.go:799 +0xe5
crypto/tls.(*Conn).readRecordOrCCS(0xc000b52000, 0x0)
crypto/tls/conn.go:606 +0x112
crypto/tls.(*Conn).readRecord(...)
crypto/tls/conn.go:574
crypto/tls.(*Conn).Read(0xc000b52000, {0xc0011f9000, 0x1000, 0x919e60})
crypto/tls/conn.go:1277 +0x16f
bufio.(*Reader).Read(0xc0009c1260, {0xc000148740, 0x9, 0x934e22})
bufio/bufio.go:227 +0x1b4
io.ReadAtLeast({0x3d2a5c0, 0xc0009c1260}, {0xc000148740, 0x9, 0x9}, 0x9)
io/io.go:328 +0x9a
io.ReadFull(...)
io/io.go:347
golang.org/x/net/http2.readFrameHeader({0xc000148740, 0x9, 0xc0013fccf0}, {0x3d2a5c0, 0xc0009c1260})
golang.org/x/[email protected]/http2/frame.go:237 +0x6e
golang.org/x/net/http2.(*Framer).ReadFrame(0xc000148700)
golang.org/x/[email protected]/http2/frame.go:498 +0x95
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc0011f5f98)
golang.org/x/[email protected]/http2/transport.go:2101 +0x130
golang.org/x/net/http2.(*ClientConn).readLoop(0xc000b3f980)
golang.org/x/[email protected]/http2/transport.go:1997 +0x6f
created by golang.org/x/net/http2.(*Transport).newClientConn
golang.org/x/[email protected]/http2/transport.go:725 +0xac5
goroutine 55 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001172660)
k8s.io/[email protected]/util/workqueue/delaying_queue.go:231 +0x34e
created by k8s.io/client-go/util/workqueue.newDelayingQueue
k8s.io/[email protected]/util/workqueue/delaying_queue.go:68 +0x247
goroutine 56 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0011727e0)
k8s.io/[email protected]/util/workqueue/delaying_queue.go:231 +0x34e
created by k8s.io/client-go/util/workqueue.newDelayingQueue
k8s.io/[email protected]/util/workqueue/delaying_queue.go:68 +0x247
goroutine 44 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0000e6a28, 0x1)
runtime/sema.go:513 +0x13d
sync.(*Cond).Wait(0xc00055cbc0)
sync/cond.go:56 +0x8c
k8s.io/client-go/tools/cache.(*DeltaFIFO).Pop(0xc0000e6a00, 0xc000976470)
k8s.io/[email protected]/tools/cache/delta_fifo.go:527 +0x233
k8s.io/client-go/tools/cache.(*controller).processLoop(0xc000b703f0)
k8s.io/[email protected]/tools/cache/controller.go:183 +0x36
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x7f963dbcd548)
k8s.io/[email protected]/pkg/util/wait/wait.go:155 +0x67
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x13d1828, {0x3d427a0, 0xc00092e930}, 0x1, 0xc0000c63c0)
k8s.io/[email protected]/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000b70458, 0x3b9aca00, 0x0, 0x0, 0x7f963dba3840)
k8s.io/[email protected]/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(...)
k8s.io/[email protected]/pkg/util/wait/wait.go:90
k8s.io/client-go/tools/cache.(*controller).Run(0xc000b703f0, 0xc0000c63c0)
k8s.io/[email protected]/tools/cache/controller.go:154 +0x2fb
k8s.io/client-go/tools/cache.(*sharedIndexInformer).Run(0xc0000e6500, 0xc001173a40)
k8s.io/[email protected]/tools/cache/shared_informer.go:414 +0x498
created by istio.io/istio/pkg/kube/configmapwatcher.(*Controller).Run
istio.io/istio/pkg/kube/configmapwatcher/configmapwatcher.go:76 +0xc5
goroutine 72 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0009c0cc0)
k8s.io/[email protected]/util/workqueue/delaying_queue.go:231 +0x34e
created by k8s.io/client-go/util/workqueue.newDelayingQueue
k8s.io/[email protected]/util/workqueue/delaying_queue.go:68 +0x247
goroutine 32 [chan receive]:
k8s.io/client-go/tools/cache.(*processorListener).run.func1()
k8s.io/[email protected]/tools/cache/shared_informer.go:782 +0x49
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x7f963dbcc4d0)
k8s.io/[email protected]/pkg/util/wait/wait.go:155 +0x67
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00087df38, {0x3d427a0, 0xc0005e4cf0}, 0x1, 0xc0000c73e0)
k8s.io/[email protected]/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0, 0x3b9aca00, 0x0, 0x0, 0xc00087df88)
k8s.io/[email protected]/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(...)
k8s.io/[email protected]/pkg/util/wait/wait.go:90
k8s.io/client-go/tools/cache.(*processorListener).run(0xc0005e0280)
k8s.io/[email protected]/tools/cache/shared_informer.go:781 +0x6b
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
k8s.io/[email protected]/pkg/util/wait/wait.go:73 +0x5a
created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start
k8s.io/[email protected]/pkg/util/wait/wait.go:71 +0x88
goroutine 48 [chan receive]:
k8s.io/client-go/tools/cache.(*sharedProcessor).run(0xc000bc61c0, 0x0)
k8s.io/[email protected]/tools/cache/shared_informer.go:638 +0x45
k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
k8s.io/[email protected]/pkg/util/wait/wait.go:56 +0x22
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
k8s.io/[email protected]/pkg/util/wait/wait.go:73 +0x5a
created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start
k8s.io/[email protected]/pkg/util/wait/wait.go:71 +0x88
goroutine 49 [chan receive]:
k8s.io/client-go/tools/cache.(*controller).Run.func1()
k8s.io/[email protected]/tools/cache/controller.go:130 +0x28
created by k8s.io/client-go/tools/cache.(*controller).Run
k8s.io/[email protected]/tools/cache/controller.go:129 +0x105
goroutine 50 [select]:
k8s.io/client-go/tools/cache.(*Reflector).watchHandler(0xc001181500, {0x0, 0x0, 0x61b26c0}, {0x3d877c8, 0xc00115b4c0}, 0xc0011f1d18, 0xc0002cb1a0, 0xc0000c63c0)
k8s.io/[email protected]/tools/cache/reflector.go:469 +0x1b6
k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch(0xc001181500, 0xc0000c63c0)
k8s.io/[email protected]/tools/cache/reflector.go:429 +0x696
k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
k8s.io/[email protected]/tools/cache/reflector.go:221 +0x26
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x7f963dbcd548)
k8s.io/[email protected]/pkg/util/wait/wait.go:155 +0x67
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001148ac0, {0x3d42780, 0xc0011a4ff0}, 0x1, 0xc0000c63c0)
k8s.io/[email protected]/pkg/util/wait/wait.go:156 +0xb6
k8s.io/client-go/tools/cache.(*Reflector).Run(0xc001181500, 0xc0000c63c0)
k8s.io/[email protected]/tools/cache/reflector.go:220 +0x1f8
k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
k8s.io/[email protected]/pkg/util/wait/wait.go:56 +0x22
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
k8s.io/[email protected]/pkg/util/wait/wait.go:73 +0x5a
created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start
k8s.io/[email protected]/pkg/util/wait/wait.go:71 +0x88
goroutine 53 [select]:
k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch.func2()
k8s.io/[email protected]/tools/cache/reflector.go:374 +0x12d
created by k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch
k8s.io/[email protected]/tools/cache/reflector.go:368 +0x378
goroutine 54 [select]:
golang.org/x/net/http2.(*clientStream).writeRequest(0xc000c6aa80, 0xc00053f300)
golang.org/x/[email protected]/http2/transport.go:1323 +0xaa8
golang.org/x/net/http2.(*clientStream).doRequest(0x0, 0x0)
golang.org/x/[email protected]/http2/transport.go:1185 +0x1e
created by golang.org/x/net/http2.(*ClientConn).RoundTrip
golang.org/x/[email protected]/http2/transport.go:1114 +0x30f
goroutine 67 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc00095bed0, 0x0)
runtime/sema.go:513 +0x13d
sync.(*Cond).Wait(0x341c940)
sync/cond.go:56 +0x8c
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001173920)
k8s.io/[email protected]/util/workqueue/queue.go:157 +0x9e
istio.io/istio/pkg/kube/controllers.Queue.processNextItem({{0x3e2e458, 0xc00055c460}, 0xc000c38e00, {0xc000c386b0, 0xf}, 0x0, 0xc000976230, 0xc000b70360})
istio.io/istio/pkg/kube/controllers/queue.go:131 +0x95
istio.io/istio/pkg/kube/controllers.Queue.Run.func1()
istio.io/istio/pkg/kube/controllers/queue.go:103 +0x4e
created by istio.io/istio/pkg/kube/controllers.Queue.Run
istio.io/istio/pkg/kube/controllers/queue.go:101 +0x1d3
goroutine 69 [select]:
istio.io/istio/security/pkg/pki/ca.(*SelfSignedCARootCertRotator).Run(0xc0000aee20, 0xc0000c63c0)
istio.io/istio/security/pkg/pki/ca/selfsignedcarootcertrotator.go:84 +0x119
created by istio.io/istio/security/pkg/pki/ca.(*IstioCA).Run
istio.io/istio/security/pkg/pki/ca/ca.go:304 +0x88
goroutine 70 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0007ec900)
k8s.io/[email protected]/util/workqueue/delaying_queue.go:231 +0x34e
created by k8s.io/client-go/util/workqueue.newDelayingQueue
k8s.io/[email protected]/util/workqueue/delaying_queue.go:68 +0x247
goroutine 98 [select]:
net.cgoLookupIP({0x3dc3a30, 0xc001698440}, {0x379df63, 0x9}, {0xc0016652b0, 0x0})
net/cgo_unix.go:231 +0x1b7
net.(*Resolver).lookupIP(0x61af840, {0x3dc3a30, 0xc001698440}, {0x379df63, 0x3}, {0xc0016652b0, 0x9})
net/lookup_unix.go:97 +0x128
net.glob..func1({0x3dc3a30, 0xc001698440}, 0x3, {0x379df63, 0x0}, {0xc0016652b0, 0xc001672598})
net/hook.go:23 +0x3d
net.(*Resolver).lookupIPAddr.func1()
net/lookup.go:296 +0x9f
internal/singleflight.(*Group).doCall(0x61af850, 0xc000801db0, {0xc0016652c0, 0xd}, 0xc001682880)
internal/singleflight/singleflight.go:95 +0x3b
created by internal/singleflight.(*Group).DoChan
internal/singleflight/singleflight.go:88 +0x2f1
Environment:
RKE2 K8S Cluster
NAME STATUS ROLES AGE VERSION
rke2-dev-001 Ready control-plane,etcd,master 22d v1.25.6+rke2r1
rke2-dev-002 Ready control-plane,etcd,master 22d v1.25.6+rke2r1
rke2-dev-003 Ready control-plane,etcd,master 22d v1.25.6+rke2r1
rke2-dev-004 Ready <none> 22d v1.25.6+rke2r1
rke2-dev-005 Ready <none> 22d v1.25.6+rke2r1
Issue
Trying to follow the guide https://docs.tetrate.io/getmesh-cli/install/install-istio/, the install appears to be successful, but getmesh validation fails.
[root@placeholder-rke2-dev-001 maintuser]# getmesh version
getmesh version: 1.1.4
active istioctl: 1.16.1-tetratefips-v0
client version: 1.16.1-tetratefips-v0
control plane version: 1.16.1
data plane version: 1.16.1 (1 proxies)
[root@placeholder-rke2-dev-001 maintuser]# getmesh config-validate
Running the config validator. This may take some time...
2023-04-07T00:02:10Z ERR Error fetching CronJobs per namespace default: the server could not find the requested resource
2023-04-07T00:02:10Z ERR Error fetching CronJobs per namespace default: the server could not find the requested resource
2023-04-07T00:02:11Z ERR Error fetching CronJobs per namespace elastic-system: the server could not find the requested resource
2023-04-07T00:02:11.510652Z info klog Waited for 1.185786199s due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:6443/apis/networking.istio.io/v1alpha3/namespaces/placeholder/destinationrules?labelSelector=
2023-04-07T00:02:11Z ERR Error fetching CronJobs per namespace placeholder: the server could not find the requested resource
2023-04-07T00:02:12Z ERR Error fetching CronJobs per namespace istio-system: the server could not find the requested resource
2023-04-07T00:02:12Z ERR Error fetching CronJobs per namespace keycloak: the server could not find the requested resource
2023-04-07T00:02:13Z ERR Error fetching CronJobs per namespace longhorn-system: the server could not find the requested resource
2023-04-07T00:02:14Z ERR Error fetching CronJobs per namespace neuvector: the server could not find the requested resource
2023-04-07T00:02:15Z ERR Error fetching CronJobs per namespace elastic-system: the server could not find the requested resource
2023-04-07T00:02:16Z ERR Error fetching CronJobs per namespace default: the server could not find the requested resource
2023-04-07T00:02:17Z ERR Error fetching CronJobs per namespace elastic-system: the server could not find the requested resource
2023-04-07T00:02:17Z ERR Error fetching CronJobs per namespace placeholder: the server could not find the requested resource
2023-04-07T00:02:18Z ERR Error fetching CronJobs per namespace istio-system: the server could not find the requested resource
2023-04-07T00:02:19Z ERR Error fetching CronJobs per namespace keycloak: the server could not find the requested resource
2023-04-07T00:02:19Z ERR Error fetching CronJobs per namespace longhorn-system: the server could not find the requested resource
2023-04-07T00:02:20Z ERR Error fetching CronJobs per namespace neuvector: the server could not find the requested resource
2023-04-07T00:02:22.107014Z info klog Waited for 1.194946355s due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:6443/api/v1/namespaces/placeholder
2023-04-07T00:02:22Z ERR Error fetching CronJobs per namespace placeholder: the server could not find the requested resource
2023-04-07T00:02:22Z ERR Error fetching CronJobs per namespace default: the server could not find the requested resource
2023-04-07T00:02:23Z ERR Error fetching CronJobs per namespace elastic-system: the server could not find the requested resource
2023-04-07T00:02:24Z ERR Error fetching CronJobs per namespace placeholder: the server could not find the requested resource
2023-04-07T00:02:24Z ERR Error fetching CronJobs per namespace istio-system: the server could not find the requested resource
2023-04-07T00:02:25Z ERR Error fetching CronJobs per namespace keycloak: the server could not find the requested resource
2023-04-07T00:02:25Z ERR Error fetching CronJobs per namespace longhorn-system: the server could not find the requested resource
2023-04-07T00:02:26Z ERR Error fetching CronJobs per namespace neuvector: the server could not find the requested resource
2023-04-07T00:02:28Z ERR Error fetching CronJobs per namespace istio-system: the server could not find the requested resource
2023-04-07T00:02:28Z ERR Error fetching CronJobs per namespace default: the server could not find the requested resource
2023-04-07T00:02:29Z ERR Error fetching CronJobs per namespace elastic-system: the server could not find the requested resource
2023-04-07T00:02:30Z ERR Error fetching CronJobs per namespace placeholder: the server could not find the requested resource
2023-04-07T00:02:30Z ERR Error fetching CronJobs per namespace istio-system: the server could not find the requested resource
2023-04-07T00:02:31Z ERR Error fetching CronJobs per namespace keycloak: the server could not find the requested resource
2023-04-07T00:02:32Z ERR Error fetching CronJobs per namespace longhorn-system: the server could not find the requested resource
2023-04-07T00:02:32Z ERR Error fetching CronJobs per namespace neuvector: the server could not find the requested resource
2023-04-07T00:02:34Z ERR Error fetching CronJobs per namespace keycloak: the server could not find the requested resource
2023-04-07T00:02:34.506253Z info klog Waited for 1.195471236s due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:6443/api/v1/namespaces
2023-04-07T00:02:35Z ERR Error fetching CronJobs per namespace default: the server could not find the requested resource
2023-04-07T00:02:35Z ERR Error fetching CronJobs per namespace elastic-system: the server could not find the requested resource
2023-04-07T00:02:36Z ERR Error fetching CronJobs per namespace placeholder: the server could not find the requested resource
2023-04-07T00:02:37Z ERR Error fetching CronJobs per namespace istio-system: the server could not find the requested resource
2023-04-07T00:02:37Z ERR Error fetching CronJobs per namespace keycloak: the server could not find the requested resource
2023-04-07T00:02:38Z ERR Error fetching CronJobs per namespace longhorn-system: the server could not find the requested resource
2023-04-07T00:02:38Z ERR Error fetching CronJobs per namespace neuvector: the server could not find the requested resource
2023-04-07T00:02:40Z ERR Error fetching CronJobs per namespace longhorn-system: the server could not find the requested resource
2023-04-07T00:02:41Z ERR Error fetching CronJobs per namespace default: the server could not find the requested resource
2023-04-07T00:02:42Z ERR Error fetching CronJobs per namespace elastic-system: the server could not find the requested resource
2023-04-07T00:02:42Z ERR Error fetching CronJobs per namespace placeholder: the server could not find the requested resource
2023-04-07T00:02:43Z ERR Error fetching CronJobs per namespace istio-system: the server could not find the requested resource
2023-04-07T00:02:43Z ERR Error fetching CronJobs per namespace keycloak: the server could not find the requested resource
2023-04-07T00:02:44Z ERR Error fetching CronJobs per namespace longhorn-system: the server could not find the requested resource
2023-04-07T00:02:45Z ERR Error fetching CronJobs per namespace neuvector: the server could not find the requested resource
2023-04-07T00:02:46.906630Z info klog Waited for 1.195688531s due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:6443/api/v1/namespaces
2023-04-07T00:02:47Z ERR Error fetching CronJobs per namespace neuvector: the server could not find the requested resource
2023-04-07T00:02:47Z ERR Error fetching CronJobs per namespace default: the server could not find the requested resource
2023-04-07T00:02:48Z ERR Error fetching CronJobs per namespace elastic-system: the server could not find the requested resource
2023-04-07T00:02:48Z ERR Error fetching CronJobs per namespace placeholder: the server could not find the requested resource
2023-04-07T00:02:49Z ERR Error fetching CronJobs per namespace istio-system: the server could not find the requested resource
2023-04-07T00:02:50Z ERR Error fetching CronJobs per namespace keycloak: the server could not find the requested resource
2023-04-07T00:02:50Z ERR Error fetching CronJobs per namespace longhorn-system: the server could not find the requested resource
2023-04-07T00:02:51Z ERR Error fetching CronJobs per namespace neuvector: the server could not find the requested resource
error kiali validation:
the server could not find the requested resource
the server could not find the requested resource
the server could not find the requested resource
the server could not find the requested resource
the server could not find the requested resource
the server could not find the requested resource
the server could not find the requested resource
Steps to reproduce:
kubectl create namespace istio-system
Make it privileged:
kubectl label namespace istio-system "pod-security.kubernetes.io/enforce=privileged"
[root@placeholder-rke2-dev-001 maintuser]# getmesh istioctl install --set profile=default --set hub=10.128.8.119/ironbank/tetrate --set tag=1.16.1-tetratefips-v1
[WARNING] your current patch version 1.16.1 is not the latest version 1.16.3. We recommend you fetch the latest version through "getmesh fetch" command, and switch to the latest version through "getmesh switch" command
? Proceed? [y/N] y
✔ No issues found when checking the cluster. Istio is safe to install or upgrade!
To get started, check out https://istio.io/latest/docs/setup/getting-started/
This will install the Istio 1.16.1 default profile with ["Istio core" "Istiod" "Ingress gateways"] components into the cluster. Proceed? (y/N) y
✔ Istio core installed
✔ Istiod installed
✔ Ingress gateways installed
✔ Installation complete
Making this installation the default for injection and validation.
Thank you for installing Istio 1.16. Please take a few minutes to tell us about your install/upgrade experience! https://forms.gle/99uiMML96AmsXY5d6
1 Istio control planes detected, checking --revision "default" only
✔ ClusterRole: istiod-istio-system.istio-system checked successfully
✔ ClusterRole: istio-reader-istio-system.istio-system checked successfully
✔ ClusterRoleBinding: istio-reader-istio-system.istio-system checked successfully
✔ ClusterRoleBinding: istiod-istio-system.istio-system checked successfully
✔ ServiceAccount: istio-reader-service-account.istio-system checked successfully
✔ Role: istiod-istio-system.istio-system checked successfully
✔ RoleBinding: istiod-istio-system.istio-system checked successfully
✔ ServiceAccount: istiod-service-account.istio-system checked successfully
✔ CustomResourceDefinition: wasmplugins.extensions.istio.io.istio-system checked successfully
✔ CustomResourceDefinition: destinationrules.networking.istio.io.istio-system checked successfully
✔ CustomResourceDefinition: envoyfilters.networking.istio.io.istio-system checked successfully
✔ CustomResourceDefinition: gateways.networking.istio.io.istio-system checked successfully
✔ CustomResourceDefinition: proxyconfigs.networking.istio.io.istio-system checked successfully
✔ CustomResourceDefinition: serviceentries.networking.istio.io.istio-system checked successfully
✔ CustomResourceDefinition: sidecars.networking.istio.io.istio-system checked successfully
✔ CustomResourceDefinition: virtualservices.networking.istio.io.istio-system checked successfully
✔ CustomResourceDefinition: workloadentries.networking.istio.io.istio-system checked successfully
✔ CustomResourceDefinition: workloadgroups.networking.istio.io.istio-system checked successfully
✔ CustomResourceDefinition: authorizationpolicies.security.istio.io.istio-system checked successfully
✔ CustomResourceDefinition: peerauthentications.security.istio.io.istio-system checked successfully
✔ CustomResourceDefinition: requestauthentications.security.istio.io.istio-system checked successfully
✔ CustomResourceDefinition: telemetries.telemetry.istio.io.istio-system checked successfully
✔ CustomResourceDefinition: istiooperators.install.istio.io.istio-system checked successfully
✔ HorizontalPodAutoscaler: istiod.istio-system checked successfully
✔ ClusterRole: istiod-clusterrole-istio-system.istio-system checked successfully
✔ ClusterRole: istiod-gateway-controller-istio-system.istio-system checked successfully
✔ ClusterRoleBinding: istiod-clusterrole-istio-system.istio-system checked successfully
✔ ClusterRoleBinding: istiod-gateway-controller-istio-system.istio-system checked successfully
✔ ConfigMap: istio.istio-system checked successfully
✔ Deployment: istiod.istio-system checked successfully
✔ ConfigMap: istio-sidecar-injector.istio-system checked successfully
✔ MutatingWebhookConfiguration: istio-sidecar-injector.istio-system checked successfully
✔ PodDisruptionBudget: istiod.istio-system checked successfully
✔ ClusterRole: istio-reader-clusterrole-istio-system.istio-system checked successfully
✔ ClusterRoleBinding: istio-reader-clusterrole-istio-system.istio-system checked successfully
✔ Role: istiod.istio-system checked successfully
✔ RoleBinding: istiod.istio-system checked successfully
✔ Service: istiod.istio-system checked successfully
✔ ServiceAccount: istiod.istio-system checked successfully
✔ EnvoyFilter: stats-filter-1.13.istio-system checked successfully
✔ EnvoyFilter: tcp-stats-filter-1.13.istio-system checked successfully
✔ EnvoyFilter: stats-filter-1.14.istio-system checked successfully
✔ EnvoyFilter: tcp-stats-filter-1.14.istio-system checked successfully
✔ EnvoyFilter: stats-filter-1.15.istio-system checked successfully
✔ EnvoyFilter: tcp-stats-filter-1.15.istio-system checked successfully
✔ EnvoyFilter: stats-filter-1.16.istio-system checked successfully
✔ EnvoyFilter: tcp-stats-filter-1.16.istio-system checked successfully
✔ ValidatingWebhookConfiguration: istio-validator-istio-system.istio-system checked successfully
✔ HorizontalPodAutoscaler: istio-ingressgateway.istio-system checked successfully
✔ Deployment: istio-ingressgateway.istio-system checked successfully
✔ PodDisruptionBudget: istio-ingressgateway.istio-system checked successfully
✔ Role: istio-ingressgateway-sds.istio-system checked successfully
✔ RoleBinding: istio-ingressgateway-sds.istio-system checked successfully
✔ Service: istio-ingressgateway.istio-system checked successfully
✔ ServiceAccount: istio-ingressgateway-service-account.istio-system checked successfully
Checked 15 custom resource definitions
Checked 2 Istio Deployments
✔ Istio is installed and verified successfully
Thanks.
Support users fetching an Istio distribution with a command like:
getistio fetch --name 1.9.0-istio-v0
I've tried to install the getmesh CLI by executing curl -sL https://istio.tetratelabs.io/getmesh/install.sh | bash.
The output is the following:
bash-3.2$ curl -sL https://istio.tetratelabs.io/getmesh/install.sh | bash
tetratelabs/getmesh info checking GitHub for latest tag
tetratelabs/getmesh info found version: 1.1.2 for v1.1.2/darwin/arm64
tetratelabs/getmesh info installed /Users/marcnavarro/.getmesh/bin/getmesh
No errors, but also no successful installation message.
My first thought was that the command to grep GETMESH_HOME from the detected profile was failing somehow, so I added an else statement:
log_info "profile detected at ($detected_profile)"
if ! command grep -qc 'GETMESH_HOME' "$detected_profile"; then
log_info "updating user profile ($detected_profile)..."
log_info "the following two lines are added into your profile ($detected_profile):"
printf "\n$path_str\n"
command printf "$path_str" >> "$detected_profile"
printf "\nFinished installation. Open a new terminal to start using getmesh!\n"
else
log_info "your profile already contains GETMESH_HOME in the path" #expected log_info
fi
But surprisingly, the expected log_info (added in the else part) was not being printed, and my profile still doesn't contain any GETMESH_HOME.
bash-3.2$ ./install.sh
tetratelabs/getmesh info checking GitHub for latest tag
tetratelabs/getmesh info found version: 1.1.2 for v1.1.2/darwin/arm64
tetratelabs/getmesh info installed /Users/marcnavarro/.getmesh/bin/getmesh
tetratelabs/getmesh info profile detected at (/Users/marcnavarro/.zshrc)
I've been playing around trying to figure out the culprit, but couldn't really pin it down. I don't know whether it's command, my OS, my arch (Apple M1), or the default zsh version on macOS Big Sur.
Moreover, command grep ... seems to execute correctly if the -q flag is dropped: one can then see the count output of 0.
The only solution that I found was storing grep's count output in a variable and comparing it to 0:
log_info "profile detected at ($detected_profile)"
local getmesh_in_profile=$(command grep -c 'GETMESH_HOME' "$detected_profile")
log_info "found ($getmesh_in_profile) GETMESH_HOME in detected profile"
if [[ "$getmesh_in_profile" -eq 0 ]]; then
log_info "updating user profile ($detected_profile)..."
log_info "the following two lines are added into your profile ($detected_profile):"
printf "\n$path_str\n"
command printf "$path_str" >> "$detected_profile"
printf "\nFinished installation. Open a new terminal to start using getmesh!\n"
else
log_info "your profile already contains getmesh in the path"
fi
Now it worked:
marcnavarro@Marcs-MacBook-Pro ~ % ./install_debug.sh
tetratelabs/getmesh info checking GitHub for latest tag
tetratelabs/getmesh info found version: 1.1.2 for v1.1.2/darwin/arm64
tetratelabs/getmesh info installed /Users/marcnavarro/.getmesh/bin/getmesh
tetratelabs/getmesh info profile detected and found (/Users/marcnavarro/.zshrc)
tetratelabs/getmesh info found (0) GETMESH_HOME in detected profile
tetratelabs/getmesh info updating user profile (/Users/marcnavarro/.zshrc)...
tetratelabs/getmesh info the following two lines are added into your profile (/Users/marcnavarro/.zshrc):
export GETMESH_HOME="$HOME/.getmesh"
export PATH="$GETMESH_HOME/bin:$PATH"
Finished installation. Open a new terminal to start using getmesh!
Do you know what the problem could be with the current code in main?
Should I create a PR with the proposed workaround of comparing grep's count result?
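For anyone trying to reproduce this, here is a standalone sketch of the two grep behaviors being compared, independent of the install script (the temp file and its contents are made up for illustration):

```shell
# Demo of grep's exit-status semantics.
# -q suppresses output and only sets the exit status (0 = match, 1 = no match);
# -c prints a match count, and also exits 1 when that count is 0.
profile=$(mktemp)
printf 'export PATH="$HOME/bin:$PATH"\n' > "$profile"

if grep -q 'GETMESH_HOME' "$profile"; then status=0; else status=1; fi
echo "grep -q exit status: $status"

count=$(grep -c 'GETMESH_HOME' "$profile" || true)
echo "grep -c count: $count"

# The workaround in the report branches on the count instead of the status:
if [ "$count" -eq 0 ]; then
  echo "would update profile"
fi
rm -f "$profile"
```

Note that per POSIX, grep exits non-zero whenever no line matches, with or without -q, so any `set -e`-style handling in the surrounding script would treat the two forms the same way.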
getmesh version
getmesh version: 1.1.3
active istioctl: 1.11.3-tetrate-v0
no running Istio pods in "istio-system"
1.11.3-tetrate-v0
getmesh istioctl install --set profile=demo --filename istio-2-gw.yaml
Execute istioctl with given arguments where the version of istioctl is set by "getistio fetch or switch"
Usage:
getmesh istioctl <args...> [flags]
Examples:
# install Istio with the default profile
getmesh istioctl install --set profile=default
# check versions of Istio data plane, control plane, and istioctl
getmesh istioctl version
Flags:
-h, --help help for istioctl
Global Flags:
-c, --kubeconfig string Kubernetes configuration file
Dear Tetratelabs Team,
Please bump the kiali version in the Go code. There is an existing dependency on version 1.43+, but it's being replaced with an older package here:
Line 101 in 6089ff1
Currently, a vulnerability scan of getmesh version 1.1.5 flags a CVE that is more than a year old: CVE-2021-20278
https://nvd.nist.gov/vuln/detail/CVE-2021-20278
Please remove the replacement or replace it with a newer version and release it. Thank you!
azuterios
Right now getmesh check-upgrade displays:
[Summary of your Istio mesh]
active istioctl version: 1.11.3-tetrate-v0
control plane version: 1.11.3-tetrate-v0
[GetMesh Check]
- There is the available patch for the minor version 1.11-tetrate. We recommend upgrading all 1.11-tetrate versions -> 1.11.6-tetrate-v0
While that is useful, we should also provide a hint on how to upgrade, as brew does. Something like:
Hint: run getmesh switch [the version]
would help with autodiscovery and save a couple of commands to figure out how.
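A minimal sketch of what the extra hint could look like, assuming the fetch/switch-by-name commands documented elsewhere in this tracker (the exact wording is made up):

```shell
# Illustrative only: appending a brew-style hint to the check-upgrade output.
recommended="1.11.6-tetrate-v0"
echo "- There is an available patch for the minor version 1.11-tetrate. We recommend upgrading all 1.11-tetrate versions -> ${recommended}"
echo "  Hint: run 'getmesh fetch --name ${recommended}' followed by 'getmesh switch --name ${recommended}'"
```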
While following the doc, the command fails with the error:
getmesh istioctl install -f istio-2-gw.yaml
error executing istioctl: exit status 64, Error: unknown shorthand flag: 'f' in -f
I'm not sure whether the same command worked for previous releases and has since stopped working. We need to update the step with the correct command or remove it completely.
Some Istio images seem to be built with native Go instead of BoringCrypto Go (boringgo). This looks like an issue in the build pipeline and needs to be addressed immediately.
❯ go test ./...
? github.com/tetratelabs/getmesh [no test files]
ok github.com/tetratelabs/getmesh/cmd 0.016s
? github.com/tetratelabs/getmesh/doc [no test files]
2023/01/22 09:45:19 fork/exec ./getmesh: no such file or directory
FAIL github.com/tetratelabs/getmesh/e2e 0.011s
ok github.com/tetratelabs/getmesh/internal/cacerts/certutils 0.227s
? github.com/tetratelabs/getmesh/internal/cacerts/k8s [no test files]
ok github.com/tetratelabs/getmesh/internal/cacerts/providers 0.283s
ok github.com/tetratelabs/getmesh/internal/cacerts/providers/config 0.016s
? github.com/tetratelabs/getmesh/internal/cacerts/providers/models [no test files]
ok github.com/tetratelabs/getmesh/internal/checkupgrade 0.040s
ok github.com/tetratelabs/getmesh/internal/configvalidator 0.012s
ok github.com/tetratelabs/getmesh/internal/getmesh 0.003s
ok github.com/tetratelabs/getmesh/internal/istioctl 8.985s
ok github.com/tetratelabs/getmesh/internal/manifest 0.020s
ok github.com/tetratelabs/getmesh/internal/manifestchecker 0.006s
? github.com/tetratelabs/getmesh/internal/test [no test files]
ok github.com/tetratelabs/getmesh/internal/util 0.006s
? github.com/tetratelabs/getmesh/internal/util/logger [no test files]
FAIL
I am running getmesh istioctl install --set profile=minimal --set tag=1.13.3-tetratefips-v0-distroless to install Tetrate Istio in FIPS mode with the distroless option selected.
I am getting this error in the istiod container log:
Error: failed to create discovery service:
error initializing kube client:
failed reading mesh config:
cannot read mesh config file open ./etc/istio/config/mesh: permission denied
Here is the complete log:
2022-05-08T21:10:56.373600Z info FLAG: --caCertFile=""
2022-05-08T21:10:56.373631Z info FLAG: --clusterAliases="[]"
2022-05-08T21:10:56.373635Z info FLAG: --clusterID="Kubernetes"
2022-05-08T21:10:56.373638Z info FLAG: --clusterRegistriesNamespace="istio-system"
2022-05-08T21:10:56.373641Z info FLAG: --configDir=""
2022-05-08T21:10:56.373643Z info FLAG: --ctrlz_address="localhost"
2022-05-08T21:10:56.373648Z info FLAG: --ctrlz_port="9876"
2022-05-08T21:10:56.373650Z info FLAG: --domain="cluster.local"
2022-05-08T21:10:56.373653Z info FLAG: --grpcAddr=":15010"
2022-05-08T21:10:56.373659Z info FLAG: --help="false"
2022-05-08T21:10:56.373661Z info FLAG: --httpAddr=":8080"
2022-05-08T21:10:56.373664Z info FLAG: --httpsAddr=":15017"
2022-05-08T21:10:56.373669Z info FLAG: --keepaliveInterval="30s"
2022-05-08T21:10:56.373672Z info FLAG: --keepaliveMaxServerConnectionAge="30m0s"
2022-05-08T21:10:56.373675Z info FLAG: --keepaliveTimeout="10s"
2022-05-08T21:10:56.373679Z info FLAG: --kubeconfig=""
2022-05-08T21:10:56.373685Z info FLAG: --kubernetesApiBurst="160"
2022-05-08T21:10:56.373694Z info FLAG: --kubernetesApiQPS="80"
2022-05-08T21:10:56.373700Z info FLAG: --log_as_json="false"
2022-05-08T21:10:56.373704Z info FLAG: --log_caller=""
2022-05-08T21:10:56.373708Z info FLAG: --log_output_level="default:info"
2022-05-08T21:10:56.373712Z info FLAG: --log_rotate=""
2022-05-08T21:10:56.373715Z info FLAG: --log_rotate_max_age="30"
2022-05-08T21:10:56.373720Z info FLAG: --log_rotate_max_backups="1000"
2022-05-08T21:10:56.373725Z info FLAG: --log_rotate_max_size="104857600"
2022-05-08T21:10:56.373729Z info FLAG: --log_stacktrace_level="default:none"
2022-05-08T21:10:56.373737Z info FLAG: --log_target="[stdout]"
2022-05-08T21:10:56.373748Z info FLAG: --meshConfig="./etc/istio/config/mesh"
2022-05-08T21:10:56.373752Z info FLAG: --monitoringAddr=":15014"
2022-05-08T21:10:56.373756Z info FLAG: --namespace="istio-system"
2022-05-08T21:10:56.373761Z info FLAG: --networksConfig="./etc/istio/config/meshNetworks"
2022-05-08T21:10:56.373768Z info FLAG: --plugins="[ext_authz,authn,authz]"
2022-05-08T21:10:56.373778Z info FLAG: --profile="true"
2022-05-08T21:10:56.373786Z info FLAG: --registries="[Kubernetes]"
2022-05-08T21:10:56.373796Z info FLAG: --resync="1m0s"
2022-05-08T21:10:56.373800Z info FLAG: --secureGRPCAddr=":15012"
2022-05-08T21:10:56.373805Z info FLAG: --shutdownDuration="10s"
2022-05-08T21:10:56.373809Z info FLAG: --tls-cipher-suites="[]"
2022-05-08T21:10:56.373814Z info FLAG: --tlsCertFile=""
2022-05-08T21:10:56.373819Z info FLAG: --tlsKeyFile=""
2022-05-08T21:10:56.373829Z info FLAG: --vklog="0"
2022-05-08T21:10:56.381865Z error failed to create discovery service: error initializing kube client: failed reading mesh config: cannot read mesh config file open ./etc/istio/config/mesh: permission denied
Error: failed to create discovery service: error initializing kube client: failed reading mesh config: cannot read mesh config file open ./etc/istio/config/mesh: permission denied
It would be nice if getistio switch --version 1.8 worked and translated to the latest patch version of the provided minor release in the same way fetch does. Instead, today, it translates to an invalid version, even though fetch does the right thing:
~ getistio version
getistio version: 1.0.3
# snip
~ getistio fetch --version 1.9
fallback to the tetrate flavor since --flavor flag is not given or not supported
fallback to 1.9.0 which is the latest patch version in the given verion minor 1.9
fallback to the flavor 0 version which is the latest one in 1.9.0-tetrate
1.9.0-tetrate-v0 already fetched: download skipped
For more information about 1.9.0-tetrate-v0, please refer to the release notes:
- https://istio.io/latest/news/releases/1.9.x/announcing-1.9/
istioctl switched to 1.9.0-tetrate-v0 now
~ getistio fetch --version 1.8
fallback to the tetrate flavor since --flavor flag is not given or not supported
fallback to 1.8.3 which is the latest patch version in the given verion minor 1.8
fallback to the flavor 0 version which is the latest one in 1.8.3-tetrate
1.8.3-tetrate-v0 already fetched: download skipped
For more information about 1.8.3-tetrate-v0, please refer to the release notes:
- https://istio.io/latest/news/releases/1.8.x/announcing-1.8.3/
istioctl switched to 1.8.3-tetrate-v0 now
~ getistio switch --version 1.9
istioctl not fetched for 1.9-tetrate-v0. Please run `getistio fetch`: file does not exist
✘ ~ getistio switch --version 1.8
istioctl not fetched for 1.8-tetrate-v0. Please run `getistio fetch`: file does not exist
✘ ~
More generally, I think we need a single spot in our codebase where, given flavor, version, and flavorversion, we return a consistent name. Small issues keep cropping up because our handling of the three values is slightly different in different contexts, which is painful for users.
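As a sketch of that single spot, assuming the naming scheme visible throughout this tracker (<version>-<flavor>-v<flavorversion>, e.g. 1.9.0-tetrate-v0); the helper name is made up:

```shell
# Hypothetical helper: the one place that turns (version, flavor,
# flavor_version) into a distribution name. Both fetch and switch would
# call this after resolving a bare minor like "1.9" to its latest patch,
# so the two commands can never disagree on the final name.
distribution_name() {
  printf '%s-%s-v%s\n' "$1" "$2" "$3"
}

distribution_name 1.9.0 tetrate 0      # 1.9.0-tetrate-v0
distribution_name 1.8.3 tetratefips 0  # 1.8.3-tetratefips-v0
```

The point is less the formatting than the call sites: switch building "1.9-tetrate-v0" directly (as in the transcript above) is exactly the divergence this would prevent.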
Installation failed because it can no longer pull the v1.9.0 FIPS images (tetrate-docker-getistio-docker.bintray.io/proxyv2:1.9.0-tetratefips-v1, etc.). Have those images been moved?
I think the naming format for the resource has changed.
If I use the following as CA name: "projects/my-project-name/locations/us-west1/certificateAuthorities/20210809-46c-fh8"
the getmesh gen-ca command fails with:
unable to issue CA, due to error: unable to create GCP certificate: rpc error: code = NotFound desc = Requested entity was not found.
It looks like a concept of caPools was introduced. Here's the full resource name of the CA I created in GCP:
projects/my-project-name/locations/us-west1/caPools/istio-tetratelabs-io/certificateAuthorities/20210809-46c-fh8
However, if I try to use the above format, getmesh fails again (different error):
unable to issue CA, due to error: unable to create GCP certificate: rpc error: code = InvalidArgument desc = Malformed collection name: 'caPools/certificateAuthorities/certificates'
I am assuming this is because getmesh still uses the v1beta version of the API/protos. Perhaps we should migrate to v1 of the privateca api.
Currently we do not have a way to choose the distroless image for any of the flavors at install time.
This request came from MicroStrategy, as they are seeing some CVE issues in the ubuntu-based image, even though we release a distroless image for every flavor.
I attempted to run the commands from the Getting Started Quickly guide via a YAML configuration on a CD platform. This is executing via bash in a container after Terraform deploys the EKS cluster and installs/configures the app via kubectl.
I'm experiencing issues with the bash profile: it says getistio was installed properly, but then the command can't be found.
Container Image:
- node:12-alpine
Commands listed in YAML:
- curl -sL https://tetrate.bintray.com/getistio/download.sh | bash
- getistio istioctl install --set profile=demo
- getistio version
- getistio config-validate
Console Output / Errors:
> curl -sL https://tetrate.bintray.com/getistio/download.sh | bash
Downloading GetIstio from https://tetrate.bintray.com/getistio/getistio_linux_amd64_v1.0.0.tar.gz ...
GetIstio Download Complete!
Error: No user profile found.
Tried $PROFILE (), ~/.bashrc, ~/.bash_profile, ~/.zshrc, ~/.profile, and ~/.config/fish/config.fish.
You can either create one of these and try again or add this to the appropriate file:
export GETISTIO_HOME="$HOME/.getistio"
export PATH="$GETISTIO_HOME/bin:$PATH"
Downloading latest istio ...
Downloading 1.8.2-tetrate-v0 from https://tetrate.bintray.com/getistio/istio-1.8.2-tetrate-v0-linux-amd64.tar.gz ...
Istio 1.8.2 Download Complete!
Istio has been successfully downloaded into your system.
For more information about 1.8.2-tetrate-v0, please refer to the release notes:
- https://istio.io/latest/news/releases/1.8.x/announcing-1.8.2/
istioctl switched to 1.8.2-tetrate-v0 now
Finished installation. Open a new terminal to start using getistio!
> getistio istioctl install --set profile=demo
/bin/sh: getistio: not found
1.12.1 images were released. Update getmesh to support these images.
We are seeing the CVE-2022-27664 vulnerability reported because getmesh depends on v0.0.0-20210614182718-04defd469f4e.
Affected packages: golang.org/x/net, golang.org/x/net/http2, and golang.org/x/net/http/httpguts.
Version reporting this vulnerability: v0.0.0-20210614182718-04defd469f4e
The fix is available in: 0.0.0-20220906165146-f3363e06e74c
Please update all of the affected packages mentioned above to the fixed version, 0.0.0-20220906165146-f3363e06e74c.
We are trying to see if you can produce a distroless image version of the FIPS-enabled Istio. On the current version of Istio, a security scan reveals some vulnerabilities related to the kernel and to components that are not required for Istio to function. Any help would be greatly appreciated!
https://github.com/tetratelabs/getmesh#overview does elaborate on what flavors mean/are. It would be great to also explain, with examples, what flavor versions are.
Fetching a binary using only the --flavor flag should work (pulling the latest version/flavor-version available), but it doesn't work in isolation today; you have to provide both --flavor and --version to pull the correct binary:
$ getistio fetch --flavor istio
1.9.0-tetrate-v0 already fetched: download skipped
For more information about 1.9.0-tetrate-v0, please refer to the release notes:
- https://istio.io/latest/news/releases/1.9.x/announcing-1.9/
istioctl switched to 1.9.0-tetrate-v0 now
$ getistio fetch --flavor istio --flavor-version 0
1.9.0-tetrate-v0 already fetched: download skipped
For more information about 1.9.0-tetrate-v0, please refer to the release notes:
- https://istio.io/latest/news/releases/1.9.x/announcing-1.9/
istioctl switched to 1.9.0-tetrate-v0 now
Note that the tetrate flavor was pulled for both, not istio.
$ getistio fetch --flavor istio --version 1.9.0
fallback to the flavor 0 version which is the latest one in 1.9.0-istio
1.9.0-istio-v0 already fetched: download skipped
For more information about 1.9.0-istio-v0, please refer to the release notes:
- https://istio.io/latest/news/releases/1.9.x/announcing-1.9/
istioctl switched to 1.9.0-istio-v0 now
The CLI does the right thing when --version is supplied.
Currently we support fetching Istio by name:
getistio fetch --name 1.9.0-tetrate-v0
The above command fetches Istio version 1.9.0 in the tetrate flavor with flavor version 0.
We also support switching to a given Istio version by name:
getistio switch --name 1.9.0-tetrate-v0
However, the name of each distribution is not shown, and users need to figure out names by themselves, which is not user friendly. We therefore want to show the column data in a format like:
NAME | ISTIO VERSION | FLAVOR | FLAVOR VERSION | K8S VERSIONS |
---|---|---|---|---|
1.9.0-tetrate-v0 | *1.9.0 | tetrate | 0 | 1.17,1.18,1.19 |
1.9.0-tetratefips-v0 | 1.9.0 | tetratefips | 0 | 1.17,1.18,1.19 |
1.9.0-istio-v0 | 1.9.0 | istio | 0 | 1.17,1.18,1.19 |
The Twistlock vulnerability scanner is reporting the following vulnerabilities against getmesh CLI version 1.1.4, which is built with go1.17.9:
CVE-2022-30629|go|/usr/local/bin/getmesh|1.17.9|fixed in 1.18.3, 1.17.11|3.1|low
CVE-2022-30580|go|/usr/local/bin/getmesh|1.17.9|fixed in 1.18.3, 1.17.11|7.8|high
CVE-2022-1962|go|/usr/local/bin/getmesh|1.17.9|fixed in 1.18.4, 1.17.12|5.5|medium
CVE-2022-1705|go|/usr/local/bin/getmesh|1.17.9|fixed in 1.18.4, 1.17.12|6.5|medium
CVE-2022-32148|go|/usr/local/bin/getmesh|1.17.9|fixed in 1.18.4, 1.17.12|6.5|medium
CVE-2022-28131|go|/usr/local/bin/getmesh|1.17.9|fixed in 1.18.4, 1.17.12|7.5|high
CVE-2022-30630|go|/usr/local/bin/getmesh|1.17.9|fixed in 1.18.4, 1.17.12|7.5|high
CVE-2022-30631|go|/usr/local/bin/getmesh|1.17.9|fixed in 1.18.4, 1.17.12|7.5|high
CVE-2022-30632|go|/usr/local/bin/getmesh|1.17.9|fixed in 1.18.4, 1.17.12|7.5|high
CVE-2022-30633|go|/usr/local/bin/getmesh|1.17.9|fixed in 1.18.4, 1.17.12|7.5|high
CVE-2022-30635|go|/usr/local/bin/getmesh|1.17.9|fixed in 1.18.4, 1.17.12|7.5|high
CVE-2022-32189|go|/usr/local/bin/getmesh|1.17.9|fixed in 1.18.5, 1.17.13|7.5|high
CVE-2022-27664|go|/usr/local/bin/getmesh|1.17.9|fixed in 1.19.1, 1.18.6|7.5|high
CVE-2022-2879|go|/usr/local/bin/getmesh|1.17.9|fixed in 1.19.2, 1.18.7|7.5|high
CVE-2022-2880|go|/usr/local/bin/getmesh|1.17.9|fixed in 1.19.2, 1.18.7|7.5|high
CVE-2022-41715|go|/usr/local/bin/getmesh|1.17.9|fixed in 1.19.2, 1.18.7|7.5|high
CVE-2022-41716|go|/usr/local/bin/getmesh|1.17.9|fixed in 1.19.3, 1.18.8|5.4|medium
CVE-2022-41717|go|/usr/local/bin/getmesh|1.17.9|fixed in 1.19.4, 1.18.9|5.3|medium
CVE-2022-27664|golang.org/x/net|/usr/local/bin/getmesh|v0.0.0-20210614182718-04defd469f4e|fixed in 0.0.0-20220906165146-f3363e06e74c|7|high
CVE-2021-3495|github.com/kiali/kiali|/usr/local/bin/getmesh|v1.29.1-0.20210125202741-72d2ce2fceb5|fixed in 1.33.0|7|high
CVE-2020-26160|github.com/dgrijalva/jwt-go|/usr/local/bin/getmesh|v3.2.0|open|7|high
Rebuilding with golang 1.19.4+ and releasing a newer version should fix the majority of the above issues.
The healthy and invalid namespaces are created as part of the e2e test run. They need to be cleaned up once the run is done.
One piece of feedback we got from early users is that a simple example of CA integration using openssl locally on a machine would be useful for understanding what "magic" the tool is doing, and would help folks new to this understand what's going on. I agree; I think we should add support for an openssl provider that simply defers to the openssl command on the local system to generate certs and load them into Istio. Maybe link to the page on istio.io that walks through using openssl to create the CA certs, then contrast that with running istioctl gen-ca --config-file openssl.yaml.
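To make the contrast concrete, here is roughly the manual flow such an openssl provider would automate, following the istio.io plug-in-CA-certificates approach. Subject names, key sizes, and lifetimes are illustrative placeholders; the four output file names are the ones Istio's cacerts secret expects:

```shell
# 1. Self-signed root CA (placeholder subject and lifetime).
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -subj "/O=Example/CN=Root CA" \
  -keyout root-key.pem -out root-cert.pem

# 2. Intermediate (signing) CA: key + CSR, then sign with the root,
#    marking it as a CA via an extensions file.
openssl req -newkey rsa:4096 -nodes \
  -subj "/O=Example/CN=Intermediate CA" \
  -keyout ca-key.pem -out ca-csr.pem
cat > ca.ext <<'EOF'
basicConstraints = critical, CA:true
keyUsage = critical, keyCertSign, cRLSign
EOF
openssl x509 -req -days 365 -set_serial 1 \
  -CA root-cert.pem -CAkey root-key.pem \
  -extfile ca.ext -in ca-csr.pem -out ca-cert.pem
cat ca-cert.pem root-cert.pem > cert-chain.pem

# 3. Load into Istio; ca-cert.pem, ca-key.pem, root-cert.pem, and
#    cert-chain.pem are the file names the cacerts secret expects:
# kubectl create secret generic cacerts -n istio-system \
#   --from-file=ca-cert.pem --from-file=ca-key.pem \
#   --from-file=root-cert.pem --from-file=cert-chain.pem
```

The provider would essentially script steps 1–3 and clean up the key material, which is exactly the "magic" users asked to see spelled out.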
getmesh fails to recognize the -f or --filename flag for istioctl install.
$ getmesh istioctl install -f config.yaml -y --verify
error executing istioctl: exit status 64, Error: unknown shorthand flag: 'f' in -f
$ istioctl version
client version: 1.12.0
control plane version: 1.12.0
data plane version: 1.12.0 (1 proxies)
$ getmesh version
getmesh version: 1.1.3
active istioctl: 1.11.3-tetrate-v0
client version: 1.11.3-tetrate-v0
control plane version: 1.12.0
data plane version: 1.12.0 (1 proxies)
xref: https://istio.tetratelabs.io/istio-ca-certs-integrations/cert-manager-integration/
See tetratelabs/istio-distro.io#136. ATM the force field is a bit unclear; I think a name like overrideExistingCACertsSecret makes it more obvious/self-documenting.
I would like the ability to use GetIstio to deploy images from Iron Bank, offered by the Department of Defense, listed here:
https://registry1.dso.mil/harbor/projects/3/repositories/opensource%2Fistio%2Fproxyv2
https://registry1.dso.mil/harbor/projects/3/repositories/opensource%2Fistio%2Foperator
https://registry1.dso.mil/harbor/projects/3/repositories/opensource%2Fistio%2Fpilot
https://registry1.dso.mil/harbor/projects/3/repositories/opensource%2Fistio-1.8%2Fistioctl
It does require docker login credentials but it is free to sign up.
We've had some requests from folks for both GetIstio and GetEnvoy CLIs in containers so they can be more easily consumed by things like CI/CD systems. We should offer a GetIstio download as a container in addition to the current binary download for these use cases.
I believe if we stick the getistio binary into the container as the entrypoint, we'd be able to do stuff like: docker run somehub/getistio fetch --version 1.9 --flavor istio
From PR #130
You simply changed the 1.16.2 version to 1.16.3, which means that 1.16.2 was taken off your support list. Can you confirm whether this is intended or just a mistake?
Trying to figure out whether, after I use getistio to install the FIPS-compliant Istio, the license allows us to use it without an enterprise support contract.
Thanks!
Currently getmesh list only shows x64 flavours of TID. We have ARM builds, so we should have them show up in the output of the command.
The v2 AWS SDK for Go went GA in Jan 2021 (https://aws.amazon.com/blogs/developer/aws-sdk-for-go-version-2-general-availability); we should use it for better performance and maintainability (https://github.com/aws/aws-sdk-go-v2/tree/main/service/acmpca).
Currently, running getistio version when there's no active Kubernetes cluster shows:
getistio version: 1.0.1
active istioctl: 1.8.2-tetratefips-v0
unable to retrieve Pods: Get "http://localhost:8080/api/v1/namespaces/istio-system/pods?fieldSelector=status.phase%!D(MISSING)Running&l
abelSelector=app%!D(MISSING)istiod": dial tcp 127.0.0.1:8080: connect: connection refused
1.8.2-tetratefips-v0
We could show a user-friendly message (e.g. no active Kubernetes clusters found, or can't connect to Kubernetes cluster) when we can't connect to the k8s cluster.
I'm trying to download containers.istio.tetratelabs.com/pilot:1.17.0-tetratefips-v0-distroless and containers.istio.tetratelabs.com/proxyv2:1.17.0-tetratefips-v0-distroless, but I'm getting the following error message:
Error response from daemon: manifest for containers.istio.tetratelabs.com/pilot:1.17.0-tetratefips-v0-distroless not found: manifest unknown: manifest unknown: {'Name': 'pilot', 'Reference': '1.17.0-tetratefips-v0-distroless', 'Type': 'manifest'}
When I run getmesh list I don't see a tetratefips flavor of 1.17.0:
$ getmesh list
ISTIO VERSION FLAVOR FLAVOR VERSION K8S VERSIONS
1.17.0 tetrate 0 1.23,1.24,1.25,1.26
1.17.0 istio 0 1.23,1.24,1.25,1.26
*1.16.2 tetrate 0 1.22,1.23,1.24,1.25
1.16.2 tetratefips 0 1.22,1.23,1.24,1.25
1.16.2 istio 0 1.22,1.23,1.24,1.25
....