ocp-power-automation / ocp4-upi-powervs

OpenShift on Power Virtual Server

License: Apache License 2.0

Languages: HCL 98.76%, Shell 1.03%, Smarty 0.20%
Topics: powervs, ocp, openshift, openshift-v4, openshift-deployment, power, ppc64le, ibm, infrastructure-as-code, powervm

ocp4-upi-powervs's Introduction

Introduction

The ocp4-upi-powervs project provides Terraform-based automation code to help with the deployment of OpenShift Container Platform (OCP) 4.x on IBM® Power Systems™ Virtual Server on IBM Cloud.

This project leverages the helpernode Ansible playbook internally for OCP deployment on IBM Power Systems Virtual Servers (PowerVS).

!!! Note For bugs, enhancement requests, etc., please open a GitHub issue.

!!! Note Use the main branch to install any version of OCP starting from 4.6, including pre-release versions.

Getting Started With PowerVS

Automation Host Prerequisites

The automation needs to run from a system with internet access. This could be your laptop or a VM with public internet connectivity. This automation code has been tested on the following operating systems:

  • Mac OSX (Darwin)
  • Linux (x86_64/ppc64le)
  • Windows 10

Follow the guide to complete the prerequisites.

PowerVS Prerequisites

Follow the guide to complete the PowerVS prerequisites.

OCP Install

Follow the quickstart guide for OCP installation on PowerVS.

Contributing

Please see the contributing doc for more details. PRs are most welcome!

ocp4-upi-powervs's People

Contributors

aishwaryabk, bpradipt, christopher-horn, clnperez, cs-zhang, dpkshetty, elayarajadhanapal, gauravpbankar, ltccci, miyamotoh, mkumatag, poorna-gottimukkula1, ppc64le-cloud-bot, prajyot-parab, pravin-dsilva, prb112, sachin-itagi, sajauddin, yussufsh

ocp4-upi-powervs's Issues

Create T-shirt size cluster configurations for deployment

To ease the first-time experience for those not familiar with OCP or PowerVS, and to reduce the amount of necessary configuration, we could provide predefined flavors: small, medium, large, x-large. This option could be set from the command line during execution (a sketch of how it could be wired in Terraform follows the flavor definitions below):

terraform apply -var-file var.tfvars -var size=small

Suggestions:

small

bastion     = {memory = "16", processors = "1",   "count" = 1}
bootstrap   = {memory = "32", processors = "0.5", "count" = 1}
master      = {memory = "32", processors = "0.5", "count" = 3}
worker      = {memory = "32", processors = "0.5", "count" = 2}

medium

bastion     = {memory = "16", processors = "1",   "count" = 1}
bootstrap   = {memory = "32", processors = "0.5", "count" = 1}
master      = {memory = "32", processors = "0.5", "count" = 3}
worker      = {memory = "32", processors = "1",   "count" = 4}

large

bastion     = {memory = "16", processors = "1",   "count" = 1}
bootstrap   = {memory = "32", processors = "0.5", "count" = 1}
master      = {memory = "32", processors = "1",   "count" = 3}
worker      = {memory = "64", processors = "1",   "count" = 6}

x-large

bastion     = {memory = "16", processors = "1",   "count" = 1}
bootstrap   = {memory = "32", processors = "0.5", "count" = 1}
master      = {memory = "32", processors = "1",   "count" = 3}
worker      = {memory = "64", processors = "2",   "count" = 8}
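
A minimal sketch of how the size selection could be wired in Terraform, assuming a new size input variable and a flavors map (both hypothetical names not present in the repo today):

variable "size" {
  type    = string
  default = "small"
}

locals {
  flavors = {
    small = {
      bastion   = { memory = "16", processors = "1",   "count" = 1 }
      bootstrap = { memory = "32", processors = "0.5", "count" = 1 }
      master    = { memory = "32", processors = "0.5", "count" = 3 }
      worker    = { memory = "32", processors = "0.5", "count" = 2 }
    }
    # medium, large and x-large would be defined the same way
  }

  # Selected flavor, used in place of the individual bastion/bootstrap/master/worker variables.
  cluster_flavor = local.flavors[var.size]
}

It would then be selected on the command line with terraform apply -var-file var.tfvars -var size=small.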

dhcpd is always running on env2 interface

The DHCP configuration using the helpernode always uses the default interface name env2. This is because the ip r command returns multiple lines, causing the sed command to fail with the error below:

module.install.null_resource.config (remote-exec): sed: -e expression #1, char 45: unknown command: `.'
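
A minimal sketch of a possible fix, assuming the interface name is substituted into the helpernode vars from a remote-exec provisioner; the file path and the exact substitution are hypothetical:

provisioner "remote-exec" {
  inline = [
    # `ip r` may print more than one default route; keep only the first interface name
    "GATEWAY_IFACE=$(ip r | awk '/^default/ {print $5; exit}')",
    # substitute the detected interface for the hard-coded env2 (file path is hypothetical)
    "sed -i \"s/env2/$GATEWAY_IFACE/g\" ~/helpernode_vars.yaml"
  ]
}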

Creating Bastion manually with Public IP and using the same for Install

For cases where we see a higher failure rate with public network reachability, can we ask the end user to create the bastion with a public IP as a prerequisite, and validate that the public IP is reachable, before standing up OCP using the TF automation?

Understood, this is not the best experience; however, it will ensure a working cluster is available.

Fix DirectLink interface bug in template

module.install.null_resource.powervs_config[0] (remote-exec): sed: -e expression #1, char 33: unknown command: `.'
module.install.null_resource.powervs_config[0] (remote-exec): Running powervs specific nodes configuration playbook...

Set HTTP fetch timeout for ignition hosted on an HTTP server

The Ignition fetch stage continues to GET the HTTP file with the default timeout set to 0 (indefinite).

We need to set the timeout to a proper value in the automation.

Note: The Ignition provider does not allow changing the timeout. We need to use string manipulation to add a timeout to the rendered ignition string.
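
A minimal sketch of the string manipulation, assuming the rendered ignition JSON is available as a string (for example from the ignition provider's rendered output) and that the target spec accepts ignition.timeouts.httpTotal; all names here are illustrative:

locals {
  rendered_ignition = data.ignition_config.bootstrap.rendered  # assumed source of the rendered string

  ignition_decoded = jsondecode(local.rendered_ignition)

  # Inject a finite fetch timeout (seconds, per the Ignition spec) and re-encode.
  ignition_with_timeout = jsonencode(merge(local.ignition_decoded, {
    ignition = merge(local.ignition_decoded.ignition, {
      timeouts = { httpTotal = 120 }
    })
  }))
}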

Allow selecting which IP to use to SSH into the cluster

By default we use the public IP to SSH into the cluster. This covers the case where someone is running this automation from a remote source, outside PowerVS. Eventually someone will want a deployment that stays entirely within PowerVS, using private IPs and no route outside that environment.

It would be a good feature if we could select via TF which IP to use: bastion_public_ip or bastion_private_ip.
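
A minimal sketch of such a toggle, assuming a new boolean variable and existing locals holding the two IPs (names are hypothetical):

variable "use_private_ip" {
  type        = bool
  default     = false
  description = "SSH into the bastion over bastion_private_ip instead of bastion_public_ip"
}

locals {
  # Used as `host` in the provisioner connection blocks.
  bastion_ssh_ip = var.use_private_ip ? local.bastion_private_ip : local.bastion_public_ip
}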

Set specific git clone commands for each branch

To avoid misunderstandings about using specific versions of this code, it would be good to add the following to the docs:

Clone the correct branch based on the OpenShift version you would like to install:

For OCP 4.6:

git clone --single-branch --branch release-4.6 https://github.com/ocp-power-automation/ocp4-upi-powervs.git

For OCP 4.5:

git clone --single-branch --branch release-4.5 https://github.com/ocp-power-automation/ocp4-upi-powervs.git

Intermittent issue while getting public IP for ssh connections

The public IP data source does not work at the initial step. It could be an issue populating the public VLAN on the API side.

    {
      "module": "module.prepare",
      "mode": "data",
      "type": "ibm_pi_instance_ip",
      "name": "bastion_public_ip",
      "each": "list",
      "provider": "provider.ibm",
      "instances": [
        {
          "index_key": 0,
          "schema_version": 0,
          "attributes": {
            "external_ip": "158.176.146.37",
            "id": "c67f6948-8f24-486d-8ade-62560886403e",
            "ip": "192.168.139.37",
            "ipoctet": "37",
            "macaddress": "fa:5c:f9:80:7f:21",
            "network_id": "c67f6948-8f24-486d-8ade-62560886403e",
            "pi_cloud_instance_id": "729b4527-6709-4ecb-8803-e0f1709ea945",
            "pi_instance_name": "yus-ha-bastion-0",
            "pi_network_name": "yus-ha-pub-net",
            "type": "fixed"
          }
        },
      ]
    },

This happens only the first time; subsequent applies work without any issues and the data source is populated. Note that the bastion node resource itself does have the external IP.

Bastion node does not connect over public IP and does not have internet access

On PowerVS we use both a private and a public network for the bastion. The external IP is used to connect to the bastion from the TF terminal.

Actual Behaviour: The bastion node does not accept connections via the external IP in a specific zone.

Expected Behaviour: The external IP should work, as it is the only way to connect from the TF terminal.

Additional Info: The bastion node always connects via the external IP when created using the UI console. Also, we don't need to provide the public network information while creating the node.

Image upgrade fails when providing the upgrade_image

console:

module.install.null_resource.upgrade[0]: Creating...
module.install.null_resource.upgrade[0]: Provisioning with 'file'...
module.install.null_resource.upgrade[0]: Provisioning with 'remote-exec'...
module.install.null_resource.upgrade[0] (remote-exec): Connecting to remote host via SSH...
module.install.null_resource.upgrade[0] (remote-exec):   Host: 169.48.X.X
module.install.null_resource.upgrade[0] (remote-exec):   User: root
module.install.null_resource.upgrade[0] (remote-exec):   Password: false
module.install.null_resource.upgrade[0] (remote-exec):   Private key: true
module.install.null_resource.upgrade[0] (remote-exec):   Certificate: false
module.install.null_resource.upgrade[0] (remote-exec):   SSH Agent: false
module.install.null_resource.upgrade[0] (remote-exec):   Checking Host Key: false
module.install.null_resource.upgrade[0] (remote-exec): Connected!
module.install.null_resource.upgrade[0] (remote-exec): Running ocp upgrade playbook...
module.install.null_resource.upgrade[0] (remote-exec): Using /root/ocp4-playbooks/ansible.cfg as config file
module.install.null_resource.upgrade[0] (remote-exec): [WARNING]: Found both group and host with same name: bootstrap

module.install.null_resource.upgrade[0] (remote-exec): PLAY [Upgrade OCP cluster] *****************************************************

module.install.null_resource.upgrade[0] (remote-exec): TASK [Gathering Facts] *********************************************************
module.install.null_resource.upgrade[0] (remote-exec): ok: [test-ocp-6f2c-bastion-0]

module.install.null_resource.upgrade[0] (remote-exec): TASK [include_role : ocp-upgrade] **********************************************
module.install.null_resource.upgrade[0] (remote-exec): fatal: [test-ocp-6f2c-bastion-0]: FAILED! => {"msg": "The conditional check 'upgrade_version != \"\"' failed. The error was: error while evaluating conditional (upgrade_version != \"\"): 'upgrade_version' is undefined\n\nThe error appears to be in '/root/ocp4-playbooks/playbooks/upgrade.yaml': line 5, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n  tasks:\n    - include_role:\n      ^ here\n"}

module.install.null_resource.upgrade[0] (remote-exec): PLAY RECAP *********************************************************************
module.install.null_resource.upgrade[0] (remote-exec): test-ocp-6f2c-bastion-0    : ok=1    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0



Error: error executing "/tmp/terraform_1158875228.sh": Process exited with status 2


Verify whether IPs that belong to the public network created by the automation are being used

Once the cluster is up and running, it is natural to expand the resources around it and eventually use the same public network that was created to deploy the cluster for other VMs. For instance, I manually created a new NFS server in the public network created by the TF automation. However, deleting the cluster (specifically the network) then fails because IPs from that network are still allocated.

To ensure this error does not happen, we should verify during deletion of the network whether any port is allocated to a VM that was not created by the automation, and either leave the network in place or ask about forcing complete deletion of the resources.

Error:

Error: {"description":"an error has occurred; please try again: unable to delete subnet for network 4f9be979-bafd-4478-bbfd-3023c9a43171 for cloud instance dad1780f13c54b8a9f7ea55778e965e0: Expected HTTP response code [] when accessing [DELETE https://192.168.24.136:9696/v2.0/subnets/87e3804c-301c-4b2e-8d69-0655a4d26d14], but got 409 instead\n{\"NeutronError\": {\"message\": \"Unable to complete operation on subnet 87e3804c-301c-4b2e-8d69-0655a4d26d14: One or more ports have an IP allocation from this subnet.\", \"type\": \"SubnetInUse\", \"detail\": \"\"}}","error":"internal server error"}

Do not wait long to time out during bastion destroy

Currently the destroy provisioner for the bastion pi_instance waits up to 15m. If the bastion is not accessible over ssh, TF waits the full 15m to time out.
We should exit sooner, since ssh not working likely means the machine was never accessible. We can skip the destroy commands in this case.
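
A minimal sketch of a faster-failing destroy-time provisioner; the cleanup command, key path, and the attribute path used for the host are assumptions:

provisioner "remote-exec" {
  when       = destroy
  on_failure = continue   # do not block the destroy if ssh never comes up
  inline     = ["sudo subscription-manager unregister || true"]   # illustrative cleanup command

  connection {
    type        = "ssh"
    user        = "root"
    host        = self.pi_network[0].external_ip   # attribute path is an assumption
    private_key = file("data/id_rsa")              # illustrative key path
    timeout     = "2m"                             # instead of waiting the full 15m
  }
}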

RH subscription failure should fail the installation

Currently, if the subscription-manager subscribe step fails, the automation continues. Since no packages get installed, the bastion setup will not complete, which prevents the cluster installation. The subscription-manager registration is done via a series of shell commands inside an inline block; from what I've read, this will only fail if the very last command fails. I've experimented with dropping a set -e at the top of the inline section, but that still did not seem to cause the registration failure to halt the automation.
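
A minimal sketch of making the registration step fatal on its own, assuming the inline block looks roughly like the one in the prepare module; the variable names and exact commands are assumptions:

provisioner "remote-exec" {
  inline = [
    # remote-exec only reports the exit status of the generated script as a whole,
    # so fail the step explicitly if registration does not succeed
    "sudo subscription-manager register --username=${var.rhel_subscription_username} --password=${var.rhel_subscription_password} --force || { echo 'subscription-manager register failed'; exit 1; }",
    "sudo subscription-manager refresh || exit 1",
    "sudo subscription-manager attach --auto || exit 1"
  ]
}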

Intermittent error getting public network details using ibm_pi_network data source

Error seen during TF apply while getting the data source for the public network.

module.prepare.ibm_pi_volume.volume[0]: Creation complete after 16s [id=fc0db58c-a3d3-433c-8175-e9f5f814d83c/1d065ef7-5583-405e-b865-8a7511685d2e]
Error: {"description":"an error has occurred; please try again: unable to get network, subnet, and possibly public network cidr for network 0a91e6a7-63c0-490b-8855-8d590eb7528b on cloud instance a007eb5664ab4188a236468c86e51f16","error":"internal server error"}
  on modules/1_prepare/prepare.tf line 49, in data "ibm_pi_network" "public_network":
  49: data "ibm_pi_network" "public_network" {

Remove the health check for the bastion during create

When creating the bastion node we check for pi_health_status and allow the creation to complete when the value is WARNING.

We have seen issues where the bastion fails to connect via ssh, with the root cause being that it dropped into the grub/maintenance prompt. Removing the health status check would let TF fail during creation rather than while attempting to ssh.

We need to keep the health check at the default just for the bastion node.

Support running terraform again with bootstrap count as 0

Currently, terraform apply will fail if we delete the bootstrap node and taint the config & install resources.
To overcome this we have a similar change in helpernode. The TF change needed is to not set bootstrap details in the playbook vars (see the sketch below).
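
A minimal sketch of the TF side, assuming the playbook vars are assembled from string fragments; var.bootstrap_ip and the fragment format are hypothetical:

locals {
  bootstrap_count = lookup(var.bootstrap, "count", 1)

  # Leave the bootstrap entry out of the rendered playbook vars when no bootstrap node exists.
  bootstrap_entry = local.bootstrap_count == 0 ? "" : "bootstrap_ip: ${var.bootstrap_ip}"
}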

Install playbook execution failing with default chrony config

The install module is failing with the default chrony config, i.e. chrony_config=true and chrony_config_servers=[].

module.install.null_resource.install (remote-exec): TASK [ocp-config : Configure chrony to synchronize with ntp servers] ***********
module.install.null_resource.install (remote-exec): fatal: [****-9433-bastion-0]: FAILED! => {"msg": "Invalid data passed to 'loop', it requires a list, got this instead: . Hint: If you passed a list/dict of just one element, try adding wantlist=True to your lookup invocation or use q/query instead of lookup."}

This was introduced with the validation PR #133, where the type of chrony_config_servers is:

type    = list(object({
       server  = string,
       options = string
   }))
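
A minimal sketch of one way to keep the empty default working, assuming the server list is rendered into the playbook vars as a string; this is an illustrative approach, not the fix that was merged:

locals {
  # jsonencode keeps an empty list as [] (valid YAML) instead of collapsing it
  # to an empty string, which is what trips up the ansible `loop`.
  chrony_config_servers_rendered = jsonencode(var.chrony_config_servers)
}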

Allow bastion node to use pi_health_status as an input variable

This issue is to make the bastion health status check configurable via an optional variable, so that one can set the value to OK/WARNING when they don't want to wait until the bastion's health status is ACTIVE.

The default will be OK, as before, with no change in the flow. We can decide later whether to change the default to WARNING for the bastion.

Note that the other cluster nodes already use WARNING as their health_status, since they need the ignition file to boot and the RMC pods are created after deployment. We cannot use OK as the health_status for them.
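
A minimal sketch of the proposed variable (the variable name and the validation block are illustrative), which the bastion resource would then pass as pi_health_status instead of a hard-coded value:

variable "bastion_health_status" {
  type        = string
  default     = "OK"
  description = "Health status to wait for on the bastion node (OK or WARNING)"

  validation {
    condition     = contains(["OK", "WARNING"], var.bastion_health_status)
    error_message = "The bastion_health_status value must be either OK or WARNING."
  }
}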

Make the NFS provisioner configuration optional

As a user deploying my own cluster, I would like to avoid having the NFS provisioner configured out of the box without explicitly enabling it via var.tfvars. In a production deployment I will probably use an existing or a new NFS server and include some customization to meet my deployment's requirements.

I had to execute the following before installing a customized one:

oc delete all -l app=nfs-client-provisioner -n nfs-provisioner
oc delete project nfs-provisioner
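
A minimal sketch of an opt-out toggle; the variable and resource names are hypothetical:

variable "setup_nfs_provisioner" {
  type        = bool
  default     = true
  description = "Set to false to skip deploying the built-in NFS storage provisioner"
}

resource "null_resource" "nfs_provisioner" {
  count = var.setup_nfs_provisioner ? 1 : 0
  # the existing provisioner steps that apply the NFS provisioner manifests would go here unchanged
}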

error with terraform destroy-time provisioner variable access

I bumped up my terraform version from 0.12.29 to 0.13.x and hit this:

> terraform init                                               
Initializing modules...                                                                             
There are some problems with the configuration, described below.                                    
                                                                                                    
The Terraform configuration must be valid before initialization so that                             
Terraform can determine which modules and providers need to be installed.
                                                                                                    
Error: Invalid reference from destroy provisioner                                                   
                                                                                                    
  on modules/1_prepare/prepare.tf line 93, in resource "ibm_pi_instance" "bastion":                 
  93:             user        = var.rhel_username                                                   
                                                                                                    
Destroy-time provisioners and their connection configurations may only                              
reference attributes of the related resource, via 'self', 'count.index', or                         
'each.key'.                                                                                         
                                                                                                    
References to other resources during the destroy phase can cause dependency
cycles and interact poorly with create_before_destroy.

Confirmed that moving back to 0.12 made this go away.

I think this is the same as hashicorp/terraform#23679
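
A minimal sketch of the usual Terraform 0.13 workaround: stash everything the destroy-time connection needs on the resource itself so only self is referenced (shown here with a separate null_resource and triggers; names and the cleanup command are illustrative):

resource "null_resource" "bastion_destroy_cleanup" {
  triggers = {
    user        = var.rhel_username
    private_key = file(var.private_key_file)
    host        = local.bastion_public_ip   # assumed local holding the bastion's external IP
  }

  provisioner "remote-exec" {
    when   = destroy
    inline = ["sudo subscription-manager unregister || true"]   # illustrative cleanup

    connection {
      type        = "ssh"
      user        = self.triggers.user
      private_key = self.triggers.private_key
      host        = self.triggers.host
    }
  }
}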

TF is failing on deleting the cloud-init package


module.prepare.null_resource.rhel83_fix[0] (remote-exec): Transaction Summary
module.prepare.null_resource.rhel83_fix[0] (remote-exec): ========================================
module.prepare.null_resource.rhel83_fix[0] (remote-exec): Remove  1 Package

module.prepare.null_resource.rhel83_fix[0] (remote-exec): Freed space: 3.0 M
module.prepare.null_resource.rhel83_fix[0] (remote-exec): Running transaction check
module.prepare.null_resource.rhel83_fix[0] (remote-exec): Transaction check succeeded.
module.prepare.null_resource.rhel83_fix[0] (remote-exec): Running transaction test
module.prepare.null_resource.rhel83_fix[0] (remote-exec): Transaction test succeeded.
module.prepare.null_resource.rhel83_fix[0] (remote-exec): Running transaction
module.prepare.null_resource.rhel83_fix[0] (remote-exec):   Preparing        :                1/1
module.prepare.null_resource.rhel83_fix[0] (remote-exec):   Running scriptlet: cloud-init-1   1/1

module.prepare.null_resource.rhel83_fix[0] (remote-exec):   Erasing          : cloud [      ] 1/1
module.prepare.null_resource.rhel83_fix[0] (remote-exec):   Erasing          : cloud [=     ] 1/1
module.prepare.null_resource.rhel83_fix[0] (remote-exec):   Erasing          : cloud [==    ] 1/1
module.prepare.null_resource.rhel83_fix[0] (remote-exec):   Erasing          : cloud [===   ] 1/1
module.prepare.null_resource.rhel83_fix[0] (remote-exec):   Erasing          : cloud [====  ] 1/1
module.prepare.null_resource.rhel83_fix[0] (remote-exec):   Erasing          : cloud [===== ] 1/1
module.prepare.null_resource.rhel83_fix[0] (remote-exec):   Erasing          : cloud-init-1   1/1
module.prepare.null_resource.rhel83_fix[0] (remote-exec): warning: /etc/cloud/cloud.cfg saved as /etc/cloud/cloud.cfg.rpmsave

module.prepare.null_resource.rhel83_fix[0] (remote-exec):   Running scriptlet: cloud-init-1   1/1
module.prepare.null_resource.rhel83_fix[0] (remote-exec):   Verifying        : cloud-init-1   1/1
module.prepare.null_resource.rhel83_fix[0] (remote-exec): Installed products updated.

module.prepare.null_resource.rhel83_fix[0] (remote-exec): Removed:
module.prepare.null_resource.rhel83_fix[0] (remote-exec):   cloud-init-19.4-11.el8_3.1.noarch

module.prepare.null_resource.rhel83_fix[0] (remote-exec): Complete!
module.prepare.null_resource.rhel83_fix[0]: Creation complete after 8s [id=9048021131775509033]

Error: error executing "/tmp/terraform_1915964773.sh": Process exited with status 2

terraform destroy error

module.nodes.null_resource.remove_worker[0] (remote-exec): pod/redhat-marketplace-4mf5r evicted
module.nodes.null_resource.remove_worker[0] (remote-exec): pod/prometheus-adapter-79bd7d5c9-ffzjn evicted
module.nodes.null_resource.remove_worker[0]: Still destroying... [id=7928149752953593866, 30s elapsed]
module.nodes.null_resource.remove_worker[1]: Still destroying... [id=4203784824677227330, 30s elapsed]
module.nodes.ibm_pi_instance.master[2]: Still destroying... [id=fac4755e-8aff-45f5-8d5c-1d3b58b7a229/c141c210-316f-4a76-9a5a-7f9a9ddf3e16, 30s elapsed]
module.nodes.null_resource.remove_worker[1] (remote-exec): There are pending pods in node "worker-1" when an error occurred: error when evicting pod "router-default-6d99c496cc-rl2n4": rpc error: code = Unavailable desc = transport is closing
module.nodes.null_resource.remove_worker[1] (remote-exec): pod/router-default-6d99c496cc-rl2n4
module.nodes.null_resource.remove_worker[1] (remote-exec): Following errors occurred while getting the list of pods to delete:
module.nodes.null_resource.remove_worker[1] (remote-exec): cannot delete rpc error: code = Unavailable desc = transport is closing: openshift-machine-config-operator/machine-config-daemon-lb9qmerror: unable to drain node "worker-1", aborting command...

module.nodes.null_resource.remove_worker[1] (remote-exec): There are pending nodes to be drained:

`terraform destroy` leaves behind resources after a failed `terraform apply`

After a cluster creation attempt that fails, I've seen the terraform destroy command complete but still leave behind instances and volumes. The scenario seems to be that the cluster creation is attempted, instances are created, but never boot. The instances created include the bastion, bootstrap, workers, and masters. All of these are left behind in the case that none of them come up.

Allow using PowerVS catalog images for bastion

We now have a CentOS 8.3 image as part of the stock image catalog from PowerVS.

The Terraform automation should be able to use the stock images to deploy the bastion node. We need to look up the given rhel_image variable value in the catalog images and, if not found, fall back to the boot images (see the sketch below).

This is also helpful for the infra/pvsadm automation which uses the 1_prepare module.
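
A minimal sketch of the lookup using the IBM provider's ibm_pi_catalog_images and ibm_pi_images data sources; treat the attribute paths and variable names (service_instance_id, rhel_image_name) as assumptions:

data "ibm_pi_catalog_images" "catalog" {
  pi_cloud_instance_id = var.service_instance_id
}

data "ibm_pi_images" "cloud_instance" {
  pi_cloud_instance_id = var.service_instance_id
}

locals {
  catalog_match = [for img in data.ibm_pi_catalog_images.catalog.images : img.image_id if img.name == var.rhel_image_name]
  boot_match    = [for img in data.ibm_pi_images.cloud_instance.image_info : img.id if img.name == var.rhel_image_name]

  # Prefer a stock (catalog) image; fall back to the account's boot images.
  bastion_image_id = length(local.catalog_match) > 0 ? local.catalog_match[0] : local.boot_match[0]
}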

Terraform crash when resource does not have a public network

The terraform module will crash with an index out of bounds error, as mentioned in IBM-Cloud/terraform-provider-ibm#1701.

The TF provider assumes there will always be exactly one public network within a resource instance.

As observed in the UI, the public network is created automatically when a VM with a public network is created. Also, once the allocation pool is completely utilized, a new public network with a different subnet is created. Still, the TF provider will return only the first indexed network.

Wildcard DNS search in resolv.conf prevents resolving service IPs

I provisioned a cluster using cluster_name = "nip.io", as suggested by the var.tfvars comment.

When attempting to deploy an operator using an internal registry we ran into the following DNS issue (credit to @dberg1 for writing this up):

The multicluster-operators-standalone-subscription pod is trying to resolve IP address for hostname multiclusterhub-repo.open-cluster-management.svc.cluster.local.
Here is the resolv.conf of the pod:

# oc rsh -n open-cluster-management multicluster-operators-standalone-subscription-7d7986f9f6-rnpvj
sh-4.4$ cat /etc/resolv.conf
search open-cluster-management.svc.cluster.local svc.cluster.local cluster.local ocp-dal12-hub.169.59.253.69.nip.io 169.59.253.69.nip.io
nameserver 172.30.0.10
options ndots:5

Since the hostname the pod is trying to resolve has fewer than 5 dots, it will not do a direct lookup first.
It will try all the suffixes listed in the search list. Since 169.59.253.69.nip.io is in the search list, it will always get the IP address of the bastion node (because nip.io is a wildcard DNS service that will reply for any hostname under 169.59.253.69.nip.io). 169.59.253.69.nip.io should not be put in the search list.

NFS storage mount restrictive permissions inhibit nfs-storage-provisioner

I provisioned from the release-4.7 branch (at c98d7d6), and once the cluster was up the deployment/image-registry operator reported as degraded:

Cluster operator image-registry has been degraded for 10 minutes. Operator is degraded because Unavailable and cluster upgrades will be unstable.
status:
  conditions:
  - lastTransitionTime: "2021-03-03T13:52:28Z"
    lastUpdateTime: "2021-03-03T13:52:28Z"
    message: Deployment does not have minimum availability.
    reason: MinimumReplicasUnavailable
    status: "False"
    type: Available

The image-registry wasn't able to get its PersistentVolumeClaim bound:

ocp-dal12-hub.ibm-power-acm.com > oc describe PersistentVolumeClaim/registry-pvc
Name:          registry-pvc
Namespace:     openshift-image-registry
StorageClass:  nfs-storage-provisioner
Status:        Pending
Volume:        
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner: nfs-storage
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Filesystem
Used By:       image-registry-5dd46bcfb6-crccn
Events:
  Type     Reason                Age                    From                                                                                      Message
  ----     ------                ----                   ----                                                                                      -------
  Normal   ExternalProvisioning  3m34s (x331 over 83m)  persistentvolume-controller                                                               waiting for a volume to be created, either by external provisioner "nfs-storage" or manually created by system administrator
  Normal   Provisioning          119s (x11 over 83m)    nfs-storage_nfs-client-provisioner-544bb74bd4-9tgxj_40e80696-59a2-417b-88c5-f4a4db4b5512  External provisioner is provisioning volume for claim "openshift-image-registry/registry-pvc"
  Warning  ProvisioningFailed    119s (x11 over 83m)    nfs-storage_nfs-client-provisioner-544bb74bd4-9tgxj_40e80696-59a2-417b-88c5-f4a4db4b5512  failed to provision volume with StorageClass "nfs-storage-provisioner": unable to create directory to provision new pv: mkdir /persistentvolumes/openshift-image-registry-registry-pvc-pvc-de4f6b47-fb49-4bb4-b996-669b8869d988: permission denied

Looks like a permissions issue, so I ssh'd into the bastion:

[root@ocp-dal12-hub-bastion-0 ~]# ls -ld /export/
drwxr-xr-x. 3 root root 4096 Mar  3 12:51 /export
[root@ocp-dal12-hub-bastion-0 ~]# chmod 777 /export/
[root@ocp-dal12-hub-bastion-0 ~]# ls -ld /export/
drwxrwxrwx. 3 root root 4096 Mar  3 12:51 /export/

And with that, the PersistentVolume was created and clusteroperator/image-registry became available.
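
A minimal sketch of baking that fix into the automation, assuming the export directory is prepared from a remote-exec step on the bastion (the path comes from the issue, the surrounding resource is not shown):

provisioner "remote-exec" {
  inline = [
    "sudo mkdir -p /export",
    # the nfs-client provisioner needs to create per-PVC subdirectories under the export
    "sudo chmod 0777 /export"
  ]
}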

With the latest terraform, getting deprecation warnings for interpolation syntax

With TF v0.13.4 we started getting interpolation syntax warnings. They show up during the apply and validate commands.

# terraform validate

Warning: Interpolation-only expressions are deprecated

  on variables.tf line 230, in locals:
 230:     private_key_file    = "${var.private_key_file == "" ? "${path.cwd}/data/id_rsa" : "${var.private_key_file}" }"

Terraform 0.11 and earlier required all non-constant expressions to be
provided via interpolation syntax, but this pattern is now deprecated. To
silence this warning, remove the "${ sequence from the start and the }"
sequence from the end of this expression, leaving just the inner expression.

Template interpolation syntax is still used to construct strings from
expressions when the template includes multiple interpolation sequences or a
mixture of literal strings and interpolations. This deprecation applies only
to templates that consist entirely of a single interpolation sequence.

(and 7 more similar warnings elsewhere)

Success! The configuration is valid, but there were some validation warnings as shown above.
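
Following the guidance in the warning, the same local written without the wrapper interpolation would look like this (a sketch of the corrected expression only):

locals {
  private_key_file = var.private_key_file == "" ? "${path.cwd}/data/id_rsa" : var.private_key_file
}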
