
This project contains Terraform configuration files to provision infrastructure components required to deploy SAS Viya platform products on Microsoft Azure Cloud.

License: Apache License 2.0


viya4-iac-azure's Introduction

SAS Viya 4 Infrastructure as Code (IaC) for Microsoft Azure

Overview

This project helps you to automate the cluster-provisioning phase of SAS Viya platform deployment. It contains Terraform scripts to provision the Microsoft Azure Cloud infrastructure resources that are required to deploy SAS Viya platform product offerings. Here is a list of resources that this project can create:

  • Azure resource group(s): primary resource group and AKS resource group
  • Virtual network, network security groups, and network security rules
  • Managed Azure Kubernetes Service (AKS) cluster
  • System and User AKS Node pools with required Labels and Taints
  • Infrastructure to deploy SAS Viya platform CAS in SMP or MPP mode
  • Storage options for SAS Viya platform - NFS Server (Standard) or Azure NetApp Files (HA)
  • Azure DB for PostgreSQL, optional
  • Azure Container Registry, optional

Architecture Diagram

This project addresses the first of three steps in Steps for Getting Started in SAS® Viya® Platform Operations:

  1. Provision resources.
  2. Prepare for the deployment.
  3. Customize and deploy the SAS Viya platform.

Note: The scripts in this project are provided as examples. They do not provide comprehensive configuration. The second and third steps include additional configuration tasks. Some of those tasks (for example, enabling logging and specifying available IP addresses) are essential for a more secure deployment.

Once the cloud resources are provisioned, use the viya4-deployment project to deploy the SAS Viya platform in your cloud environment. To learn about all phases and options of the SAS Viya platform deployment process, see Getting Started with SAS Viya and Azure Kubernetes Service in SAS Viya Platform Operations.

This project follows the SemVer versioning scheme. Given a version number MAJOR.MINOR.PATCH, we increment the:

  • MAJOR version when we make changes that are incompatible with previous releases
  • MINOR version when we add functionality in a backwards-compatible manner
  • PATCH version when we make backwards-compatible bug fixes

Note: You must take down your existing infrastructure and rebuild it when you are upgrading to a new major version because of potential backward incompatibility. For details about the changes that are added in each release, see the Release Notes.

Prerequisites

Use of these tools requires operational knowledge of the following technologies:

Technical Prerequisites

This project supports two options for running Terraform scripts:

  • Terraform installed on your local machine

  • Using a Docker container to run Terraform

    For more information, see Docker Usage. Using Docker to run the Terraform scripts is recommended.

Access to an Azure Subscription and an Identity with the Contributor role are required.

Terraform Requirements:

Docker Requirements:

Getting Started

When you have prepared your environment with the prerequisites, you are ready to obtain and customize the Terraform scripts that will set up your Kubernetes cluster.

Clone this Project

Run the following commands from a terminal session:

# clone this repo
git clone https://github.com/sassoftware/viya4-iac-azure

# move to the project directory
cd viya4-iac-azure

Authenticating Terraform to Access Microsoft Azure

The Terraform process manages Microsoft Azure resources on your behalf. In order to do so, it needs your Azure account information and a user identity with the required permissions. See Terraform Azure Authentication for details.
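For example, when running Terraform locally, the Azure credentials can be supplied as TF_VAR_* environment variables, the same pattern used in the Docker examples later on this page (a minimal sketch; see Terraform Azure Authentication for the authoritative list and values):

export TF_VAR_subscription_id="<azure-subscription-id>"
export TF_VAR_tenant_id="<azure-tenant-id>"
export TF_VAR_client_id="<service-principal-app-id>"
export TF_VAR_client_secret="<service-principal-secret>"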

Customizing Input Values

Terraform scripts require variable definitions as input. Review and modify default values to meet your requirements. Create a file named terraform.tfvars to customize any input variable value documented in the CONFIG-VARS.md file.

To get started, you can copy one of the example variable definition files provided in the ./examples folder. For more information about the variables that are declared in each file, refer to the CONFIG-VARS.md file.

You have the option to specify variable definitions that are not included in terraform.tfvars or to use a variable definition file other than terraform.tfvars. See Advanced Terraform Usage for more information.
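As a starting point, a minimal terraform.tfvars might look like the following sketch. The values are purely illustrative, and the exact set of required variables depends on which example file you copy; CONFIG-VARS.md remains the authoritative reference.

prefix                      = "viya4"
location                    = "eastus"
default_public_access_cidrs = ["123.45.6.89/32"]
ssh_public_key              = "~/.ssh/id_rsa.pub"
tags                        = { "project" = "viya4" }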

Creating and Managing the Cloud Resources

Create and manage the required cloud resources. Perform one of the following steps, based on whether you are using Docker:
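When running Terraform directly rather than through Docker, the workflow is roughly the following (a sketch; the -var-file and -state flags mirror the Docker invocations shown in the issues further down this page):

# initialize the working directory and download providers
terraform init

# preview the infrastructure changes
terraform plan -var-file terraform.tfvars -state terraform.tfstate

# create or update the cloud resources
terraform apply -var-file terraform.tfvars -state terraform.tfstate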

Troubleshooting

See the Troubleshooting page for information about possible issues that you might encounter.

Security

Additional configuration to harden your cluster environment is supported and encouraged. For example, you can limit cluster access to specified IP addresses. You can also deploy a load balancer or application gateway to mediate data flows between SAS Viya platform components and the ingress controller.

Contributing

We welcome your contributions! See CONTRIBUTING.md for information about how to submit contributions to this project.

License

This project is licensed under the Apache 2.0 License.

Additional Resources

Azure Resources

Terraform Resources

viya4-iac-azure's People

Contributors

ajeffowens, andybouts, b2ku, bernardmaltais, dhoucgitter, dixoncrews, dzanter, enderm, gonzaloulla, jarpat, jason-matthew, manoatsas, morenovj, normjohniv, ranieuwe, riragh, stturc, supear, thpang, timcrider


viya4-iac-azure's Issues

changes needed to support azure network plugin

I needed the Azure network plugin in order to install and use Cilium, which provides detailed performance information for communication. In modules/azure_aks/variables.tf, I changed the default value of the "aks_network_plugin" variable to "azure".

I then got an error that pod_cidr was incompatible with the azure plugin.
In the same directory, I modified main.tf and commented out the line that sets a value for pod_cidr.

I got another error that said too few addresses were available in the subnet.
In the top-level directory, I modified the value of aks_subnet_cidr_block in main.tf to "192.168.16.0/20".

This last change made many more IP addresses available than are necessary, but my Viya 4 deployment was successful after these changes.

I'm not recommending these settings in general, just documenting what worked for me.
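For reference, the variable change described above amounts to something like this in modules/azure_aks/variables.tf (a sketch; other attributes of the variable block are omitted):

variable "aks_network_plugin" {
  # default changed so the cluster uses the Azure CNI network plugin
  default = "azure"
}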

request for a CHANGELOG file

It would be nice, once the release archive has been extracted or the project has been cloned somewhere, to be able to determine the current release and its changelog. SAS Viya Ark (the Viya 3.5 version), for example, has a CHANGELOG.md file that contains this information.
Thanks

Dependency for SAS-ACR & Postgres

Hi,

Here are the two errors I was able to trigger in my deployment:
Case 1
Error Creating/Updating Network Security Rule "SAS-ACR" (NSG "viya-XXX-nsg" / Resource Group "viya-XXX-rg")
: network.SecurityRulesClient#CreateOrUpdate: Failure sending request: StatusCode=404 -- Original Error: Code="ResourceNotFound"
Message="The Resource 'Microsoft.Network/networkSecurityGroups/viya-XXX-nsg' under resource group 'viya-XXX-rg' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix"

Case 2
Error: Creating PostgreSQL Server "viya-XXX-pgsql" (Resource Group "viya-XXX-rg")
: postgresql.ServersClient#Create: Failure sending request: StatusCode=404 -- Original Error: Code="ResourceGroupNotFound"
Message="Resource group 'viya-XXX-rg' could not be found."

Dependencies do not seem to be checked: on the NSG in the first case, and on the resource group in the second case.

Here is what I did to solve it (in main.tf), adding a depends_on to each block:

resource "azurerm_network_security_rule" "acr" {
  depends_on = [module.nsg]  # added
  # ...
}
module "postgresql" {
  depends_on = [module.resource_group]  # added
  # ...
}

Let me know if these changes seem accurate.

Does not work on windows

Hello,

I have followed the steps in the README, but it seems that terraform plan is not working on Windows.

C:\Users\david\source\repos\viya4-iac-azure>terraform plan
var.prefix
A prefix used in the name for all the Azure resources created by this script. The prefix string must start with lowercase letter and contain only alphanumeric characters and hyphen or dash(-), but can not start or end with '-'.

Enter a value: sasviya

var.subscription_id
Enter a value: redacted

var.tenant_id
Enter a value: redacted

Warning: "skip_credentials_validation": [DEPRECATED] This field is deprecated and will be removed in version 3.0 of the Azure Provider

Error: Invalid function argument

on main.tf line 228, in module "nfs":
228: ssh_public_key = file(var.ssh_public_key)
|----------------
| var.ssh_public_key is "~/.ssh/id_rsa.pub"

Invalid value for "path" parameter: no file exists at
C:\Users\david.ssh\id_rsa.pub; this function works only with files that are
distributed as part of the configuration source code, so if this file will be
created by a resource in this configuration you must instead obtain this
result from an attribute of that resource.

Error: Invalid function argument

on main.tf line 296, in module "aks":
296: aks_cluster_ssh_public_key = file(var.ssh_public_key)
|----------------
| var.ssh_public_key is "~/.ssh/id_rsa.pub"

Invalid value for "path" parameter: no file exists at
C:\Users\david.ssh\id_rsa.pub; this function works only with files that are
distributed as part of the configuration source code, so if this file will be
created by a resource in this configuration you must instead obtain this
result from an attribute of that resource.

Error: failed to execute "files/iac_git_info.sh": fork/exec files/iac_git_info.sh: %1 is not a valid Win32 application.

on main.tf line 401, in data "external" "git_hash":
401: data "external" "git_hash" {

Error: failed to execute "files/iac_tooling_version.sh": fork/exec files/iac_tooling_version.sh: %1 is not a valid Win32 application.

on main.tf line 405, in data "external" "iac_tooling_version":
405: data "external" "iac_tooling_version" {

2 requests:

  1. When running locally, I'd like to be prompted for the SSH key path instead of it being defaulted. Alternatively, I could pass it as a variable, but I am getting an error: A variable named "'ssh_public_key" was assigned on the command line, but the
    root module does not declare a variable of that name. To use this value, add a
    "variable" block to the configuration.
  2. Is it possible to pass my SSH key as a string instead of referencing a file on my system? When automating a deployment, I prefer not to put my ssh key on the deployment agent.
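As a possible workaround for the file-path errors above, ssh_public_key can be pointed at an explicit Windows-style path in terraform.tfvars (illustrative only; the path below is hypothetical):

ssh_public_key = "C:/Users/david/.ssh/id_rsa.pub"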

Expose NFS Disk Type Parameter

Please expose the NFS disk type parameter so that a user can select a premium or ultra disk for a "standard" VM-based NFS configuration.

Documentation Issue

There are references to AWS in the docs instead of Azure.

Can we get this updated?

Starting & stopping of jump-vm and nfs-vm blocks ssh access

Hey,
This still needs to be confirmed, but I have created the jump-vm and nfs-vm twice with the IaC Terraform templates, using my own private key. After stopping and starting the jump-vm and nfs-vm, I can no longer SSH to the jump-vm.

ssh -i sbxfrv-viyadep-key [email protected]
ssh: connect to host 52.247.125.242 port 22: Connection refused

There has been no change to the inbound firewall rules or the public IP address, so port 22 is still open. The only reason I can think of is that something must have changed with the public key on the nfs-vm?

Frederik.

Default Standard_B2s_v3 provided for the Jump VM size is not valid

Using Default parameters in EASTUS2

Error: creating Linux Virtual Machine "viya4-jump-vm" (Resource Group "viya4-rg"): compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="InvalidParameter" Message="The value Standard_B2s_v3 provided for the VM size is not valid. The valid sizes in the current region are: Standard_B1ls,Standard_B1ms,Standard_B1s,Standard_B2ms,Standard_B2s,Standard_B4ms,Standard_B8ms,Standard_B12ms,Standard_B16ms,Standard_B20ms,Standard_D1_v2,Standard_D2_v2,Standard_D3_v2,Standard_D4_v2,Standard_D5_v2,Standard_D11_v2,Standard_D12_v2,Standard_D13_v2,Standard_D14_v2,Standard_D15_v2,Standard_D2_v2_Promo,Standard_D3_v2_Promo,Standard_D4_v2_Promo,Standard_D5_v2_Promo,Standard_D11_v2_Promo,Standard_D12_v2_Promo,Standard_D13_v2_Promo,Standard_D14_v2_Promo,Standard_F1,Standard_F2,Standard_F4,Standard_F8,Standard_F16,Standard_DS1_v2,Standard_DS2_v2,Standard_DS3_v2,Standard_DS4_v2,Standard_DS5_v2,Standard_DS11-1_v2,Standard_DS11_v2,Standard_DS12-1_v2,Standard_DS12-2_v2,Standard_DS12_v2,Standard_DS13-2_v2,Standard_DS13-4_v2,Standard_DS13_v2,Standard_DS14-4_v2,Standard_DS14-8_v2,Standard_DS14_v2,Standard_DS15_v2,Standard_DS2_v2_Promo,Standard_DS3_v2_Promo,Standard_DS4_v2_Promo,Standard_DS5_v2_Promo,Standard_DS11_v2_Promo,Standard_DS12_v2_Promo,Standard_DS13_v2_Promo,Standard_DS14_v2_Promo,Standard_F1s,Standard_F2s,Standard_F4s,Standard_F8s,Standard_F16s,Standard_A1_v2,Standard_A2m_v2,Standard_A2_v2,Standard_A4m_v2,Standard_A4_v2,Standard_A8m_v2,Standard_A8_v2,Standard_D2_v3,Standard_D4_v3,Standard_D8_v3,Standard_D16_v3,Standard_D32_v3,Standard_D48_v3,Standard_D64_v3,Standard_D2s_v3,Standard_D4s_v3,Standard_D8s_v3,Standard_D16s_v3,Standard_D32s_v3,Standard_D48s_v3,Standard_D64s_v3,Standard_E2_v3,Standard_E4_v3,Standard_E8_v3,Standard_E16_v3,Standard_E20_v3,Standard_E32_v3,Standard_E2s_v3,Standard_E4-2s_v3,Standard_E4s_v3,Standard_E8-2s_v3,Standard_E8-4s_v3,Standard_E8s_v3,Standard_E16-4s_v3,Standard_E16-8s_v3,Standard_E16s_v3,Standard_E20s_v3,Standard_E32-8s_v3,Standard_E32-16s_v3,Standard_E32s_v3,Standard_A0,Standard_A1,Standard_A2,Standard_A3,Standard_A5,Standard_A4,Standard_A6,Standard_A7,Basic_A0,Basic_A1,Basic_A2,Basic_A3,Basic_A4,Standard_NV6s_v2,Standard_NV12s_v2,Standard_NV24s_v2,Standard_NV12s_v3,Standard_NV24s_v3,Standard_NV48s_v3,Standard_E48_v3,Standard_E64i_v3,Standard_E64_v3,Standard_E48s_v3,Standard_E64-16s_v3,Standard_E64-32s_v3,Standard_E64is_v3,Standard_E64s_v3,Standard_E2_v4,Standard_E4_v4,Standard_E8_v4,Standard_E16_v4,Standard_E20_v4,Standard_E32_v4,Standard_E2d_v4,Standard_E4d_v4,Standard_E8d_v4,Standard_E16d_v4,Standard_E20d_v4,Standard_E32d_v4,Standard_E2s_v4,Standard_E4-2s_v4,Standard_E4s_v4,Standard_E8-2s_v4,Standard_E8-4s_v4,Standard_E8s_v4,Standard_E16-4s_v4,Standard_E16-8s_v4,Standard_E16s_v4,Standard_E20s_v4,Standard_E32-8s_v4,Standard_E32-16s_v4,Standard_E32s_v4,Standard_E2ds_v4,Standard_E4-2ds_v4,Standard_E4ds_v4,Standard_E8-2ds_v4,Standard_E8-4ds_v4,Standard_E8ds_v4,Standard_E16-4ds_v4,Standard_E16-8ds_v4,Standard_E16ds_v4,Standard_E20ds_v4,Standard_E32-8ds_v4,Standard_E32-16ds_v4,Standard_E32ds_v4,Standard_D2d_v4,Standard_D4d_v4,Standard_D8d_v4,Standard_D16d_v4,Standard_D32d_v4,Standard_D48d_v4,Standard_D64d_v4,Standard_D2_v4,Standard_D4_v4,Standard_D8_v4,Standard_D16_v4,Standard_D32_v4,Standard_D48_v4,Standard_D64_v4,Standard_D2ds_v4,Standard_D4ds_v4,Standard_D8ds_v4,Standard_D16ds_v4,Standard_D32ds_v4,Standard_D48ds_v4,Standard_D64ds_v4,Standard_D2s_v4,Standard_D4s_v4,Standard_D8s_v4,Standard_D16s_v4,Standard_D32s_v4,Standard_D48s
_v4,Standard_D64s_v4,Standard_F2s_v2,Standard_F4s_v2,Standard_F8s_v2,Standard_F16s_v2,Standard_F32s_v2,Standard_F48s_v2,Standard_F64s_v2,Standard_F72s_v2,Standard_NC4as_T4_v3,Standard_NC8as_T4_v3,Standard_NC16as_T4_v3,Standard_NC64as_T4_v3,Standard_D1,Standard_D2,Standard_D3,Standard_D4,Standard_D11,Standard_D12,Standard_D13,Standard_D14,Standard_E48_v4,Standard_E64_v4,Standard_E48d_v4,Standard_E64d_v4,Standard_E48s_v4,Standard_E64-16s_v4,Standard_E64-32s_v4,Standard_E64s_v4,Standard_E80is_v4,Standard_E48ds_v4,Standard_E64-16ds_v4,Standard_E64-32ds_v4,Standard_E64ds_v4,Standard_E80ids_v4,Standard_L8s_v2,Standard_L16s_v2,Standard_L32s_v2,Standard_L48s_v2,Standard_L64s_v2,Standard_L80s_v2,Standard_NC6s_v3,Standard_NC12s_v3,Standard_NC24rs_v3,Standard_NC24s_v3,Standard_DS1,Standard_DS2,Standard_DS3,Standard_DS4,Standard_DS11,Standard_DS12,Standard_DS13,Standard_DS14,Standard_NV4as_v4,Standard_NV8as_v4,Standard_NV16as_v4,Standard_NV32as_v4,Standard_G1,Standard_G2,Standard_G3,Standard_G4,Standard_G5,Standard_GS1,Standard_GS2,Standard_GS3,Standard_GS4,Standard_GS4-4,Standard_GS4-8,Standard_GS5,Standard_GS5-8,Standard_GS5-16,Standard_L4s,Standard_L8s,Standard_L16s,Standard_L32s,Standard_M64,Standard_M64m,Standard_M128,Standard_M128m,Standard_M8-2ms,Standard_M8-4ms,Standard_M8ms,Standard_M16-4ms,Standard_M16-8ms,Standard_M16ms,Standard_M32-8ms,Standard_M32-16ms,Standard_M32ls,Standard_M32ms,Standard_M32ts,Standard_M64-16ms,Standard_M64-32ms,Standard_M64ls,Standard_M64ms,Standard_M64s,Standard_M128-32ms,Standard_M128-64ms,Standard_M128ms,Standard_M128s,Standard_M32ms_v2,Standard_M64ms_v2,Standard_M64s_v2,Standard_M128ms_v2,Standard_M128s_v2,Standard_M192ims_v2,Standard_M192is_v2,Standard_M32dms_v2,Standard_M64dms_v2,Standard_M64ds_v2,Standard_M128dms_v2,Standard_M128ds_v2,Standard_M192idms_v2,Standard_M192ids_v2,Standard_M208ms_v2,Standard_M208s_v2,Standard_M416s_v2,Standard_M416ms_v2,Standard_D2a_v4,Standard_D4a_v4,Standard_D8a_v4,Standard_D16a_v4,Standard_D32a_v4,Standard_D48a_v4,Standard_D64a_v4,Standard_D96a_v4,Standard_D2as_v4,Standard_D4as_v4,Standard_D8as_v4,Standard_D16as_v4,Standard_D32as_v4,Standard_D48as_v4,Standard_D64as_v4,Standard_D96as_v4,Standard_E2a_v4,Standard_E4a_v4,Standard_E8a_v4,Standard_E16a_v4,Standard_E20a_v4,Standard_E32a_v4,Standard_E48a_v4,Standard_E64a_v4,Standard_E96a_v4,Standard_E2as_v4,Standard_E4-2as_v4,Standard_E4as_v4,Standard_E8-2as_v4,Standard_E8-4as_v4,Standard_E8as_v4,Standard_E16-4as_v4,Standard_E16-8as_v4,Standard_E16as_v4,Standard_E20as_v4,Standard_E32-8as_v4,Standard_E32-16as_v4,Standard_E32as_v4,Standard_E48as_v4,Standard_E64-16as_v4,Standard_E64-32as_v4,Standard_E64as_v4,Standard_E96-24as_v4,Standard_E96-48as_v4,Standard_E96as_v4,Standard_NV6,Standard_NV12,Standard_NV24,Standard_NV6_Promo,Standard_NV12_Promo,Standard_NV24_Promo,Standard_NC6,Standard_NC12,Standard_NC24,Standard_NC24r,Standard_NC6_Promo,Standard_NC12_Promo,Standard_NC24_Promo,Standard_NC24r_Promo,Standard_M416-208s_v2,Standard_M416-208ms_v2. Find out more on the valid VM sizes in each region at https://aka.ms/azure-regions." Target="vmSize"

Inconsistencies with the 'REQUIRED VARIABLES' in the example tfvars files

Just some nits/questions about the initial setup steps...

sample-input-minimal.tfvars has more required variables:

sample-input-ha.tfvars and sample-input.tfvars:

# ****************  REQUIRED VARIABLES  ****************
# These required variables' values MUST be provided by the user
prefix                                  = "<prefix-value>"
location                                = "<azure-location-value>" # e.g., "useast2"
tags                                    = { } # e.g., { "key1" = "value1", "key2" = "value2" }
# ****************  REQUIRED VARIABLES  ****************

sample-input-minimal.tfvars:

# ****************  REQUIRED VARIABLES  ****************
# These required variables' values MUST be provided by the User
prefix                                  = "<prefix-value>"
location                                = "<azure-location-value>" # e.g., "useast2"
default_public_access_cidrs             = []  # e.g., ["123.45.6.89/32"]
tags                                    = { } # e.g., { "key1" = "value1", "key2" = "value2" }
# ****************  REQUIRED VARIABLES  ****************

Note the addition of default_public_access_cidrs, which in the previous two files appears in the Admin access section.

In my opinion the required variables should be consistent, since this suggests that the former two files also need default_public_access_cidrs to be populated, despite it not appearing in their required variables section.

No description for variables

These have useful descriptions:

prefix                                  = "<prefix-value>"
location                                = "<azure-location-value>" # e.g., "useast2"

These do not:

default_public_access_cidrs             = []  # e.g., ["123.45.6.89/32"]
tags                                    = { } # e.g., { "key1" = "value1", "key2" = "value2" }

Especially tags: there is no explanation of why it is in the required variables or what the values should be.

prefix and location were auto-populated

In my first deployment, the location (useast) was auto-populated based on my Azure SAS subscription, and the prefix was requested by Terraform. One could argue that they aren't really required variables?

NSG inbound rule for SSH connection to NFS VM is not created when create_jump_vm=false

In my scenario I don't want a jump VM, but I do want an NFS VM (storage_type="standard") and the ability to SSH to it.

create_jump_vm              = false
default_public_access_cidrs = ["109.232.56.224/27", "149.173.0.0/16", "194.206.69.176/28"]
storage_type                = "standard"
# we want to be able to access our NFS server VM from the outside
create_nfs_public_ip        = true

After the terraform apply completes, the inbound rule for SSH connections (port 22) to the NFS VM is not created, and I cannot SSH to the VM's public IP. I can add it manually (with Destination: Any), but I would have expected it to already be there.
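For reference, the manually added rule corresponds to roughly the following Terraform resource (an illustrative sketch only; the rule name and priority are hypothetical, and the resource addresses follow those that appear in logs elsewhere on this page):

resource "azurerm_network_security_rule" "nfs_ssh" {
  name                        = "SSH-to-NFS"
  priority                    = 120
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "22"
  source_address_prefixes     = ["109.232.56.224/27", "149.173.0.0/16", "194.206.69.176/28"]
  destination_address_prefix  = "*"
  resource_group_name         = azurerm_resource_group.azure_rg.name
  network_security_group_name = azurerm_network_security_group.nsg.name
}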

Thanks
Raphael

re-entrance and idempotency of IAC-Azure

So, I deployed AKS with the following command:

time docker container run --rm -it \
 --env-file $HOME/jumphost/workspace/.tf_creds \
 -v $HOME/jumphost/workspace:/workspace \
 viya4-iac-azure \
 apply \
 -auto-approve \
 -var-file /workspace/gel01-vars.tfvars \
 -state /workspace/gel01-vars.tfstate

That worked, and took

real    9m14.046s
user    0m0.095s
sys    0m0.091s

Now, having not done ANYTHING else, I re-run the exact same command, expecting it to have nothing to do and to finish a lot faster the second time.
Instead, I see a lot of "destroying" messages, and it ends up taking another 9 minutes and ends with a failure:

Error: Error: Subnet "myuser-iac-viya4aks-aks-subnet" (Virtual Network "myuser-iac-viya4aks-vnet" / Resource Group "myuser-iac-viya4aks-rg") was not found
  on main.tf line 142, in data "azurerm_subnet" "aks-subnet":
 142: data "azurerm_subnet" "aks-subnet" {

Error: Error: Subnet "myuser-iac-viya4aks-misc-subnet" (Virtual Network "myuser-iac-viya4aks-vnet" / Resource Group "myuser-iac-viya4aks-rg") was not found
  on main.tf line 150, in data "azurerm_subnet" "misc-subnet":
 150: data "azurerm_subnet" "misc-subnet" {

So, am I doing something wrong? Should this not behave differently?

The complete log for the second run is as follows:

data.external.git_hash: Refreshing state... [id=-]
data.template_file.nfs-cloudconfig: Refreshing state... [id=6192c77eb6427bd8f9e07a36e079ea774aa733b3bac6f8cd731d62816588f716]
data.external.iac_tooling_version: Refreshing state... [id=-]
data.template_cloudinit_config.nfs: Refreshing state... [id=891762343]
data.azuread_service_principal.sp_client: Refreshing state... [id=2b472701-0d81-4e0d-bda1-5ea07cde75b1]
local_file.sas_iac_buildinfo: Refreshing state... [id=29f8748d58ca09d657b5f395feaf6a08db4100bc]
data.azurerm_subscription.current: Refreshing state... [id=/subscriptions/c9-snipped--87f4-4d89-8724-a0da5fe4ad5c]
azurerm_resource_group.azure_rg: Refreshing state... [id=/subscriptions/c9-snipped--87f4-4d89-8724-a0da5fe4ad5c/resourceGroups/myuser-iac-viya4aks-rg]
azurerm_network_security_group.nsg: Refreshing state... [id=/subscriptions/c9-snipped--87f4-4d89-8724-a0da5fe4ad5c/resourceGroups/myuser-iac-viya4aks-rg/providers/Microsoft.Network/networkSecurityGroups/myuser-iac-viya4aks-nsg]
module.vnet.data.azurerm_resource_group.vnet: Refreshing state... [id=/subscriptions/c9-snipped--87f4-4d89-8724-a0da5fe4ad5c/resourceGroups/myuser-iac-viya4aks-rg]
module.vnet.azurerm_virtual_network.vnet: Refreshing state... [id=/subscriptions/c9-snipped--87f4-4d89-8724-a0da5fe4ad5c/resourceGroups/myuser-iac-viya4aks-rg/providers/Microsoft.Network/virtualNetworks/myuser-iac-viya4aks-vnet]
module.vnet.azurerm_subnet.subnet[0]: Refreshing state... [id=/subscriptions/c9-snipped--87f4-4d89-8724-a0da5fe4ad5c/resourceGroups/myuser-iac-viya4aks-rg/providers/Microsoft.Network/virtualNetworks/myuser-iac-viya4aks-vnet/subnets/myuser-iac-viya4aks-aks-subnet]
module.vnet.azurerm_subnet.subnet[1]: Refreshing state... [id=/subscriptions/c9-snipped--87f4-4d89-8724-a0da5fe4ad5c/resourceGroups/myuser-iac-viya4aks-rg/providers/Microsoft.Network/virtualNetworks/myuser-iac-viya4aks-vnet/subnets/myuser-iac-viya4aks-misc-subnet]
data.azurerm_subnet.aks-subnet: Refreshing state... [id=/subscriptions/c9-snipped--87f4-4d89-8724-a0da5fe4ad5c/resourceGroups/myuser-iac-viya4aks-rg/providers/Microsoft.Network/virtualNetworks/myuser-iac-viya4aks-vnet/subnets/myuser-iac-viya4aks-aks-subnet]
data.azurerm_subnet.misc-subnet: Refreshing state... [id=/subscriptions/c9-snipped--87f4-4d89-8724-a0da5fe4ad5c/resourceGroups/myuser-iac-viya4aks-rg/providers/Microsoft.Network/virtualNetworks/myuser-iac-viya4aks-vnet/subnets/myuser-iac-viya4aks-misc-subnet]
data.template_file.jump-cloudconfig: Refreshing state... [id=a7d5559239ff40853d176aeef95498ea3437c879f27f83252cd44fc97637f2a3]
data.template_cloudinit_config.jump: Refreshing state... [id=3215457033]
module.aks.azurerm_kubernetes_cluster.aks: Refreshing state... [id=/subscriptions/c9-snipped--87f4-4d89-8724-a0da5fe4ad5c/resourcegroups/myuser-iac-viya4aks-rg/providers/Microsoft.ContainerService/managedClusters/myuser-iac-viya4aks-aks]
local_file.kubeconfig: Refreshing state... [id=ddfbb40db103daa5dad95c80d015b96125579715]
module.node_pools["cas"].azurerm_kubernetes_cluster_node_pool.autoscale_node_pool[0]: Refreshing state... [id=/subscriptions/c9-snipped--87f4-4d89-8724-a0da5fe4ad5c/resourcegroups/myuser-iac-viya4aks-rg/providers/Microsoft.ContainerService/managedClusters/myuser-iac-viya4aks-aks/agentPools/cas]
module.node_pools["stateful"].azurerm_kubernetes_cluster_node_pool.autoscale_node_pool[0]: Refreshing state... [id=/subscriptions/c9-snipped--87f4-4d89-8724-a0da5fe4ad5c/resourcegroups/myuser-iac-viya4aks-rg/providers/Microsoft.ContainerService/managedClusters/myuser-iac-viya4aks-aks/agentPools/stateful]
module.node_pools["stateless"].azurerm_kubernetes_cluster_node_pool.autoscale_node_pool[0]: Refreshing state... [id=/subscriptions/c9-snipped--87f4-4d89-8724-a0da5fe4ad5c/resourcegroups/myuser-iac-viya4aks-rg/providers/Microsoft.ContainerService/managedClusters/myuser-iac-viya4aks-aks/agentPools/stateless]
null_resource.sas_iac_buildinfo: Refreshing state... [id=2720308387643598644]
module.node_pools["connect"].azurerm_kubernetes_cluster_node_pool.autoscale_node_pool[0]: Refreshing state... [id=/subscriptions/c9-snipped--87f4-4d89-8724-a0da5fe4ad5c/resourcegroups/myuser-iac-viya4aks-rg/providers/Microsoft.ContainerService/managedClusters/myuser-iac-viya4aks-aks/agentPools/connect]
module.node_pools["compute"].azurerm_kubernetes_cluster_node_pool.static_node_pool[0]: Refreshing state... [id=/subscriptions/c9-snipped--87f4-4d89-8724-a0da5fe4ad5c/resourcegroups/myuser-iac-viya4aks-rg/providers/Microsoft.ContainerService/managedClusters/myuser-iac-viya4aks-aks/agentPools/compute]
data.azurerm_public_ip.aks_public_ip: Refreshing state... [id=/subscriptions/c9-snipped--87f4-4d89-8724-a0da5fe4ad5c/resourceGroups/MC_myuser-iac-viya4aks-rg_myuser-iac-viya4aks-aks_eastus/providers/Microsoft.Network/publicIPAddresses/0eb57010-2a03-47d5-b27d-97c65dfc300b]
null_resource.sas_iac_buildinfo: Destroying... [id=2720308387643598644]
null_resource.sas_iac_buildinfo: Destruction complete after 0s
data.template_file.sas_iac_buildinfo: Reading...
data.template_file.sas_iac_buildinfo: Read complete after 0s [id=58453827b9a79a49e377b1a293a7ba429f1be93eadf0064dc49147a27f4b7ec1]
local_file.sas_iac_buildinfo: Creating...
local_file.sas_iac_buildinfo: Creation complete after 0s [id=06fd2104f118d7badecb14a3c87bdab3a6cd5353]
module.node_pools["compute"].azurerm_kubernetes_cluster_node_pool.static_node_pool[0]: Destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-a0da5fe4ad5c/resourcegroups/myuser-iac-viya4aks-rg/providers/Microsoft.ContainerService/managedClusters/myuser-iac-viya4aks-aks/agentPools/compute]
module.node_pools["connect"].azurerm_kubernetes_cluster_node_pool.autoscale_node_pool[0]: Destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-a0da5fe4ad5c/resourcegroups/myuser-iac-viya4aks-rg/providers/Microsoft.ContainerService/managedClusters/myuser-iac-viya4aks-aks/agentPools/connect]
module.node_pools["stateful"].azurerm_kubernetes_cluster_node_pool.autoscale_node_pool[0]: Destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-a0da5fe4ad5c/resourcegroups/myuser-iac-viya4aks-rg/providers/Microsoft.ContainerService/managedClusters/myuser-iac-viya4aks-aks/agentPools/stateful]
module.node_pools["cas"].azurerm_kubernetes_cluster_node_pool.autoscale_node_pool[0]: Destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-a0da5fe4ad5c/resourcegroups/myuser-iac-viya4aks-rg/providers/Microsoft.ContainerService/managedClusters/myuser-iac-viya4aks-aks/agentPools/cas]
data.template_file.jump-cloudconfig: Reading... [id=a7d5559239ff40853d176aeef95498ea3437c879f27f83252cd44fc97637f2a3]
module.node_pools["stateless"].azurerm_kubernetes_cluster_node_pool.autoscale_node_pool[0]: Destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-a0da5fe4ad5c/resourcegroups/myuser-iac-viya4aks-rg/providers/Microsoft.ContainerService/managedClusters/myuser-iac-viya4aks-aks/agentPools/stateless]
data.template_file.jump-cloudconfig: Read complete after 0s [id=a7d5559239ff40853d176aeef95498ea3437c879f27f83252cd44fc97637f2a3]
azurerm_resource_group.azure_rg: Modifying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-a0da5fe4ad5c/resourceGroups/myuser-iac-viya4aks-rg]
data.template_cloudinit_config.jump: Reading... [id=3215457033]
data.template_cloudinit_config.jump: Read complete after 0s [id=3215457033]
azurerm_resource_group.azure_rg: Modifications complete after 0s [id=/subscriptions/c9-snipped--87f4-4d89-8724-a0da5fe4ad5c/resourceGroups/myuser-iac-viya4aks-rg]
module.vnet.data.azurerm_resource_group.vnet: Reading... [id=/subscriptions/c9-snipped--87f4-4d89-8724-a0da5fe4ad5c/resourceGroups/myuser-iac-viya4aks-rg]
module.vnet.data.azurerm_resource_group.vnet: Read complete after 1s [id=/subscriptions/c9-snipped--87f4-4d89-8724-a0da5fe4ad5c/resourceGroups/myuser-iac-viya4aks-rg]
module.node_pools["compute"].azurerm_kubernetes_cluster_node_pool.static_node_pool[0]: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...pg-iac-viya4aks-aks/agentPools/compute, 10s elapsed]
module.node_pools["connect"].azurerm_kubernetes_cluster_node_pool.autoscale_node_pool[0]: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...pg-iac-viya4aks-aks/agentPools/connect, 10s elapsed]
module.node_pools["stateful"].azurerm_kubernetes_cluster_node_pool.autoscale_node_pool[0]: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...g-iac-viya4aks-aks/agentPools/stateful, 10s elapsed]
module.node_pools["cas"].azurerm_kubernetes_cluster_node_pool.autoscale_node_pool[0]: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...myuser-iac-viya4aks-aks/agentPools/cas, 10s elapsed]
module.node_pools["stateless"].azurerm_kubernetes_cluster_node_pool.autoscale_node_pool[0]: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...-iac-viya4aks-aks/agentPools/stateless, 10s elapsed]
module.node_pools["compute"].azurerm_kubernetes_cluster_node_pool.static_node_pool[0]: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...pg-iac-viya4aks-aks/agentPools/compute, 20s elapsed]
module.node_pools["connect"].azurerm_kubernetes_cluster_node_pool.autoscale_node_pool[0]: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...pg-iac-viya4aks-aks/agentPools/connect, 20s elapsed]
module.node_pools["stateful"].azurerm_kubernetes_cluster_node_pool.autoscale_node_pool[0]: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...g-iac-viya4aks-aks/agentPools/stateful, 20s elapsed]
module.node_pools["cas"].azurerm_kubernetes_cluster_node_pool.autoscale_node_pool[0]: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...myuser-iac-viya4aks-aks/agentPools/cas, 20s elapsed]
module.node_pools["stateless"].azurerm_kubernetes_cluster_node_pool.autoscale_node_pool[0]: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...-iac-viya4aks-aks/agentPools/stateless, 20s elapsed]
module.node_pools["compute"].azurerm_kubernetes_cluster_node_pool.static_node_pool[0]: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...pg-iac-viya4aks-aks/agentPools/compute, 30s elapsed]
module.node_pools["connect"].azurerm_kubernetes_cluster_node_pool.autoscale_node_pool[0]: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...pg-iac-viya4aks-aks/agentPools/connect, 30s elapsed]
module.node_pools["stateful"].azurerm_kubernetes_cluster_node_pool.autoscale_node_pool[0]: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...g-iac-viya4aks-aks/agentPools/stateful, 30s elapsed]
module.node_pools["cas"].azurerm_kubernetes_cluster_node_pool.autoscale_node_pool[0]: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...myuser-iac-viya4aks-aks/agentPools/cas, 30s elapsed]
module.node_pools["stateless"].azurerm_kubernetes_cluster_node_pool.autoscale_node_pool[0]: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...-iac-viya4aks-aks/agentPools/stateless, 30s elapsed]
module.node_pools["compute"].azurerm_kubernetes_cluster_node_pool.static_node_pool[0]: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...pg-iac-viya4aks-aks/agentPools/compute, 40s elapsed]
module.node_pools["connect"].azurerm_kubernetes_cluster_node_pool.autoscale_node_pool[0]: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...pg-iac-viya4aks-aks/agentPools/connect, 40s elapsed]
module.node_pools["stateful"].azurerm_kubernetes_cluster_node_pool.autoscale_node_pool[0]: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...g-iac-viya4aks-aks/agentPools/stateful, 40s elapsed]
module.node_pools["cas"].azurerm_kubernetes_cluster_node_pool.autoscale_node_pool[0]: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...myuser-iac-viya4aks-aks/agentPools/cas, 40s elapsed]
module.node_pools["stateless"].azurerm_kubernetes_cluster_node_pool.autoscale_node_pool[0]: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...-iac-viya4aks-aks/agentPools/stateless, 40s elapsed]
module.node_pools["compute"].azurerm_kubernetes_cluster_node_pool.static_node_pool[0]: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...pg-iac-viya4aks-aks/agentPools/compute, 50s elapsed]
module.node_pools["connect"].azurerm_kubernetes_cluster_node_pool.autoscale_node_pool[0]: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...pg-iac-viya4aks-aks/agentPools/connect, 50s elapsed]
module.node_pools["stateful"].azurerm_kubernetes_cluster_node_pool.autoscale_node_pool[0]: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...g-iac-viya4aks-aks/agentPools/stateful, 50s elapsed]
module.node_pools["cas"].azurerm_kubernetes_cluster_node_pool.autoscale_node_pool[0]: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...myuser-iac-viya4aks-aks/agentPools/cas, 50s elapsed]
module.node_pools["stateless"].azurerm_kubernetes_cluster_node_pool.autoscale_node_pool[0]: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...-iac-viya4aks-aks/agentPools/stateless, 50s elapsed]
module.node_pools["compute"].azurerm_kubernetes_cluster_node_pool.static_node_pool[0]: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...pg-iac-viya4aks-aks/agentPools/compute, 1m0s elapsed]
module.node_pools["connect"].azurerm_kubernetes_cluster_node_pool.autoscale_node_pool[0]: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...pg-iac-viya4aks-aks/agentPools/connect, 1m0s elapsed]
module.node_pools["stateful"].azurerm_kubernetes_cluster_node_pool.autoscale_node_pool[0]: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...g-iac-viya4aks-aks/agentPools/stateful, 1m0s elapsed]
module.node_pools["cas"].azurerm_kubernetes_cluster_node_pool.autoscale_node_pool[0]: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...myuser-iac-viya4aks-aks/agentPools/cas, 1m0s elapsed]
module.node_pools["stateless"].azurerm_kubernetes_cluster_node_pool.autoscale_node_pool[0]: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...-iac-viya4aks-aks/agentPools/stateless, 1m0s elapsed]
module.node_pools["stateless"].azurerm_kubernetes_cluster_node_pool.autoscale_node_pool[0]: Destruction complete after 1m1s
module.node_pools["connect"].azurerm_kubernetes_cluster_node_pool.autoscale_node_pool[0]: Destruction complete after 1m1s
module.node_pools["stateful"].azurerm_kubernetes_cluster_node_pool.autoscale_node_pool[0]: Destruction complete after 1m1s
module.node_pools["compute"].azurerm_kubernetes_cluster_node_pool.static_node_pool[0]: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...pg-iac-viya4aks-aks/agentPools/compute, 1m10s elapsed]
module.node_pools["cas"].azurerm_kubernetes_cluster_node_pool.autoscale_node_pool[0]: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...myuser-iac-viya4aks-aks/agentPools/cas, 1m10s elapsed]
module.node_pools["compute"].azurerm_kubernetes_cluster_node_pool.static_node_pool[0]: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...pg-iac-viya4aks-aks/agentPools/compute, 1m20s elapsed]
module.node_pools["cas"].azurerm_kubernetes_cluster_node_pool.autoscale_node_pool[0]: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...myuser-iac-viya4aks-aks/agentPools/cas, 1m20s elapsed]
module.node_pools["compute"].azurerm_kubernetes_cluster_node_pool.static_node_pool[0]: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...pg-iac-viya4aks-aks/agentPools/compute, 1m30s elapsed]
module.node_pools["cas"].azurerm_kubernetes_cluster_node_pool.autoscale_node_pool[0]: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...myuser-iac-viya4aks-aks/agentPools/cas, 1m30s elapsed]
module.node_pools["compute"].azurerm_kubernetes_cluster_node_pool.static_node_pool[0]: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...pg-iac-viya4aks-aks/agentPools/compute, 1m40s elapsed]
module.node_pools["cas"].azurerm_kubernetes_cluster_node_pool.autoscale_node_pool[0]: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...myuser-iac-viya4aks-aks/agentPools/cas, 1m40s elapsed]
module.node_pools["compute"].azurerm_kubernetes_cluster_node_pool.static_node_pool[0]: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...pg-iac-viya4aks-aks/agentPools/compute, 1m50s elapsed]
module.node_pools["cas"].azurerm_kubernetes_cluster_node_pool.autoscale_node_pool[0]: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...myuser-iac-viya4aks-aks/agentPools/cas, 1m50s elapsed]
module.node_pools["compute"].azurerm_kubernetes_cluster_node_pool.static_node_pool[0]: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...pg-iac-viya4aks-aks/agentPools/compute, 2m0s elapsed]
module.node_pools["cas"].azurerm_kubernetes_cluster_node_pool.autoscale_node_pool[0]: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...myuser-iac-viya4aks-aks/agentPools/cas, 2m0s elapsed]
module.node_pools["compute"].azurerm_kubernetes_cluster_node_pool.static_node_pool[0]: Destruction complete after 2m1s
module.node_pools["cas"].azurerm_kubernetes_cluster_node_pool.autoscale_node_pool[0]: Destruction complete after 2m1s
module.aks.azurerm_kubernetes_cluster.aks: Destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-a0da5fe4ad5c/resourcegroups/myuser-iac-viya4aks-rg/providers/Microsoft.ContainerService/managedClusters/myuser-iac-viya4aks-aks]
module.aks.azurerm_kubernetes_cluster.aks: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...anagedClusters/myuser-iac-viya4aks-aks, 10s elapsed]
module.aks.azurerm_kubernetes_cluster.aks: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...anagedClusters/myuser-iac-viya4aks-aks, 20s elapsed]
module.aks.azurerm_kubernetes_cluster.aks: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...anagedClusters/myuser-iac-viya4aks-aks, 30s elapsed]
module.aks.azurerm_kubernetes_cluster.aks: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...anagedClusters/myuser-iac-viya4aks-aks, 40s elapsed]
module.aks.azurerm_kubernetes_cluster.aks: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...anagedClusters/myuser-iac-viya4aks-aks, 50s elapsed]
module.aks.azurerm_kubernetes_cluster.aks: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...anagedClusters/myuser-iac-viya4aks-aks, 1m0s elapsed]
module.aks.azurerm_kubernetes_cluster.aks: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...anagedClusters/myuser-iac-viya4aks-aks, 1m10s elapsed]
module.aks.azurerm_kubernetes_cluster.aks: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...anagedClusters/myuser-iac-viya4aks-aks, 1m20s elapsed]
module.aks.azurerm_kubernetes_cluster.aks: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...anagedClusters/myuser-iac-viya4aks-aks, 1m30s elapsed]
module.aks.azurerm_kubernetes_cluster.aks: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...anagedClusters/myuser-iac-viya4aks-aks, 1m40s elapsed]
module.aks.azurerm_kubernetes_cluster.aks: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...anagedClusters/myuser-iac-viya4aks-aks, 1m50s elapsed]
module.aks.azurerm_kubernetes_cluster.aks: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...anagedClusters/myuser-iac-viya4aks-aks, 2m0s elapsed]
module.aks.azurerm_kubernetes_cluster.aks: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...anagedClusters/myuser-iac-viya4aks-aks, 2m10s elapsed]
module.aks.azurerm_kubernetes_cluster.aks: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...anagedClusters/myuser-iac-viya4aks-aks, 2m20s elapsed]
module.aks.azurerm_kubernetes_cluster.aks: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...anagedClusters/myuser-iac-viya4aks-aks, 2m30s elapsed]
module.aks.azurerm_kubernetes_cluster.aks: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...anagedClusters/myuser-iac-viya4aks-aks, 2m40s elapsed]
module.aks.azurerm_kubernetes_cluster.aks: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...anagedClusters/myuser-iac-viya4aks-aks, 2m50s elapsed]
module.aks.azurerm_kubernetes_cluster.aks: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...anagedClusters/myuser-iac-viya4aks-aks, 3m0s elapsed]
module.aks.azurerm_kubernetes_cluster.aks: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...anagedClusters/myuser-iac-viya4aks-aks, 3m10s elapsed]
module.aks.azurerm_kubernetes_cluster.aks: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...anagedClusters/myuser-iac-viya4aks-aks, 3m20s elapsed]
module.aks.azurerm_kubernetes_cluster.aks: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...anagedClusters/myuser-iac-viya4aks-aks, 3m30s elapsed]
module.aks.azurerm_kubernetes_cluster.aks: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...anagedClusters/myuser-iac-viya4aks-aks, 3m40s elapsed]
module.aks.azurerm_kubernetes_cluster.aks: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...anagedClusters/myuser-iac-viya4aks-aks, 3m50s elapsed]
module.aks.azurerm_kubernetes_cluster.aks: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...anagedClusters/myuser-iac-viya4aks-aks, 4m0s elapsed]
module.aks.azurerm_kubernetes_cluster.aks: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...anagedClusters/myuser-iac-viya4aks-aks, 4m10s elapsed]
module.aks.azurerm_kubernetes_cluster.aks: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...anagedClusters/myuser-iac-viya4aks-aks, 4m20s elapsed]
module.aks.azurerm_kubernetes_cluster.aks: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...anagedClusters/myuser-iac-viya4aks-aks, 4m30s elapsed]
module.aks.azurerm_kubernetes_cluster.aks: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...anagedClusters/myuser-iac-viya4aks-aks, 4m40s elapsed]
module.aks.azurerm_kubernetes_cluster.aks: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...anagedClusters/myuser-iac-viya4aks-aks, 4m50s elapsed]
module.aks.azurerm_kubernetes_cluster.aks: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...anagedClusters/myuser-iac-viya4aks-aks, 5m0s elapsed]
module.aks.azurerm_kubernetes_cluster.aks: Destruction complete after 5m2s
module.vnet.azurerm_virtual_network.vnet: Destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-a0da5fe4ad5c/resourceGroups/myuser-iac-viya4aks-rg/providers/Microsoft.Network/virtualNetworks/myuser-iac-viya4aks-vnet]
module.vnet.azurerm_virtual_network.vnet: Still destroying... [id=/subscriptions/c9-snipped--87f4-4d89-8724-...rtualNetworks/myuser-iac-viya4aks-vnet, 10s elapsed]
module.vnet.azurerm_virtual_network.vnet: Destruction complete after 10s
module.vnet.azurerm_virtual_network.vnet: Creating...
module.vnet.azurerm_virtual_network.vnet: Creation complete after 5s [id=/subscriptions/c9-snipped--87f4-4d89-8724-a0da5fe4ad5c/resourceGroups/myuser-iac-viya4aks-rg/providers/Microsoft.Network/virtualNetworks/myuser-iac-viya4aks-vnet]
data.azurerm_subnet.aks-subnet: Reading... [id=/subscriptions/c9-snipped--87f4-4d89-8724-a0da5fe4ad5c/resourceGroups/myuser-iac-viya4aks-rg/providers/Microsoft.Network/virtualNetworks/myuser-iac-viya4aks-vnet/subnets/myuser-iac-viya4aks-aks-subnet]
data.azurerm_subnet.misc-subnet: Reading... [id=/subscriptions/c9-snipped--87f4-4d89-8724-a0da5fe4ad5c/resourceGroups/myuser-iac-viya4aks-rg/providers/Microsoft.Network/virtualNetworks/myuser-iac-viya4aks-vnet/subnets/myuser-iac-viya4aks-misc-subnet]

Error: Error: Subnet "myuser-iac-viya4aks-aks-subnet" (Virtual Network "myuser-iac-viya4aks-vnet" / Resource Group "myuser-iac-viya4aks-rg") was not found

  on main.tf line 142, in data "azurerm_subnet" "aks-subnet":
 142: data "azurerm_subnet" "aks-subnet" {



Error: Error: Subnet "myuser-iac-viya4aks-misc-subnet" (Virtual Network "myuser-iac-viya4aks-vnet" / Resource Group "myuser-iac-viya4aks-rg") was not found

  on main.tf line 150, in data "azurerm_subnet" "misc-subnet":
 150: data "azurerm_subnet" "misc-subnet" {



real	9m19.206s
user	0m0.122s
sys	0m0.089s

TASK [Setup storage folders] failing for viya4-deployment docker deployment

TASK [Setup storage folders] ***************************************************
task path: /viya4-deployment/playbooks/playbook.yaml:34
failed: [localhost -> jump_server_ip] (item=bin) => {"ansible_loop_var": "item", "changed": false, "gid": 0, "group": "root", "item": "bin", "mode": "0755", "msg": "chgrp failed: failed to look up group nobody", "owner": "nobody", "path": "/mnt/viya-share/sinmon/bin", "size": 4096, "state": "directory", "uid": 65534}
failed: [localhost -> jump_server_ip] (item=homes) => {"ansible_loop_var": "item", "changed": false, "gid": 0, "group": "root", "item": "homes", "mode": "0755", "msg": "chgrp failed: failed to look up group nobody", "owner": "nobody", "path": "/mnt/viya-share/sinmon/homes", "size": 4096, "state": "directory", "uid": 65534}
failed: [localhost -> jump_server_ip] (item=data) => {"ansible_loop_var": "item", "changed": false, "gid": 0, "group": "root", "item": "data", "mode": "0755", "msg": "chgrp failed: failed to look up group nobody", "owner": "nobody", "path": "/mnt/viya-share/sinmon/data", "size": 4096, "state": "directory", "uid": 65534}
failed: [localhost -> jump_server_ip] (item=astores) => {"ansible_loop_var": "item", "changed": false, "gid": 0, "group": "root", "item": "astores", "mode": "0755", "msg": "chgrp failed: failed to look up group nobody", "owner": "nobody", "path": "/mnt/viya-share/sinmon/astores", "size": 4096, "state": "directory", "uid": 65534}

PLAY RECAP *********************************************************************
localhost : ok=16 changed=3 unreachable=0 failed=1 skipped=9 rescued=0 ignored=0

Documentation: Remove brackets and replace with quotation marks in ENV files

The use of brackets in the environment variable notation at https://github.com/sassoftware/viya4-iac-azure#docker-1 is slightly misleading. Putting brackets around the value will yield error messages when the commands are run later. The Docker section should be updated to match the notation used in the adjacent Terraform section.

Environment variable notation in the below linked sections should also be updated to use quotes instead of brackets:
https://github.com/sassoftware/viya4-iac-azure/blob/main/docs/user/TerraformAzureAuthentication.md#terraform
https://github.com/sassoftware/viya4-iac-azure/blob/main/docs/user/TerraformAzureAuthentication.md#docker
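For example, in an environment file the issue is asking for the bracketed form to be replaced with the quoted form (TF_VAR_client_secret is one of the variables used elsewhere on this page; the value is a placeholder):

# misleading notation currently shown in the Docker section
TF_VAR_client_secret=[service-principal-secret]

# notation requested, matching the Terraform section
TF_VAR_client_secret="service-principal-secret"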

apply tags to netapp objects

The objects in the netapp module need to have the tags from the user's "tags" variable applied to them.
At some sites, the IT department tags newly created resources with default tags that indicate who created the resource. That causes the Terraform state to get out of sync with the resources, so on re-apply the externally added tags show up as diffs.
Pre-emptively applying those tags avoids that issue.
A side effect of the netapp resources changing their state is that the jump host gets recreated. Adding the correct tags here avoids that issue as well.
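A sketch of the kind of change being requested, assuming the netapp module receives the user's tags variable (the resource address is illustrative and other arguments are omitted):

resource "azurerm_netapp_account" "anf" {
  # ... existing arguments ...
  tags = var.tags
}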

Warning: Interpolation-only expressions are deprecated -for subnet_names

Warning: Interpolation-only expressions are deprecated

on main.tf line 134, in module "vnet":
134: "${local.aks_subnet_name}" = ["Microsoft.Sql"],

Terraform 0.11 and earlier required all non-constant expressions to be
provided via interpolation syntax, but this pattern is now deprecated. To
silence this warning, remove the "${ sequence from the start and the }"
sequence from the end of this expression, leaving just the inner expression.

Template interpolation syntax is still used to construct strings from
expressions when the template includes multiple interpolation sequences or a
mixture of literal strings and interpolations. This deprecation applies only
to templates that consist entirely of a single interpolation sequence.

(and one more similar warning elsewhere)
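The fix the warning asks for amounts to dropping the interpolation wrapper on line 134 of main.tf; because the map key is an expression, it is wrapped in parentheses instead (a sketch based on the warning text above):

# before
"${local.aks_subnet_name}" = ["Microsoft.Sql"],

# after
(local.aks_subnet_name) = ["Microsoft.Sql"],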

tfstate file bytes not flushed to nfs mounted directory

When attempting to create k8s cluster resources in azure using DOCKER approach I see tfstate get created by nfsnobody, but it remains a 0 byte file. If I ask it to write out tfstate to a "local filesystem directory" like /tmp/mydir/, it succeeds; so I have a workaround. I'm opening this issue because customers will run into this and either:

  1. at the least our install doc should advise the install person to put tfstate in a non-nfs dir
  2. the docker-dope container could write out tfstate to a local tmpdir in /tmp (or some confirmed local mount) and when done copy it to the dir the user specified. Each deployment IaC script that modifies the tfstate would need to implement this internal workaround.

----- details

cd /r/ge.unx.sas.com/vol/vol620/u62/zahelm/gitrepos/epd-deployment-automation/deploydir-azure-zahelm-01

df -PH .
Filesystem Size Used Avail Use% Mounted on
ge.unx.sas.com:/vol/vol620 3.0T 2.3T 659G 78% /r/ge.unx.sas.com/vol/vol620

export DOPEVERSION=0.3.0
export THISDIR=/r/ge.unx.sas.com/vol/vol620/u62/zahelm/gitrepos/epd-deployment-automation/deploydir-azure-zahelm-01
docker run \
  -e TF_VAR_subscription_id="$SUB_ID" \
  -e TF_VAR_tenant_id="$TENANT_ID" \
  -e TF_VAR_client_id="$SP_APPID" \
  -e TF_VAR_client_secret="$SP_PASSWD" \
  -e ARM_SUBSCRIPTION_ID="$SUB_ID" \
  -e ARM_TENANT_ID="$TENANT_ID" \
  -e ARM_CLIENT_ID="$SP_APPID" \
  -e ARM_CLIENT_SECRET="$SP_PASSWD" \
  -v $THISDIR:/data \
  docker-dope.cyber.sas.com/viya4-iac-azure:$DOPEVERSION \
  apply \
  -auto-approve \
  -var-file /data/terraform.tfvars \
  -state /data/terraform.tfstate

While that docker run is running, I see the following tfstate files in $THISDIR:

-rw-r--r-- 1 nfsnobody nfsnobody 0 Nov 3 12:04 terraform.tfstate
-rw------- 1 nfsnobody nfsnobody 209 Nov 3 12:00 .terraform.tfstate.lock.info

After that docker container is done running, I see 0 bytes written to the tfstate:

-rw-r--r-- 1 nfsnobody nfsnobody 0 Nov 3 12:04 terraform.tfstate

Add dependency for kubernetes_config_map on AKS

When an error occurs on terraform (re)apply or destroy, I get this error:
Error: Get "http://localhost/api/v1/namespaces/kube-system/configmaps/sas-iac-buildinfo": dial tcp [::1]:80: connect: connection refused

Any Kubernetes resources should have a dependency on the K8s cluster resource and be cleaned up accordingly.
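A sketch of the kind of dependency being requested (the resource address for the config map is assumed; module.aks is the cluster module address that appears in the logs elsewhere on this page):

resource "kubernetes_config_map" "sas_iac_buildinfo" {
  depends_on = [module.aks]
  # ... existing arguments ...
}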

Using NFSv3 in Azure StorageAccount

Hi,

Do you have any plans to support (perhaps as a beta) simply using NFSv3 in a storage account? (This is currently in preview.) It would potentially be a more attractive option than using a VM (or NetApp) because of the native Azure features available.

Regards
Thomas

nfs disk creation timeout

In one deployment creation, I got this error:

Error: Error waiting for Virtual Machine "eurmae-tf-nfs-vm" (Resource Group "eurmae-tf-rg") to finish updating Disk "eurmae-tf-nfs-disk03": Code="RetryableError" Message="A retryable error occurred."

  on modules/azurerm_vm/main.tf line 45, in resource "azurerm_virtual_machine_data_disk_attachment" "vm_data_disk_attach":
  45: resource "azurerm_virtual_machine_data_disk_attachment" "vm_data_disk_attach" {

When I did a re-apply, I got this:

Error: A resource with the ID "/subscriptions/85704435-0cf9-4366-bf03-ef93f952145a/resourceGroups/eurmae-tf-rg/providers/Microsoft.Compute/virtualMachines/eurmae-tf-nfs-vm/dataDisks/eurmae-tf-nfs-disk03" already exists - to be managed via Terraform this resource needs to be imported into the State. Please see the resource documentation for "azurerm_virtual_machine_data_disk_attachment" for more information.

  on modules/azurerm_vm/main.tf line 45, in resource "azurerm_virtual_machine_data_disk_attachment" "vm_data_disk_attach":
  45: resource "azurerm_virtual_machine_data_disk_attachment" "vm_data_disk_attach" {

At this point, one can either go ahead and do the "terraform import", or destroy and apply again.
What seems to happen here is that the creation of the disk takes longer than the Terraform timeout. Terraform gives up, but Azure keeps creating the disk, so on the subsequent apply the disk is already there.

One approach may be to increase the Terraform timeout for the disk resources.
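For example, the azurerm provider supports a per-resource timeouts block, so the attachment in modules/azurerm_vm/main.tf could be given a longer create timeout (a sketch; the 30-minute value is arbitrary):

resource "azurerm_virtual_machine_data_disk_attachment" "vm_data_disk_attach" {
  # ... existing arguments ...

  timeouts {
    create = "30m"
  }
}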

Docker execution fails with multiple "data.external.iac_tooling_version is empty tuple" errors

ubuntu@ip-10-249-6-127:~/sas/viya4-iac-azure$ docker run --rm --env-file $HOME/sas/.azure_docker_creds.env -v $HOME/sas/viya4-iac-azure:/workspace viya4-iac-azure plan -var-file /workspace/terraform.tfvars -state /workspace/terraform.tfstate
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

data.template_file.nfs-cloudconfig: Refreshing state...
data.external.git_hash: Refreshing state...
data.template_cloudinit_config.nfs: Refreshing state...
data.azuread_service_principal.sp_client: Refreshing state...
data.azurerm_subscription.current: Refreshing state...


Warning: Interpolation-only expressions are deprecated

on main.tf line 134, in module "vnet":
134: "${local.aks_subnet_name}" = ["Microsoft.Sql"],

Terraform 0.11 and earlier required all non-constant expressions to be
provided via interpolation syntax, but this pattern is now deprecated. To
silence this warning, remove the "${ sequence from the start and the }"
sequence from the end of this expression, leaving just the inner expression.

Template interpolation syntax is still used to construct strings from
expressions when the template includes multiple interpolation sequences or a
mixture of literal strings and interpolations. This deprecation applies only
to templates that consist entirely of a single interpolation sequence.

(and one more similar warning elsewhere)

Error: Invalid index

on main.tf line 396, in data "template_file" "sas_iac_buildinfo":
396: terraform-version = lookup(data.external.iac_tooling_version.0.result, "terraform_version")
|----------------
| data.external.iac_tooling_version is empty tuple

The given key does not identify an element in this collection value.

Error: Invalid index

on main.tf line 397, in data "template_file" "sas_iac_buildinfo":
397: provider-selections = lookup(data.external.iac_tooling_version.0.result, "provider_selections")
|----------------
| data.external.iac_tooling_version is empty tuple

The given key does not identify an element in this collection value.

Error: Invalid index

on main.tf line 398, in data "template_file" "sas_iac_buildinfo":
398: terraform-revision = lookup(data.external.iac_tooling_version.0.result, "terraform_revision")
|----------------
| data.external.iac_tooling_version is empty tuple

The given key does not identify an element in this collection value.

Error: Invalid index

on main.tf line 399, in data "template_file" "sas_iac_buildinfo":
399: terraform-outdated = lookup(data.external.iac_tooling_version.0.result, "terraform_outdated")
|----------------
| data.external.iac_tooling_version is empty tuple

The given key does not identify an element in this collection value.
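
One possible, untested workaround for the "empty tuple" errors is to guard the index so the template still renders when the external data source produced no elements. The sketch below is an assumption about how that could look; the template path is a placeholder, and only two of the four variables are shown.

```hcl
# Hedged sketch: try() falls back to a default value when
# data.external.iac_tooling_version is an empty tuple, avoiding "Invalid index".
data "template_file" "sas_iac_buildinfo" {
  template = file("${path.module}/files/buildinfo.yaml.tmpl")   # placeholder path

  vars = {
    terraform-version  = try(data.external.iac_tooling_version[0].result["terraform_version"], "unknown")
    terraform-revision = try(data.external.iac_tooling_version[0].result["terraform_revision"], "unknown")
  }
}
```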

attribute "max_pod" is required

Using the latest main branch, it seems that the max_pods parameter has to be set explicitly instead of falling back to a default value.
I had to set max_pods explicitly to 110 for each node pool; see the tfvars fragment below.

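A terraform.tfvars fragment like the hedged sketch below shows where max_pods goes. The other attributes are placeholders; the exact set of attributes the node_pools variable requires depends on the project version you are using.

```hcl
# Illustrative tfvars fragment: set max_pods explicitly for a node pool.
node_pools = {
  cas = {
    machine_type = "Standard_E16s_v3"   # placeholder values
    min_nodes    = 1
    max_nodes    = 1
    max_pods     = 110                  # value mentioned in this issue
    node_taints  = ["workload.sas.com/class=cas:NoSchedule"]
    node_labels = {
      "workload.sas.com/class" = "cas"
    }
  }
}
```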

NFS service must be restarted every time the cluster is scaled up

We have set up all retail solutions on Azure through IaC, where we have used:

  • storage type: standard
  • nfs-client as the provisioner

Each time we scale the cluster up in the morning, PVCs are not mounted for most pods and the error below is displayed, even though kubectl get pvc shows them as bound:

"MountVolume.SetUp failed for volume "pvc-36b035c7-e970-4194-98c2-64fc2d632939" : mount failed: exit status 32" - and other pods such as arke/cas/postgres/cpspostgres do not start.

To fix this, every time we have to log in to the NFS VM and restart the NFS service, after which the volumes are mounted.

The stop sequence that we follow is: scale down the Viya 4 deployment, then AKS, then shut down the NFS VM, followed by the jump server VM; we start everything up in the reverse order.

Is there any resolution to this? Most of the time we also have to scale sas-cachelocator and sas-cacheserver down and up again to get them working after the NFS service restart.

Docker container error: failed to execute "git": fatal: not a git repository (or any of the parent directories): .git

I am trying to use the docker container. I was able to build it and run a terraform plan or apply inside the container.
However, the container session ends with this error:

data.external.iac_tooling_version: Refreshing state...
data.external.git_hash: Refreshing state...
data.template_file.nfs-cloudconfig: Refreshing state...
data.template_cloudinit_config.nfs: Refreshing state...
data.azurerm_subscription.current: Refreshing state...
Error: failed to execute "git": fatal: not a git repository (or any of the parent directories): .git

Any ideas how to fix the error?

Creation with internal IP-addresses

A customer requires us to use internal network addresses for all resources in their Azure tenant. They use their Azure tenant as part of their internal network. I cannot find documented variables to control the IP address ranges that will be used for created resources such as:

  • Azure PostgreSQL
  • AKS including nodes (AKS private cluster option?)
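
For reference, the hedged sketch below shows the kind of azurerm settings involved in keeping the endpoints internal: an explicit VNet address space and a private AKS control plane. The names, address range, and sizing are placeholders, and this is not a statement about which variables the project exposes.

```hcl
# Illustrative only: customer-supplied internal address space plus a private AKS cluster.
resource "azurerm_virtual_network" "vnet" {
  name                = "example-vnet"
  resource_group_name = azurerm_resource_group.rg.name   # assumed to exist
  location            = azurerm_resource_group.rg.location
  address_space       = ["10.10.0.0/16"]                 # internal range from the customer
}

resource "azurerm_kubernetes_cluster" "aks" {
  name                    = "example-aks"
  resource_group_name     = azurerm_resource_group.rg.name
  location                = azurerm_resource_group.rg.location
  dns_prefix              = "example"
  private_cluster_enabled = true   # the "AKS private cluster option" mentioned above

  default_node_pool {
    name       = "system"
    vm_size    = "Standard_D4s_v3"
    node_count = 1
  }

  identity {
    type = "SystemAssigned"
  }
}
```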

Add in support for Azure Keyvault

Secrets should be retrieved from and stored in Azure Key Vault.

For this deployment, it would be helpful if an Azure Key Vault were either deployed as an optional resource to handle secrets, or if a reference to an existing vault could be given. (This could potentially be used for storing SAS Viya secrets in workload deployments as well.)

Thanks!
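
A hedged sketch of what an optional Key Vault could look like is below. This is not an existing project feature; the create_key_vault toggle and the postgres_administrator_password variable are hypothetical names used only to illustrate the idea.

```hcl
data "azurerm_client_config" "current" {}

# Illustrative: optionally create a Key Vault and store one secret in it.
resource "azurerm_key_vault" "kv" {
  count               = var.create_key_vault ? 1 : 0    # hypothetical toggle variable
  name                = "example-viya-kv"
  resource_group_name = azurerm_resource_group.rg.name  # assumed to exist
  location            = azurerm_resource_group.rg.location
  tenant_id           = data.azurerm_client_config.current.tenant_id
  sku_name            = "standard"
}

resource "azurerm_key_vault_secret" "postgres_admin_password" {
  count        = var.create_key_vault ? 1 : 0
  name         = "postgres-admin-password"
  value        = var.postgres_administrator_password    # hypothetical variable name
  key_vault_id = azurerm_key_vault.kv[0].id
}
```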

Upgrade provider version to hashicorp/azurerm 2.49.0

We have run into an issue with v2.42, which is the version required in main.tf. It appears to be fixed in v2.49.

There is a discussion about this bug on hashicorp/terraform-provider-azurerm#10292

The error we see is:

Error: Error making Read request on AzureRM Log Analytics workspaces 'mnr-sas-viya-log-dev': azure.BearerAuthorizer#WithAuthorization: Failed to refresh the Token for request to https://management.azure.com/subscriptions/bdd1ead1-0e94-4637-93a4-acea65b73391/resourcegroups/MarketsAndRisk-SAS-Viya-DEV/providers/Microsoft.OperationalInsights/workspaces/mnr-sas-viya-log-dev?api-version=2020-08-01: StatusCode=400 -- Original Error: adal: Refresh request failed. Status Code = '400'. Response body: {"error":{"code":"bad_request_102","message":"Required metadata header not specified"}} Endpoint http://localhost:50342/oauth2/token?api-version=2017-09-01&resource=https%3A%2F%2Fmanagement.azure.com%2F
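
The requested change amounts to raising the provider pin. A sketch of what that looks like in a required_providers block is below; the required_version constraint is illustrative and may differ from the project's actual configuration.

```hcl
terraform {
  required_version = ">= 0.13"   # illustrative constraint

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "2.49.0"   # the version requested in this issue
    }
  }
}
```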

Move resource null check to root module

Some users are hitting this issue -

```
  11:   count                     = var.nsg == null ? 0 : 1

The "count" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the count depends on.
```

There is a recommended approach of performing the null checks in the root module to avoid this - https://github.com/hashicorp/terraform/issues/26755
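
The pattern recommended in that thread is to compute the create/skip decision in the root module from plain input variables (known at plan time) and pass it into the child module, rather than checking a resource attribute inside the module. The sketch below is illustrative; the variable, module, and file names are assumptions, not the project's actual layout.

```hcl
# --- Root module (main.tf): the decision comes from an input variable, so count
# --- is known at plan time.
module "nsg" {
  source              = "./modules/azurerm_network_security_group"
  create              = var.nsg_name == null            # illustrative condition
  resource_group_name = azurerm_resource_group.rg.name  # assumed to exist
  location            = var.location
}

# --- Child module (modules/azurerm_network_security_group/main.tf): count now
# --- depends only on a boolean input, never on another resource's attributes.
variable "create" {
  type = bool
}

variable "resource_group_name" {
  type = string
}

variable "location" {
  type = string
}

resource "azurerm_network_security_group" "nsg" {
  count               = var.create ? 1 : 0
  name                = "example-nsg"
  resource_group_name = var.resource_group_name
  location            = var.location
}
```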

Add tags to Azure Container Registry resource

Please add the tags argument to the ACR resource block so that tags defined in the variables file get added to the resource in Azure. Currently, re-running Terraform marks the ACR as requiring an update, because automatically added tags are targeted for removal.
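
An illustrative sketch of the requested change is below; the registry name and SKU are placeholders, and var.tags is assumed to be the shared tags map from the variables file.

```hcl
# Illustrative: pass the shared tags map through to the ACR resource block.
resource "azurerm_container_registry" "acr" {
  name                = "exampleviyaacr"                 # ACR names must be alphanumeric
  resource_group_name = azurerm_resource_group.rg.name   # assumed to exist
  location            = azurerm_resource_group.rg.location
  sku                 = "Standard"
  tags                = var.tags                         # tags defined in the tfvars file
}
```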

NFS NIC error prevents Vnet/Subnet deletion on AKS updates

On AKS updates, the NFS NIC resource does not allow the VNet to be updated:

module.aks.azurerm_kubernetes_cluster.aks: Still destroying... [id=/subscriptions/XXXX--...nerService/managedClusters/mnm-tst-aks, 4m0s elapsed]
module.aks.azurerm_kubernetes_cluster.aks: Destruction complete after 4m3s
module.vnet.azurerm_virtual_network.vnet: Destroying... [id=/subscriptions/XXXX/resourceGroups/mnm-tst-rg/providers/Microsoft.Network/virtualNetworks/mnm-tst-vnet]

Error: Error deleting Virtual Network "mnm-tst-vnet" (Resource Group "mnm-tst-rg"): network.VirtualNetworksClient#Delete: Failure sending request: StatusCode=400 -- Original Error: Code="InUseSubnetCannotBeDeleted" Message="Subnet mnm-tst-misc-subnet is in use by /subscriptions/XXXX/resourceGroups/mnm-tst-rg/providers/Microsoft.Network/networkInterfaces/mnm-tst-nfs-nic/ipConfigurations/mnm-tst-nfs-ip_config and cannot be deleted. In order to delete the subnet, delete all the resources within the subnet. See aka.ms/deletesubnet." Details=[]
