azure / k8s-deploy

GitHub Action for deploying to Kubernetes clusters

License: MIT License

k8s-deploy's Introduction

Deploy manifests action for Kubernetes

This action is used to deploy manifests to Kubernetes clusters. It requires that the cluster context be set earlier in the workflow by using either the Azure/aks-set-context action or the Azure/k8s-set-context action. It also requires Kubectl to be installed (you can use the Azure/setup-kubectl action).
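
For example, a minimal sketch of the prerequisite step order (the kubeconfig secret name is illustrative):

- uses: azure/setup-kubectl@v4

- uses: azure/k8s-set-context@v4
  with:
     kubeconfig: ${{ secrets.KUBE_CONFIG }}

- uses: Azure/k8s-deploy@v5
  with:
     manifests: |
        manifests/deployment.yaml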

If you are looking to automate your workflows to deploy to Azure Web Apps or Azure Web App for Containers, consider using the Azure/webapps-deploy action.

This action requires the following permissions from your workflow:

permissions:
   id-token: write
   contents: read
   actions: read

Action capabilities

The following are the key capabilities of this action:

  • Artifact substitution: Takes a list of container images which can be specified along with their tags or digests. They are substituted into the non-templatized version of manifest files before applying to the cluster to ensure that the right version of the image is pulled by the cluster nodes.

  • Object stability checks: Rollout status is checked for the Kubernetes objects deployed. This is done to incorporate stability checks while computing the action status as success/failure.

  • Secret handling: The secret names specified as inputs in the action are used to augment the input manifest files with imagePullSecrets values before deploying to the cluster. Also, check out the Azure/k8s-create-secret action for creating generic or docker-registry secrets in the cluster.

  • Deployment strategy: Supports both canary and blue-green deployment strategies

    • Canary strategy: Workloads suffixed with '-baseline' and '-canary' are created. There are two methods of traffic splitting supported:

      • Service Mesh Interface: The Service Mesh Interface abstraction allows for plug-and-play configuration with service mesh providers such as Linkerd and Istio, while this action takes away the hard work of mapping SMI's TrafficSplit objects to the stable, baseline, and canary services during the lifecycle of the deployment strategy. Service mesh based canary deployments using this action are more accurate, as service mesh providers enable granular percentage traffic splits (via the service registry and sidecar containers injected into pods alongside application containers).
      • Only Kubernetes (no service mesh): In the absence of a service mesh, an exact percentage split at the request level may not be possible, but it is still possible to perform canary deployments by deploying '-baseline' and '-canary' workload variants next to the stable variant. The service routes requests to pods of all three workload variants as the selector-label constraints are met (this action honors these labels when creating the '-baseline' and '-canary' variants). This achieves the intended effect of routing only a portion of total requests to the canary (see the Service sketch after this list).
    • Blue-Green strategy: Choosing blue-green strategy with this action leads to creation of workloads suffixed with '-green'. An identified service is one that is supplied as part of the input manifest(s) and targets a workload in the supplied manifest(s). There are three route-methods supported in the action:

      • Service route-method: Identified services are configured to target the green deployments.
      • Ingress route-method: Along with deployments, new services are created with '-green' suffix (for identified services), and the ingresses are in turn updated to target the new services.
      • SMI route-method: A new TrafficSplit object is created for each identified service. The TrafficSplit object is updated to target the new deployments. This works only if SMI is set up in the cluster.

      Traffic is routed to the new workloads only after the time provided via the version-switch-buffer input has passed. The promote action creates workloads and services with the new configuration but without any suffix, while reject routes traffic back to the old workloads and deletes the '-green' workloads.
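
A minimal sketch of the label mechanics in the no-service-mesh canary case (all names are illustrative): a single Service whose selector matches a label shared by the stable, -baseline, and -canary pods routes traffic across all three variants, so replica counts determine the approximate split.

apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  selector:
    app: hello-world # label also carried by pods of hello-world-baseline and hello-world-canary
  ports:
    - port: 80
      targetPort: 8080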

Action inputs

action

(Required)
Acceptable values: deploy/promote/reject.
Promote or reject actions are used to promote or reject canary/blue-green deployments. Sample YAML snippets are provided below for guidance.
manifests

(Required)
Path to the manifest files to be used for deployment. These can also be directories containing manifest files (in which case all manifest files in the referenced directory, at every depth, are deployed) or URLs to manifest files (like https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/controllers/nginx-deployment.yaml). Files and URLs not ending in .yml or .yaml are ignored.
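
For example (local paths are illustrative), files, directories, and URLs can be mixed:

manifests: |
   manifests/deployment.yaml
   dir/manifestsDirectory
   https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/controllers/nginx-deployment.yaml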
strategy

(Required)
Acceptable values: basic/canary/blue-green.
Default value: basic
Deployment strategy to be used while applying manifest files on the cluster.
basic - Template is force applied to all pods when deploying to cluster. NOTE: Can only be used with action == deploy
canary - Canary deployment strategy is used when deploying to the cluster.
blue-green - Blue-Green deployment strategy is used when deploying to cluster.
namespace

(Optional)
Namespace within the cluster to deploy to.
images

(Optional)
Fully qualified resource URL of the image(s) to be used for substitutions on the manifest files. This multiline input accepts specifying multiple artifact substitutions in newline separated form. For example:

images: |
  contosodemo.azurecr.io/foo:test1
  contosodemo.azurecr.io/bar:test2

In this example, all references to contosodemo.azurecr.io/foo and contosodemo.azurecr.io/bar are searched for in the image field of the input manifest files. For the matches found, the tags test1 and test2 are substituted.
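
As a sketch of the substitution, with the inputs above a manifest line such as

image: contosodemo.azurecr.io/foo

is rewritten before applying to the cluster to

image: contosodemo.azurecr.io/foo:test1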
imagepullsecrets

(Optional)
Multiline input where each line contains the name of a docker-registry secret that has already been set up within the cluster. Each of these secret names is added to the imagePullSecrets field for the workloads found in the input manifest files.
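
For example (the secret name is illustrative), specifying

imagepullsecrets: |
   demo-k8s-secret

augments each workload's pod template before apply roughly as follows:

spec:
   imagePullSecrets:
      - name: demo-k8s-secret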
pull-images

(Optional)
Acceptable values: true/false
Default value: true
Switch controlling whether to pull the images from the registry before deployment, in order to find the Dockerfile's path and add it to the annotations.
traffic-split-method

(Optional)
Acceptable values: pod/smi.
Default value: pod
SMI: Percentage traffic split is done at the request level using a service mesh. The service mesh has to be set up by the cluster admin. Orchestration of SMI TrafficSplit objects is handled by this action.
Pod: A percentage split is not possible at the request level in the absence of a service mesh. The percentage input is used to calculate the replicas for baseline and canary as a percentage of the replicas specified in the input manifests for the stable variant.
traffic-split-annotations

(Optional)
Annotations in the form of key/value pairs to be added to the TrafficSplit objects.
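
A sketch, assuming newline-separated key: value pairs (both the annotation key and value shown are illustrative):

traffic-split-annotations: |
   mesh.example.com/gateway: my-gateway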
percentage

(Optional but required if strategy is canary)
Used to compute the number of replicas of the '-baseline' and '-canary' variants of the workloads found in the manifest files. For the specified percentage input, if (percentage * numberOfDesiredReplicas)/100 is not a round number, the floor of this number is used when creating the '-baseline' and '-canary' variants.

For example, if the Deployment hello-world was found in the input manifest file with 'replicas: 4', and 'strategy: canary' and 'percentage: 25' are given as inputs to the action, then the Deployments hello-world-baseline and hello-world-canary are created with 1 replica each. The '-baseline' variant is created with the same image and tag as the stable version (the 4-replica variant prior to deployment), while the '-canary' variant is created with the image and tag corresponding to the new changes being deployed.
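
A compact restatement of the arithmetic from the example above:

strategy: canary
percentage: 25
# stable Deployment has replicas: 4
# floor((25 * 4) / 100) = 1 replica each for hello-world-baseline and hello-world-canary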
baseline-and-canary-replicas

(Optional and relevant only if strategy is canary and traffic-split-method is smi)
The number of baseline and canary replicas. Because the percentage traffic split is controlled in the service mesh plane, the actual number of replicas for the canary and baseline variants can be controlled independently of the traffic split. For example, assume that the input Deployment manifest desired 30 replicas for the stable variant and that the following inputs were specified for the action:

    strategy: canary
    traffic-split-method: smi
    percentage: 20
    baseline-and-canary-replicas: 1


In this case, the stable variant receives 80% of the traffic while the baseline and canary variants receive 10% each (the 20% split equally between baseline and canary). However, instead of creating baseline and canary with 3 replicas each, the explicit count of baseline and canary replicas is honored; that is, only 1 replica each is created for the baseline and canary variants.
route-method

(Optional and relevant only if strategy is blue-green)
Acceptable values: service/ingress/smi.
Default value: service.
Traffic is routed based on this input.
Service: Service selector labels are updated to target '-green' workloads.
Ingress: Ingress backends are updated to target the new '-green' services which in turn target '-green' deployments.
SMI: A TrafficSplit object is created for each required service to route traffic to new workloads.
version-switch-buffer

(Optional and relevant only if strategy is blue-green)
Acceptable values: 1-300.
Default value: 0.
Waits for the given input in minutes before routing traffic to '-green' workloads.
private-cluster

(Optional and relevant only when deploying to a cluster with private cluster enabled)
Acceptable values: true, false
Default value: false.
force

(Optional)
Deploy when a previous deployment already exists. If true, the '--force' argument is added to the apply command. Using the '--force' argument is not recommended in production.
annotate-resources

(Optional)
Acceptable values: true/false
Default value: true
Switch whether to annotate the resources or not. If set to false all annotations are skipped completely.
annotate-namespace

(Optional)
Acceptable values: true/false
Default value: true
Switch whether to annotate the namespace object or not. Ignored when annotate-resources is set to false.
skip-tls-verify

(Optional)
Acceptable values: true/false
Default value: false
True if the insecure-skip-tls-verify option should be used

Usage Examples

Basic deployment (without any deployment strategy)

- uses: Azure/k8s-deploy@v5
  with:
     namespace: 'myapp'
     manifests: |
        dir/manifestsDirectory
     images: 'contoso.azurecr.io/myapp:${{ github.run_id }}'
     imagepullsecrets: |
        image-pull-secret1
        image-pull-secret2

Private cluster deployment

- uses: Azure/k8s-deploy@v5
  with:
     resource-group: yourResourceGroup
     name: yourClusterName
     action: deploy
     strategy: basic
     private-cluster: true
     manifests: |
        manifests/azure-vote-backend-deployment.yaml
        manifests/azure-vote-backend-service.yaml
        manifests/azure-vote-frontend-deployment.yaml
        manifests/azure-vote-frontend-service.yaml
     images: |
        registry.azurecr.io/containername

Canary deployment without service mesh

- uses: Azure/k8s-deploy@v5
  with:
     namespace: 'myapp'
     images: 'contoso.azurecr.io/myapp:${{ github.run_id }}'
     imagepullsecrets: |
        image-pull-secret1
        image-pull-secret2
     manifests: |
        deployment.yaml
        service.yaml
        dir/manifestsDirectory
     strategy: canary
     action: deploy
     percentage: 20

To promote/reject the canary created by the above snippet, the following YAML snippet could be used:

- uses: Azure/k8s-deploy@v5
  with:
     namespace: 'myapp'
     images: 'contoso.azurecr.io/myapp:${{ github.run_id }}'
     imagepullsecrets: |
        image-pull-secret1
        image-pull-secret2
     manifests: |
        deployment.yaml
        service.yaml
        dir/manifestsDirectory
     strategy: canary
     action: promote # substitute reject if you want to reject

Canary deployment based on Service Mesh Interface

- uses: Azure/k8s-deploy@v5
  with:
     namespace: 'myapp'
     images: 'contoso.azurecr.io/myapp:${{ github.run_id }}'
     imagepullsecrets: |
        image-pull-secret1
        image-pull-secret2
     manifests: |
        deployment.yaml
        service.yaml
        dir/manifestsDirectory
     strategy: canary
     action: deploy
     traffic-split-method: smi
     percentage: 20
     baseline-and-canary-replicas: 1

To promote/reject the canary created by the above snippet, the following YAML snippet could be used:

- uses: Azure/k8s-deploy@v5
  with:
     namespace: 'myapp'
     images: 'contoso.azurecr.io/myapp:${{ github.run_id }}'
     imagepullsecrets: |
        image-pull-secret1
        image-pull-secret2
     manifests: |
        deployment.yaml
        service.yaml
        dir/manifestsDirectory
     strategy: canary
     traffic-split-method: smi
     action: reject # substitute promote if you want to promote

Blue-Green deployment with different route methods

- uses: Azure/k8s-deploy@v5
  with:
     namespace: 'myapp'
     images: 'contoso.azurecr.io/myapp:${{ github.run_id }}'
     imagepullsecrets: |
        image-pull-secret1
        image-pull-secret2
     manifests: |
        deployment.yaml
        service.yaml
        ingress.yml
     strategy: blue-green
     action: deploy
     route-method: ingress # substitute with service/smi as per need
     version-switch-buffer: 15

To promote/reject the green workload created by the above snippet, the following YAML snippet could be used:

- uses: Azure/k8s-deploy@v5
  with:
     namespace: 'myapp'
     images: 'contoso.azurecr.io/myapp:${{ github.run_id }}'
     imagepullsecrets: |
        image-pull-secret1
        image-pull-secret2
     manifests: |
        deployment.yaml
        service.yaml
        ingress.yml
     strategy: blue-green
     route-method: ingress # should be the same as the value when action was deploy
     action: promote # substitute reject if you want to reject

End to end workflows

The following examples show how this action can be used together with other container- and Kubernetes-related actions to build images and deploy objects onto Kubernetes clusters:

Build container image and deploy to Azure Kubernetes Service cluster

on: [push]

jobs:
   build:
      runs-on: ubuntu-latest
      steps:
         - uses: actions/checkout@v4

         - uses: Azure/docker-login@v1
           with:
              login-server: contoso.azurecr.io
              username: ${{ secrets.REGISTRY_USERNAME }}
              password: ${{ secrets.REGISTRY_PASSWORD }}

         - run: |
              docker build . -t contoso.azurecr.io/k8sdemo:${{ github.sha }}
              docker push contoso.azurecr.io/k8sdemo:${{ github.sha }}

         - uses: azure/setup-kubectl@v4

         # Set the target AKS cluster.
         - uses: Azure/aks-set-context@v4
           with:
              creds: '${{ secrets.AZURE_CREDENTIALS }}'
              cluster-name: contoso
              resource-group: contoso-rg

         - uses: Azure/k8s-create-secret@v4
           with:
              container-registry-url: contoso.azurecr.io
              container-registry-username: ${{ secrets.REGISTRY_USERNAME }}
              container-registry-password: ${{ secrets.REGISTRY_PASSWORD }}
              secret-name: demo-k8s-secret

         - uses: Azure/k8s-deploy@v5
           with:
              action: deploy
              manifests: |
                 manifests/deployment.yml
                 manifests/service.yml
              images: |
                 contoso.azurecr.io/k8sdemo:${{ github.sha }}
              imagepullsecrets: |
                 demo-k8s-secret

Build container image and deploy to any Kubernetes cluster

on: [push]

jobs:
   build:
      runs-on: ubuntu-latest
      steps:
         - uses: actions/checkout@v4

         - uses: Azure/docker-login@v1
           with:
              login-server: contoso.azurecr.io
              username: ${{ secrets.REGISTRY_USERNAME }}
              password: ${{ secrets.REGISTRY_PASSWORD }}

         - run: |
              docker build . -t contoso.azurecr.io/k8sdemo:${{ github.sha }}
              docker push contoso.azurecr.io/k8sdemo:${{ github.sha }}

         - uses: azure/setup-kubectl@v4

         - uses: Azure/k8s-set-context@v4
           with:
              kubeconfig: ${{ secrets.KUBE_CONFIG }}

         - uses: Azure/k8s-create-secret@v4
           with:
              container-registry-url: contoso.azurecr.io
              container-registry-username: ${{ secrets.REGISTRY_USERNAME }}
              container-registry-password: ${{ secrets.REGISTRY_PASSWORD }}
              secret-name: demo-k8s-secret

         - uses: Azure/k8s-deploy@v5
           with:
              action: deploy
              manifests: |
                 manifests/deployment.yml
                 manifests/service.yml
              images: |
                 contoso.azurecr.io/k8sdemo:${{ github.sha }}
              imagepullsecrets: |
                 demo-k8s-secret

Build image and add dockerfile-path label to it

We can use this image in other workflows once built.

on: [push]
env:
   NAMESPACE: demo-ns2

jobs:
   build:
      runs-on: ubuntu-latest
      steps:
         - uses: actions/checkout@v4

         - uses: Azure/docker-login@v1
           with:
              login-server: contoso.azurecr.io
              username: ${{ secrets.REGISTRY_USERNAME }}
              password: ${{ secrets.REGISTRY_PASSWORD }}

         - run: |
              docker build . -t contoso.azurecr.io/k8sdemo:${{ github.sha }} --label dockerfile-path=https://github.com/${{github.repository}}/blob/${{github.sha}}/Dockerfile
              docker push contoso.azurecr.io/k8sdemo:${{ github.sha }}

Use the bake action to generate manifests and deploy to a Kubernetes cluster

on: [push]
env:
   NAMESPACE: demo-ns2

jobs:
   deploy:
      runs-on: ubuntu-latest
      steps:
         - uses: actions/checkout@v4

         - uses: Azure/docker-login@v1
           with:
              login-server: contoso.azurecr.io
              username: ${{ secrets.REGISTRY_USERNAME }}
              password: ${{ secrets.REGISTRY_PASSWORD }}

         - uses: azure/setup-kubectl@v4

         # Set the target AKS cluster.
         - uses: Azure/aks-set-context@v4
           with:
              creds: '${{ secrets.AZURE_CREDENTIALS }}'
              cluster-name: contoso
              resource-group: contoso-rg

         - uses: Azure/k8s-create-secret@v4
           with:
              namespace: ${{ env.NAMESPACE  }}
              container-registry-url: contoso.azurecr.io
              container-registry-username: ${{ secrets.REGISTRY_USERNAME }}
              container-registry-password: ${{ secrets.REGISTRY_PASSWORD }}
              secret-name: demo-k8s-secret

         - uses: azure/k8s-bake@v3
           with:
              renderEngine: 'helm'
              helmChart: './aks-helloworld/'
              overrideFiles: './aks-helloworld/values-override.yaml'
              overrides: |
                 replicas:2
              helm-version: 'latest'
           id: bake

         - uses: Azure/k8s-deploy@v5
           with:
              action: deploy
              manifests: ${{ steps.bake.outputs.manifestsBundle }}
              images: |
                 contoso.azurecr.io/k8sdemo:${{ github.sha }}
              imagepullsecrets: |
                 demo-k8s-secret

Traceability Fields Support

  • Environment variable HELM_CHART_PATHS is a list of Helm chart files expected by k8s-deploy - it is populated automatically if you are using k8s-bake to generate the manifests.
  • Use a script to build the image and add the dockerfile-path label to it. The value expected is the link to the Dockerfile: https://github.com/${{github.repository}}/blob/${{github.sha}}/Dockerfile. If your Dockerfile is in the same repo and branch where the workflow is run, it can be a relative path and it will be converted to a link for traceability.
  • Run docker login action for each image registry - in case image build and image deploy are two distinct jobs in the same or separate workflows.

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

Support

k8s-deploy is an open source project that is not covered by the Microsoft Azure support policy. Please search open issues here, and if your issue isn't already represented please open a new one. The project maintainers will respond to the best of their abilities.

k8s-deploy's People

Contributors

ajinkya599, davidgamero, dependabot[bot], foxboron, ganeshrockz, hsubramanianaks, jaiveerk, josh-01, koushdey, microsoftopensource, mklarsen, msftgits, n-usha, nv35, oddsund, olivermking, otetard, parroty, punkeel, rgsubh, roehrijn, shashankbarsin, shigupt202, sundargs2000, tauhid621, tbarnes94, thesattiraju, thomastvedt, vidya2606, zainuvk

k8s-deploy's Issues

Deployment/update error map: map[] does not contain declared merge key: name

When I deploy the first time with k8s-deploy everything works fine.
But when I deploy the second time I get the following error:

error calculating patch from openapi spec: map: map[] does not contain declared merge key: name

Strangely, when I deploy locally the first time with kubectl and then update, everything works fine.
But once k8s-deploy has deployed, I cannot update even with local kubectl; I get the same error.

Any ideas what this could be?

This is the deployment yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
  labels:
    app: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  strategy:
    type: RollingUpdate
  template:
    metadata:
      name: test
      labels:
        app: test
    spec:
      securityContext:
        runAsGroup: 33
        fsGroup: 33
      volumes:
        - name: nfs-bucket
          persistentVolumeClaim:
            claimName: nfs-bucket
      containers:
        - name: test
          image: niek/test:develop
          imagePullPolicy: Always
          volumeMounts:
            - name: nfs-bucket
              mountPath: "/var/www/bucket"
          envFrom:
            - secretRef:
                name: test-secret
          ports:
            - name: 80tcp01
              containerPort: 80
              protocol: TCP
          stdin: true
          tty: true
      imagePullSecrets:
        - name: dockerhub

Job is stuck

The Azure/k8s-deploy@v1 step doesn't terminate for some reason. The deployment is a success, but the action keeps running until manual termination.

step config:

- name: Deploy to aks
   uses: Azure/k8s-deploy@v1
   with:
     namespace: 'dev'
     manifests: |
        manifests/artifact-vars.yaml
        manifests/artifact-deploy.yaml
        manifests/artifact-svc.yaml
     images: |
       somereg.azurecr.io/artifact:${{ github.sha }}
     imagepullsecrets: |
       k8s-secret
     kubectl-version: 'latest'

Outputs:

   manifests/artifact-deploy.yaml
   manifests/artifact-svc.yaml
  
    images: ***/artifact:4d85xxxxxxx
  
    imagepullsecrets: k8s-secret
  

    kubectl-version: latest
    strategy: none
    traffic-split-method: pod
    baseline-and-canary-replicas: 0
    percentage: 0
    action: deploy
    force: false
  env:
    JAVA_HOME_13.0.4_x64: /opt/hostedtoolcache/jdk/13.0.4/x64
    JAVA_HOME: /opt/hostedtoolcache/jdk/13.0.4/x64
    JAVA_HOME_13_0_4_X64: /opt/hostedtoolcache/jdk/13.0.4/x64
    KUBECONFIG: /home/runner/work/_temp/kubeconfig_1595618095209
    DOCKER_CONFIG: /home/runner/work/_temp/docker_login_1595618099497
strategy:  none
/opt/hostedtoolcache/kubectl/1.18.6/x64/kubectl apply -f /tmp/ConfigMap_games-vars_1595618147017,/tmp/Deployment_artifact-pod-dev_1595618147017,/tmp/Service_svc-artifact_1595618147017 --namespace dev
configmap/artifact-vars unchanged
deployment.apps/artifact-pod-dev created
service/svc-artifact unchanged
/opt/hostedtoolcache/kubectl/1.18.6/x64/kubectl rollout status Deployment/artifact-pod-dev --namespace dev
##[error]The operation was canceled.

AKS version: 1.17.7

It doesn't matter how long we wait; the step just doesn't terminate. I can supply more logs if needed.

Possibility to specify environment variables to be replaced in the container

As we already have the option to change the container image (images),
it would be nice to have an option to change the environment variable values for containers.

Example of a YAML for Kubernetes

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deploy
spec:
  selector:
    matchLabels:
      app: web
  replicas: 2
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: mycontainer
          image: myregistry/mycontainer:v1
          ports:
            - containerPort: 80
          env:
            - name: MY_CONTAINER_ENV
              value: X

Example of a Github Action:

      - uses: Azure/k8s-deploy@v1
        with:
          manifests: |
            kubernetes.yaml
          env: |
            MY_CONTAINER_ENV: "${{ secrets.MY_CONTAINER_ENV }}"

Variable as Namespace

Maybe I'm missing something simple, but I cannot figure this out for the life of me. I'm trying to set the namespace to dev if the branch is not "master" and I have the branch detection working right now. But it looks like k8s-deploy will not take a variable as the namespace name. Maybe it's a syntax error I'm missing? Here is my yaml:

on:
  push:
    branches:
    - dev
    - master

name: CI/CD Workflow

jobs:
    build-and-deploy:
        runs-on: ubuntu-latest
        steps:
        # checkout the repo
        - name: 'Checkout GitHub Action'
          uses: actions/checkout@master
          
        - name: 'Login via Azure CLI'
          uses: azure/login@v1
          with:
            creds: ${{ secrets.AZURE_CREDENTIALS }}
        
        - name: 'Build and push image to ACR'
          uses: azure/docker-login@v1
          with:
            login-server: ${{ secrets.REGISTRY_LOGIN_SERVER }}
            username: ${{ secrets.REGISTRY_USERNAME }}
            password: ${{ secrets.REGISTRY_PASSWORD }}
        - run: |
            docker build . -t ${{ secrets.REGISTRY_LOGIN_SERVER }}/revio-calendar-backend:${{ github.sha }}
            docker push ${{ secrets.REGISTRY_LOGIN_SERVER }}/revio-calendar-backend:${{ github.sha }}

        - name: 'Set context for AKS'
          uses: Azure/aks-set-context@v1
          with:
            creds: ${{ secrets.AZURE_CREDENTIALS }}
            cluster-name: ${{ secrets.CLUSTER_NAME }}
            resource-group: ${{ secrets.RESOURCE_GROUP }}

        - name: 'Create AKS Secret'
          uses: Azure/k8s-create-secret@v1
          with:
            container-registry-url: ${{ secrets.REGISTRY_LOGIN_SERVER }}
            container-registry-username: ${{ secrets.REGISTRY_USERNAME }}
            container-registry-password: ${{ secrets.REGISTRY_PASSWORD }}
            secret-name: c1-k8s-secret

        - name: 'Set Dev Branch'
          env:
            NAMESPACE: dev
          run: echo "$NAMESPACE"
            
        - name: 'Set Master Branch'
          if: endsWith(github.ref, '/master')
          env:
            NAMESPACE: prod
          run: echo "$NAMESPACE"
    
        - name: 'Deploy to Kubernetes'
          uses: Azure/k8s-deploy@v1
          with:
            namespace: $NAMESPACE
            manifests: |
              manifests/deployment.yml
              manifests/service.yml
            images: |
              ${{ secrets.REGISTRY_LOGIN_SERVER }}/revio-calendar-backend:${{ github.sha }}
            imagepullsecrets: |
              c1-k8s-secret

I succeed on all steps but the last, which says it cannot find a namespace named "$NAMESPACE":

Error from server (NotFound): error when creating "/tmp/Service_revio-calendar-backend_1592439878791": namespaces "$NAMESPACE" not found

Is there a way to use a variable to determine the namespace so I can avoid one yaml for master and one for other branches?
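
Action inputs are not expanded by the shell, so $NAMESPACE is passed through literally. A hedged sketch of a working form (mirroring the step names above) keeps a job-level env default of NAMESPACE: dev, overrides it via $GITHUB_ENV, and passes it with expression syntax:

        - name: 'Set Master Branch'
          if: endsWith(github.ref, '/master')
          run: echo "NAMESPACE=prod" >> $GITHUB_ENV

        - name: 'Deploy to Kubernetes'
          uses: Azure/k8s-deploy@v1
          with:
            namespace: ${{ env.NAMESPACE }}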

Where should I put the manifest file?

A stupid question:

Where should I put the manifest file when running a GitHub Action with azure/k8s-deploy@v1?

I get the error below no matter whether I put the manifests folder in the root of the repo, the .github folder, etc.

Error: ENOENT: no such file or directory, open 'manifests/deployment.yaml'

Canary SMI strategy generates invalid TrafficSplit resource

When the task is used to do a canary deployment, it makes use of SMI (Service Mesh Interface) for doing the TrafficSplit to manage which traffic goes to the stable vs. canary version of the service. As part of that, it automatically fetches the SMI custom resource version currently deployed so it can generate its manifest.

The manifest it generates uses XXXXm format weights. That is, you might get a manifest like this:

apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: my-service-azure-pipelines-rollout
  namespace: my-ns
spec:
  backends:
  - service: my-service-stable
    weight: 1000m
  - service: my-service-baseline
    weight: 0m
  - service: my-service-canary
    weight: 0m
  service: my-service

Unfortunately, while the XXXXm format is noted in the very first version of the TrafficSplit spec, as of version 2 of the spec it was removed. The official "SMI SDK for Go" has a detailed custom resource definition and it validates the weight as a number. Further, the current SMI adapter for Istio uses that SDK so it's entirely failing to read and validate TrafficSplit resources generated during canary.

The simplest solution is to stop suffixing m on the weights. The weights being whole/relative numbers or percentages is compatible with all versions of the spec. A correct TrafficSplit should look like this:

apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: my-service-azure-pipelines-rollout
  namespace: my-ns
spec:
  backends:
  - service: my-service-stable
    weight: 1000
  - service: my-service-baseline
    weight: 0
  - service: my-service-canary
    weight: 0
  service: my-service

The SMI Adapter for Istio generates logs like this to reflect that issue:

E0902 14:45:23.035319       1 reflector.go:134] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:126: Failed to list *v1alpha2.TrafficSplit: v1alpha2.TrafficSplitList.Items: []v1alpha2.TrafficSplit: v1alpha2.TrafficSplit.Spec: v1alpha2.TrafficSplitSpec.Backends: []v1alpha2.TrafficSplitBackend: v1alpha2.TrafficSplitBackend.Weight: readUint64: unexpected character: �, error found in #10 byte of ...|"weight":"1000m"},{"|..., bigger context ...|":[{"service":"accounts-service-stable","weight":"1000m"},{"service":"accounts-service-baseline","we|...
E0902 14:45:24.038301       1 reflector.go:134] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:126: Failed to list *v1alpha2.TrafficSplit: v1alpha2.TrafficSplitList.Items: []v1alpha2.TrafficSplit: v1alpha2.TrafficSplit.Spec: v1alpha2.TrafficSplitSpec.Backends: []v1alpha2.TrafficSplitBackend: v1alpha2.TrafficSplitBackend.Weight: readUint64: unexpected character: �, error found in #10 byte of ...|"weight":"1000m"},{"|..., bigger context ...|":[{"service":"products-service-stable","weight":"1000m"},{"service":"products-service-baseline","we|...
E0902 14:45:25.042071       1 reflector.go:134] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:126: Failed to list *v1alpha2.TrafficSplit: v1alpha2.TrafficSplitList.Items: []v1alpha2.TrafficSplit: v1alpha2.TrafficSplit.Spec: v1alpha2.TrafficSplitSpec.Backends: []v1alpha2.TrafficSplitBackend: v1alpha2.TrafficSplitBackend.Weight: readUint64: unexpected character: �, error found in #10 byte of ...|"weight":"1000m"},{"|..., bigger context ...|":[{"service":"accounts-service-stable","weight":"1000m"},{"service":"accounts-service-baseline","we|...

I'm guessing this logic came from the original KubernetesManifest@V0 Azure DevOps task, which is actually where I discovered it. I've filed a corresponding issue there. We're on AzDO right now but will shortly be moving to GitHub Actions (a few months?) and it'd be cool to see it fixed in both places.

Kubectl: wrong architecture being downloaded

The amd64 version of kubectl gets installed instead of the arm64 version when running on a self-hosted runner.

Repro steps:

  • Self-hosted action controller running on Raspberry Pi 4 Model B
  • Ubuntu 21.04
  • Running uname -m in the same action returns the value aarch64, validating the system's architecture.

Current behavior:

  • amd64 version of kubectl gets installed instead of the arm64.
  • kubectl execution errors

Expected behavior:

  • arm64 version of kubectl should be installed on an arm64 architecture.
  • No execution error when running kubectl

Add support for TrafficSplit annotations

When SMI traffic handling is selected, the TrafficSplit that gets generated (both for blue-green and canary) is pretty hard-coded, with only weights and service names.

At least with Istio, this will only work for services accessed from within the cluster. To get this working with ingress traffic, the Istio VirtualService needs to have a gateway attached.

The Istio SMI adapter has support for setting this through annotations but there's no way to add annotations to the generated TrafficSplit.

It would be cool to be able to add annotations to the TrafficSplit to support cases like this. It could be a key/value list parameter to the task and just copied into the TrafficSplit when it's generated. If none are present, you get the same thing you get now. (Admittedly, I'm kind of new to GitHub Actions and I'm unclear if it'd need to be a multiline string or if you can have object style parameters like Azure DevOps. Either way is fine.)

Corresponding issue filed for Azure DevOps KubernetesManifest task.

Deploy fails with "error: must specify one of -f and -k"

Hey guys,

I'm having trouble making this simple example work:

    - name: Deploy extra manifests into K8s
      uses: azure/k8s-deploy@v1
      with:
        namespace: '${{env.RELEASE_NAME}}'
        manifests: manifests/prepared.yaml

The error in the action log is:

Run azure/k8s-deploy@v1
strategy:  none
/usr/bin/kubectl apply -f  --namespace letsencrypt-wildcard-cert
error: must specify one of -f and -k
##[error]Error: error: must specify one of -f and -k

What am I doing wrong? :)

D

using `azure/k8s-deploy@v1` errors

configuration:

      - uses: azure/k8s-deploy@v1
        with:
          manifests: |
            deploy/deployment.yaml
          namespace: 'bpowell'
          images: |
            myapp.azurecr.io/kube-notify:${{ steps.get_image_tag.outputs.image_tag }}

the deployment.yaml contents:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-notify-deployment
spec:
  selector:
    matchLabels:
      app: kube-notify
  replicas: 1
  template:
    metadata:
      labels:
        app: kube-notify
    spec:
      containers:
      - name: kube-notify
        image: myapp.azurecr.io/kube-notify

error message:

Run azure/k8s-deploy@v1
  with:
    manifests: deploy/deployment.yaml
  
    namespace: bpowell
    images: ***.azurecr.io/kube-notify:20200416-35db60a
  
    strategy: none
    traffic-split-method: pod
    baseline-and-canary-replicas: 0
    percentage: 0
    action: deploy
  env:
    KUBECONFIG: /home/runner/work/_temp/kubeconfig_******
    AZURE_HTTP_USER_AGENT: 
strategy:  none
##[error]TypeError: Cannot read property 'trim' of undefined

Add the ability to turn off annotations for one or more resource types

We have some shared namespaces, and the user provided to the action doesn't have permissions to modify the namespace. This causes each job run to show the warning:

"Error from server (Forbidden): namespaces <namespace> is forbidden: User <user> cannot patch resource "namespaces" in API group "" "

It would be great if I could provide a list of Kubernetes resource types (e.g. "namespaces") to skip being annotated.

Support running in an in-cluster self-hosted worker without separately setting context

Using a solution like summerwind/actions-runner-controller, it is possible to run a self-hosted worker in a Kubernetes cluster. This self-hosted worker can then use the Kubernetes service account the worker is running under to authenticate to the K8s API to eg. do deployment.

Azure/k8s-deploy instead insists on a "context" set by Azure/k8s-set-context. It cannot use or refuses to use the Kubernetes native service account authentication (ie. token and connection info mounted at /var/run/secrets/kubernetes.io/serviceaccount).

As a rule of thumb, I think if kubectl is able to run, Azure/k8s-deploy should be able to run. If you feel this is out of scope for k8s-deploy, alternatively k8s-set-context should be augmented to support the native way.

Deploying with latest tag?

Hiya.

I've got my Kubernetes config set up so it just uses the latest Docker image each time. I've found that this means running kubectl apply -f deployment.yml doesn't do anything, I'm assuming because Kubernetes can't recognise that there's a new image for it to deploy.

So instead, I've been running the kubectl rollout restart deployment/my-app command.

It seems that this Action uses the apply approach. Is there a way I can still use this Action while also only ever pushing my latest image to the Docker registry?

Trying to specify different image version in workflow yaml than in deployment.yaml

I'm running the azure-vote-front app with GitHub Actions and k8s-actions as a CI/CD pipeline with my AKS cluster. I want the k8s-deploy action to deploy the version of the image that I have just built using container-actions. However, I have a container image specified in the deployment.yaml that is a different version. Can I override this using the images field in k8s-deploy?

I'm mostly confused about what this means (from the README):

Artifact substitution
The deploy action takes as input a list of container images which can be specified along with their tags or digests. The same is substituted into the non-templatized version of manifest files before applying to the cluster to ensure that the right version of the image is pulled by the cluster nodes.

imagePullSecret becomes the image

Hi,

I'm not sure, but I think commit 1cae8df broke some things in Azure/k8s-deploy@v1. Basically, the generated Kubernetes manifest seems to have the wrong imagePullSecret value: instead of the image pull secret, it's the image value.

This is what I see in the deployed manifest in Kubernetes (the actual deployed).

image: cr.example.com/sydnod-ci
imagePullSecrets:
  - name: 'cr.example.com/sydnod-ci:web-dev-20200305-<retracted>'

This is where I expect imagePullSecret to contain sydnod-ci-container-registry and not cr.example.com/sydnod-ci:web-dev-20200305-<retracted>.

Creation of registry secret

Run Azure/k8s-create-secret@v1
/usr/bin/kubectl create secret docker-registry sydnod-ci-container-registry --docker-username *** --docker-password *** --docker-server *** --docker-email   -n sydnod-ci
secret/sydnod-ci-container-registry created

Deployment to Kubernetes

Run Azure/k8s-deploy@v1
  with:
    namespace: sydnod-ci
    manifests: ./manifests.yaml
    images: ***/sydnod-ci:web-dev-20200305-<retracted>
    imagepullsecrets: sydnod-ci-container-registry
 
    strategy: none
    traffic-split-method: pod
    baseline-and-canary-replicas: 0
    percentage: 0
    action: deploy

Hope anyone can shed some light on this issue.

Thanks!

The action input "images" not work

I hava use "images" option in the github workflow. The README said it will searched for in the image field of the input manifest files. For the matches found, the tags will be substituted. But when i use describe to check the version for the deploymeny, it shows that the deployment use the old version or the default version whic defined in the yaml in my repository. The correct version should be latest-da5602b7265634cef48c8749e2ed0b6494eb4e61, in workflow log it shows the correct version
image,but when i describe the pods it shows the
image

The manifest stays in:https://github.com/wuhan-support/frontend/tree/dev/manifests
The workflow logs in:https://github.com/wuhan-support/frontend/runs/467145391?check_suite_focus=true

I am not very sure wherther i have missed something in my config file?I have a guess if it is the error caused by the latest tag

Thanks for your help.

Add support for an optional flag --force

Add support for an optional flag --force

Our team's (Azure Dev Spaces) customer scenario requires us to deploy when a previous deployment already exists. It cannot be a clean deployment. And in this case, I get an error "file is immutable".

Please add it as an optional flag whose value is false by default. We will add a value true as part of our workflow.

Additional detail: If a job has been created already, you cannot replace it without using --force.
One cannot replace jobs since Kubernetes adds labels on create. This is by design.
Specifically this error occurs for: https://github.com/Azure/dev-spaces/blob/master/samples/BikeSharingApp/PopulateDatabase/charts/populatedatabase/templates/job.yaml

Thanks!

Traceability Schema

                                                  Traceability Schema

Abstract
Traceability involves the ability to track the source for issue, code, commit, artifact, workflow, and the user from the package deployed on the resource, through documented records. This document proposes a traceability schema which captures the traceability data for any deployment and which can be used by any CI/CD providers and cloud providers to organize the traceability data and consume it in various forms.

Format:
The schema organizes the information in the following way

  • Categorization of various sections:

    • How did it get deployed?
      Captures the commit and workflow details that triggered and resulted in the deployment of a generated
      artifact to a resource

    • What was deployed?
      Captures the details of the artifacts that have been deployed

    • Where did it get deployed?
      Captures the target resource where the deployment has happened.

      The “How” and “Where” parts can come from various providers, which the schema factors in by having a provision for
      any new source “type/provider” to use the schema to generate the traceability data, or for a cloud provider to deploy the
      data onto its respective cloud. This definition of the schema makes it agnostic to both cloud providers
      (Azure, AWS, GCP, Kusto) as well as source control providers (GitHub, ADO, Bitbucket).

  • Representation of a specific section:
    Data is organized into the following in each of the sections/categories within the traceability data -

    • "Category of data"
      • "Provider of the data"
        • Associated properties specific to the provider
          • The data is organized into multiple subsections/objects as appropriate.
          • The data is organized into objects factoring in the hierarchy. Will restrict to a depth
            of two. Building the tree is outside the scope and goal of this schema.
          • The data in the property bag does not follow the paradigm of including everything; rather, it follows the
            paradigm of including relevant/important data which will help the consumer trace the deployment
            of an artifact on a resource and trace the change in the resource that resulted from the act
            of deployment
          • We will continuously review the deployment traceability needs on various resources and
            artifacts across cloud types, involve the community for feedback and review the property
            bag schema accordingly.
{
   "id": " " /* The unique identifier of the traceability object. The intent is for this to be unique, 
                     filtering on this is not an ask as yet */

   "timestamp" : " " /* The timestamp of the creation of traceability object */  

   "schema-version": " " /* Schema version of the traceability object */

   "data":{  /* This section captures all the traceability data split into three subsections of devops, artifacts 
                     and target resource */

       "devops": {  /* This section captures the information related to the code hosting or CI/CD platform. */

           "type": " " /* This is the provider/type of the CI/CD or code hosting platform. GitHub, ADO, BB Pipelines etc. 
                              can be the possible types along with many others. */

           "properties" :{ 
                /*  The contents of this property bag would depend on the constructs and entities pertaining
                    to the type of CI/CD provider specified in the above field.  This will broadly encompass
                    information on commits, pipelines, workflow runs, Issues etc. 
               */ 
           }
       },

       "artifact": { /* This section captures the information of the associated artifacts */

           "type" : " " /* This is the type of artifact. It can be a build artifact or an Image artifact. 
                                 For ex. Docker file, Manifest, container Image, JAR etc.*/

           "properties":{
                        /* This property bag captures detailed information of the source and other properties 
                            of the associated artifact.
                            A broad classification of the usage of artifact type will be as follows - 
                                        For target resource as the Image registry the artifact section would contain the
                                        build artifacts that were used to produce the Image.
                                        For target resource as the AKS/WebApp/DB the artifact section would contain the
                                        information on the source of the Image and other data associated with the Image that is
                                       deployed on the resource. 
                           For example, 
                           Docker file - This property bag would contain associated information like repository, path, 
                                         commit associated with the same 
                           Image       - This property bag would contain associated information like repository, registry, 
                                          Image SHA etc. associated with the Image.
                      */
           },
       },
       "target resource":{ 
                         /* This is target resource to which the deployment has happened. It would typically be the 
                            cloud provider like ACR, Azure, AWS and GCP.  The definition is the prerogative of the cloud
                            provider who is consuming this traceability object */

           "type": " ", /* This will be the cloud provider type like Azure,AWS, GCP etc. Can also extend to any other
                                 traceability stores like Kusto, AI and others */

             "StartTimestamp":"" /* Start time stamp of the deployment */

             "EndTimestamp":""  /* End time stamp of the deployment */

               "properties":{
                               /*  */
                   },
               }
       }  
   } 

What should be the image name in the deployment

What should the specified image name be in the manifest/deployment.yml file, given that it is going to get replaced by the latest computed name? What should I put in place of the image?

# Deployment for the app
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-depl
  labels:
    app: app-depl
spec:
  selector:
    matchLabels:
      app: app-pod
  template:
    metadata:
      name: app-pod
      labels:
        app: app-pod
    spec:
      containers:
        - name: app-c
          image: *************
          ports:
            - containerPort: 3000
              name: port-c-port

What should I put in place of the stars so that they get replaced by the new image link?
Right now it says the image name is invalid.
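
For reference, a hedged sketch (names are illustrative): the manifest can reference the image by its fully qualified name, and the images input then matches on that name and substitutes the tag:

# in the manifest:
image: contoso.azurecr.io/myapp

# in the workflow:
images: |
   contoso.azurecr.io/myapp:${{ github.sha }}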

[Blue-Green Bug Bash] Promote SMI deletes workloads first and then the TrafficSplit

Repro steps:

  • Deploy with route-method as smi
  • Promote the deployment

Current behaviour:

  • Deployment and temporary services are deleted first, and then the TrafficSplit resource is deleted

Expected behaviour:

  • TrafficSplit should be deleted first. Otherwise, during the transition, there would be broken references of services in TrafficSplit

Use github secrets in manifest files

Currently I am setting up k8s deployment of my containers via the k8s-deploy action.
Huge thanks for creating this, by the way.
But for now I am having trouble using GitHub secrets in the manifest YAML files.
I am trying to set up a MySQL database container, for example, and want to set the username and password with my GitHub secrets. But in the manifest YAML files I can't use the GitHub secrets.

Is this possible, or is there another way to achieve this?

Thanks for your help.

blue-green does not work with latest ingress api version

Repro steps:

  • Create a blue-green deployment pipeline configured for ingress strategy.
  • Add an ingress yaml that uses the latest api version 'networking.k8s.io/v1'

Current behavior:

  • During deployment, a green ingress is not created
  • During the promotion step, the ingress is not updated
  • the green deployment is destroyed as expected but the traffic never changes to the new pods

Expected behavior:

  • I expect a new ingress and traffic to change to new pods

Potential issue in Annotations

The k8s-deploy action annotates details required for traceability. This is a two-step process:

  1. Deployment to resource
  2. Annotate the resource - which overwrites any existing annotations on the resource.

Since the steps are not transactional, parallel deployments to the same resource can lead to race conditions, with the second annotation winning, which need not be the same as what was deployed.

Canary deployment fails if deployments already exist.

In case a deployment is already present, the canary deployment fails if the labels that the action expects are not present (deployed previously, not through this action).
We should unblock this scenario to make the onboarding seamless.

@DS-MS Captured what we discussed. Please feel free to add/rectify any more details.

Need more explanation about the canary strategy

I tried creating a fresh GitHub Actions workflow with a deploy snippet like the one below

- name: "Deploy azure vote front"
  uses: Azure/k8s-deploy@v1
  timeout-minutes: 5
  with:
    namespace: ${{env.NAMESPACE}}
    manifests: |
      manifests/front.yml
      manifests/service.yml
    images: |
      ${{secrets.REGISTRY_URL}}/${{env.IMAGE_NAME}}:${{ github.sha }}
    imagepullsecrets: |
      acr-secret
    strategy: canary
    percentage: 50

front.yml already sets replicas to 2, but the result I get from the AKS cluster shows only a canary deployment, no baseline deployment:
k get pod -n=github
NAME READY STATUS RESTARTS AGE
azure-vote-back-7d6d77f4ff-c8zf2 1/1 Running 0 48s
azure-vote-front-canary-7ffd6884cd-nnq7j 1/1 Running 0 39s

Is this expected behavior?

ARM and other architectures support

Hello,

I see in the code that the kubectl arch is hard-coded to amd64. Is it possible to make it download kubectl depending on the agent arch? As I understand it, os.arch() should return the correct architecture, and if it is used, the correct kubectl will be downloaded.

Actions - Traceability Schema

                                                  Traceability Schema

Abstract
Traceability involves the ability to track the source for issue, code, commit, artifact, workflow, and the user from the package deployed on the resource, through documented records. This document proposes a traceability schema which captures the traceability data for any deployment and which can be used by any CI/CD providers and cloud providers to organize the traceability data and consume it in various forms.

Format:
The schema organizes the information in the following way

  • Categorization of various sections:

    • How did it get deployed?
      Captures the commit and workflow details that triggered and resulted in the deployment of a generated
      artifact to a resource

    • What was deployed?
      Captures the details of the artifacts that have been deployed

    • Where did it get deployed?
      Captures the target resource where the deployment has happened.

      The “How” and “Where” parts can come from various providers, which the schema factors in by having a provision for
      any new source “type/provider” to use the schema to generate the traceability data, or for a cloud provider to deploy the
      data onto its respective cloud. This definition of the schema makes it agnostic to both cloud providers
      (Azure, AWS, GCP, Kusto) as well as source control providers (GitHub, ADO, Bitbucket).

  • Representation of a specific section:
    Data is organized into the following in each of the sections/categories within the traceability data -

    • "Category of data"
      • "Provider of the data"
        • Associated properties specific to the provider
          • The data is organized into multiple subsections/objects as appropriate.
          • The data is organized into objects factoring in the hierarchy. Will restrict to a depth
            of two. Building the tree is outside the scope and goal of this schema.
          • The data in the property bag does not follow the paradigm of including everything; rather, it follows the
            paradigm of including relevant/important data which will help the consumer trace the deployment
            of an artifact on a resource and trace the change in the resource that resulted from the act
            of deployment
          • We will continuously review the deployment traceability needs on various resources and
            artifacts across cloud types, involve the community for feedback and review the property
            bag schema accordingly.
{
   "id": " " /* The unique identifier of the traceability object. The intent is for this to be unique, 
                     filtering on this is not an ask as yet */

   "timestamp" : " " /* The timestamp of the creation of traceability object */  

   "schema-version": " " /* Schema version of the traceability object */

   "data":{  /* This section captures all the traceability data split into three subsections of devops, artifacts 
                     and target resource */

       "devops": {  /* This section captures the information related to the code hosting or CI/CD platform. */

           "type": " " /* This is the provider/type of the CI/CD or code hosting platform. GitHub, ADO, BB Pipelines etc. 
                              can be the possible types along with many others. */

           "properties" :{ 
                /*  The contents of this property bag would depend on the constructs and entities pertaining
                    to the type of CI/CD provider specified in the above field.  This will broadly encompass 
                    information on commits, pipelines, workflow runs, Issues etc. 
               */ 
           }
       },

       "artifact": { /* This section captures the information of the associated artifacts */

           "type" : " " /* This is the type of artifact. It can be a build artifact or an Image artifact. 
                                 For ex. Docker file, Manifest, container Image, JAR etc.*/

           "properties":{
                        /* This property bag captures detailed information of the source and other properties 
                            of the associated artifact.
                            A broad classification of the usage of artifact type will be as follows - 
                                       For target resource as the Image registery the artifact section would contain the 
                                       build artifacts that were used to produce the Image.
                                       For target resource as the AKS/WebApp/DB the artifact section would contain the 
                                       Information on the source of the Image and other data associate to Image that is
                                       deployed on the resource. 
                           For example, 
                           Docker file - This property bag would contain associated information like repository, path, 
                                         commit associated with the same 
                           Image       - This property bag would contain associated information like repository, registry, 
                                          Image SHA etc. associated with the Image.
                      */
           },
       },
       "target resource":{ 
                         /* This is target resource to which the deployment has happened. It would typically be the 
                            cloud provider like ACR, Azure, AWS and GCP.  The definition is the prerogative of the cloud
                            provider who is consuming this traceability object */

           "type": " ", /* This will be the cloud provider type like Azure,AWS, GCP etc. Can also extend to any other
                                 traceability stores like Kusto, AI and others */

             "StartTimestamp":"" /* Start time stamp of the deployment */

             "EndTimestamp":""  /* End time stamp of the deployment */

               "properties":{
                               /*  */
                   },
               },
       "strategy" : {
           "type":  "./* This is the type of deployment strategy getting used for current deployment. Example can be canary, blue-green etc". 
           "properties": {
                     /* Specific details of the deployment strategy. This is a set of key value pair"
               }
           }
       }  
   } 
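
To make the shape above concrete, the schema could be expressed as TypeScript types along these lines (a non-normative sketch; names mirror the JSON fields above, and the property bags stay deliberately open-ended):

  // Non-normative sketch of the schema above as TypeScript types.
  interface ProviderSection {
      type: string;                        // e.g. "GitHubActions", "containerImage", "canary"
      properties: Record<string, unknown>; // provider-specific property bag
  }

  interface ResourceSection extends ProviderSection {
      StartTimestamp?: string; // start timestamp of the deployment
      EndTimestamp?: string;   // end timestamp of the deployment
  }

  interface TraceabilityObject {
      id: string;               // unique identifier of the traceability object
      timestamp: string;        // creation time of the traceability object
      'schema-version': string; // e.g. "alpha-1"
      data: {
          devops: ProviderSection;            // how did it get deployed?
          artifact: ProviderSection;          // what was deployed?
          'target resource': ResourceSection; // where did it get deployed?
          strategy?: ProviderSection;         // canary, blue-green, ...
      };
  }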

Example1: AWS
{
   "id": "cdfed638-6530-40cb-8d26-ce503cddb207", 
   "timestamp" : "2020-06-04T07:59:40Z",
   "schema-version": "alpha-1", 
   "data":{
       "devops": {
           "type": "GitHubActions", 
           "properties":{
               "run":{ 
                   "workflow":{
                       "repository": "contoso/contoso-app",
                       "path": "./github/workflow/cd.yml",
                       "name": "Contoso Deployment",
                       "ref": "refs/heads/release",
                       "commit": "commitSHA"
                   },
                   "trigger":{
                       "type": "on push"
                       "properties": {  
                           "user": "user", 
                           "timestamp": "timestamp", 
                           "commit": "commitSHA"
                       },
                   },
                   "id": 123,
               }
           }
       },
       "artifact": {
           "type" : "containerImage", 
           "properties":{
               "image":{ 
                   "imageSHA":"SHA1",
                   "registryProvider":"ECR",
                   "image-repository": "account-service",
                   "container-registry": "myRegistry" 
               },
           },
       },
    "resource": { 
        "type": "AWS",
        "properties": {
            "resourcetype": "AWS::EC2::Instance",
            "Resources": {
                "Ec2Instance": {
                    "Type": "AWS::EC2::Instance",
                    "Properties": {
                        "SecurityGroups": [
                            {
                                "Ref": "InstanceSecurityGroup"
                            },
                            "MyExistingSecurityGroup"
                        ],
                        "AvailabilityZone": "us-east-1a",
                        "KeyName": "mykey",
                        "ImageId": "ami-7a11e213"
                    }
                },
                "InstanceSecurityGroup": {
                    "Type": "AWS::EC2::SecurityGroup",
                    "Properties": {
                        "GroupDescription": "Enable SSH access via port 22",
                        "SecurityGroupIngress": [
                            {
                                "IpProtocol": "tcp",
                                "FromPort": "22",
                                "ToPort": "22",
                                "CidrIp": "0.0.0.0/0"
                            }
                        ]
                    }
                }
            }
        },
        "updatestartTimestamp": "",
        "updateendTimestamp": ""
    }  

   } 
}





Example2:  DockerFile to ACR
{
   "id": "cdfed638-6530-40cb-8d26-ce503cddb207", // ID of the traceability mapping object 
   "timestamp" : "2020-06-04T07:59:40Z", // timestamp when the traceability mapping object was created 
   "schema-version": "alpha-1", // schema version of the traceability mapping object, to accommodate breaking changes
   "data":{
       "devops": {
           "type": "GitHubActions", // Others are ADO, BB Pipelines; we have prioritized GitHub Actions. For other CI/CD providers the property bag will be updated accordingly
           "properties":{
               "run":{ // These are key fields we have identified based on our understanding of traceability needs. We will expand them as more scenarios emerge.
                   "workflow":{
                       "repository": "contoso/contoso-app",
                       "path": "./github/workflow/cd.yml",
                       "name": "Contoso Deployment",
                       "ref": "refs/heads/release",
                       "commit": "commitSHA"
                   },
                   "trigger":{
                       "type": "on push", // (others can be issueResolved, onCommit, etc.; the property bag will be updated accordingly)
                       "properties": { // These are key fields we have identified based on our understanding of traceability needs. We will expand them as more scenarios emerge.
                           "user": "user", // triggered by
                           "timestamp": "timestamp", // triggered at
                           "commit": "commitSHA" // triggering commit
                       },
                   },
                   "id": 123,
               }
           }
       },
       "artifact": {
           "type" : "dockerfile", // (others can be containerImage, JAR, k8sartifact, etc.; the property bag will be updated accordingly)
           "properties":{
               "file":{ // These are key fields we have identified based on our understanding of traceability needs. We will expand them as more scenarios emerge.
                   "repository": "contoso1/contoso-app",
                   "path": "/application",
                   "name": "DockerFile",
                   "ref": "refs/heads/release",
                   "commit": "commitSHA"
               },
           },
       },
       "resource":{ // will be contributed by cloud providers
           "type": "Azure", // (others can be AWS, GCP, Digital Ocean, etc.; the property bag will be updated accordingly)
               "properties":{
                   "id": "/subscriptions/c00d16c7-6c1f-4c03-9be1-6934a4c49682/resourceGroups/k8sRg/providers/Microsoft.ACR/4a4c49adaada",
                   "resourcetype": "Microsoft.ACR",
                   "properties":{ // These are key fields we have identified based on our understanding of traceability needs. We will expand them as more scenarios emerge.
                       "imageSHA":"SHA1",
                       "historicalImageSHAs": "",
                       "registryProvider":"ACR",
                       "image-repository": "account-service",
                       "container-registry": "myRegistry" 
                   },
                   "updatestartTimestamp":"",
                   "updateendTimestamp":""
               }
       }  
   } 
}



Example3:  ACR to AKS
{
   "id": "cdfed638-6530-40cb-8d26-ce503cddb207", // ID of the traceability mapping object 
   "timestamp" : "2020-06-04T07:59:40Z", // timestamp when the traceability mapping object was created 
   "schema-version": "alpha-1", // schema version of the traceability mapping object, to accommodate breaking changes
   "data":{
       "devops": {
           "type": "GitHubActions", // Others are ADO, BB Pipelines; we have prioritized GitHub Actions. For other CI/CD providers the property bag will be updated accordingly
           "properties":{
               "run":{ // These are key fields we have identified based on our understanding of traceability needs. We will expand them as more scenarios emerge.
                   "workflow":{
                       "repository": "contoso/contoso-app",
                       "path": "./github/workflow/cd.yml",
                       "name": "Contoso Deployment",
                       "ref": "refs/heads/release",
                       "commit": "commitSHA"
                   },
                   "trigger":{
                       "type": "on push", // (others can be issueResolved, onCommit, etc.; the property bag will be updated accordingly)
                       "properties": { // These are key fields we have identified based on our understanding of traceability needs. We will expand them as more scenarios emerge.
                           "user": "user", // triggered by
                           "timestamp": "timestamp", // triggered at
                           "commit": "commitSHA" // triggering commit
                       },
                   },
                   "id": 123,
               }
           }
       },
       "artifact": {
           "type" : "containerImage", // (others can be JAR, k8sartifact, etc.; the property bag will be updated accordingly)
           "properties":{
               "image":{ // These are key fields we have identified based on our understanding of traceability needs. We will expand them as more scenarios emerge.
                   "imageSHA":"SHA1",
                   "registryProvider":"ACR",
                   "image-repository": "account-service",
                   "container-registry": "myRegistry" 
               },
           },
       },
       "resource":{ // will be contributed by cloud providers
           "type": "Azure", // (others can be AWS, GCP, Digital Ocean, etc.; the property bag will be updated accordingly)
               "properties":{
                   "ID": "/subscriptions/c00d16c7-6c1f-4c03-9be1-6934a4c49682/resourceGroups/k8sRg/providers/Microsoft.ContainerService/managedClusters/contoso-k8s-cluster",
                   "resourcetype": "Microsoft.ContainerService/ManagedClusters",
                   "properties":{ // These are key fields we have identified based on our understanding of traceability needs. We will expand them as more scenarios emerge.
                       "namespace": "prod-wcus",
                       "kubernetesObjects": [
                           {
                               "kind": "services",
                               "name": "contoso-service",
                               "version": "v1"
                           },
                           {
                               "kind": "deployments",
                               "name": "contoso-deployment",
                               "version": "v1"
                           }
                       ]
                   },
                   "updatestartTimestamp":"",
                   "updateendTimestamp":""
               }
       },
       "strategy": {
           "type" : "canary", 
           "properties": {
                "percentage": 25
           }
      }
   } 
}


Example4:  Node to Webapp 
{
   "id": "cdfed638-6530-40cb-8d26-ce503cddb207", // ID of the traceability mapping object 
   "timestamp" : "2020-06-04T07:59:40Z", // timestamp when the traceability mapping object was created 
   "schema-version": "alpha-1", // schema version of the traceability mapping object, to accommodate breaking changes
   "data":{
       "devops": {
           "type": "GitHubActions", // Others are ADO, BB Pipelines; we have prioritized GitHub Actions. For other CI/CD providers the property bag will be updated accordingly
           "properties":{
               "run":{ // These are key fields we have identified based on our understanding of traceability needs. We will expand them as more scenarios emerge.
                   "workflow":{
                       "repository": "contoso/contoso-app",
                       "path": "./github/workflow/cd.yml",
                       "name": "Contoso Deployment",
                       "ref": "refs/heads/release",
                       "commit": "commitSHA"
                   },
                   "trigger":{
                       "type": "on push", // (others can be issueResolved, onCommit, etc.; the property bag will be updated accordingly)
                       "properties": { // These are key fields we have identified based on our understanding of traceability needs. We will expand them as more scenarios emerge.
                           "user": "user", // triggered by
                           "timestamp": "timestamp", // triggered at
                           "commit": "commitSHA" // triggering commit
                       },
                   },
                   "id": 123,
               }
           }
       },
       "artifact": {
           "type" : "nodefile", // (others can be containerImage, JAR, k8sartifact, etc.; the property bag will be updated accordingly)
           "properties":{
               "file":{ // These are key fields we have identified based on our understanding of traceability needs. We will expand them as more scenarios emerge.
                   "repository": "contoso1/contoso-app",
                   "path": "/application",
                   "name": "package.json",
                   "ref": "refs/heads/release",
                   "commit": "commitSHA"
               },
           },
       },
       "resource":{ // will be contributed by cloud providers
           "type": "Azure", // (others can be AWS, GCP, Digital Ocean, etc.; the property bag will be updated accordingly)
               "properties":{
                   "id": "/subscriptions/afc11291-9826-46be-b852-70349146ddf8/resourcegroups/contosoRG/providers/Microsoft.Web/sites/contosoApp/appServices",
                   "resourcetype": "Microsoft.AppService",
                   "properties":{ // These are key fields we have identified based on our understanding of traceability needs. We will expand them as more scenarios emerge.
                       "slot-name":"staging",
                   },
                   "updatestartTimestamp":"",
                   "updateendTimestamp":""
               }
       }  
   } 
}


Example5:  NPM to Webapp 
{
   "id": "cdfed638-6530-40cb-8d26-ce503cddb207", // ID of the traceability mapping object 
   "timestamp" : "2020-06-04T07:59:40Z", // timestamp when the traceability mapping object was created 
   "schema-version": "alpha-1", // schema version of the traceability mapping object, to accommodate breaking changes
   "data":{
       "devops": {
           "type": "GitHubActions", // Others are ADO, BB Pipelines; we have prioritized GitHub Actions. For other CI/CD providers the property bag will be updated accordingly
           "properties":{
               "run":{ // These are key fields we have identified based on our understanding of traceability needs. We will expand them as more scenarios emerge.
                   "workflow":{
                       "repository": "contoso/contoso-app",
                       "path": "./github/workflow/cd.yml",
                       "name": "Contoso Deployment",
                       "ref": "refs/heads/release",
                       "commit": "commitSHA"
                   },
                   "trigger":{
                       "type": "on push", // (others can be issueResolved, onCommit, etc.; the property bag will be updated accordingly)
                       "properties": { // These are key fields we have identified based on our understanding of traceability needs. We will expand them as more scenarios emerge.
                           "user": "user", // triggered by
                           "timestamp": "timestamp", // triggered at
                           "commit": "commitSHA" // triggering commit
                       },
                   },
                   "id": 123,
               }
           }
       },
       "artifact": {
           "type" : "npmpackage", // (others can be containerImage, JAR, k8sartifact, etc.; the property bag will be updated accordingly)
           "properties":{
               "package":{ // These are key fields we have identified based on our understanding of traceability needs. We will expand them as more scenarios emerge.

               },
           },
       },
       "resource":{ // will be contributed by cloud providers
           "type": "Azure", // (others can be AWS, GCP, Digital Ocean, etc.; the property bag will be updated accordingly)
               "properties":{
                   "id": "/subscriptions/afc11291-9826-46be-b852-70349146ddf8/resourcegroups/contosoRG/providers/Microsoft.Web/sites/contosoApp/appServices",
                   "resourcetype": "Microsoft.AppService",
                   "properties":{ // These are key fields we have identified based on our understanding of traceability needs. We will expand them as more scenarios emerge.
                       "slot-name":"staging",
                   },
                   "updatestartTimestamp":"",
                   "updateendTimestamp":""
               }
       }  
   } 
}




Example6: 2 images to AKS: we will create/send 2 traceability objects, one for each image.
{
   "id": "cdfed638-6530-40cb-8d26-ce503cddb207", // ID of the traceability mapping object 
   "timestamp" : "2020-06-04T07:59:40Z", // timestamp when the traceability mapping object was created 
   "schema-version": "alpha-1", // schema version of the traceability mapping object, to accommodate breaking changes
   "data":{
       "devops": {
           "type": "GitHubActions", // Others are ADO, BB Pipelines; we have prioritized GitHub Actions. For other CI/CD providers the property bag will be updated accordingly
           "properties":{
               "run":{ // These are key fields we have identified based on our understanding of traceability needs. We will expand them as more scenarios emerge.
                   "workflow":{
                       "repository": "contoso/contoso-app",
                       "path": "./github/workflow/cd.yml",
                       "name": "Contoso Deployment",
                       "ref": "refs/heads/release",
                       "commit": "commitSHA"
                   },
                   "trigger":{
                       "type": "on push", // (others can be issueResolved, onCommit, etc.; the property bag will be updated accordingly)
                       "properties": { // These are key fields we have identified based on our understanding of traceability needs. We will expand them as more scenarios emerge.
                           "user": "user", // triggered by
                           "timestamp": "timestamp", // triggered at
                           "commit": "commitSHA" // triggering commit
                       },
                   },
                   "id": 123,
               }
           }
       },
       "artifact": {
           "type" : "containerImage", // (others can be JAR, k8sartifact, etc.; the property bag will be updated accordingly)
           "properties":{
               "image":{ // These are key fields we have identified based on our understanding of traceability needs. We will expand them as more scenarios emerge.
                   "imageSHA":"SHA1",
                   "registryProvider":"ACR",
                   "image-repository": "account-service",
                   "container-registry": "myRegistry" 
               },
           },
       },
       "resource":{ // will be contributed by cloud providers
           "type": "Azure", // (others can be AWS, GCP, Digital Ocean, etc.; the property bag will be updated accordingly)
               "properties":{
                   "ID": "/subscriptions/c00d16c7-6c1f-4c03-9be1-6934a4c49682/resourceGroups/k8sRg/providers/Microsoft.ContainerService/managedClusters/contoso-k8s-cluster",
                   "resourcetype": "Microsoft.ContainerService/ManagedClusters",
                   "properties":{ // These are key fields we have identified based on our understanding of traceability needs. We will expand them as more scenarios emerge.
                       "namespace": "prod-wcus",
                       "kubernetesObjects": [
                           {
                               "kind": "services",
                               "name": "contoso-service",
                               "version": "v1"
                           },
                           {
                               "kind": "deployments",
                               "name": "contoso-deployment",
                               "version": "v1"
                           }
                       ]
                   },
                   "updatestartTimestamp":"",
                   "updateendTimestamp":""
               }
       }  
   } 
}
{
   "id": "cdfed638-6530-40cb-8d26-ce503cddb207", // ID of the traceability mapping object 
   "timestamp" : "2020-06-04T07:59:40Z", // timestamp when the traceability mapping object was created 
   "schema-version": "alpha-1", // schema version of the traceability mapping object, to accommodate breaking changes
   "data":{
       "devops": {
           "type": "GitHubActions", // Others are ADO, BB Pipelines; we have prioritized GitHub Actions. For other CI/CD providers the property bag will be updated accordingly
           "properties":{
               "run":{ // These are key fields we have identified based on our understanding of traceability needs. We will expand them as more scenarios emerge.
                   "workflow":{
                       "repository": "contoso/contoso-app",
                       "path": "./github/workflow/cd.yml",
                       "name": "Contoso Deployment",
                       "ref": "refs/heads/release",
                       "commit": "commitSHA"
                   },
                   "trigger":{
                       "type": "on push", // (others can be issueResolved, onCommit, etc.; the property bag will be updated accordingly)
                       "properties": { // These are key fields we have identified based on our understanding of traceability needs. We will expand them as more scenarios emerge.
                           "user": "user", // triggered by
                           "timestamp": "timestamp", // triggered at
                           "commit": "commitSHA" // triggering commit
                       },
                   },
                   "id": 123,
               }
           }
       },
       "artifact": {
           "type" : "containerImage", // (others can be JAR, k8sartifact, etc.; the property bag will be updated accordingly)
           "properties":{
               "image":{ // These are key fields we have identified based on our understanding of traceability needs. We will expand them as more scenarios emerge.
                   "imageSHA":"SHA2",
                   "registryProvider":"ACR1",
                   "image-repository": "hello-service",
                   "container-registry": "yourRegistry" 
               },
           },
       },
       "resource":{ // will be contributed by cloud providers
           "type": "Azure", // (others can be AWS, GCP, Digital Ocean, etc.; the property bag will be updated accordingly)
               "properties":{
                   "ID": "/subscriptions/c00d16c7-6c1f-4c03-9be1-6934a4c49682/resourceGroups/k8sRg/providers/Microsoft.ContainerService/managedClusters/contoso-k8s-cluster",
                   "resourcetype": "Microsoft.ContainerService/ManagedClusters",
                   "properties":{ // These are key fields we have identified based on our understanding of traceability needs. We will expand them as more scenarios emerge.
                       "namespace": "prod-wcus",
                       "kubernetesObjects": [
                           {
                               "kind": "services",
                               "name": "contoso-service",
                               "version": "v1"
                           },
                           {
                               "kind": "deployments",
                               "name": "contoso-deployment",
                               "version": "v1"
                           }
                       ]
                   },
                   "updatestartTimestamp":"",
                   "updateendTimestamp":""
               }
       }  
   } 
}

/* Open items being worked on
   Deployment strategy - factor it into the schema, both as a reflection on the resource and as configuration
   on a base deployment object.
   Workflow status - a workflow may have a cascade of actions, and at any given point the status may not be
   meaningful; it is more a command executed as part of the actions themselves.
*/

Parsing of image substitution incomplete

This is related to #23 in the sense that the symptom is that replacements don't work.

The regex used to parse an image identifier in the yaml is const imageKeyword = line.match(/^ *image:/);.
I had an issue where it was not working for me, because I had a config like this:

..
  containers:
    - image: test
      name: test

This isn't correctly parsed by the regex.

The clean solution would be to use a real yaml parser, but an ugly quick fix could be to allow for this case.

If you want to have the quick fix I can make a PR, just let me know.
The correct regex is: /^ *(?:- )?image:/
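
A quick standalone check of the two regexes against both manifest styles:

  const oldRegex = /^ *image:/;
  const newRegex = /^ *(?:- )?image:/;

  // The list-item form is only matched by the corrected regex.
  console.log(oldRegex.test('      image: test')); // true
  console.log(oldRegex.test('    - image: test')); // false
  console.log(newRegex.test('      image: test')); // true
  console.log(newRegex.test('    - image: test')); // true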

Canary deployment error

Repro steps:

  • Version 0.0.1 deployment without canary (5 replicas; in namespace meshed with linkerd)

  • Version 0.0.2 deployment with canary using the following YAML:

  - uses: azure/k8s-deploy@v1
    with:
      namespace: go-template
      manifests: ${{ steps.bake.outputs.manifestsBundle }}
      images: |
        ghcr.io/USER/IMG:${{ env.IMAGE_TAG }}
      strategy: canary
      percentage: 20
      traffic-split-method: smi
      baseline-and-canary-replicas: 1

Current behaviour:

  • Canary deployment of 0.0.2 fails with the following error: StableSpecSelectorNotExist : deploymentname

Expected behaviour:

  • Canary deployment succeeds with SMI TrafficSplit

Extra information:

  • canary deployment succeeds without SMI
  • namespace of the deployment has the following linkerd annotation - linkerd.io/inject: enabled
  • also tried without baseline-and-canary-replicas - same issue

Kubectl parse error

Hello there,

I wanted to automate my deployments with GitHub Actions and up to now it has worked just fine. But now I have come to the point where I have to deploy the manifest, and there is an error. (https://github.com/raphaelbernhart/exchange/blob/master/.github/workflows/production.yml)

I get the error shown in the attached screenshot (Bildschirmfoto 2021-01-07 um 21 54 18).

As I understand this message, Go cannot parse the apiVersion: v1 field. I am using a Rancher cluster as the Kubernetes backend. I hope somebody can help me.

Repro steps:

Current behaviour:

  • Throwing an error while reading the kubeconfig file

Expected behaviour:

  • Finish without an error.

Image substitution only handles tag substitution

This is more of an enhancement request than a bug (or perhaps a question if there is a workaround).

We are using Terraform to create container registries with generated names. Deployments with this action currently require that the registry in the YAML is known. That won't be the case in our scenario, and we were expecting this action to be able to update the image name as well as the tag. Example:

      - name: Deploy to Kubernetes
        uses: Azure/[email protected]
        with:
          namespace: 'demo'
          manifests: |
            ./deploy/deployment.yaml
          images: |
            ${{env.REGISTRY_NAME}}/${{env.IMAGE_NAME}}:${{github.sha}}
          imagepullsecrets: |
            demo-acr-secret

In my deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
  namespace: demo
spec:
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: demo
          image: mycontainerregistry.azurecr.io/demo

If the "REGISTRY_NAME" environment variable is set to "mycontainerregistry.azurecr.io" then the deployment works fine - the github.sha is used. However, if another value is used then the deployment fails - the value from the .yaml file is used and the deployment fails.

This is very limiting.
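
For illustration, a sketch of what looser matching could look like: substitute the image whenever the bare repository name matches, even if the registry differs. This is hypothetical behavior, not what the action does today:

  // Hypothetical looser substitution: match on the repository name only.
  function substituteImage(line: string, newImage: string): string {
      const match = line.match(/^(\s*(?:- )?image:\s*)(\S+)/);
      if (!match) return line;
      // Take the last path segment and drop any tag to get the bare repo name.
      const repoName = (img: string) => img.split('/').pop()!.split(':')[0];
      return repoName(match[2]) === repoName(newImage)
          ? `${match[1]}${newImage}`
          : line;
  }

  // substituteImage('          image: mycontainerregistry.azurecr.io/demo',
  //                 'generated123.azurecr.io/demo:abc')
  //   -> '          image: generated123.azurecr.io/demo:abc'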

Support different manifest delimiters

I'm working in an environment where I want to set the manifests dynamically.
Specifically I have an environment variable that contains the list of manifests, but passing this into the action does not work:

      - name: Set manifests
        run: |
          echo 'MANIFESTS<<EOF' >> $GITHUB_ENV
          find k8s/shared k8s/staging -type f >> $GITHUB_ENV
          echo 'EOF' >> $GITHUB_ENV
      - run: echo "$MANIFESTS"
      - name: Authenticate to kubernetes
        uses: azure/k8s-set-context@v1
        with:
          method: service-account
          k8s-url: ${{ secrets.KUBERNETES_URL }}
          k8s-secret: ${{ secrets.STAGING_K8S_SECRET }}
      - uses: Azure/[email protected]
        with:
          namespace: 'staging'
          manifests: |
            ${{ env.MANIFESTS }}
          images: |
            ghcr.io/herams-who/herams-backend/app:${{ github.sha }}
          kubectl-version: 'latest'      

There is no valid syntax that will work here (AFAIK) because GitHub Actions will not properly substitute multiline values here.
This could easily be solved if a different separator were accepted.

Interestingly, the current code splits by \n.
https://github.com/Azure/k8s-deploy/blob/main/src/run.ts#L62

 let manifests = manifestsInput.split('\n');

A clean solution would be to make the separator configurable as an input in the action. I can make a PR.
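
A minimal sketch of that suggestion, assuming a new optional input named manifest-separator (hypothetical; not an existing input of this action):

  import * as core from '@actions/core';

  const manifestsInput = core.getInput('manifests');
  // Hypothetical 'manifest-separator' input; defaults to the current '\n' behavior.
  const separator = core.getInput('manifest-separator') || '\n';
  const manifests = manifestsInput
      .split(separator)
      .map((m) => m.trim())
      .filter((m) => m.length > 0);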

Error after annotating

An error occurs when using the latest v1 release, causing the workflow step to fail:

2020-06-25T12:06:24.4083949Z [command]/usr/bin/kubectl annotate pod xxxxx-64448c46b5-sqjgw run=**** repository=xxx/yyyy workflow=Deploy jobName=build-and-deploy createdBy=torgeirsk runUri=https://github.com/xxxx/yyyy/actions/runs/*** commit=**** branch=refs/heads/master deployTimestamp=1593086776087 provider=GitHub --overwrite --namespace test
2020-06-25T12:06:25.0401251Z pod/xxxx-64448c46b5-sqjgw annotated
2020-06-25T12:06:25.0407128Z ##[error]TypeError: Cannot read property 'stderr' of undefined
2020-06-25T12:06:25.0641500Z Post job cleanup.

Workflow step:

      - name: Deploy test
        uses: azure/k8s-deploy@v1
        with:
          namespace: test
          images: |
            xxx.azurecr.io/xxx:${{ github.sha }}
          manifests: |
            manifests/test/deployment.yml
            manifests/test/service.yml
            manifests/test/ingress.yml
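
For what it's worth, the TypeError suggests the annotate step reads .stderr off an exec result that can be undefined. A defensive sketch of the idea (the wrapper and the shape of its result are assumptions, not the action's actual code):

  // Assumed shape of an exec wrapper's result; not the action's actual types.
  interface ExecResult {
      code: number;
      stdout?: string;
      stderr?: string;
  }

  function checkForErrors(result: ExecResult | undefined): void {
      // Guard before dereferencing: result (or result.stderr) may be undefined.
      if (result?.stderr) {
          throw new Error(result.stderr);
      }
  }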

How to specify the previous workload to promote/reject.

In the section "To promote/reject the green workload created by the above snippet, the following YAML snippet could be used:", it is not clear how I can specify the previous workload to promote/reject. Could you please answer and elaborate in the README?

Is ${{ event.run_id }} a system-generated value, or something the developer needs to specify?
Should the developer somehow ensure that ${{ event.run_id }} matches between the create spec and the corresponding reject YAML spec?

Another minor typo in https://github.com/Azure/k8s-deploy#to-promotereject-the-canary-created-by-the-above-snippet-the-following-yaml-snippet-could-be-used-1:
the line action: reject #substitute reject if you want to reject
should be action: promote #substitute reject if you want to reject

Remove tag v1.4

There is a tag v1.4 in the repo that doesn't have a release. Also, it was created before the 1.3 tag.

I think the tag should be removed, or, if you don't want to break clients, release a new version v1.5 so that people get the correct version when they just look for the "biggest tag".

Canary deployment not working with OSM

I need to configure canary deployment with OSM (SMI compatible) https://github.com/openservicemesh/osm
When I configure this task:
- uses: Azure/[email protected]
  with:
    manifests: ${{ steps.bake.outputs.manifestsBundle }}
    namespace: 'jjweb'
    strategy: canary
    traffic-split-method: smi
    action: deploy  # deploy is the default; we will later use this to promote/reject
    percentage: 20
    baseline-and-canary-replicas: 1

Getting this error; it seems to be a type-conversion problem:

error: error validating "/tmp/TrafficSplit_jjwebcore_1635240648649": error validating data:
[ValidationError(TrafficSplit.spec.backends[0].weight): invalid type for io.smi-spec.split.v1alpha2.TrafficSplit.spec.backends.weight: got "string", expected "number",
 ValidationError(TrafficSplit.spec.backends[1].weight): invalid type for io.smi-spec.split.v1alpha2.TrafficSplit.spec.backends.weight: got "string", expected "number",
 ValidationError(TrafficSplit.spec.backends[2].weight): invalid type for io.smi-spec.split.v1alpha2.TrafficSplit.spec.backends.weight: got "string", expected "number"];
if you choose to ignore these errors, turn validation off with --validate=false
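
The validation errors say the TrafficSplit backend weights were serialized as strings. A sketch of the kind of fix involved, coercing the weights to numbers before the object is written (the service names and split arithmetic here are illustrative assumptions, not the action's actual code):

  // Values arriving from action inputs are strings; coerce before serializing.
  const percentageInput = '20';                 // e.g. the 'percentage' input
  const canaryWeight = Number(percentageInput); // 20 (number, not "20")
  const backends = [
      { service: 'jjwebcore-stable', weight: 100 - canaryWeight },
      { service: 'jjwebcore-canary', weight: canaryWeight },
  ];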

Split source and compiled files

While there could be arguments for having both the source TS and the compiled JS files in the same repository, I do not believe that there are arguments for having them in PRs / manual commits.

I suspect that code reviews will mostly focus on the TS part, which means manual changes to the JS could sneak past reviewers - a security concern. Furthermore, I cannot imagine any sane person enjoys comparing TS and JS code just to see if they do the same thing.

If we need to have both in the same repo, the least we could do is have CI compile the files and commit them back every time a change is made to the TS files. This also guarantees that a uniform compiler configuration is used.

Just to be clear: I don't believe there is a good argument for having the JS files in this repo at all; they could either be compiled when the action runs, or be compiled just for the tagged commit and added as a release artifact. I'm not sure how GitHub Actions uses the release though, so someone with a bit more expertise should look at that first.

Images command doesn't work in k8s-deploy when using an image from a private repository with a specified port

When trying to update the image reference in my deployment script, there's an issue.

In my action file I have

- uses: azure/k8s-actions/k8s-deploy@master
  with:
    manifests: |
      deployment.yml
    images: |
      my.repo.com:1234/helloworld:${{ github.sha }}

When I check k8s-deploy/src/kubernetes-utils.ts, I can see that, to find which image should be replaced, there's:

let imageName = container.split(':')[0];

which can't work in this case. I think it should be:

let imageName = container;
const lastColon = imageName.lastIndexOf(':');
// Only strip a tag, i.e. a ':' that appears after the last '/'.
// A ':' before the last '/' is a registry port (e.g. my.repo.com:1234/helloworld).
if (lastColon > imageName.lastIndexOf('/')) {
    imageName = imageName.substring(0, lastColon);
}
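
For clarity, what the corrected logic above would yield for a few inputs:

  // 'my.repo.com:1234/helloworld:abc123' -> 'my.repo.com:1234/helloworld'
  // 'helloworld:abc123'                  -> 'helloworld'
  // 'my.repo.com:1234/helloworld'        -> 'my.repo.com:1234/helloworld' (port left intact)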

deploying to micro k8s

I'm trying to deploy to a single-node MicroK8s instance. I can run the various actions needed in my pipeline and I don't get any errors, but the deployment doesn't happen on my cluster. When I run kubectl apply against my deployment manifest, it does deploy to the cluster. My pipeline code:

on: [push]

jobs:
  hello_world_job:
    runs-on: ubuntu-latest
    name: build container and deploy it
    steps:
      - uses: actions/checkout@master

      - name: dockerhub login
        uses: azure/docker-login@v1
        with:
          username: ${{ secrets.REGISTRY_USERNAME }}
          password: ${{ secrets.REGISTRY_PASSWORD }}
      - name: Build and Push container to dockerhub
        run: |
          docker build . -t <my docker hub>/k8sdemo:${{ github.sha }}
          docker push <my docker hub>/k8sdemo:${{ github.sha }}

      - uses: azure/setup-kubectl@v1
        with:
          version: 'v1.20.0'
        id: install

      - uses: Azure/k8s-set-context@v1
        with:
          method: kubeconfig
          kubeconfig: ${{ secrets.MYKUBECONFIG }}
          context: microk8s
        id: setcontext

      - uses: Azure/[email protected]
        with:
          namespace: default
          images: '<my docker hub>/k8sdemo:${{ github.sha }}'
          manifests: |
            deployment.yaml
          kubectl-version: 'v1.20.0'
        id: deploy
