Archived

This project is going to be archived. We recommend using KEDA for automatically scaling agents: https://keda.sh/docs/2.8/scalers/azure-pipelines/
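
The snippet below is a minimal sketch of a KEDA ScaledObject using the azure-pipelines scaler described in the docs linked above; the deployment name, namespace, pool ID, and AZP_* environment variable names are placeholders you would replace with your own:

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: azure-pipelines-scaledobject
  namespace: ado
spec:
  scaleTargetRef:
    name: azdevops-agent-deployment      # your self-hosted agent Deployment (placeholder name)
  minReplicaCount: 1
  maxReplicaCount: 5
  triggers:
  - type: azure-pipelines
    metadata:
      poolID: "1"                              # ID of the agent pool to scale
      organizationURLFromEnv: "AZP_URL"        # env var on the agent container holding the org URL
      personalAccessTokenFromEnv: "AZP_TOKEN"  # env var on the agent container holding the PAT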

Azure Pipelines - Kubernetes Orchestrator


Many enterprise customers run their own Kubernetes clusters, either on-premises or in managed Kubernetes environments in the cloud. Azure DevOps Services and Server agents can run from containers hosted in these Kubernetes clusters, but what if you do not want to run your agents 24/7? What if you need to scale the number of agents dynamically as pipeline jobs are queued?

This project provides an application that monitors a configurable set of agent pools. When pipeline jobs are queued, it automatically provisions a Kubernetes Job for each queued job. Each Kubernetes Job runs and processes a single pipeline job and is then cleaned up by Kubernetes.

This allows for horizontally scalable, on-demand agent pools backed by Kubernetes!

Getting Started

First, build the Docker images:

# Build Orchestrator Container
docker build -t ado-agent-orchestrator .

# Build Linux Pipelines Agent
cd linux
docker build -t ado-pipelines-linux .

Run with Docker

docker run -d --name ado-agent-orchestrator \
    --restart=always \
    --env ORG_URL=https://dev.azure.com/yourorg \
    --env ORG_PAT=12345 \
    --env AGENT_POOLS=Pool1,Pool2 \
    --env JOB_IMAGE=ghcr.io/akanieski/ado-pipelines-linux:latest \
    --env JOB_NAMESPACE=ado \
    ado-agent-orchestrator:latest

Run with Kubernetes

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ado-orchestrator-deployment
  labels:
    app: ado-orchestrator
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ado-orchestrator
  template:
    metadata:
      labels:
        app: ado-orchestrator
    spec:
      containers:
      - name: ado-orchestrator
        image: ghcr.io/akanieski/ado-orchestrator:latest
        env:
        - name: ORG_URL
          value: "https://dev.azure.com/yourorg"
        - name: ORG_PAT
          value: "1234"
        - name: AGENT_POOLS
          value: "Pool1,Pool2"
        - name: JOB_IMAGE
          value: "ghcr.io/akanieski/ado-pipelines-linux:latest"
        - name: JOB_NAMESPACE
          value: "ado"

Additionally, you can configure the following optional environment variables:

POLLING_DELAY=1000                           # Milliseconds to wait between polling runs
RUN_ONCE=1                                   # Only run once - use this to run as a cron job (see the CronJob sketch below) instead of a 24/7 monitor
JOB_PREFIX=agent-job-                        # Customize the agent job's prefix
JOB_DOCKER_SOCKET_PATH=/var/run/docker.sock  # Set this to allow for docker builds within your agent containers
JOB_DEFINITION_FILE=job.yaml                 # Provide a template for the k8s Jobs the orchestrator creates
INITIALIZE_NAMESPACE=true                    # Allows you to optionally disable namespace initialization
MINIMUM_AGENT_COUNT=1                        # The minimum number of agents (regardless of Busy/Idle) to keep running at all times
MINIMUM_IDLE_AGENT_COUNT=0                   # The minimum number of IDLE agents to keep running at all times
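
For example, with RUN_ONCE=1 the orchestrator can be driven by a Kubernetes CronJob instead of the long-running Deployment shown earlier. This is a minimal sketch; the schedule is arbitrary and the image and environment values are copied from the Deployment example above:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: ado-orchestrator-cron
spec:
  schedule: "*/5 * * * *"        # poll every 5 minutes
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: ado-orchestrator
            image: ghcr.io/akanieski/ado-orchestrator:latest
            env:
            - name: RUN_ONCE
              value: "1"
            - name: ORG_URL
              value: "https://dev.azure.com/yourorg"
            - name: ORG_PAT
              value: "1234"
            - name: AGENT_POOLS
              value: "Pool1,Pool2"
            - name: JOB_IMAGE
              value: "ghcr.io/akanieski/ado-pipelines-linux:latest"
            - name: JOB_NAMESPACE
              value: "ado"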

Customizing the Kubernetes Job

In many scenarios you will want to apply additional configuration to the Job that the orchestrator creates in your k8s cluster. For example, perhaps your pipelines require a custom set of secrets mounted from a CSI driver, or you would like to reserve memory/CPU for each job, or mount a cached set of build assets. To allow for this level of customization you can specify the JOB_DEFINITION_FILE environment variable, which gives you a way to define all the bells and whistles you need for your pipeline agents (a sketch of how to supply this file from a ConfigMap follows the sample below).

A sample custom job file might look like this:

apiVersion: batch/v1
kind: Job
metadata:
  name: custom-job
spec:
  template:
    spec:
      containers:
      - name: custom-job
        image: ghcr.io/akanieski/ado-pipelines-linux:latest
        resources:
          requests:
            memory: "100Mi"
            cpu: "1"
          limits:
            memory: "200Mi"
            cpu: "2"
        env:
          - name: AZP_URL
            value: https://dev.azure.com/your-org
          - name: AZP_TOKEN
            value: xxxqhugutbqvpoxxxicdab2ojaipkw6kexxxau57bybmvksp5jpq
          - name: AZP_POOL
            value: Default
      restartPolicy: Never
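
JOB_DEFINITION_FILE points to a file on the orchestrator container's filesystem. One way to supply it when running the orchestrator in Kubernetes (a sketch only; the ConfigMap name and the /config mount path are arbitrary choices, not names the project prescribes) is to store the template in a ConfigMap and mount it into the orchestrator Deployment:

apiVersion: v1
kind: ConfigMap
metadata:
  name: ado-orchestrator-job-template
data:
  job.yaml: |
    # paste your custom Job definition (like the sample above) here

Then, in the orchestrator Deployment's pod spec, mount the ConfigMap and point JOB_DEFINITION_FILE at the mounted file:

        volumeMounts:
        - name: job-template
          mountPath: /config
        env:
        - name: JOB_DEFINITION_FILE
          value: "/config/job.yaml"
      volumes:
      - name: job-template
        configMap:
          name: ado-orchestrator-job-template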

Running Serverless with Azure Container Instances

You can also choose to avoid the work of setting up Kubernetes and simply run on Azure Container Instances, as shown below:

AZ_SUBSCRIPTION_ID=     # Your Azure Subscription ID
AZ_RESOURCE_GROUP=      # The existing resource group in which provisioned container groups/instances will be placed
AZ_TENANT_ID=           # Your Azure Tenant ID
AZ_REGION=EastUS        # The Azure region your resources are located in
AZ_ENVIRONMENT=         # The Azure environment; specify AzurePublicCloud, or a sovereign cloud such as AzureChina

This feature uses the DefaultAzureCredential API from the Azure SDK, which supports a variety of Azure credential scenarios. See the Azure Identity documentation for more information on how to configure a scenario that works for you.
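
For example, DefaultAzureCredential can authenticate with a service principal supplied through environment variables on the orchestrator container (managed identity and the other mechanisms it supports work as well; these variable names come from the Azure Identity library, not from this project):

AZURE_TENANT_ID=        # Tenant ID of the service principal
AZURE_CLIENT_ID=        # Application (client) ID of the service principal
AZURE_CLIENT_SECRET=    # Client secret of the service principal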

Improving build times through Persistent Volumes

Kubernetes provides a convenient mechanism for sharing a disk between multiple containers, or in our case between multiple agents and across multiple pipeline runs. We can use this to our advantage: by mounting persistent volumes at key locations you can carry cached data to all of the agents in your pool.

For example, mounting a persistent volume at the /root/.nuget/packages path as shown below ensures you don't have to re-download NuGet packages on every single pipeline run.

apiVersion: batch/v1
kind: Job
metadata:
  name: custom-job
spec:
  template:
    spec:
      containers:
      - name: custom-job
        image: ghcr.io/akanieski/ado-pipelines-linux:0.0.1-preview
        volumeMounts:
          - mountPath: "/root/.nuget/packages"
            name: nuget-cache
      volumes:
        - name: nuget-cache
          persistentVolumeClaim:
            claimName: nuget-cache-claim
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nuget-cache
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/tmp/nuget-cache"
  storageClassName: slow

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nuget-cache-claim
spec:
  accessModes:
    - ReadWriteMany
  volumeMode: Filesystem
  resources:
    requests:
      storage: 10Gi
  storageClassName: slow

Other examples of commonly cached paths:

  • /azp/_work/_tasks - ADO pipeline tasks, which are otherwise downloaded every single time, are cached here - saves time on every run!
  • /azp/_work/_tool - output of ADO tool installer tasks (.NET tools, NodeJS tools, etc.) - saves time on most runs!
  • /root/.npm - npm packages are notoriously numerous; mounting a cache here will save lots of time for JS builds
  • /root/.nuget/packages - save time on .NET builds

Note: the /root path above is based on the home path of the user your agent container runs as. The examples here use the root user (not ideal in real-world scenarios) for the agent containers. Windows agents will have different paths. The key in both cases is that these /root paths are the container user's home path; on Windows it may be c:\users\agent\ etc.

Contributing

This project welcomes contributions and suggestions.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

Issues

We accept issue reports both here (file a GitHub issue) and in Developer Community.

Do you think there might be a security issue? Have you been phished or identified a security vulnerability? Please don't report it here - let us know by sending an email to [email protected].


azure-pipelines-orchestrator's Issues

k8s.Autorest.HttpOperationException 'Forbidden'

Hi everyone,

I'm trying to run ado-orchestrator on an EKS cluster and I get the following error in the pod:

Unhandled exception. k8s.Autorest.HttpOperationException: Operation returned an invalid status code 'Forbidden'

I created a dedicated namespace, but I'm not using a Service Account. Is that necessary?

Full log below:

Starting Agent Orchestrator ..
ORG_URL: https://dev.azure.com/<REMOVED>/
Unhandled exception. k8s.Autorest.HttpOperationException: Operation returned an invalid status code 'Forbidden'
   at k8s.Kubernetes.SendRequestRaw(String requestContent, HttpRequestMessage httpRequest, CancellationToken cancellationToken)
   at k8s.AbstractKubernetes.k8s.ICoreV1Operations.ListNamespaceWithHttpMessagesAsync(Nullable`1 allowWatchBookmarks, String continueParameter, String fieldSelector, String labelSelector, Nullable`1 limit, String resourceVersion, String resourceVersionMatch, Nullable`1 timeoutSeconds, Nullable`1 watch, Nullable`1 pretty, IReadOnlyDictionary`2 customHeaders, CancellationToken cancellationToken)
   at k8s.CoreV1OperationsExtensions.ListNamespaceAsync(ICoreV1Operations operations, Nullable`1 allowWatchBookmarks, String continueParameter, String fieldSelector, String labelSelector, Nullable`1 limit, String resourceVersion, String resourceVersionMatch, Nullable`1 timeoutSeconds, Nullable`1 watch, Nullable`1 pretty, CancellationToken cancellationToken)
   at KubernetesAgentHostService.Initialize() in /app/KubernetesAgentHostService.cs:line 64
   at Program.<Main>$(String[] args) in /app/Program.cs:line 58
   at Program.<Main>(String[] args)
