terraboot

A small Clojure DSL on top of Terraform, for our own use.

  • core.clj: fairly general-purpose functions for generating Terraform JSON
  • vpc.clj and elasticsearch.clj: our VPC module, which generates a VPC, some private and public subnets, and some monitoring infrastructure (Logstash - Elasticsearch - Kibana, a box for InfluxDB and Grafana, and a box for alerting with Icinga2)
  • cluster.clj: depends on the VPC module; creates a Mesos cluster within the VPC
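
For illustration, a consuming project might wire these modules together like this (the namespace and function names here are inferred from the file names above, so treat this as a sketch rather than a confirmed API):

(ns infra.deploy
  (:require [terraboot.core :refer [to-file]]
            [terraboot.vpc :refer [vpc-vpn-infra]]
            [terraboot.cluster :refer [cluster-infra]]))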

Usage

Create your own (private) repository to contain the functions calling the terraboot files (TODO: example repo).

The Terraform JSON is generated by calling the relevant functions and saving the result to a file.

For the VPC module:

(to-file (vpc-vpn-infra {:vpc-name vpc-name
                         :vpc-cidr-block vpc-cidr-block
                         :key-name key-name
                         :account-number account-number
                         :region region
                         :azs [:a :b]
                         :default-ami default-ami
                         :subnet-cidr-blocks {:a {:public "172.20.0.0/24"
                                                  :private "172.20.8.0/24"}
                                              :b {:public "172.20.1.0/24"
                                                  :private "172.20.9.0/24"}}}) "vpc/vpc.tf")

This writes the relevant Terraform JSON to the vpc/vpc.tf file (adapt as required). It's advisable to give every module its own separate directory.
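
A layout along these lines works well (directory names illustrative; the creds.tf files are explained in the Credentials section below):

.
├── vpc/
│   ├── vpc.tf
│   └── creds.tf
└── staging/
    ├── staging.tf
    └── creds.tf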

For the cluster:

(to-file (cluster-infra {:vpc-name vpc-name
                         :vpc-cidr-block vpc-cidr-block
                         :account-number account-number
                         :key-name key-name
                         :region region
                         :azs azs
                         :bucket bucket-name
                         :profile aws-profile
                         :cluster-name "staging"
                         :min-number-of-masters 3
                         :max-number-of-masters 3
                         :master-disk-allocation 20
                         :master-instance-type "t2.large"
                         :min-number-of-slaves 4
                         :max-number-of-slaves 4
                         :slave-instance-type "t2.xlarge"
                         :slave-disk-allocation 50
                         :min-number-of-public-slaves 3
                         :max-number-of-public-slaves 3
                         :elb-azs [:a :b]
                         :public-slave-instance-type "t2.micro"
                         :public-slave-alb-sg [{:port 9501 :cidr_blocks [all-external]}
                                               {:port 80 :cidr_blocks [vpc-cidr-block]}]
                         :public-slave-alb-listeners [{:name "witan-gateway" :port 80 :protocol "HTTP" :health_check {:protocol "HTTP" :path "/health"}}
                                                      {:name "sebastopol" :port 9501 :protocol "HTTP" :health_check {:protocol "HTTP" :path "/"}}]
                         :application-policies [{:name "witan-s3-data"
                                                 :policy (bucket-policy "witan-staging-data")}
                                                {:name "access-heimdall-bucket"
                                                 :policy (bucket-policy "kixi-vault")}
                                                {:name "kixi-jenkins-backup"
                                                 :policy (bucket-policy "kixi-jenkins-backup")}
                                                {:name "staging-witanforcities-com"
                                                 :policy (bucket-policy "staging.witanforcities.com")}
                                                {:name "kixi-data-store-file-store-jenkins"
                                                 :policy (bucket-policy "kixi-data-store-file-store-jenkins")}
                                                {:name "kixi-data-store-file-store-staging"
                                                 :policy (bucket-policy "kixi-data-store-file-store-staging")}]
                         :slave-alb-listeners [{:name "elasticsearch" :port 31105 :protocol "HTTP" :health_check {:protocol "HTTP" :path "/"}}]
                         :slave-alb-sg [{:port 31105 :cidr_blocks [vpc-cidr-block]}]
                         :slave-sg [{:port 10011 :cidr_blocks [vpc-cidr-block]}] ;; addition for heimdall repl

                         :mesos-ami mesos-ami
                         :subnet-cidr-blocks {:a {:public "172.20.2.0/24"
                                                  :private "172.20.10.0/24"}}}) "staging/staging.tf")

This generates a cluster named "staging" in a staging directory. It's probably advisable to keep all of these settings in a nice config file.

Once the *.tf files are generated, the next step is to run terraform.

Go to the appropriate directory:

cd vpc
terraform plan .

This should give an idea of which AWS entities will be created by applying the plan, and highlight some possible errors.

terraform apply .

performs the actual application. The same goes for the cluster, but in the staging/ directory.

Credentials

We use AWS profiles to deal with credentials, which saves us from keeping actual AWS credentials anywhere near the repository. To do this, add a creds.tf file in each directory containing *.tf files.

provider "aws" {
  region = "aws-region"
  profile = "my-profile"
}
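
The profile itself lives in the standard AWS shared credentials file, outside the repository; the values below are placeholders:

# ~/.aws/credentials
[my-profile]
aws_access_key_id = AKIA...
aws_secret_access_key = ...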

How terraform works

Explained in extensive detail at https://www.terraform.io/, but the short version: Terraform DSL or Terraform JSON (*.tf) files -> terraform apply -> AWS (or other) infrastructure, plus *.tfstate files.

The Terraform state files (*.tfstate) reflect the current state in AWS, and establish the mappings between the Terraform names and the underlying AWS ids. When destroying or applying (or planning), the state is checked first to see where the deployment is at.

Using terraform in a team

There are some challenges to using Terraform in a team, in that Terraform only deals well with a strictly linear chain of events.

We use a remote state (where the state is saved in an S3 bucket):

terraform remote config -backend=s3 -backend-config="profile=my-profile" -backend-config="bucket=my-bucket" -backend-config="key=vpc.tfstate" -backend-config="region=eu-central-1" -backend-config="endpoint=relevant-region-aws-s3-endpoint"

When done applying, do a

terraform remote push

This lets people pull the latest state from S3 when they need to make a change. Still, be advised that merging the state file is a strict no-no; it's better to agree on who does what, and when. As a rule, the highest serial number (the top item in the tfstate) wins.
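
Put together, the day-to-day team workflow looks something like this (assuming the one-off remote config above has already been run in the directory):

terraform remote pull   # fetch the latest state from S3
terraform plan .        # review what would change
terraform apply .       # apply the changes
terraform remote push   # publish the updated state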

Contributors

elisehuard, mattford63, otfrom, sw1nn, tcoupland, thattommyhall
