- Clone: checks out the source code into the Jenkins workspace.
- Build & Test: installs the dependencies and runs the unit tests.
- Build Docker image: builds the Docker container image.
- Push Image to Hub: pushes the Docker image to Docker Hub.
- Deploy to K8s: applies the deployment.yaml file to deploy the image to the K8s cluster after validating the kubeconfig.
- Sample App for demo:
- Pipeline implementation: https://github.com/doddabasappa94/devops-automation
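As a rough sketch, the shell commands behind each stage might look like the following; the build tool, image name, and file paths are assumptions for illustration, not taken from the repo:

```bash
#!/bin/bash
# Hypothetical equivalents of the pipeline stages; names are placeholders.
git clone https://github.com/doddabasappa94/devops-automation.git app && cd app  # Clone
mvn clean package                                  # Build & Test (compiles and runs unit tests)
docker build -t <dockerhub-user>/demo-app:v1 .     # Build Docker image
docker push <dockerhub-user>/demo-app:v1           # Push Image to Hub
kubectl apply -f deployment.yaml                   # Deploy to K8s (uses the validated kubeconfig)
```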
- Checkout: checks out the source code into the Jenkins workspace.
- Build & Test: installs the dependencies and runs the unit tests.
- Build Docker image: builds both Docker container images.
- Push Image to Hub: pushes both Docker images to Docker Hub.
- Deploy to K8s: applies both deployment.yaml files to deploy the images to the K8s cluster.
The developer pushes source code to a Git branch, which immediately triggers the Jenkins job. Jenkins builds, tests, and containerizes the app, and finally deploys it to the K8s cluster. This is the happy path; it happens only when all criteria are met.
- Sample App for demo:
- Pipeline implementation: https://github.com/doddabasappa94/parallel-devops
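Since the two apps are independent, their build and push steps can run in parallel. A minimal shell illustration of the same idea (this is not the repo's Jenkinsfile; app directories and image names are placeholders):

```bash
#!/bin/bash
# Build both images concurrently, wait for both, then push.
docker build -t <dockerhub-user>/app-one:v1 ./app-one &
docker build -t <dockerhub-user>/app-two:v1 ./app-two &
wait                                   # blocks until both background builds finish
docker push <dockerhub-user>/app-one:v1
docker push <dockerhub-user>/app-two:v1
```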
The following modules are used to create the infrastructure:
- vpc
- Ec2
- Rds
Note
- These modules are tailored to suit my requirements.
```
├── module         #-------------------> module definition
│   ├── Ec2
│   │   ├── instance.tf
│   │   ├── securityGroup.tf
│   │   └── variable.tf
│   ├── Rds
│   │   ├── mysql.tf
│   │   ├── securityGroup.tf
│   │   └── variable.tf
│   └── vpc
│       ├── vpc.tf
│       ├── public_subnet.tf
│       ├── private_subnet.tf
│       ├── route_table.tf
│       ├── internet_gateway.tf
│       ├── nat.tf
│       └── variable.tf
├── main.tf        #-------------------> module declaration
├── provider.tf    #-------------------> provider definition
├── backend.tf     #-------------------> backend configuration
├── variables.tf   #-------------------> variable declaration
└── Readme.md      #-------------------> command reference for Terraform script execution
```
- The template creates the following using the vpc module:
  - VPC
  - 3 public subnets and 3 private subnets
  - Route table & route associations
  - Internet and NAT gateways
- The template creates the following using the Ec2 module:
  - 3 EC2 instances
  - A security group
- The template creates the following using the Rds module:
  - An RDS instance
- The template is workspace-aware: the same template can bootstrap multiple clusters using multiple workspaces, as shown below.
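A minimal sketch of the workspace flow, assuming the backend is already configured and per-environment tfvars files exist (the workspace and file names are placeholders):

```bash
#!/bin/bash
# Bootstrap two independent environments from the same template.
terraform init                          # install providers, configure the backend
terraform workspace new dev             # create and switch to a 'dev' workspace (separate state)
terraform apply -var-file=dev.tfvars
terraform workspace new prod            # create and switch to a 'prod' workspace
terraform apply -var-file=prod.tfvars
terraform workspace list                # show all workspaces; '*' marks the active one
```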
- Cluster and application monitoring is crucial for any organization whose applications run on clusters; any problem with the cluster can lead to a huge loss for the organization.
- In the current implementation, the Prometheus Operator and associated tools (kube-state-metrics, node-exporter, Alertmanager) are used to monitor the cluster components.
- Application metrics can also be monitored through Prometheus if instrumentation is implemented at the microservice level.
- The Prometheus Operator uses CRDs (Custom Resource Definitions) to generate configuration files and identify Prometheus resources:
- alertmanagers – defines installation for Alertmanager
- podmonitors – determines which pods should be monitored
- prometheuses – defines installation for Prometheus
- prometheusrules – defines the alerting and recording rules evaluated by Prometheus (fired alerts are routed by Alertmanager)
- servicemonitors – determines which services should be monitored
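Once the operator is running, these CRDs and the custom resources created from them can be inspected directly (the monitoring namespace is an assumption):

```bash
#!/bin/bash
# List the operator's CRDs and the custom resources built from them.
kubectl get crd | grep monitoring.coreos.com
kubectl get prometheuses,alertmanagers,servicemonitors,podmonitors,prometheusrules -n monitoring
```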
- The operator watches these Prometheus resources and generates the Prometheus and Alertmanager deployments along with their configuration files (prometheus.yaml, alertmanager.yaml).
- The current operator chart deploys the following components:
- Prometheus Operator
- Prometheus
- Alertmanager
- Prometheus node-exporter
- kube-state-metrics
- Grafana
- All configuration is stored in a declarative manner.
- The cluster is monitored using the Prometheus stack; deployment is done through a Helm chart.
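A typical install of the community kube-prometheus-stack chart looks like this (the release name and namespace are placeholders):

```bash
#!/bin/bash
# Add the community chart repo and install the full Prometheus stack.
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install monitoring prometheus-community/kube-prometheus-stack \
  --namespace monitoring --create-namespace
kubectl get pods -n monitoring   # expect operator, prometheus, alertmanager, grafana, exporters
```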
- ServiceMonitors describe the targets to be scraped by Prometheus, while PrometheusRules define the conditions that trigger alerts; Alertmanager then routes those alerts to receivers such as email and Slack.
- Alertmanager ConfigMap for the app:
```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: alertmanager-config
  namespace: monitoring
data:
  config.yml: |-
    global:
    templates:
      - '/etc/alertmanager/*.tmpl'
    route:
      receiver: alert-emailer
      group_by: ['alertname', 'priority']
      group_wait: 10s
      repeat_interval: 30m
      routes:
        - receiver: prometheusalert
          # Send severity=slack alerts to Slack.
          match:
            severity: slack
          group_wait: 10s
          repeat_interval: 1m
    receivers:
      - name: alert-emailer
        email_configs:
          - to: [email protected]
            send_resolved: false
            from: [email protected]
            smarthost: smtp.gmail.com:587
            auth_username: "[email protected]"
            auth_identity: "[email protected]"
            auth_password: "abc@123"
            require_tls: false
      - name: prometheusalert
        slack_configs:
          - api_url: https://hooks.slack.com/services/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
            channel: '#prometheusalert'
```
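Assuming the manifest above is saved as alertmanager-config.yaml (the file name is a placeholder), it is applied and verified like any other manifest:

```bash
#!/bin/bash
# Apply the Alertmanager configuration and confirm it exists.
kubectl apply -f alertmanager-config.yaml
kubectl get configmap alertmanager-config -n monitoring -o yaml
```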
- The ELK stack is a combination of three open-source tools that form a log-management platform specializing in searching, analyzing, and visualizing logs generated from Kubernetes pods.
- Prometheus is mainly used for metrics, while the ELK stack is mainly used to aggregate logs from all your applications, analyze those logs, and create visualizations for them.
- Elasticsearch, Logstash, and Kibana make up the ELK stack, and log aggregation is its major goal. The move to microservice architectures demands a better method for collecting and searching logs for debugging purposes; the ELK stack gathers and explores these logs.
- Elasticsearch: the database that stores all the logs; the indexer and search engine used to store the data gathered by Beats and Logstash.
- Kibana: an interface to Elasticsearch providing many types of visualization to help analyze and understand the data.
- Logstash: collects and processes data in many formats, allowing parsing and enrichment.
- Filebeat: a very important component that works as the log exporter; it collects the logs from each node and forwards them to Logstash.
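One common way to stand this up on a cluster is Elastic's official Helm charts; a minimal sketch, with release names and the logging namespace as assumptions:

```bash
#!/bin/bash
# Install the ELK components plus Filebeat from Elastic's Helm repo.
helm repo add elastic https://helm.elastic.co
helm repo update
helm install elasticsearch elastic/elasticsearch -n logging --create-namespace
helm install logstash elastic/logstash -n logging
helm install kibana elastic/kibana -n logging
helm install filebeat elastic/filebeat -n logging   # DaemonSet that ships pod logs from every node
```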
- Replace every IPv4 address in a file with 127.0.0.1. Sample input file:

```text
192.53.65.32 12.3.33.98.56
198.26.36.26 13.2332.6565.56
192.22.23.63 25.36.12.3
10.26.16.12 8.8.8.8
0.0.0.0 1.1.1.1
```

```bash
#!/bin/bash
# Replace all IPv4 addresses in the file passed as $1 with 127.0.0.1
# (writes to stdout; add -i to sed to edit the file in place).
sed -r 's/[0-9]{1,3}[.][0-9]{1,3}[.][0-9]{1,3}[.][0-9]{1,3}/127.0.0.1/g' "$1"
```
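Example invocation, assuming the script is saved as replace-ips.sh and the sample lines above as ips.txt (both names are placeholders):

```bash
./replace-ips.sh ips.txt   # prints the file with every matching IP rewritten to 127.0.0.1
```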
- Update a record in a CSV file. Sample file (abc.csv):

```text
Name,Age,Gender,Salary
A,12,M,3000
B,12,M,2000
C,12,M,1000
```

```bash
#!/bin/bash
# Read abc.csv (skipping the header) and, for the record whose Name is "C",
# rewrite its salary to 3000 in the file and print the record.
while IFS="," read -r rec_column1 rec_column2 rec_column3 rec_column4
do
  if [ "$rec_column1" == "C" ]; then
    sed -i -r "s/${rec_column4}/3000/g" abc.csv
    echo "Displaying Record-$rec_column1"
    echo "Age: $rec_column2"
    echo "Gender: $rec_column3"
    echo "Salary: $rec_column4"
  fi
done < <(tail -n +2 abc.csv)
```

The same update as an awk one-liner (prints the modified CSV to stdout):

```bash
awk 'BEGIN{FS=OFS=","} $1=="C"{$4="3000"} 1' abc.csv
```
- Delete every pod stuck in CrashLoopBackOff, across all namespaces:

```bash
#!/bin/bash
# Find pods in CrashLoopBackOff and delete each one in its own namespace.
kubectl get pods --all-namespaces | grep 'CrashLoopBackOff' \
  | awk '{print $2 " --namespace=" $1}' \
  | xargs -L1 kubectl delete pod
```