Kubernetes Deployment — DevOps Project 09
Links: Docker Image. GitHub Repo. Manifests (project 9)
Let’s deploy our app on a Kubernetes cluster.
We have containerized our application, EventApp; now we will deploy it on Kubernetes.
What we need:
- Containerized application
- A production-grade Kubernetes cluster (we’ll use kops to create it)
- An EBS volume for the Postgres database pod
- To make sure nodes run in the same zone as the EBS volume, we will label the nodes (or simply deploy everything in a single zone)
- Then we will write Kubernetes definition files for:
  - Secret
  - Deployment
  - Service
  - Volume
Setup for kops:
Prereqs:
- DNS (a Route 53 hosted zone for the NS records)
- An S3 bucket (to store cluster state)
- An IAM user
Kops isn’t part of the Kubernetes cluster; we’re just using it to create one.
Next we set up the S3 bucket.
Then we create an IAM user with admin privileges and generate an access key for it.
Now to Route 53: we create a hosted zone and note the NS server entries, which need to be added to our DNS zone.
Now we’ll log in to our instance and set everything up.
Steps:
1. Generate SSH keys: `ssh-keygen`
2. Install the AWS CLI
3. Configure the AWS CLI with the kops-admin credentials: `aws configure`
4. Install kubectl: https://kubernetes.io/docs/tasks/tools/
5. Install kops: https://kops.sigs.k8s.io/getting_started/install/
Check NS resolution:
nslookup -type=ns kubeapp.diversepixel.com
All good
Now we’ll create the cluster configuration and store it in the S3 bucket:
kops create cluster --name=kubeapp.diversepixel.com \
--state=s3://eappbucket --zones=us-east-1a \
--node-count=2 --node-size=t3.small --control-plane-size=t3.medium \
--dns-zone=kubeapp.diversepixel.com --node-volume-size=8 \
--control-plane-volume-size=8
Every time you run a kops command, specify the state bucket.
Whenever you make changes, run kops update:
kops update cluster --name kubeapp.diversepixel.com --state=s3://eappbucket --yes --admin
Wait about 15 minutes, then validate:
kops validate cluster --state=s3://eappbucket
kops writes a .kube/config file, which kubectl then uses:
kubectl get nodes
kops also creates:
- 3 Auto Scaling Groups
- A separate VPC
- New records in Route 53
With the cluster up, the next step is to create an EBS volume for the database data:
aws ec2 create-volume --availability-zone=us-east-1a --size=3 --volume-type=gp2
Take note of the volume ID, then tag it (adding this tag is important; the volume fails to attach otherwise):
aws ec2 create-tags --resources <your-vol-id> --tags Key=KubernetesCluster,Value=kubeapp.diversepixel.com
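That volume ID is what the PersistentVolume will reference. A sketch of how postgres-pv.yml and postgres-pvc.yml might look (the names, capacity, and placeholder volume ID are assumptions; substitute your own volume ID):

```yaml
# postgres-pv.yml — sketch; <your-vol-id> is the EBS volume created above
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv
spec:
  capacity:
    storage: 3Gi            # matches the 3 GiB volume created above
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:     # in-tree EBS volume plugin
    volumeID: <your-vol-id>
    fsType: ext4
---
# postgres-pvc.yml — the claim the Postgres pod will mount
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
```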
We want the database pod to run in the same availability zone as the volume, which we arrange with node labels. Since we’re creating nodes in only one availability zone, this isn’t strictly necessary, but here’s how:
kubectl get nodes --show-labels
kubectl get nodes (get the node name)
kubectl describe node <name> | grep us-east-1
kubectl label nodes <name> zone=us-east-1b (based on the output)
kubectl describe node <name> | grep us-east-1
kubectl label nodes <name2> zone=us-east-1a
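With the labels in place, the Postgres Deployment can pin its pod to the zone where the EBS volume lives via a nodeSelector. A fragment as a sketch (the label key/value must match what you applied above):

```yaml
# Fragment of postgres-deployment.yml: schedule the pod only on nodes
# carrying the zone label we just applied, so it lands next to the EBS volume
spec:
  template:
    spec:
      nodeSelector:
        zone: us-east-1a
```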
That completes the prerequisites; now we’ll write the definition files.
Images we will be using:
- EventApp: https://hub.docker.com/repository/docker/bhavyansh001/eapp/general
- postgres:14-alpine
First, let’s create a file for storing base64-encoded secrets:
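Kubernetes Secret values are base64-encoded strings; a quick way to produce them in the shell (the password here is just a placeholder):

```shell
# Encode a value for the Secret file; -n avoids encoding a trailing newline
echo -n 'db_password' | base64
# → ZGJfcGFzc3dvcmQ=

# Decode to verify
echo -n 'ZGJfcGFzc3dvcmQ=' | base64 --decode
# → db_password
```

Note that base64 is encoding, not encryption: anyone with read access to the Secret can decode it.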
All Files:
- postgres-secrets.yml
- postgres-pv.yml
- postgres-pvc.yml
- postgres-deployment.yml
- postgres-service.yml
- rails-deployment.yml
- rails-service.yml
`postgres`, the Service name, must match the database host name the application uses to connect
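A sketch of that wiring (the selector labels, env var names, and secret key are assumptions; adjust them to what EventApp actually reads):

```yaml
# postgres-service.yml — the Service name "postgres" becomes the
# in-cluster DNS name the app connects to
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  selector:
    app: postgres
  ports:
    - port: 5432
      targetPort: 5432
---
# Fragment of rails-deployment.yml: the app reaches the DB via the Service name
spec:
  template:
    spec:
      containers:
        - name: eventapp
          image: bhavyansh001/eapp
          env:
            - name: DATABASE_HOST
              value: postgres          # must equal the Service name above
            - name: DATABASE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-secrets
                  key: POSTGRES_PASSWORD
```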
kubectl apply -f .
kubectl get pods
kubectl get deploy
kubectl get svc (grab the external endpoint from here and open it)
Then, in the Route 53 hosted zone, create a simple record:
app.kubeapp.diversepixel.com
pointing at the load balancer endpoint. With the sub-subdomain configured, we have it working.
K8S acquired!
Finally, we’ll delete the cluster, the S3 bucket, the Route 53 hosted zone, and the IAM user for a complete cleanup.
Delete Cluster:
kops delete cluster --name=kubeapp.diversepixel.com --state=s3://eappbucket --yes
Questions, suggestions always welcome.
Connect with me on X @ bhavyansh001, I am learning and building in public!