Kubernetes, often referred to as K8s, is an open-source platform that automates the deployment, scaling, and management of containerized applications. It groups the containers that make up an application into logical units, making them easy to manage and discover. Building on 15 years of Google's experience running production workloads, Kubernetes combines that experience with best-of-breed ideas and practices from the broader community.
- Automated rollouts and rollbacks
- Service discovery and load balancing
- Storage orchestration
- Self-healing
- Secret and configuration management
- Horizontal scaling
A Kubernetes cluster comprises a collection of worker machines known as nodes, responsible for executing containerized applications. Each cluster includes a minimum of one worker node. These worker nodes serve as the hosts for Pods, the essential components of the application workload. The control plane oversees both the worker nodes and the Pods within the cluster.
The components of the control plane are responsible for making overarching decisions concerning the cluster, such as scheduling, and they are also adept at identifying and addressing cluster events.
- kube-apiserver : The API server is the front end for the Kubernetes control plane.
- etcd : Consistent, highly available key-value store used as the backing store for all cluster data.
- kube-scheduler : Control plane component that watches for newly created Pods with no assigned node, and selects a node for them to run on.
- kube-controller-manager : Runs the controller processes.
- cloud-controller-manager : Embeds cloud-specific control logic and links your cluster to your cloud provider's API.
Node components run on every node, maintaining running pods and providing the Kubernetes runtime environment.
- kubelet : An agent that runs on each node in the cluster. It makes sure that containers are running in a Pod.
- kube-proxy : kube-proxy is a network proxy that runs on each node in your cluster, implementing part of the Kubernetes Service concept.
- Container runtime : A fundamental component that empowers Kubernetes to run containers effectively. It is responsible for managing the execution and lifecycle of containers within the Kubernetes environment.
Amazon EKS is a managed service that provides a fully managed Kubernetes control plane, making it easy to run Kubernetes on AWS without needing to install and operate your own control plane.
How it works
Amazon EKS exposes a standard Kubernetes API endpoint, so your existing Kubernetes tooling can connect directly to the EKS-managed control plane. Worker nodes run as EC2 instances in your account.
- Hybrid container deployments : Run highly available and scalable Kubernetes clusters on AWS, while maintaining full compatibility with your Kubernetes deployments running anywhere else.
- Microservices : Easily run microservices applications with deep integrations to AWS services, while getting access to the full suite of Kubernetes functionality and popular open-source tooling.
- Application migration : Easily containerize and migrate existing applications to Amazon EKS without needing to refactor your code or tooling.
Deploy a Spring Boot App
Step 1 : Create the App
In my earlier blog post, I introduced a Spring Boot application. In this tutorial, we’ll leverage the same application but with a twist – instead of deploying it to EC2, we’ll be deploying it to Kubernetes.
Step 2 : Install Kubectl
Install Kubectl : https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html
Step 3: Create your Amazon EKS cluster & Nodes
An existing Kubernetes cluster with at least one node is needed for deploying your application.
Creating EKS cluster & node group : https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html
Create the cluster:
1) Create the cluster – give it a name and assign an EKS cluster role.
2) Accept the defaults on the remaining screens and create the cluster.
Create the node group:
1) Create a node group.
2) Select a role with the policies AmazonEKSWorkerNodePolicy, AmazonEC2ContainerRegistryReadOnly, and AmazonEKS_CNI_Policy – please refer to the doc linked above.
3) Accept the defaults on the remaining screens and create the node group.
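As an alternative to clicking through the console, the same cluster and node group can be created from the command line with eksctl. This is only a sketch – the cluster name, region, instance type, and node counts below are assumptions you should adjust:

```shell
# Create an EKS cluster with a managed node group in one command.
# All names, the region, and the sizes are illustrative placeholders.
eksctl create cluster \
  --name tradesman-cluster \
  --region us-east-1 \
  --nodegroup-name tradesman-nodes \
  --node-type t3.medium \
  --nodes 2 \
  --nodes-min 2 \
  --nodes-max 3
```

eksctl creates the required IAM roles and VPC resources for you, which is why the console steps above (roles, policies) have no direct equivalent here.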
Step 4: Create an RDS Aurora DB
1) Select PostgreSQL
2) Choose the Dev/Test template
3) Set the master password
4) Enable public access
5) Enable IAM authentication, and turn off enhanced monitoring
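The same database can be created with the AWS CLI. A sketch, assuming an Aurora PostgreSQL cluster with a single instance – the identifiers, password, and instance class are placeholders:

```shell
# Create the Aurora PostgreSQL cluster with IAM authentication enabled.
aws rds create-db-cluster \
  --db-cluster-identifier tradesman-db \
  --engine aurora-postgresql \
  --master-username postgres \
  --master-user-password '<your-password>' \
  --enable-iam-database-authentication

# Add one publicly accessible instance to the cluster.
aws rds create-db-instance \
  --db-instance-identifier tradesman-db-instance \
  --db-cluster-identifier tradesman-db \
  --engine aurora-postgresql \
  --db-instance-class db.t3.medium \
  --publicly-accessible
```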
Step 5: Create a KubeConfig file
Create KubeConfig file : https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html
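The doc above boils down to a single CLI call; the region and cluster name here assume the values used when the cluster was created:

```shell
# Add the EKS cluster to your local kubeconfig.
aws eks update-kubeconfig --region us-east-1 --name tradesman-cluster

# Verify connectivity - this should list the worker nodes.
kubectl get nodes
```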
Step 6: Deploy Spring Boot Application
Create a namespace
kubectl create namespace tradesman
Create a Kubernetes deployment yaml
Create a Kubernetes deployment manifest. This example deployment pulls a container image from a public repository and runs three replicas (individual Pods) across your cluster. The nodeSelector kubernetes.io/os: linux means that, if your cluster contains both Linux and Windows nodes, the image is deployed only to Linux nodes.
The manifest also pins the CPU architecture via a kubernetes.io/arch node affinity rule, names the container tradesman-container, passes the database settings through the URL, USERNAME, and PASSWORD environment variables, and exposes a container port named http.
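A minimal sketch of such a manifest, reconstructed from the fragments above – the image name, replica count, port number, and environment values are assumptions, so substitute your own:

```yaml
# tradesman-deployment.yaml - illustrative sketch; image and env values
# are placeholders, not the author's actual configuration.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tradesman-deployment
  namespace: tradesman
  labels:
    app: tradesman
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tradesman
  template:
    metadata:
      labels:
        app: tradesman
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
      nodeSelector:
        kubernetes.io/os: linux
      containers:
        - name: tradesman-container
          image: <your-registry>/tradesman:latest
          ports:
            - name: http
              containerPort: 8080
          env:
            - name: URL
              value: jdbc:postgresql://<aurora-endpoint>:5432/postgres
            - name: USERNAME
              value: postgres
            - name: PASSWORD
              value: <your-password>
```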
Apply the deployment manifest to your cluster.
kubectl apply -f tradesman-deployment.yaml
Create a service
Establish a service. This enables access to all replicas using a single IP address or name. Kubernetes provides the service with a dedicated IP address that is exclusively reachable from within the cluster. To access the service externally, deploy the AWS Load Balancer Controller, which will load balance application or network traffic to the service.
The service forwards TCP traffic to the Pods' http port.
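A sketch of the service manifest – the port numbers are assumptions that should match the container port in your deployment:

```yaml
# tradesman-service.yaml - illustrative sketch; ports are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: tradesman-service
  namespace: tradesman
  labels:
    app: tradesman
spec:
  selector:
    app: tradesman
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 8080
```

The selector matches the Pod labels from the deployment, so the service load-balances across all three replicas under one cluster IP.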
Apply the service manifest to your cluster.
kubectl apply -f tradesman-service.yaml
View the details of the deployed service
kubectl -n tradesman describe service tradesman-service
Step 7 : Testing the Application
- Node 1 : one instance of the container
- Node 2 : two instances of the container
- API service : create a Tradesman request
- API service : get the Tradesman response
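Assuming the service is exposed through a load balancer, the API can be exercised with curl. The hostname, resource path, and JSON payload below are hypothetical – substitute the DNS name reported by the describe command above and your application's actual endpoints:

```shell
# Hypothetical endpoint - replace with the load balancer DNS name from
# 'kubectl -n tradesman describe service tradesman-service'.
ENDPOINT=http://<load-balancer-dns>

# Create a tradesman (illustrative path and payload).
curl -X POST "$ENDPOINT/tradesman" \
  -H 'Content-Type: application/json' \
  -d '{"name": "Jane", "trade": "plumber"}'

# Fetch tradesmen.
curl "$ENDPOINT/tradesman"
```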
Step 8 : Cleanup
1) Delete the service
2) Delete the cluster
3) Delete the RDS database
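The cleanup steps above can be sketched as CLI commands; the names match those used earlier in this post and are assumptions:

```shell
# Delete the Kubernetes service and namespace.
kubectl -n tradesman delete service tradesman-service
kubectl delete namespace tradesman

# Delete the EKS cluster and its node group.
eksctl delete cluster --name tradesman-cluster --region us-east-1

# Delete the Aurora instance and cluster (no final snapshot).
aws rds delete-db-instance \
  --db-instance-identifier tradesman-db-instance --skip-final-snapshot
aws rds delete-db-cluster \
  --db-cluster-identifier tradesman-db --skip-final-snapshot
```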
Step 9 : GitHub Repo
ECS is an AWS-managed container orchestrator. An orchestrator provides functionality like automated deployment, scaling up and down, and fault tolerance. Traditional orchestrators like K8s and Apache Mesos are highly complex tools, and for some companies or projects the best alternative is a managed orchestrator like ECS.
- ECS has two launch types, and ECS manages the containers.
- EC2 : the EC2 instances are managed by you – Docker, the ECS agent, firewall rules, upgrades – but cost less.
- Fargate : fully AWS-managed.
ECS Task Definition file
The Docker image is uploaded to Docker Hub or another repository; then we define the task definition file in ECS – volumes attached, ports opened – basically the blueprint of your container, similar to the spec in a Compose file.
A task is an instance of a task definition: a running container with the settings defined in the task definition file.
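A minimal sketch of such a task definition, assuming the Fargate launch type – the family, image name, and sizes are placeholders:

```json
{
  "family": "tradesman-task",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "tradesman-container",
      "image": "<your-dockerhub-user>/tradesman:latest",
      "portMappings": [
        { "containerPort": 8080, "protocol": "tcp" }
      ],
      "essential": true
    }
  ]
}
```

This file is registered with `aws ecs register-task-definition`, after which ECS can run tasks from it.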
- ECS Service : ensures that a certain number of containers is running at all times, based on the configuration defined in the task definition file; if a container goes down, the service starts a replacement.
- Load balancing : makes sure traffic is evenly routed to the various instances running on ECS.