Seamless CI/CD for EKS: Dockerized App Deployment with GitHub Actions, ArgoCD, and Terraform
Introduction
In this guide, we will build a CI/CD pipeline using GitHub Actions, ArgoCD for GitOps, and Amazon EKS (Elastic Kubernetes Service) to run our containers, using Terraform to automate and streamline the entire process.
We’ll start with an overview of the tools and technology. This will be followed by a detailed, step-by-step walkthrough for dockerizing your application, establishing robust CI/CD pipelines, and deploying seamlessly to EKS. Along the way, we’ll highlight best practices and crucial considerations for a secure, high-performance, and maintainable deployment process.
Prerequisites
Before getting into the implementation, ensure that you have the following tools set up:
- GitHub Account: For storing your code and managing GitHub Actions workflows.
- AWS Account: For creating an EKS cluster.
- Terraform: For provisioning the infrastructure on AWS.
- kubectl: For managing Kubernetes clusters using the command line.
- ArgoCD: For continuous deployment to the provisioned EKS cluster.
If you are working with Terraform for the first time, please check out my previous article on “Getting started with Terraform and AWS”.
Additionally, ensure you have a basic understanding of Docker, AWS, Kubernetes, and GitHub.
Step 1: Set Up and Clone the Repository
To get started, we’ll set up and clone the repository that contains all the necessary files and configurations for this project.
Open your terminal or command prompt and use the following command to clone the repository:
git clone https://github.com/SlyCreator/Seamless-CI-CD-for-EKS-Dockerized-App-Deployment-with-GitHub-Actions-ArgoCD-and-Terraform.git
You’ll find two main folders within the project: complete-project and starter-project.
- The complete-project folder contains the full codebase and can be used as a reference.
- The starter-project folder is where you'll be working and making changes.
Move the starter-project folder outside of the cloned repository if you choose to implement this guide.
Open the starter-project folder in your text editor.
Step 2: Explore the Starter Project Structure
The starter-project folder has the following structure (you can install the tree command on your computer to view it, but it is not required for this guide).
➜ starter-project git:(main) ✗ tree -L 2 -a
.
├── .github
│ └── workflows
├── .gitignore
├── .idea
│ └── workspace.xml
├── README.md
├── app
│ ├── .editorconfig
│ ├── .gitignore
│ ├── .prettierignore
│ ├── .prettierrc.json
│ ├── README.md
│ ├── craco.config.js
│ ├── package-lock.json
│ ├── package.json
│ ├── public
│ ├── src
│ ├── tsconfig.json
│ └── yarn.lock
└── infrastructure
├── k8s-manifest
└── terraform-manifest
- .github directory: Contains the workflows, including ci.yaml, which has the instructions to run the CI pipeline.
- app directory: Contains the application code that we will be dockerizing.
- infrastructure directory: Contains two directories; terraform-manifest will house the infrastructure code and k8s-manifest will hold the Kubernetes manifest files.
Step 3: Dockerizing Your Application
Next, we need to create a Dockerfile inside the app directory. Your setup should look like this after creating the Dockerfile; a quick tree -L 1 command from the app directory shows the resulting structure.
➜ app git:(main) ✗ tree -L 1
.
├── .editorconfig
├── .gitignore
├── .prettierignore
├── .prettierrc.json
├── Dockerfile
├── README.md
├── craco.config.js
├── package-lock.json
├── package.json
├── public
├── src
├── tsconfig.json
└── yarn.lock
To build the Docker image, copy the following instructions into the Dockerfile:
FROM node:16
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
EXPOSE 3000
CMD ["npm", "start"]
The Dockerfile sets up a Node.js environment, installs project dependencies, copies the application files, builds the application, exposes port 3000 for external access, and starts the application server using the npm start command.
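If you'd like to verify the image locally before wiring up CI, you can build and run it with Docker from the app directory (the image name react-2048-app here is just an example tag):
docker build -t react-2048-app .
docker run --rm -p 3000:3000 react-2048-app
The application should then be reachable at http://localhost:3000.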
Step 4: Infrastructure as Code with Terraform
We’re adopting a Terraform module approach to ensure streamlined and reusable code, aligning with the DRY (Don’t Repeat Yourself) principle. To facilitate this, we’ll leverage AWS Terraform community modules to implement each resource, and they’ve been conveniently organized in the modules directory for easy importing and use. The backend.tf, kubernetes.tf, output.tf, providers.tf, terraform.tfvars, and variables.tf files have already been configured and will be explained in the next section.
A quick tree -L 2 command from the terraform-manifest directory reveals the modular components.
➜ terraform-manifest git:(main) ✗ tree -L 2
.
├── backend.tf
├── kubernetes.tf
├── modules
│ ├── alb
│ ├── ecr
│ ├── eks
│ ├── securitygroup
│ └── vpc
├── output.tf
├── providers.tf
├── terraform.tfvars
└── variables.tf
To use Terraform for provisioning our infrastructure, we must first set up a remote backend to manage the state of our infrastructure. In this guide, we’ll use an AWS S3 bucket and an AWS DynamoDB table to facilitate this state management. To begin, we’ll create the necessary S3 bucket and DynamoDB table using the AWS CLI.
(Note: While it’s possible to create these resources via the AWS console, the terminal provides a faster alternative)
Open your terminal and execute the following commands to create the S3 bucket and DynamoDB table:
Replace your-terraform-bucket with your desired S3 bucket name. Please note that S3 bucket names must be globally unique across all existing bucket names in AWS. If you attempt to create a bucket with a name that is already in use by another AWS account, you will receive an error.
aws s3api create-bucket --bucket your-terraform-bucket --region us-east-1
aws dynamodb create-table --table-name eks-terraform-state --attribute-definitions AttributeName=LockID,AttributeType=S --key-schema AttributeName=LockID,KeyType=HASH --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5
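Optionally, you can confirm that both resources were created before moving on:
aws s3api head-bucket --bucket your-terraform-bucket
aws dynamodb describe-table --table-name eks-terraform-state --query "Table.TableStatus"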
The backend.tf file in Terraform specifies the backend configuration, which determines where Terraform stores its state data. Open backend.tf and replace the S3 bucket name and DynamoDB table name with the ones you created:
terraform {
  backend "s3" {
    bucket         = "my-eks-terraform-state-bucket" // replace with your bucket name
    key            = "terraform/terraform.state"
    region         = "us-east-1"
    dynamodb_table = "eks-terraform-state" // replace with your DynamoDB table name
  }
}
The providers.tf file configures Terraform to interact with various cloud providers and services. In this case, it specifies the AWS provider and sets the region dynamically based on a variable. Additionally, it defines the required providers and their versions, ensuring Terraform uses the specified versions for managing infrastructure.
## providers.tf
provider "aws" {
  region = var.region
}

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">=2.7.1"
    }
    random = {
      source  = "hashicorp/random"
      version = "~> 3.1.0"
    }
    local = {
      source  = "hashicorp/local"
      version = "2.1.0"
    }
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = "1.14.0"
    }
  }
}
The output.tf file in Terraform is used to define the outputs of the infrastructure that Terraform has provisioned. We have defined an output named ecr_repository_url, which exposes the URL of the Elastic Container Registry (ECR) repository that will be created.
output "ecr_repository_url" {
  description = "The URL of the ECR repository"
  value       = module.ecr.ecr_repository_url
}
The variables.tf file in Terraform is used to declare input variables that can be passed to the Terraform configuration. Open variables.tf to see the full content; an abridged version is shown below.
############# VPC ############
variable "vpc_cidr" {
  type        = string
  description = "The CIDR block for the VPC."
}
# ...

############# EKS ############
variable "cluster_name" {
  type        = string
  description = "The name of the EKS Kubernetes cluster."
}
# ...
The terraform.tfvars file in Terraform is used to set values for the variables defined in the variables.tf file. We have provided specific values for each variable without modifying the Terraform configuration files directly. Open terraform.tfvars to see the full script.
## Provider
region             = "us-east-1"
availability_zones = ["us-east-1a", "us-east-1b"]

###### VPC
vpc_cidr                   = "10.0.0.0/16"
public_subnet_cidr_blocks  = ["10.0.1.0/24", "10.0.2.0/24"]
private_subnet_cidr_blocks = ["10.0.3.0/24", "10.0.4.0/24"]

##### EKS
cluster_version  = "1.29"
desired_capacity = 2
instance_type    = "t3.medium"
max_capacity     = 5
min_capacity     = 1
Throughout this project, we will prefix our Terraform file names with numbers to indicate their creation order. In a real-world scenario, specifying such a numeric order may not be necessary. For example, the remaining resources will be defined in files named 1_vpc.tf, 2_eks.tf, and so on.
We will start by creating a file named 0_ecr.tf in the terraform-manifest directory and inserting the configuration detailed below.
module "ecr" {
source = "./modules/ecr"
name = var.ecr_repo_name
}
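For reference, a minimal ECR module of this shape might look roughly like the sketch below. This is illustrative only; the module shipped in the starter repository may differ, and only the ecr_repository_url output name is taken from the guide:
# modules/ecr/main.tf (illustrative sketch; the starter repository's module may differ)
variable "name" {
  type        = string
  description = "Name of the ECR repository."
}

resource "aws_ecr_repository" "this" {
  name = var.name
}

output "ecr_repository_url" {
  description = "The URL of the ECR repository"
  value       = aws_ecr_repository.this.repository_url
}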
Before proceeding, ensure you’re in the correct directory by navigating to starter-project/infrastructure/terraform-manifest. Then, within this directory, initialize the Terraform module by executing terraform init, followed by terraform apply -auto-approve.
terraform init
terraform apply -auto-approve
Take note of the ECR repository URL that is printed to the terminal; you'll need it when configuring GitHub Actions.
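If you need the value again later, Terraform can re-print it at any time from the same directory:
terraform output ecr_repository_url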
Step 5: Setting Up GitHub Actions
We use GitHub Actions to build the Docker image and push it to AWS ECR. This workflow is implemented in .github/workflows/ci.yaml; a sketch of it appears at the end of this step.
Begin by creating a new GitHub repository with an appropriate name. Next, extract your AWS CLI credentials and add them as secrets in your GitHub repository. You can do this by opening your terminal and running the following command:
cat ~/.aws/credentials
This will display your AWS access key ID and secret access key.
(Note: For production, create a new user with limited access to push to ECR)
Next, add these credentials to your GitHub repository secrets. Navigate to your GitHub repository, go to Settings > Secrets and variables > Actions, and add the following repository secrets: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and ECR_REPOSITORY (use the ECR URL from the Terraform output).
Now, it’s time to push your local code to the new GitHub repository. Follow these instructions to push your local changes; make sure you’re in the project root directory, starter-project:
git init
git add .
git commit -m "initial commit"
# replace the URL below with the link to your repository
git remote add origin git@github.com:username/your-repo.git
git branch -M main
git push -u origin main
- Triggers: The workflow is initiated on pushes to the main branch, but only when there are changes in the app/ directory or the workflow file itself (ci.yaml).
- Environment Variables: We define environment variables such as AWS_REGION, ECR_REPOSITORY, AWS_ACCESS_KEY_ID, and AWS_SECRET_ACCESS_KEY needed for deployment. For security reasons, we use GitHub secrets to add these variables. The AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY used in this guide are the ones configured in our terminal.
- Jobs: Contains a single job named build-and-deploy, which runs on an ubuntu-latest runner.
Steps:
- Checkout code: Checks out the repository’s code.
- Install Node: Sets up Node.js for building the application.
- Install dependencies: Installs application dependencies and builds the project.
- Configure AWS credentials: Configures AWS credentials to interact with EKS and ECR.
- Login to Amazon ECR: Logs into Amazon ECR using AWS credentials.
- Build, tag, and push image to Amazon ECR: Builds the Docker image, tags it with the commit SHA and ‘latest’, then pushes it to Amazon ECR.
During the build and tag step, we use a dual-image approach. One image is tagged as ‘latest’, while another is tagged with the commit SHA. This ensures that we always have the latest image available for deployment, while also allowing us to preserve specific versions and easily rollback when necessary.
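For orientation, here is a minimal sketch of what such a workflow can look like. It follows the triggers, secrets, and steps described above, but the ci.yaml shipped in the starter project may differ in details such as action versions and exact step definitions:
# .github/workflows/ci.yaml (sketch)
name: ci
on:
  push:
    branches: [main]
    paths:
      - "app/**"
      - ".github/workflows/ci.yaml"
env:
  AWS_REGION: us-east-1
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Install Node
        uses: actions/setup-node@v4
        with:
          node-version: 16
      - name: Install dependencies
        working-directory: app
        run: |
          npm install
          npm run build
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}
      - name: Login to Amazon ECR
        uses: aws-actions/amazon-ecr-login@v2
      - name: Build, tag, and push image to Amazon ECR
        working-directory: app
        run: |
          docker build -t "${{ secrets.ECR_REPOSITORY }}:${{ github.sha }}" -t "${{ secrets.ECR_REPOSITORY }}:latest" .
          docker push "${{ secrets.ECR_REPOSITORY }}:${{ github.sha }}"
          docker push "${{ secrets.ECR_REPOSITORY }}:latest"
Once this workflow is on the main branch, any change under app/ rebuilds the image and pushes both the commit-SHA and latest tags to ECR.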
Step 6: Deploying to EKS
We will deploy a VPC, an EKS cluster, the AWS Load Balancer Controller, and ArgoCD using Terraform. Utilizing Terraform modules has several benefits, such as promoting reusable, maintainable, and organized code.
- Creating the VPC: First, we will import the VPC module to create a VPC with public and private subnets across two availability zones, using the CIDR blocks defined in our variables. Create a file named 1_vpc.tf and paste the following configuration:
# 1_vpc.tf
module "vpc" {
  source                     = "./modules/vpc"
  vpc_name                   = "react-app-2048-vpc"
  cluster_name               = local.cluster_name
  vpc_cidr                   = var.vpc_cidr
  availability_zones         = var.availability_zones
  public_subnet_cidr_blocks  = var.public_subnet_cidr_blocks
  private_subnet_cidr_blocks = var.private_subnet_cidr_blocks
  environment                = var.environment
}
- Creating the EKS Cluster: Next, create the EKS cluster using the following script. Create a file named 2_eks.tf and paste the content:
# 2_eks.tf
module "eks-cluster" {
  source             = "./modules/eks"
  cluster_name       = local.cluster_name
  cluster_version    = var.cluster_version
  desired_capacity   = var.desired_capacity
  instance_type      = var.instance_type
  max_capacity       = var.max_capacity
  min_capacity       = var.min_capacity
  private_subnet_ids = module.vpc.private_subnets
  vpc_id             = module.vpc.vpc_id
  depends_on         = [module.vpc]
}
- Creating the ALB Controller: To set up the AWS Load Balancer Controller, create a file named 3_alb.tf and use the following script. In order to grant EKS permission to create AWS resources on our behalf, the module also creates an IAM role; see the details in modules/alb/main.tf:
# 3_alb.tf
module "alb" {
  source       = "./modules/alb"
  cluster_name = local.cluster_name
  environment  = var.environment
  oidc_arn     = module.eks-cluster.oidc_provider_arn
  oidc_url     = module.eks-cluster.cluster_oidc_issuer_url
  depends_on   = [module.eks-cluster]
}
- Deploying ArgoCD: Finally, set up ArgoCD by creating a file named 4_argoCD.tf and pasting the following script:
# 4_argoCD.tf
resource "helm_release" "argocd" {
  name             = "argocd"
  repository       = "https://argoproj.github.io/argo-helm"
  chart            = "argo-cd"
  namespace        = "argocd"
  version          = "4.9.7"
  create_namespace = true
  depends_on       = [module.eks-cluster]
}
We have now defined all the resources we will be creating. Open your terminal, ensure you’re in the terraform-manifest directory, and then run the following commands.
terraform init
terraform apply -auto-approve
After approximately 20 minutes, the EKS cluster will be created. Run the command below to configure your terminal to access the cluster using kubectl. Make sure you use the cluster name defined in terraform.tfvars.
aws eks update-kubeconfig --name my-eks-app --region us-east-1
We’ve successfully deployed our EKS cluster. To confirm that the cluster has two worker nodes, run the command below, which lists the nodes.
kubectl get nodes
Next, let’s create the dev namespace using the command below.
kubectl create namespace dev
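At this point you can also confirm that the ArgoCD release installed by the Terraform script is up and running:
kubectl get pods -n argocd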
Step 7: Writing Kubernetes Manifests for Your Application
We will now write Kubernetes manifests to deploy our application. We’ll create a deployment.yaml for running the Docker application image, a service.yaml to expose the application, and an ingress.yaml to route traffic based on the URL. These manifests ensure seamless traffic routing to the application.
Create the Kubernetes manifest files in the k8s-manifest directory and paste the following configurations:
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: react-app-2048
  namespace: dev
spec:
  replicas: 2
  selector:
    matchLabels:
      app: react-app
  template:
    metadata:
      labels:
        app: react-app
    spec:
      containers:
        - name: react-app-container
          image: replace-with-your-ecr-url.us-east-1.amazonaws.com/react-2048-app:latest
          ports:
            - containerPort: 3000
# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: react-app-2048-service
  namespace: dev
spec:
  selector:
    app: react-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
  type: LoadBalancer
# ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: react-app-2048-ingress
  namespace: dev
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: react-app-2048-service
                port:
                  number: 80
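ArgoCD deploys whatever is committed to your Git repository, so commit and push these manifest files before wiring up ArgoCD in the next step:
git add infrastructure/k8s-manifest
git commit -m "add kubernetes manifests"
git push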
Step 8: Continuous Deployment with ArgoCD
Remember that we have already installed ArgoCD using the Terraform script 4_argoCD.tf. We now need to expose the argocd-server service as a LoadBalancer to make the dashboard accessible.
kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'
This command changes the argocd-server service to type LoadBalancer. You can then list the services in the argocd namespace to find its external address:
kubectl get svc -n argocd
Copy the address shown under EXTERNAL-IP for argocd-server and open it in your browser. ArgoCD serves a self-signed certificate by default, so your browser may show a security warning; click Advanced and proceed.
To log in to the ArgoCD dashboard, use the default username admin. The initial password is randomly generated and can be retrieved by running the command below. (Note: Ignore any trailing “%” symbol your shell may print.)
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d
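Alternatively, if you have the argocd CLI installed, you can log in from the terminal instead of the browser (replace <EXTERNAL-IP> with the address from the previous step; --insecure is needed because of the self-signed certificate):
argocd login <EXTERNAL-IP> --username admin --password <password-from-the-command-above> --insecure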
Step 9: Integrating GitHub Actions with ArgoCD
Click Settings in the left menu and select Repositories.
Click CONNECT REPO USING HTTPS.
Fill in the form with the repository URL, your GitHub username, and a password. For the password, create a GitHub Personal Access Token.
You can create a GitHub Personal Access Token by clicking your profile picture on GitHub > Settings > Developer settings > Personal access tokens.
After that, click Connect and you should receive a success message.
Now click Create Application and fill in the details.
Since we added the repository previously, it will appear in the dropdown. For the Path field, point ArgoCD to the directory containing the Kubernetes manifests, infrastructure/k8s-manifest.
After that, click Create and ArgoCD will create and deploy our application.
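Alternatively, if you prefer a declarative setup over clicking through the UI, the same application can be described with an ArgoCD Application manifest similar to the sketch below. The metadata name and repoURL are placeholders; point them at your own repository:
# application.yaml (illustrative sketch)
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: react-app-2048
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/username/your-repo.git
    targetRevision: main
    path: infrastructure/k8s-manifest
  destination:
    server: https://kubernetes.default.svc
    namespace: dev
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
Applying it with kubectl apply -f application.yaml has the same effect as the Create Application form above.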
You can access the application using the URL created by the ingress.
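The ALB hostname appears in the ADDRESS column of the ingress once the AWS Load Balancer Controller has provisioned it:
kubectl get ingress -n dev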
Don’t forget to delete the resources to avoid unnecessary cost:
- Delete the images from the ECR repository using the AWS console.
- Tear down the remaining resources with the command below.
terraform destroy -auto-approve
Conclusion
This article guides you through deploying applications using GitHub Actions, ArgoCD, Terraform, and Amazon EKS. We automate, manage, and scale application lifecycles efficiently with these tools. GitHub Actions handles CI/CD pipelines, ArgoCD ensures cluster synchronization, Terraform manages infrastructure, and Amazon EKS supports scalable Kubernetes applications.
If you found this insightful, 👏🏻 clap 👏🏻 (up to 50x); it encourages me to keep writing. And don’t forget to share this post!
Follow me on LinkedIn