Run Serverless Containers Using Amazon EKS & AWS Fargate

May 13, 2022

Introduction

In this article, we will walk through a use case that shows how to run serverless containers using Amazon EKS and AWS Fargate.

Using Amazon EKS to run Kubernetes on AWS gives your team more time to just focus on core product development instead of managing the infrastructure of core Kubernetes. Kubernetes on AWS has good scalability, is easily upgradable, has the AWS Fargate option to run Serverless containers, and more.

Architecture


The above architecture represents running Kubernetes on AWS using Amazon EKS.

What is Amazon EKS?

Amazon Elastic Kubernetes Service (Amazon EKS) is a managed service that you can use to run Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes control plane or nodes. Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications. [1]

What is Amazon EKS Cluster?

Amazon EKS Cluster consists of two primary components:

  • The Amazon EKS Control Plane - configures and manages Kubernetes services
  • Amazon EKS Worker Nodes - run user applications

EKS provides different ways to configure the worker nodes that execute application containers: Self-Managed, Managed, and Fargate.

The EKS cluster consists of the above two components deployed in two separate VPCs: the control plane runs in an AWS-managed VPC, while the worker nodes run in your own VPC.


What is AWS Fargate?

AWS Fargate is a serverless, pay-as-you-go compute engine that lets you focus on building applications without managing servers. AWS Fargate is compatible with both Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS). [2]

Recently, AWS announced that AWS Fargate now delivers faster scaling of applications, improving performance and reducing wait time. Improvements made over the last year let you scale applications up to 16X faster and increase task launch rates, making it easier to build and run applications at a larger scale on Fargate.

How does Amazon EKS work?


Amazon EKS Workflow:

  1. Create EKS Cluster
  2. Deploy Compute - Launch Amazon EC2 nodes or Deploy your workloads to AWS Fargate
  3. Connect to EKS - Configure Kubernetes tools (such as kubectl) to communicate with your cluster
  4. Deploy - Manage Workloads on your Amazon EKS cluster using the AWS Management Console
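The workflow above can be sketched as CLI commands. This is a rough outline, not something to run yet; it assumes the cluster name used later in this article, plus an AWS account with configured credentials. The manifest name in step 4 is a placeholder.

```shell
# 1. Create the EKS cluster (Fargate-backed compute)
eksctl create cluster --name sg-fargate-eks-cluster --fargate

# 2. Point kubectl at the new cluster
aws eks update-kubeconfig --name sg-fargate-eks-cluster --region ap-south-1

# 3. Verify the connection to the cluster
kubectl get nodes

# 4. Deploy a workload (manifest created later in this article)
kubectl apply -f sg-sample-deployment.yaml
```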

Environment Setup - Make sure the following are installed:

  • AWS CLI – CLI tool for working with AWS services, including Amazon EKS
  • kubectl - CLI tool for working with Kubernetes Clusters
  • eksctl - CLI tool for working with EKS clusters
  • Docker Engine - For building and containerizing applications
  • Node.js - JavaScript runtime for application development

Set Up the AWS CLI Default Configuration - aws configure

As part of this AWS configuration step, make sure to create an IAM user with programmatic access and the relevant credentials. To learn how to create this AWS IAM user, see this article.

  
aws configure

AWS Access Key ID [None]: XXXXXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Default region name [None]: ap-south-1
Default output format [None]:
  

Create Amazon EKS Cluster

There are multiple ways to create an Amazon EKS Cluster (like AWS Console, AWS SDK, & more).

Create an EKS Cluster with Fargate Nodes using the eksctl command below:

  
eksctl create cluster --name sg-fargate-eks-cluster --fargate
  

This single command provisions an EKS cluster along with a VPC, subnets, IAM roles, route tables, a Fargate profile, Fargate nodes, and more via CloudFormation stacks.

The command takes 15-25 minutes, and your terminal will display all the resources created as part of the EKS cluster.
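Once the command finishes, a quick sanity check from your terminal confirms the cluster is up. These commands need the live cluster from the previous step; eksctl configures kubectl for you by default, and Fargate-backed nodes typically show up with "fargate-" prefixed names.

```shell
# List clusters known to eksctl in this region
eksctl get cluster --region ap-south-1

# Fargate-backed nodes appear once pods are scheduled on them
kubectl get nodes
```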

Amazon EKS Cluster

Amazon EKS Cluster Fargate Profiles

CloudFormation Stack

Final Project Folders and Files Setup (For Reference)

Below is what your final project will look like after following each section of this article. I’m sharing it now so you can compare what I have with what you have as the article continues.

  
index.js <-- server
Dockerfile <-- instructions for container start-up
sg-sample-deployment.yaml <-- kubectl deploy spec
sg-sample-service.yaml <-- kubectl service spec
  
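If you want to create that skeleton up front, it takes a few commands (the folder name here is my choice; any name works):

```shell
# Create the project folder and the empty files filled in throughout the article
mkdir -p sg-fargate-eks-app && cd sg-fargate-eks-app
touch index.js Dockerfile sg-sample-deployment.yaml sg-sample-service.yaml

# List the files, one per line
ls -1
```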

Create a NodeJs Application

Create a Node.js project folder, initialize npm, and install Express to run the server:

  
npm init
npm install express
  

Create an index.js file with the code below.

It creates an Express server that listens on port 80:

  
const express = require('express')
const app = express()
app.get('/', (req, res) => {
    res.send('NodeJs App Running on Amazon EKS Fargate!\n')
})
app.listen(80, () => {
    console.log("server connected")
})
  

Dockerize the NodeJs Application

Create a Dockerfile in the root of the same project:

  
FROM node:18
WORKDIR /usr/src/app
# Install dependencies first so Docker can cache this layer
COPY package*.json ./
RUN npm install
COPY index.js .
EXPOSE 80
CMD ["node", "index.js"]
  

Build and Run Docker image

  
docker build -t sg-fargate-eks .
docker run -p 80:80 -d sg-fargate-eks
  

Locally, try to access http://localhost; if all is good, it should return the response below.

  
NodeJs App Running on Amazon EKS Fargate!
  

Container Registries

Amazon Elastic Container Registry (Amazon ECR) is a fully managed container registry that easily stores, shares, and deploys container images.


Create Amazon ECR Repository

Amazon ECR → Repositories → Create Repository

This newly created ECR repository will store the Node.js application's Docker image, which EKS will pull.
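The repository can also be created from the CLI instead of the console. This sketch assumes the repository name used in the push step below and an AWS account with ECR permissions:

```shell
# Create the ECR repository that will hold the app image
aws ecr create-repository \
  --repository-name sg-fargate-eks-app \
  --region ap-south-1
```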

Amazon ECR Login

Log Docker in to ECR so you can perform operations on the ECR repository. I’m using ap-south-1; feel free to change the commands to match your region.

  
aws ecr get-login-password --region ap-south-1 | docker login --username AWS --password-stdin xxxxxxxxxxxx.dkr.ecr.ap-south-1.amazonaws.com
Login Succeeded
  

Push Docker Image to Amazon ECR Repository

Tag the Node.js app image we built earlier with the ECR repository URI, then push it to the Amazon ECR repository.

  
docker tag sg-fargate-eks:latest xxxxxxxxxxxx.dkr.ecr.ap-south-1.amazonaws.com/sg-fargate-eks-app:latest
docker image push xxxxxxxxxxxx.dkr.ecr.ap-south-1.amazonaws.com/sg-fargate-eks-app:latest
  

Deploy to Amazon EKS Fargate

To deploy the application, we use kubectl, a CLI tool that communicates with the Kubernetes control plane.

Create Deployment

We will create a Deployment-type Kubernetes workload to deploy the app.

sg-sample-deployment.yaml

This file creates a Deployment-type workload within the cluster. The container image it references is the one we pushed to the Amazon ECR repository in the previous steps. The container listens on port 80 for HTTP requests and runs with 1 replica.

We are asking EKS to run 1 replica of the sg-fargate-eks-app container across the cluster. You can change that number on the fly to scale the application up or down. Running multiple replicas provides high availability, since pods are scheduled across multiple nodes; the replica count's purpose is to keep the specified number of pod instances running in the cluster.
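For example, the replica count can be changed on the fly without editing the manifest. This is a sketch against the live cluster, using the deployment name created below:

```shell
# Scale the deployment from 1 replica to 3
kubectl scale deployment sg-fargate-eks-deployment --replicas=3

# Watch the new pods get scheduled onto Fargate nodes
kubectl get pods -w
```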

  
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sg-fargate-eks-deployment
  labels:
    app: sg-fargate-eks-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sg-fargate-eks-app
  template:
    metadata:
      labels:
        app: sg-fargate-eks-app
    spec:
      containers:
      - name: sg-fargate-eks-app
        image: xxxxxxxxxxxx.dkr.ecr.ap-south-1.amazonaws.com/sg-fargate-eks-app:latest
        ports:
        - containerPort: 80
  

The above deployment file is used in the below command.

  
kubectl create -f sg-sample-deployment.yaml
  

Launch this deployment using kubectl. It creates a ‘Deployment’-type Kubernetes workload named ‘sg-fargate-eks-deployment’ in the ‘default’ namespace. The application is deployed to pods on Fargate nodes using the provided container image.

The above image shows the EKS Kubernetes Deployment and its pods.

The kubectl command below lists the ‘pods’ running on your cluster within the default namespace.

  
kubectl get pods

NAME                                         READY   STATUS    RESTARTS   AGE
sg-fargate-eks-deployment-74b9d6f5b7-pzhh9   1/1     Running   0          25h
  
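If a pod is stuck in Pending or CrashLoopBackOff, kubectl can show what is happening. The pod name here is taken from the output above; yours will differ:

```shell
# Detailed events for a pod (scheduling, image pulls, Fargate provisioning)
kubectl describe pod sg-fargate-eks-deployment-74b9d6f5b7-pzhh9

# Application logs from the container
kubectl logs sg-fargate-eks-deployment-74b9d6f5b7-pzhh9
```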

The kubectl command below lists the ‘deployments’ running on your cluster within the default namespace.

  
kubectl get deployments

NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
sg-fargate-eks-deployment   1/1     1            1           25h
  

The above image shows the EKS Fargate node running the pods.

Create Service

Exposing the internal components of a cluster directly to the outside world is not secure, so it is always better to put a service in front of the cluster.

For this, we will create a Kubernetes service. Kubernetes supports different types of services; see the Kubernetes documentation for more details. In this article, we use a LoadBalancer-type service.

Overall, we are using Fargate for the nodes, an ELB for exposing services to the outside world, and a VPC for networking inside the EKS cluster.

sg-sample-service.yaml

Create a Kubernetes service of type LoadBalancer by creating the sg-sample-service.yaml file with the following contents:

  
apiVersion: v1
kind: Service
metadata:
  name: sg-fargate-eks-service
spec:
  type: LoadBalancer
  selector:
    app: sg-fargate-eks-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  

The below command will create a Kubernetes service of the type LoadBalancer.

  
kubectl create -f sg-sample-service.yaml
  

Within a few minutes, the Kubernetes service will be up and running, and you will be able to access the application using the “EXTERNAL-IP” of the service.

  
kubectl get svc

NAME                     TYPE           CLUSTER-IP      EXTERNAL-IP                          PORT(S)        AGE
sg-fargate-eks-service   LoadBalancer   10.XXX.XXX.XX   xxxxx.ap-south-1.elb.amazonaws.com   80:30953/TCP   28h
kubernetes               ClusterIP      10.XXX.X.X      <none>                               443/TCP        31h
  

The kubectl command above lists the Kubernetes services’ details, including the EXTERNAL-IP.

  
curl xxxxx.ap-south-1.elb.amazonaws.com

NodeJs App Running on Amazon EKS Fargate!
  

The service should be accessible using the EXTERNAL-IP, either from a browser or from the command line with curl as above.

Cleanup Resources on AWS

  
kubectl delete -f sg-sample-service.yaml
kubectl delete -f sg-sample-deployment.yaml
eksctl delete cluster --name sg-fargate-eks-cluster
  

The kubectl and eksctl commands above remove the resources we created: the Kubernetes Service, the Kubernetes Deployment, and the EKS cluster.

Conclusion

As we saw, using Amazon EKS to run Kubernetes on AWS frees your team to focus on core product development instead of managing Kubernetes infrastructure, while providing scalability, easy upgrades, and the AWS Fargate option for serverless containers.

Amazon EKS with AWS Fargate lets you run serverless containers. We can provision, manage, and deploy Amazon EKS resources using tools like eksctl, kubectl, and the AWS CLI.

This article covers the basic ideas around Amazon EKS with AWS Fargate, giving you a starting point to explore further on your own.

Sources

[1] https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html
[2] https://aws.amazon.com/fargate/
