Project: Refactor A Monolith Application to Microservices

Jonah Apagu
Aug 1, 2022

Project Overview

This was the third project I completed while undertaking the Udacity Cloud Developer Nanodegree program sponsored by ALX-T. The project application, Udagram, is an image-filtering application that allows users to register and log into a web client, post photos to the feed, and process photos using an image-filtering microservice. It has two components:

  1. Frontend: an Angular web application built with the Ionic framework.
  2. Backend: a RESTful Node/Express API.

In this project, the backend was split into two microservices, backend-feed and backend-user. A reverse proxy, through which the frontend communicates with the backend services, was also deployed as a microservice.

Prerequisites

The following tools and accounts are needed for the steps below: an AWS account; Node.js and npm; the Ionic CLI; Docker and Docker Compose; the AWS CLI, eksctl, and kubectl; and accounts on GitHub, DockerHub, and Travis CI.

Project steps

In this project, I refactored the monolith Udagram application into microservices using the following steps:

  • All project prerequisite software and CLIs were installed.
  • I created an S3 bucket, with versioning and encryption disabled, to store the user-uploaded pictures.
  • CORS (Cross-Origin Resource Sharing) configuration was added to the S3 bucket to allow the application, running outside of AWS, to interact with the bucket using the POST, GET, PUT, DELETE and HEAD methods, as sketched below.
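A minimal sketch of such a CORS rule set, written here in CloudFormation-style YAML (the S3 console accepts an equivalent JSON document); the wildcard origin and headers are illustrative assumptions, not the exact values used:

    CorsConfiguration:
      CorsRules:
        - AllowedOrigins: ["*"]      # assumption: allow any origin
          AllowedHeaders: ["*"]
          AllowedMethods: [GET, POST, PUT, DELETE, HEAD]
          MaxAge: 3000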
  • I created a PostgreSQL database using AWS RDS; this database is used by the project to store user metadata, such as user credentials. The database is accessed by the application whether it runs locally or in the cloud.
  • Environment variables were then set up locally to store all sensitive information, such as the Postgres username and password and the AWS S3 bucket details. The environment variables were also saved in a “set_env.sh” file in the project directory.
  • Running the project locally as a monolith
  1. The environment variables were set using the command “source set_env.sh”.
  2. The backend dependencies specified in the package.json file were installed using the “npm i” command.
  3. The backend server was started using the “npm run dev” command after which the backend was verified to be running using the URL “http://localhost:8080/api/v0/feed”.
  4. The package dependencies for the frontend were installed using the “npm install -f” command.
  5. The frontend application was then built by compiling it into a static file using the command “ionic build”.
  6. The frontend was then run locally using the command “ionic serve”. The frontend was accessed using the URL http://localhost:8100.
  • Running the project locally in a multi-container environment.
  1. At this point the backend application code contained logic for both the /users/ and /feed/ endpoints. I decomposed the API code into two separate services that can be run independently of one another, and created two new directories for the backend-feed and backend-user services.
  2. I then created Dockerfiles for the backend-feed, backend-user and frontend directories. The Dockerfiles contain the information Docker needs to build images for the two backend microservices and the frontend.
  3. A new directory called reverseproxy was created, with a Dockerfile and an nginx.conf file inside it; these are used to build another container, named reverseproxy, that runs the Nginx server. The reverseproxy service adds a layer between the frontend and the backend APIs so that the frontend only uses a single endpoint and doesn’t need to know that the backend is deployed as separate services. The nginx.conf file associates all the service endpoints: the Nginx container exposes port 8080 and routes http://localhost:8080/api/v0/feed requests to the backend-feed:8080 container, and http://localhost:8080/api/v0/users requests to the backend-user:8080 container.
Structure of the Microservices application
  • A docker-compose-build.yaml file was created in the project directory. The docker-compose command uses this YAML file to configure the application’s services in one go: all the service images are built from the configuration file with a single command; otherwise, the images would have to be built one-by-one for each of the services. A sketch follows.
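A minimal sketch of what such a build file might look like, assuming the directory names described above; the image tags and the DockerHub username placeholder are assumptions, and any names work as long as they match the repositories later created on DockerHub:

    version: "3"
    services:
      backend-user:
        build:
          context: ./backend-user      # directory created for the user service
        image: <dockerhub-username>/udagram-api-user
      backend-feed:
        build:
          context: ./backend-feed      # directory created for the feed service
        image: <dockerhub-username>/udagram-api-feed
      frontend:
        build:
          context: ./frontend
        image: <dockerhub-username>/udagram-frontend
      reverseproxy:
        build:
          context: ./reverseproxy
        image: <dockerhub-username>/reverseproxy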
  • Images for the microservices were then created locally using the following commands:
  1. “docker image prune --all” to remove any unused images.
  2. “docker-compose -f docker-compose-build.yaml build --parallel” to build all the images in one go.
  • Another YAML file, docker-compose.yaml, was created in the project’s parent directory. This file is configured to use the existing images and create containers; while creating the containers, it defines the port mappings and container dependencies, as sketched below.
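A sketch of such a compose file, reusing the image names assumed above; the database variables are passed through from the shell in which set_env.sh was sourced, and their names are assumptions modelled on that file:

    version: "3"
    services:
      reverseproxy:
        image: <dockerhub-username>/reverseproxy
        ports:
          - "8080:8080"                # the single endpoint the frontend talks to
        depends_on:
          - backend-user
          - backend-feed
      backend-user:
        image: <dockerhub-username>/udagram-api-user
        environment:
          POSTGRES_USERNAME: $POSTGRES_USERNAME   # assumption: names from set_env.sh
          POSTGRES_PASSWORD: $POSTGRES_PASSWORD
      backend-feed:
        image: <dockerhub-username>/udagram-api-feed
        environment:
          POSTGRES_USERNAME: $POSTGRES_USERNAME
          POSTGRES_PASSWORD: $POSTGRES_PASSWORD
      frontend:
        image: <dockerhub-username>/udagram-frontend
        ports:
          - "8100:80"                  # assumption: the built frontend is served on port 80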
  • After the images for the microservices were successfully built, the application was started using the “docker-compose up” command.
  • The application running locally was accessed using the URL “http://localhost:8100”.
  • Setting up a Travis Continuous Integration pipeline to build the application code into Docker images and push them to DockerHub.
  1. DockerHub repositories were created for each of the four microservices (backend-feed, backend-user, frontend and reverseproxy) using the DockerHub web console.
  2. The project’s GitHub repository was integrated with Travis CI, and the DockerHub username and password were set as environment variables in the repository settings on Travis so that they can be used inside the .travis.yml file while pushing the images to DockerHub.
  3. A “.travis.yml” configuration file was created in the project directory (locally). In addition to the mandatory sections, the Travis file automatically reads the Dockerfiles, builds the images, and pushes them to DockerHub; a sketch is shown below.
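A minimal sketch of such a .travis.yml, assuming DOCKER_USERNAME and DOCKER_PASSWORD are the environment variable names set on Travis; the build commands mirror the local ones:

    language: minimal
    services:
      - docker
    install:
      - docker-compose -f docker-compose-build.yaml build --parallel
    script:
      # log in with the credentials stored in the Travis repository settings
      - echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin
      - docker-compose -f docker-compose-build.yaml push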
  4. The Travis build process was then triggered by pushing the .travis.yml file to the GitHub repository.
Travis build successfully completed
DockerHub Repositories
  • Orchestrating the containers using Kubernetes.
  1. An Elastic Kubernetes Service (EKS) cluster and node group were created with eksctl using the command “eksctl create cluster --name myCluster --region=us-east-1 --nodes-min=2 --nodes-max=3”. An equivalent declarative config is sketched below.
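For reference, the same cluster can also be described declaratively and created with “eksctl create cluster -f cluster.yaml”; this is a sketch using the flag values above, with the node-group name being an illustrative assumption:

    apiVersion: eksctl.io/v1alpha5
    kind: ClusterConfig
    metadata:
      name: myCluster
      region: us-east-1
    nodeGroups:
      - name: workers          # assumption: any node-group name works
        minSize: 2
        maxSize: 3
        desiredCapacity: 2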
  2. A kubeconfig file was then created to connect the newly created EKS cluster with kubectl using the following commands:

“aws sts get-caller-identity” to verify the IAM credentials that kubectl will use.

“aws eks update-kubeconfig --region us-east-1 --name myCluster” to write the new cluster’s connection details into the kubeconfig file.

“kubectl get svc” to verify the configuration.

  • Deployment of the microservices.
  1. A ConfigMap file named “env-configmap.yaml” was created in the project directory containing all the configuration values (non-confidential environment variables).
  2. A secret file named “env-secret.yaml” was created in the project directory to store the PostgreSQL username and password, encoded in Base64.
  3. A secret file named “aws-secret.yaml” was created in the project directory to store the AWS credentials, encoded in Base64. The credentials were encoded on https://www.base64encode.org/. Sketches of these files are shown below.
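Minimal sketches of the ConfigMap and the database Secret, with placeholder values; the key names are assumptions modelled on set_env.sh, and the Base64 strings must be generated from the real credentials (for example with “echo -n 'value' | base64”):

    # env-configmap.yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: env-config
    data:
      AWS_BUCKET: my-udagram-bucket    # placeholder values
      AWS_REGION: us-east-1
      POSTGRES_HOST: mydb.xxxxxxxx.us-east-1.rds.amazonaws.com
    ---
    # env-secret.yaml
    apiVersion: v1
    kind: Secret
    metadata:
      name: env-secret
    type: Opaque
    data:
      POSTGRES_USERNAME: cG9zdGdyZXM=  # base64 of "postgres" (placeholder)
      POSTGRES_PASSWORD: cGFzc3dvcmQ=  # base64 of "password" (placeholder)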
  4. Deployment configuration YAML files were created individually for each of the services. The configuration files contain details such as the image location and the resources to be used.
  5. Service configuration YAML files defining the right service/port mappings were also created for each of the microservices. A sketch of the backend-feed pair is shown below.
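A sketch of the backend-feed pair, assuming the image name used earlier and that the API listens on port 8080; the replica count and labels are illustrative assumptions:

    # backend-feed-deployment.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: backend-feed
    spec:
      replicas: 2                      # assumption: illustrative count
      selector:
        matchLabels:
          service: backend-feed
      template:
        metadata:
          labels:
            service: backend-feed
        spec:
          containers:
            - name: backend-feed
              image: <dockerhub-username>/udagram-api-feed
              ports:
                - containerPort: 8080
              envFrom:
                - configMapRef:
                    name: env-config   # the ConfigMap applied above
                - secretRef:
                    name: env-secret   # the Secret applied above
    ---
    # backend-feed-service.yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: backend-feed
    spec:
      selector:
        service: backend-feed
      ports:
        - port: 8080
          targetPort: 8080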
  6. After creating all the necessary ConfigMap, Secret, deployment and service YAML files, the ConfigMap and Secret files were applied using the following commands:

“kubectl apply -f aws-secret.yaml”

“kubectl apply -f env-secret.yaml”

“kubectl apply -f env-configmap.yaml”

7. Each of the services was then deployed by running the following commands against the deployment and service files of all the microservices (shown here for backend-feed):

  • “kubectl apply -f backend-feed-deployment.yaml”
  • “kubectl apply -f backend-feed-service.yaml”

8. External IP addresses for the reverseproxy and frontend services were then exposed using the following commands:

  • “kubectl get deployments” to check the deployment names and pod status.
  • “kubectl expose deployment frontend --type=LoadBalancer --name=publicfrontend” to create an external load balancer and assign a fixed, external IP to the service; the same was done for the reverseproxy deployment.
  • “kubectl get services” to check the name, ClusterIP, and External IP of all deployments.
  • “kubectl get pods” to get the pods and their status.

9. The API endpoints in the environment.ts and environment.prod.ts files in the frontend directory were then changed from the local endpoint to the exposed IP address of the reverseproxy. This external IP is what lets the frontend reach the reverseproxy.

10. A new frontend image was then built and pushed to DockerHub using the following commands:

  • “docker build . -t [Dockerhub-username]/udagram-frontend:v2”
  • “docker push [Dockerhub-username]/udagram-frontend:v2”

11. The frontend was then re-deployed to the Kubernetes cluster, after updating the frontend-deployment.yaml file with the new frontend image tag, using the following command:

  • “kubectl set image deployment frontend frontend=[Dockerhub-username]/udagram-frontend:v2”
Pods Showing running status
Deployed Services
  • The deployed application was then tested using the external IP of the frontend in a browser.
  • A Horizontal Pod Autoscaler (HPA) was then set up for the microservices using the following command:

“kubectl autoscale deployment backend-user --cpu-percent=70 --min=3 --max=5”.
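The same autoscaler can be written declaratively and applied with kubectl; a sketch assuming the autoscaling/v2 API:

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: backend-user
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: backend-user
      minReplicas: 3
      maxReplicas: 5
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 70   # matches --cpu-percent=70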

  • The Kubernetes Metrics Server was then installed using the command:

“kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml”

This enables the HPA to pull resource metrics from the server. The command “kubectl get deployment metrics-server -n kube-system” was used to verify the deployed Metrics Server.

  • A role was assigned to one of the cluster nodes using the command:

“kubectl label node node-id node-role.kubernetes.io/worker=worker”

“kubectl get nodes” was used to verify that a role was assigned to one of the nodes.

Horizontal Pod Autoscaler
Logs showing activities in the microservices
  • After the project was reviewed and passed, all running resources were shut down and deleted.

Google Docs Link

Project’s Github Repository
