A Step-by-Step Guide to Continuous Deployment on Kubernetes

Tomas Fernandez

Tomas is an independent developer and a writer at Semaphore. He studied electronics engineering at Buenos Aires University. Before joining Semaphore, he worked as a web developer, sysadmin and database administrator for 18 years. When he's not working, he enjoys reading, swimming and sailing.

A long time ago, in a job far, far away, I was tasked with switching our old-school LAMP stacks over to Kubernetes. My boss at the time, always starry-eyed for new technologies, announced the change should only take a few days — a bold statement considering we didn't even have a grasp on how containers worked yet.

After reading the official docs and Googling around, I began to feel overwhelmed. There were too many new concepts to learn: there were the pods, the containers and the replicas. To me, it seemed Kubernetes was reserved for a clique of sophisticated developers.

I then did what I always do in these cases: I learned by doing. Going through a simple example goes a long way in understanding intricate subjects. So, I walked through the deployment process on my own.

In the end, we did it, albeit nowhere near the prescribed few days: it took us almost a month to create the three clusters for development, testing and production. This is not that bad when you consider that the upgrade team I was part of consisted of three complete Kubernetes neophytes. It was hard, but well worth the effort.

This post is what I would have liked to read at that time: a detailed, step-by-step account of how to deploy an app to Kubernetes. By the end of this article, you'll have a working Kubernetes deployment and a continuous delivery workflow.

Continuous Integration and Delivery

Continuous Integration (CI) is the practice of building and testing the application on each update. By working in small increments, errors are detected earlier and promptly resolved.

Once integration is complete and all tests have passed, we can add Continuous Delivery (CD) to automate the release and deployment process. A project that uses CI/CD can make more frequent and reliable releases.

We'll use Semaphore, a fast, powerful and easy-to-use Continuous Integration and Delivery (CI/CD) platform, to automate the whole process:

  1. Install project dependencies.
  2. Run unit tests.
  3. Build a Docker image.
  4. Push the image to Docker Hub.
  5. Provide a one-click Kubernetes deployment.

For the application, we have a Ruby Sinatra microservice that exposes a few HTTP endpoints. The project already includes everything needed for the deployment, but some assembly is required.

Getting Ready

Before doing anything, you'll need to sign up for a GitHub and a Semaphore account. Additionally, create a Docker Hub login for your Docker images.

Next, you should install some tools on your machine as well:

  • Git: to handle the code.
  • curl: the Swiss Army knife of networking.
  • kubectl: to control your cluster remotely.

Of course, let's not forget Kubernetes. Most cloud providers offer this service in one form or another, so shop around and see what fits your needs. The lowest-end machine and cluster size is enough to run our example app. I like starting from a three-node cluster, but you can get away with just one node.

Once the cluster is ready, download the kubeconfig file from your provider. Some providers let you download it directly from their web console, while others require a helper program. We'll need this file to connect to the cluster.

With that out of the way, we're ready to get started. The first thing to do is to fork the repository.

Fork the Repository

Fork the demo app we'll be using throughout this post.

  1. Go to the semaphore-demo-ruby-kubernetes repository and click the Fork button on the top right side.
  2. Click the Clone or download button and copy the address.
  3. Clone the repository: $ git clone https://github.com/your_repository_path…

To connect your new repository with Semaphore:

  1. Log in to your Semaphore account.
  2. Follow the link in the sidebar to create a new project.
  3. Click on the Add Repository button next to your repository.

Testing With Semaphore

Continuous Integration makes testing fun and effective again. A well-thought-out CI pipeline creates a short feedback loop that catches errors early, before they can do any harm. Our project comes with some ready-made tests.

Open the initial pipeline file located at .semaphore/semaphore.yml to take a quick look. This pipeline describes all the steps that Semaphore must follow to build and test the application. It starts with a name and version:

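Something along these lines; only version: v1.0 is fixed by Semaphore, the pipeline name below is illustrative:

    version: v1.0
    name: CI pipeline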

Next comes the agent, which is the virtual machine that powers the jobs. We have three types to choose from:

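A typical agent definition looks like the sketch below; the machine type and OS image are common Semaphore defaults, not necessarily the exact ones the demo pins:

    agent:
      machine:
        type: e1-standard-2
        os_image: ubuntu1804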

Blocks, tasks, and jobs define what to do at each step of the pipeline. On Semaphore, blocks run sequentially, while jobs within a block run in parallel. The pipeline contains two blocks — one for the libraries installation and the other for running tests.

The first block downloads and installs the Ruby gems.

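Sketched out, the block looks roughly like this; the block and job names, cache key and bundler flags are illustrative:

    blocks:
      - name: Install dependencies
        task:
          jobs:
            - name: bundle install
              commands:
                - checkout
                - cache restore gems-$(checksum Gemfile.lock)
                - bundle install --deployment --path vendor/bundle
                - cache store gems-$(checksum Gemfile.lock) vendor/bundle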

Checkout clones the code from GitHub. Since each job runs in a fully isolated machine, we must rely on the cache to store and retrieve files between job runs.


The second block is for testing. Notice that we repeat checkout and cache to get the initial files into the job. The final command starts the RSpec test suite.

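A sketch of the test block, continuing the blocks list above (names and flags again illustrative):

      # ...continuing the blocks list in .semaphore/semaphore.yml
      - name: Tests
        task:
          jobs:
            - name: rspec
              commands:
                - checkout
                - cache restore gems-$(checksum Gemfile.lock)
                - bundle install --deployment --path vendor/bundle
                - bundle exec rspec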

The last part declares a promotion. Promotions can conditionally connect pipelines to create complex workflows. We use auto_promote_on to start the next pipeline once all the jobs have been completed.

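In outline, the promotion points at the Docker pipeline file and promotes automatically on success (the promotion name is illustrative):

    promotions:
      - name: Dockerize
        pipeline_file: docker-build.yml
        auto_promote_on:
          - result: passed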

The workflow continues with the next pipeline.

Building Docker Images

We can run anything in Kubernetes — as long as it has been packaged in a Docker image. In this section, we'll learn how to build it.

Our Docker image will include Ruby, the app code, and all its libraries. Take a look at the Dockerfile:

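A condensed sketch of such a Dockerfile; the Ruby version, working directory and start command are assumptions, so adjust them to match the actual app:

    FROM ruby:2.5

    # Build tools needed to compile native gem extensions
    RUN apt-get update -qq && apt-get install -y build-essential

    WORKDIR /app

    # Copy the Gemfile first so the bundle layer can be cached
    COPY Gemfile* ./
    RUN bundle install

    # Copy the application source
    COPY . .

    EXPOSE 4567
    CMD ["bundle", "exec", "rackup", "--host", "0.0.0.0", "-p", "4567"]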

The Dockerfile, like a recipe, has all the steps and commands needed to build the container image:

  1. Start from a pre-built ruby image.
  2. Install the build tools with apt-get.
  3. Copy Gemfile since it has all the dependencies.
  4. Install them with bundle.
  5. Copy the app source code.
  6. Define the listening port and the start command.

We'll bake our production image in the Semaphore environment. However, if you wish to do a quick test on your machine, type:

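Something like this, where the image tag is arbitrary:

    $ docker build -t semaphore-demo-ruby-kubernetes .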

To start the server locally, use docker run and expose the internal port 4567:

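For example, mapping the container port to the same port on your machine:

    $ docker run -p 4567:4567 semaphore-demo-ruby-kubernetes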

You can now test one of the available HTTP endpoints:

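For instance, assuming the service answers on the root path:

    $ curl -w "\n" http://localhost:4567/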
Add Docker Hub Account to Semaphore

Semaphore provides a secure mechanism to store sensitive information such as passwords, tokens, or keys. In order to push the image to your Docker Hub registry, create a Secret with your username and password:

  1. Open your Semaphore account.
  2. On the left navigation bar, under Configuration, click on Secrets.
  3. Click on the Create New Secret button.
  4. The name of the secret should be dockerhub. Type in your Docker Hub username and password and save.

The Docker Pipeline Build

This pipeline builds and pushes the image to Docker Hub. It only has one block and one job:

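Its header follows the same shape as the first pipeline (the name is illustrative):

    version: v1.0
    name: Docker build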

This time, we can use a bit more power, as Docker tends to be more resource-intensive. We'll pick the mid-range e1-standard-4 machine with four CPUs, 8GB of RAM and 35GB of disk:

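In the agent section that could look like this (the OS image is an assumption):

    agent:
      machine:
        type: e1-standard-4
        os_image: ubuntu1804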

The build block starts by signing in to Docker Hub. The username and password are imported from the secret we just created. Once logged in, Docker can directly access the registry.

The next command is docker pull, which attempts to pull the latest image. If the image is found, Docker may be able to reuse some of its layers and speed up the build. If there isn't any latest image yet, that's fine; it just takes a little longer to build.

Finally, we push the new image. Notice here we're using the SEMAPHORE_WORKFLOW_ID variable to uniquely tag the image:

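Put together, the block looks roughly like the sketch below. The DOCKER_USERNAME and DOCKER_PASSWORD variables are assumed to come from the dockerhub secret and the image name is illustrative; SEMAPHORE_WORKFLOW_ID is provided by Semaphore itself:

    blocks:
      - name: Build
        task:
          secrets:
            - name: dockerhub
          jobs:
            - name: Docker build
              commands:
                - checkout
                - echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin
                - docker pull "$DOCKER_USERNAME"/semaphore-demo-ruby-kubernetes:latest || true
                - docker build --cache-from "$DOCKER_USERNAME"/semaphore-demo-ruby-kubernetes:latest -t "$DOCKER_USERNAME"/semaphore-demo-ruby-kubernetes:$SEMAPHORE_WORKFLOW_ID .
                - docker push "$DOCKER_USERNAME"/semaphore-demo-ruby-kubernetes:$SEMAPHORE_WORKFLOW_ID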

With the image ready, we are entering the delivery phase of our project. We'll extend our Semaphore pipeline with a manual promotion:

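Because there is no auto_promote_on condition, this one has to be triggered by hand:

    promotions:
      - name: Deploy to Kubernetes
        pipeline_file: deploy-k8s.yml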

To make your first automated build, make a push:

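Any commit will do; the branch name below assumes the default in use when the demo was written:

    $ git add .
    $ git commit -m "kick off the first build"
    $ git push origin master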

With the image ready, we can jump to the deployment phase.

Deploying to Kubernetes

Automatic deployment is Kubernetes' strong suit. All we need is to tell the cluster our final desired state and it will take care of the rest.

Before doing the deployment, however, you have to upload the kubeconfig file to Semaphore.

Add Kubeconfig to Semaphore

We'll need a second secret: the kubeconfig for the cluster. The file grants administrative access to it. As such, we don't want the file checked into the repository.

Create a secret called do-k8s and upload the kubeconfig file to /home/semaphore/.kube/dok8s.yaml:
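You can do this from the Secrets screen in the web console as before, or with the sem command-line tool; the file-mounting syntax below follows the documented sem usage, so double-check it against your CLI version:

    $ sem create secret do-k8s \
        -f /path/to/your/kubeconfig.yaml:/home/semaphore/.kube/dok8s.yaml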

Deployment Manifest

In spite of Kubernetes being a container orchestration platform, we don't manage containers directly. In truth, the deployment unit is the pod. A pod is like a group of merry friends that always go together to the same places. Containers in a pod are guaranteed to run on the same node and have the same IP. They always start and stop in unison and, since they run on the same machine, they can share its resources.

The problem with pods is that they can start and stop at any time, and we can't know for sure what IPs they'll get assigned. To route HTTP traffic from our users we'll also need a load-balancing service; it will be responsible for keeping track of the pods and forwarding incoming connections so, from the client point of view, there is always a single public IP.

Open the file located at deployment.yml. This is the manifest for deploying our app. It has two resources separated by three dashes. First, the deployment:

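In broad strokes, the deployment resource looks like this sketch; the resource names and labels are illustrative, and the $DOCKER_USERNAME and $SEMAPHORE_WORKFLOW_ID placeholders are the ones envsubst fills in later in the deployment pipeline:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: semaphore-demo-ruby-kubernetes
      labels:
        app: semaphore-demo-ruby-kubernetes
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: semaphore-demo-ruby-kubernetes
      template:
        metadata:
          labels:
            app: semaphore-demo-ruby-kubernetes
        spec:
          containers:
            - name: semaphore-demo-ruby-kubernetes
              image: $DOCKER_USERNAME/semaphore-demo-ruby-kubernetes:$SEMAPHORE_WORKFLOW_ID
              ports:
                - containerPort: 4567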

There are several concepts to unpack here:

  • Resources can have a name and several labels, which are convenient for organizing things.
  • Spec defines the desired final state and template is the model used to create the pods.
  • Replicas sets how many copies of the pod to create. We usually set this to the number of nodes in the cluster. Since I'm using three nodes, I'll change this line to replicas: 3.

The second resource is the service. It binds to port 80 and forwards the HTTP traffic to the pods in the deployment:

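And the service, sketched with the same assumed labels:

    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: semaphore-demo-ruby-kubernetes-lb
    spec:
      type: LoadBalancer
      selector:
        app: semaphore-demo-ruby-kubernetes
      ports:
        - port: 80
          targetPort: 4567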

Kubernetes matches up the selector with the labels to connect services with pods. Thus, we can have many services and deployments in the same cluster and wire them as required.

Deployment Pipeline

We're entering the last stage of CI/CD configuration. At this point, we have a CI pipeline defined in semaphore.yml, and the Docker pipeline defined in docker-build.yml. In this one, we deploy to Kubernetes.

Open the deployment pipeline located at .semaphore/deploy-k8s.yml. It starts as usual:

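That is, a version, a name and an agent (the values below are illustrative):

    version: v1.0
    name: Deploy to Kubernetes
    agent:
      machine:
        type: e1-standard-2
        os_image: ubuntu1804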

Two jobs make up the last pipeline.

Job number one starts the deployment. After importing the kubeconfig file, envsubst replaces the placeholder variables in deployment.yml with their actual values. Then, kubectl apply sends the manifest to the cluster.

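A sketch of that job; the KUBECONFIG path matches the secret we created, while the block and job names are illustrative:

    blocks:
      - name: Deploy to Kubernetes
        task:
          secrets:
            - name: do-k8s
            # dockerhub supplies $DOCKER_USERNAME so envsubst can fill in the image name
            - name: dockerhub
          env_vars:
            - name: KUBECONFIG
              value: /home/semaphore/.kube/dok8s.yaml
          jobs:
            - name: Deploy
              commands:
                - checkout
                - kubectl get nodes
                - envsubst < deployment.yml > deploy.yml
                - kubectl apply -f deploy.yml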

Job number two tags the image as latest so we can use it as a cache on the next run.

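Again a sketch, reusing the dockerhub secret and the image name assumed earlier:

      # ...continuing the blocks list in .semaphore/deploy-k8s.yml
      - name: Tag latest
        task:
          secrets:
            - name: dockerhub
          jobs:
            - name: docker tag latest
              commands:
                - echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin
                - docker pull "$DOCKER_USERNAME"/semaphore-demo-ruby-kubernetes:$SEMAPHORE_WORKFLOW_ID
                - docker tag "$DOCKER_USERNAME"/semaphore-demo-ruby-kubernetes:$SEMAPHORE_WORKFLOW_ID "$DOCKER_USERNAME"/semaphore-demo-ruby-kubernetes:latest
                - docker push "$DOCKER_USERNAME"/semaphore-demo-ruby-kubernetes:latest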

This is the end of the workflow. We're ready to try it out.

Deploy the App

Let's teach our Sinatra app to sing. Add the following code inside the App class in app.rb:

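For example, a small route like this one; the path and the lyric are just an illustration, put in whatever makes you smile:

    get '/sing' do
      "And now, the end is near, and so I face the final curtain..."
    end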

Push the modified files to GitHub:

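As before, commit and push (the branch name may differ in your fork):

    $ git add app.rb
    $ git commit -m "add a singing endpoint"
    $ git push origin master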

Wait until the Docker build pipeline completes; you can check the progress on Semaphore.

It's time to deploy. Hit the Promote button. Did it work?

We're off to a good start. Now it's up to Kubernetes. We can check the deployment status using kubectl. The initial status is three pods desired and zero available:

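Assuming your kubeconfig is also set up locally, check it with:

    $ kubectl get deployments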

A few seconds later, the pods have started and reconciliation is complete; running the same command again now reports three available replicas.


To get a general status of the cluster, use kubectl get all. It shows pods, services, deployments and replicas:

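That is:

    $ kubectl get all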

The service IP is shown after the pods. For me, the load-balancer was assigned the external IP 35.232.70.45. Replace it with the one your provider has assigned to you. Let's try the new server.

Would you sing for us?

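Using the example IP above and the route added earlier (replace both to match your setup):

    $ curl -w "\n" http://35.232.70.45/sing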

And now, the end is near… and so I face the final curtain…

The End Is Near

Deploying to Kubernetes doesn't have to be hard or painful, especially when it's backed by the right CI/CD solution. You now have a fully automated continuous delivery pipeline to Kubernetes.

Feel free to fork and play with semaphore-demo-ruby-kubernetes on your Kubernetes instance. Here are some ideas:

  • Create a staging cluster.
  • Build a development container and run tests inside it.
  • Extend the project with more microservices.

Source: https://thenewstack.io/a-step-by-step-guide-to-continuous-deployment-on-kubernetes/
