
How to automate deployment and SSL certification of your Kubernetes microservices with cert-manager and Garden

Dawid Stężycki

Node.js Developer

In this tutorial, I want to introduce you to the process of obtaining SSL certificates for Kubernetes microservices and show you some tools that will enable you to turn it into an automated workflow. I will walk you through the configuration of cert-manager and Garden which will make your Kubernetes experience much nicer and more streamlined. I’ve used this method in a pretty extensive project for our partner Obligate (formerly FQX) so you can be sure it’s reliable. 

 

What did I learn from the Obligate project?

About Obligate

Obligate (formerly known as FQX) is a new era for DeFi. The company combines legal and technological knowledge with financial expertise to build a new blockchain-based financial system. With its fully-regulated approach, Obligate offers a decentralized platform for on-chain financing using bonds and commercial paper. With the Obligate platform, companies can issue on-chain bonds and commercial paper to obtain funding from a diverse range of investors who, in turn, get access to regulated digital debt assets secured with on-chain collateral. Obligate reduces the time needed for issuance from weeks to hours, thereby lowering the costs associated with a bond issuance by 80%.


Project goal

I’m a Node.js developer, and this was the first time I had so much responsibility for the DevOps aspect of a project. I have found that:

  • despite the rise of DevOps culture, SSL certification can still be difficult to navigate,
  • most DevOps-targeted tutorials assume a lot of prior knowledge and most developer-targeted information covers very simplistic uses,
  • even though the documentation of different tools tries its best to document how they integrate with the rest of the stack, it’s often based on obsolete best practices, limited to narrow use cases, or simply confusing.

Since encryption is so fundamental to web security (especially for institutions like Obligate involved with finances, sensitive data, government regulations, etc.), I wanted to address all those problems and fill the gap by creating a guide that introduces you to the topic from the ground up. At the same time, I didn’t want to settle for teaching you anything less than a fully capable and versatile setup, from start to finish.

Although in DevOps you often lose track of the project as a whole, the big picture here involved building the infrastructure for blockchain-authenticated and legally-binding promissory notes.

The goal: DevOps must obtain SSL certificates for the website. 

The Obligate project we worked on is hosted on the Google Cloud Platform. We set up microservices on the Google Kubernetes Engine clusters, and we use the Cloud DNS to manage all the subdomains. 

Fortunately, our client already had experience with a similar setup, but with a different cloud provider. In the past, they used Azure, and they switched to GCP, which offers a better Kubernetes service.

The certification stack Obligate used, which we needed to readapt, offered a free and automated certification process using Let’s Encrypt as the certificate authority and cert-manager as the ACME client for sharing certificates across pods.

Garden was the last, most novel, and very attractive addition, which abstracted away many of Kubernetes’ intricacies related to both the initial configuration and the development process. These processes are notoriously not developer-friendly and often require a designated DevOps engineer.


As a lowly Node.js developer, I was thrilled to offset the complexities of Kubernetes by any means. Garden turned out to be a great tool for that.

Unfortunately, despite our client’s experience and the well-written documentation of the tools in the tech stack, the configuration required a lot of trial and error to put together, and we wanted to share the results in the form of a tutorial. I’ll start by covering some SSL and HTTPS basics before we take a deeper dive into the specifics of the project and a step-by-step, close and personal view of my process.

A rundown of SSL and HTTPS 

Secure Sockets Layer (SSL) is a security protocol that enables encrypted communication over a computer network. It prevents malicious agents from intercepting sensitive data such as credit card details, health documentation, and any other information exchanged between, most commonly, a user and a website. 

For years, it has been perceived as just an added value and a feature that’s only necessary for a narrow group of applications such as online banking.

In light of security breaches across all business niches and website types, it has become obvious that the consequences of unencrypted traffic are too significant and widespread to be left to the discretion of website owners. 

Attackers learned to exploit the lack of security regulation to the detriment of the users by stealing their passwords, personal information, and important documents, or by following their online activity. They also hack websites by injecting code containing ads or spyware. 

In order to protect both users and website owners from those dangers, many security experts and tech companies have pushed for adopting HTTPS – HTTP over SSL.

 


How does SSL work?

In order to understand how SSL makes networks safe, we need to talk about encryption. The most popular type of encryption is based on passwords. If you have a document with sensitive information you want to share, you should encrypt that document with a password. It’s like closing a padlock with a key. 

The other person needs to know the password to open that document. This type of encryption is called symmetric because you use the same password, or key, to encrypt and decrypt the document, just as the same key locks and unlocks the padlock.

Since the key needs to be shared, it is just as likely to be intercepted as the information itself.

This is why SSL is based on asymmetric encryption. With SSL, there are two keys, and information encrypted with one of them can only be decrypted with the other. The keys cannot be derived from each other.

One of the SSL keys is called a public key. When someone uses it to encrypt a message addressed to you, other people with the public key cannot decrypt the information. 

Decryption is only possible with the other key — called the private key — that you keep to yourself. Asymmetric encryption is a bit harder to grasp,  but you can compare it to how traditional mail works. Everybody knows your address, but nobody can access your mailbox without the mailbox key. In this scenario, your address is the public key and your mailbox key is the private key.

In HTTPS protocol, the connection is established using an SSL certificate that contains the public key. Upon request, the server shares that certificate with the browser which extracts the key and uses it to encrypt the traffic. 

The private key stays on the server to decrypt the communication, but it’s not accessible to the outside, so no third party can eavesdrop or inject anything.

The history of HTTPS

One of the most notable milestones for HTTPS was when Google announced in 2014 that their search engine was going to favor websites using encryption. Unfortunately, obtaining SSL certificates back then was still difficult and costly.

The companies that provide them are called certificate authorities (CAs). Before granting the certificate, they verify the real-life business affiliated with the domain. This certificate type is called Extended Validation, and it’s considered the most trusted and the most expensive, requiring a lot of paperwork and processing time.

This may include presenting original paperwork like:

  • Government-issued business license that contains an address 
  • Copies of recent bank statements 
  • Copies of a recent phone bill or utility bill (power, water, etc.)

Let’s Encrypt created a big breakthrough when it was established. It’s a non-profit certificate authority backed by organizations such as Cisco Systems and the Mozilla Foundation. It provides websites with free certificates, which do not offer Extended Validation but only Domain Validation.

This type of free certificate doesn’t verify the entire business but only the ownership of the domain in an automated process. 

The creators of Let’s Encrypt believe that CAs are not in a position to reliably police the content of websites. Trying to do so indirectly compromises the security of users by making encrypted connections difficult for websites to obtain, while not giving any reasonable edge in other aspects of security.


In their opinion, organizations such as Google and Microsoft are much better equipped to identify malicious sites, and we should trust services responsible for that like Google Safe Browsing.

Since then, the push for HTTPS has become increasingly aggressive, and the current state of affairs is that all major browsers flag unencrypted websites and warn users that their connection is “not secure”. Obviously, that’s not a label any company would want their brand to be associated with.

There are now many tools and plenty of information about how to enable encryption on any website, and earning a certificate is easier than ever.

Still, there are some technological circumstances where clean and robust solutions are not that well-documented, forcing developers to rely on SSL certification processes that are buggy, expensive, or difficult to maintain.

An example of that would be launching cloud-hosted microservices that run on Kubernetes, as we have done in our project. You’d face many challenges, from managing the numerous services, certificates, and Ingresses, to keeping track of all the service accounts and abiding by the principle of least privilege.

Now, back to the action!

Automating deployment and SSL certification — full tutorial 

For this recipe, you’ll need a domain and a GCP project with a Cloud DNS public zone, an Artifact Registry repository, and a GKE cluster set up.

The initial Garden project setup 

The first tool of the tech stack is Garden, which allows for creating a Kubernetes infrastructure as code that includes the setup of different target environments and the definitions of deployment, testing, and development processes. 

Most importantly, Garden unifies configuration formats and behaviors of different types of resources — such as pure k8s resources, Helm charts, Terraform stacks, and Docker containers — into a consistent interface by assuming some opinionated defaults and handling differing operations behind the scenes. The tool is proven to save time by allowing developers to skip learning different syntaxes and workflows of those various resource types while providing out-of-the-box solutions for basic configurations.

Let’s start by creating a Garden project and discussing the configuration file briefly. First, install Garden on your computer according to this guide. Then, set up an empty project directory and create the following file inside:
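Here’s a minimal sketch of what this project-level garden.yml can look like (the project name, environment name, cluster context, and registry values are placeholders you should replace with your own):

kind: Project
name: my-garden-project
environments:
  - name: dev
    defaultNamespace: default

providers:
  - name: kubernetes
    environments: [dev]
    buildMode: kaniko
    context: gke_PROJECT_ID_REGION_CLUSTER_NAME
    deploymentRegistry:
      hostname: LOCATION-docker.pkg.dev
      namespace: PROJECT_ID/REPOSITORY
    imagePullSecrets:
      - name: gar-config
        namespace: default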

In line 1, we define the Garden resource that will be configured in this file. In this case, it’s a project and a top-level configuration we’re going to be expanding as we go along. We name the project in line 2.

Lines 3-5 establish environments, which are a way of grouping different sets of variables and settings that are applied to the modules, making them reusable. We only have one environment, but usually you would at least add staging and production.

Lines 7-17 describe our provider which points to the target cluster and sets up some of the tools that take part in those behind-the-scenes operations that Garden performs on different modules. Line 9 names the environments in which a given provider is going to be used.

The build mode defined in line 10 is something that might require more thorough research if you’re looking for high optimization. In brief, it’s a tool used to build Docker containers on the cluster (thus avoiding overloading your own machine). The current Garden recommendation is to use kaniko as we did because it works well for most scenarios.

Lines 11-17 give Garden access to GCP resources. “Context” in line 11 is the name of the kubectl context for the GKE cluster that will host all the microservices. There’s a difference between how GCP and kubectl name clusters, so use the one you’ll find in the results of this command:

kubectl config get-contexts

Lines 13-14 require the hostname and namespace of your GCP Artifact Registry that will store kaniko-built images to unclutter your cluster. You’ll find the required values in the results of this command:

gcloud artifacts repositories list

Lines 16-17 point to the name and namespace of the Kubernetes secret that contains the credentials of the GCP service account with permissions to access that registry. You can create both the service account and the secret with these commands:

gcloud iam service-accounts create gar-config

gcloud projects add-iam-policy-binding PROJECT_ID --member=serviceAccount:gar-config@PROJECT_ID.iam.gserviceaccount.com --role=roles/artifactregistry.writer

gcloud iam service-accounts keys create key.json --iam-account gar-config@PROJECT_ID.iam.gserviceaccount.com

kubectl --namespace default create secret docker-registry gar-config --docker-server=LOCATION-docker.pkg.dev --docker-username=_json_key --docker-password="$(cat key.json)"

Cert-manager and ACME challenges

In order to verify domains, Let’s Encrypt uses the ACME (Automated Certificate Management Environment) protocol, so an ACME client is required to communicate with their server. We’ll use cert-manager because it can share certificates across pods, and that’s obviously going to be useful for a microservice architecture.

Obtaining the certificate requires solving an ACME challenge, which is a task you can only perform if you’re the domain owner. The result must be verifiable by the ACME server, which uses it to validate that you control the domain.

There are a few different types of challenges, each with its own properties. The most common type is HTTP-01, and Garden even has a built-in cert-manager integration that uses that challenge.

Unfortunately, this type doesn’t support the wildcard subdomains used in our project, so we needed another type, the DNS-01 challenge, which requires placing a specific value in a TXT record under the domain name, and had to set up cert-manager manually.

In short, the process goes as follows: the cert-manager receives a Let’s Encrypt token and uses it to create a TXT record via a GCP service account with permissions to manipulate domains in Cloud DNS.

Let’s Encrypt queries the DNS system for that record and if it matches, the organization issues the certificate.
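For illustration, the TXT record created during a DNS-01 challenge looks something like this (the domain and token value below are made up):

_acme-challenge.app.example.com.  300  IN  TXT  "gfj9Xq...Rg85nM"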

Installing cert-manager on the cluster

Now let’s set up cert-manager on the cluster.

The recommended way to install it is through a Helm chart. Fortunately, Garden makes it quite straightforward. All you need to know is that a Helm chart is a package of Kubernetes resources, and it’s just another module type as far as Garden is concerned. To keep things tidy, let’s create a cert-manager folder inside our project and add another Garden file in there:
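Here’s a sketch of what this cert-manager/garden.yml can look like (the chart version shown is just an example; check the cert-manager documentation for the current one):

name: cert-manager
kind: Module
description: Installs cert-manager and its CRDs on the cluster
type: helm
include: []
namespace: cert-manager
repo: https://charts.jetstack.io
chart: cert-manager
version: v1.11.0
values:
  installCRDs: true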

As stated in line 2, it’s a module. Modules are the most basic and diverse element of the Garden system. What they represent depends on the type of the module. You’ll get to know a few of them shortly. 

Apart from the name in line 1, you can also add a description explaining what the module does as in line 3. Line 4 is where you define the type of module, which will affect how the module is handled and what other information you need to supply.

Line 5 is Garden-specific. You use it to name the files you want to include in the building process. Since Helm charts are self-contained, you don’t need to include anything, but because the default is to add all the files, we set it to an empty array. Line 6 names the cluster namespace for the cert-manager.

Lines 7-9 are where you provide the details of the chart you want to install. For Helm, that means the repo link plus the name and the version of the chart. You can find the officially supported source in the cert-manager documentation.

Additional flags make up lines 10-11. There, we enabled an option to install CRDs (Custom Resource Definitions) to allow the cert-manager to extend Kubernetes with the custom entities that are necessary for it to function.

Let’s now deploy cert-manager to the cluster with the following garden command:

garden deploy

Configuration of ACME server and DNS-01 challenge

Cert-manager is a general-purpose ACME client that supports multiple certificate authorities and uses a custom Kubernetes resource called the Issuer to represent them. In order to use Let’s Encrypt, we need to configure that resource.

There are two variants of issuers. The regular Issuer is namespaced, which means it can only issue certificates in its Kubernetes namespace, and the ClusterIssuer works cluster-wide. The latter seems more convenient, so we’re grabbing that to go. 

The configuration described in cert-manager’s documentation is in the form of a Kubernetes manifest, so let’s create one. Later on, we’ll use Garden to plug it into our setup smoothly. This is going to be another file in the cert-manager folder:
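Here’s a sketch of what this issuer manifest (e.g. cert-manager/issuer.yaml) can look like; the issuer name, email address, and secret name are placeholders:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: devs@your-company.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-account-key
    solvers:
      - dns01:
          cloudDNS:
            project: PROJECT_ID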

Because it’s a native Kubernetes manifest, the format is a little different. Fortunately, most of it is boilerplate. There are just two factors that affect the final form of this file: the certificate authority and the ACME challenge type. We’ve already decided on Let’s Encrypt and the DNS-01 challenge, so it’s a matter of configuring the ACME part in lines 6-10 and the challenge solver part in lines 11-14, as shown in the examples in the cert-manager documentation.

Nevertheless, there are a couple of customizable properties in there that could use an explanation. Line 4 allows you to name the Issuer as it’s going to appear on the cluster. Line 7 requires you to enter an email address that Let’s Encrypt is going to use to contact you about expiring certificates and other problems. My advice here is to set up a group email for all the developers in case of an emergency.

In line 10, you need to provide a name for a Secret resource that will be used to store your ACME account’s private key on the cluster. Now, that might surprise you, because we didn’t set up an account. That’s because it’s handled automatically by the cert-manager for any Issuer deployed. We don’t have to worry about it, so just set it and forget it.

The solver part requires only one parameter, the GCP project ID, plus some command line work. We’ll use gcloud to create a GCP service account (GSA) that will enable the cert-manager to manipulate our domain and create the TXT record. Normally, you would hand over the credentials of this GSA as a JSON key, but that’s a less secure approach which Google discourages.

The recommended method is to use a workload identity, so we’re going to link our GSA and cert-manager’s Kubernetes service account (KSA). This will allow cert-manager’s pods to access GCP API with the permissions of the linked GSA. First, enable workload identity on your cluster by following Google’s guide.

The KSA is already created by the cert-manager, so you only need to create the GSA and grant the necessary permissions. You can do that with the following commands:

gcloud iam service-accounts create dns01-solver --display-name "dns01-solver"

gcloud projects add-iam-policy-binding PROJECT_ID --member serviceAccount:dns01-solver@PROJECT_ID.iam.gserviceaccount.com --role roles/dns.admin

Now, we will create the link between GSA and KSA. It requires configuration on both sides of this connection which you can do with these commands:

gcloud iam service-accounts add-iam-policy-binding --role roles/iam.workloadIdentityUser --member "serviceAccount:PROJECT_ID.svc.id.goog[cert-manager/cert-manager]" dns01-solver@PROJECT_ID.iam.gserviceaccount.com

kubectl annotate serviceaccount --namespace=cert-manager cert-manager "iam.gke.io/gcp-service-account=dns01-solver@PROJECT_ID.iam.gserviceaccount.com"

The configuration of the certificate request

The last entity required by the cert-manager is called a Certificate, not to be confused with the SSL certificate itself. It’s another one of the cert-manager’s custom resources. The information from the manifest is used to create a certificate request that the Issuer attempts to honor. If the process is successful, the private key pair and the SSL certificate get stored in a Kubernetes Secret. This file should also be added to the cert-manager folder:
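Here’s a sketch of what this certificate manifest (e.g. cert-manager/certificate.yaml) can look like; the resource name, secret name, and example.com domain are placeholders:

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: wildcard-certificate
  namespace: default
spec:
  secretName: wildcard-certificate-tls
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  dnsNames:
    - "*.example.com"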

The name in line 4 is arbitrary and so is the namespace, because we’re using a ClusterIssuer instead of a regular Issuer. For clarity, I’d recommend using default.

The most important part of this configuration is the spec field in lines 6-12. Line 7 defines the name of the Secret to be created that will store the SSL certificate when one is issued. We’re going to reference this Secret later on. Lines 8-10 point to the Issuer we created in the previous step to let the cert-manager know where to send the request. Lines 11-12 specify the domains to which you want the SSL certificate to apply.

As you can see, wildcards are available.

Incorporating Kubernetes manifests into Garden

Now that we have the Kubernetes manifests, let’s turn them into a Garden module, so we can easily deploy them to our cluster. Let’s create this file next to the manifests in the cert-manager folder:
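Here’s a sketch of what this module file can look like, assuming the manifests were saved as issuer.yaml and certificate.yaml (your exact line numbers may differ slightly):

name: cert-manager-resources
kind: Module
description: The Let's Encrypt ClusterIssuer and the wildcard Certificate
type: kubernetes
include:
  - issuer.yaml
  - certificate.yaml
files:
  - issuer.yaml
  - certificate.yaml
dependencies:
  - cert-manager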

Note that in line 4, we introduce Kubernetes as another module type. The certificate and Issuer are separate Kubernetes resources, but they’re so closely related that we can treat them as two components of a single module called “cert-manager-resources” (see line 1).

We’re going to build this module based on the previously defined manifests, so we obviously need to include them in the Garden build context in lines 6-8. Then, we’ll point to them in the “files” array and Garden is going to take care of the rest. 

One last thing to specify is that these resources rely on cert-manager already being installed on the cluster. We added lines 12-13 to guarantee that.

Now, we can take advantage of our Garden setup, and in a few moments, you should have a brand new certificate on your cluster after running this command:

garden deploy

Turning a containerized app into a Garden Service

The cert-manager and its resources now seem to be deployed the right way. To make use of the certificate, we need an application. If you’re working on a real project with this tutorial, this is where you can plug in your own microservice. If not, you can use our sample application available on Docker Hub, as we’ll do in the rest of this tutorial. The following file needs to be created in the top-level project directory:
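Here’s a sketch of what this module file can look like; the image below is a public placeholder rather than our actual sample app, and the hostname is an example to replace with your own subdomain:

kind: Module
name: sample-app
description: A sample web application exposed over HTTPS
type: container
image: nginxdemos/hello
services:
  - name: sample-app
    ports:
      - name: http
        containerPort: 80
    ingresses:
      - path: /
        hostname: app.example.com
        port: http
        annotations:
          kubernetes.io/ingress.class: nginx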

As you can see in line 4, there’s yet another module type: a container. For our sample app, we’re going to use a remote image in line 5, but in your own project, you’ll probably want to point to a Dockerfile.

For Garden to create a running instance of this container, we need lines 6-16 to define a Service. If you want other services to reach it, you need to define a port as in lines 8-10. For the most basic configuration, you just need to provide the port exposed by your container.

If you want your Service to be reachable from outside of the cluster, you also need to define an Ingress as in lines 11-16. The path and hostname properties in lines 12-13 make up a full URL of https://HOSTNAME/PATH. Line 14 references the port defined in lines 9-10.

In order to understand lines 15-16, you need to be aware that Ingress is a Kubernetes resource, created by Garden behind the scenes, that defines the routing of the traffic from outside the cluster to the Services inside.

However, it’s only making the rules, while the enforcement falls under the responsibility of an Ingress Controller. In the next step, we’ll come back to our project configuration and set up an Ingress Controller. To make Ingress visible to the controller, we need this annotation from lines 15-16.

The Ingress Controller

We made a lot of progress since we last visited our project configuration. Let’s bring it up to speed.
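Here’s the project-level garden.yml sketched out again, this time with the two additions at the bottom of the provider block (the certificate and secret names must match the ones used in the cert-manager manifests):

kind: Project
name: my-garden-project
environments:
  - name: dev
    defaultNamespace: default

providers:
  - name: kubernetes
    environments: [dev]
    buildMode: kaniko
    context: gke_PROJECT_ID_REGION_CLUSTER_NAME
    deploymentRegistry:
      hostname: LOCATION-docker.pkg.dev
      namespace: PROJECT_ID/REPOSITORY
    imagePullSecrets:
      - name: gar-config
        namespace: default
    setupIngressController: nginx
    tlsCertificates:
      - name: wildcard-certificate
        secretRef:
          name: wildcard-certificate-tls
          namespace: default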

We’ve got two important additions. Line 18 instructs Garden to create an NGINX Ingress Controller that’s responsible for fulfilling the Ingresses we defined in the previous step.

In lines 19-23, we tell Garden how to access the SSL certificates by providing the name of the cert-manager’s Certificate resource in line 19 and the name and the namespace of the Secret containing the SSL key pair in lines 22-23. This is how the Ingress Controller knows what to use when managing outside traffic.

You can now deploy NGINX and our sample application by rerunning the following command:

garden deploy

After it goes through, you should be able to access your application through an encrypted connection at the URL we defined in the Ingress.

Last steps and last thoughts 

When I first started working on this task, my DevOps experience was limited to writing and editing Dockerfiles deployed locally with Docker Compose. Whenever I wanted to make use of Kubernetes in my personal projects I was quickly confronted with Helm, Terraform, or other parts of that ecosystem that would stop me in my tracks.

With the popularity of DevOps culture, I think that’s a common experience for developers, but having a chance to work with Garden allowed me to get that out of the way and focus on the problem itself.

It turned out that the ecosystem opens many possibilities to simplify your workflow and I can guarantee you that the initial effort is well worth it. Without cert-manager, the process of obtaining SSL certificates would be manual and disruptive but now, we hardly ever think about it.

I have also learned how essential the SSL protocol is to web security and how much effort the industry is putting into making it easy and accessible, so the responsibility is on us, web developers, not to dismiss it when building our websites.

Do you need a development team that takes care of all the safety features? Schedule a free consultation!
