Original author: Jason Schmidt - F5 NGINX Solutions Architect
Original link: Automating Certificate Management in a Kubernetes Environment
Reprint source:NGINX Chinese official website
The only official NGINX Chinese community is at nginx.org.cn.
A valid SSL/TLS certificate is a core requirement for modern application environments. Unfortunately, certificate renewal management is easily overlooked when deploying applications. Certificates are valid for limited and varying lengths of time, ranging from around 13 months for DigiCert certificates to 90 days for Let's Encrypt certificates. To ensure secure access, these certificates need to be renewed or reissued before expiration. Given the heavy workloads of most operations teams, certificate renewals sometimes get put on hold, leaving you scrambling to manage certificates as they near (or, worse, pass) their expiry date.
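To see why expiry tracking matters, here is a minimal shell sketch. It generates a throwaway self-signed certificate so that it is self-contained; against a live endpoint you would instead pipe `openssl s_client -connect host:443` into `openssl x509`.

```shell
# Generate a throwaway self-signed certificate valid for 30 days.
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -keyout /tmp/demo.key -out /tmp/demo.crt \
  -subj "/CN=cert.example.com" 2>/dev/null

# Print the expiry date, then test whether fewer than 15 days remain
# (1296000 seconds), the same threshold cert-manager renews at by default.
openssl x509 -noout -enddate -in /tmp/demo.crt
if openssl x509 -noout -checkend 1296000 -in /tmp/demo.crt >/dev/null; then
  echo "certificate is valid for more than 15 days"
else
  echo "certificate expires within 15 days - renew now"
fi
```

Running checks like this by hand across a fleet of endpoints is exactly the toil the automation below removes.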
In fact, this need not be the case. With a little planning and preparation, certificate management can be simplified and automated. Here we introduce a solution for Kubernetes environments using the following three technologies:
- Let's Encrypt
- cert-manager from Jetstack
- NGINX Ingress Controller
In this article, you'll learn how to simplify certificate management by providing automatically renewed certificates to your endpoints.
Certificates in a Kubernetes environment
Before discussing the technical details, we need to define some terms. "TLS certificate" refers to the two components required to enable HTTPS connections on our Ingress controller:
- Certificate
- Private key
Both the certificate and private key are issued by Let's Encrypt. For a detailed explanation of how TLS certificates work, see the DigiCert article How TLS/SSL Certificates Work.
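In a Kubernetes Secret of type kubernetes.io/tls, these two components appear as the tls.crt and tls.key fields. A minimal sketch (the base64 payloads are placeholders, not real data):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: cafe-secret        # the Secret name referenced by the Ingress later
  namespace: default
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>   # placeholder
  tls.key: <base64-encoded private key>   # placeholder
```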
In Kubernetes, these two components are stored as Secrets. Kubernetes workloads such as NGINX Ingress Controller and cert-manager can write and read these Secrets, and they can also be managed by users with access to the Kubernetes installation.
Introduction to cert-manager
The cert-manager project is a certificate controller that works with Kubernetes and OpenShift. When deployed in Kubernetes, cert-manager will automatically issue the certificates required by the Ingress controller and ensure that they are valid and up-to-date. Additionally, it will track the certificate's expiration date and attempt renewal at configured intervals. While it works with a large number of public and private certificate authorities, here we’ll only cover its integration with Let’s Encrypt.
Two challenge types
When using Let's Encrypt, all certificate management can be automated. While this provides great convenience, it also raises a question: how does the service verify that you own the fully qualified domain name (FQDN) the certificate is for?
This problem is solved with a challenge: a verification request that only someone with access to the DNS records for the specific domain name can answer. A challenge takes one of two forms:
- HTTP-01: This challenge requires a DNS record for the FQDN to which the certificate is being issued. For example, if your server is at IP address www.xxx.yyy.zzz and your FQDN is cert.example.com, the challenge mechanism exposes a token on the server at www.xxx.yyy.zzz, and the Let's Encrypt server attempts to retrieve that token via cert.example.com. If it succeeds, the challenge passes and a certificate is issued.
HTTP-01 is the simplest way to generate a certificate because it does not require direct access to the DNS provider. This challenge type always takes place over port 80 (HTTP). Note that when the HTTP-01 challenge is used, cert-manager uses the Ingress controller to serve the challenge token.
- DNS-01: This challenge creates a DNS TXT record containing a token, which the issuing authority then verifies. If the token is recognized, you have proven ownership of the domain and a certificate can be issued for it. Unlike the HTTP-01 challenge, DNS-01 does not require the FQDN to resolve to the server's IP address (the FQDN need not even exist), and it can be used when port 80 is blocked. The drawback is that you must give the cert-manager installation access to your DNS infrastructure via an API token.
Ingress Controllers
An Ingress controller is a specialized service for Kubernetes that brings traffic from outside the cluster into it, load balances that traffic to internal Pods (groups of one or more containers), and manages egress traffic. The Ingress controller is managed through the Kubernetes API, and it monitors and updates the load-balancing configuration as Pods are added, removed, or fail.
For more information about the Ingress controller, see the following blog:
In the following examples, we will use the NGINX Ingress Controller developed and maintained by F5 NGINX.
Certificate Management Example
These examples assume that you have a working Kubernetes installation for testing that can be assigned an external IP address (Kubernetes LoadBalancer object). Additionally, it assumes that you can receive traffic on ports 80 and 443 (if using the HTTP-01 challenge) or only port 443 (if using the DNS-01 challenge). The examples are demonstrated using Mac OS X, but you can also use Linux or WSL.
You will also need a DNS provider and an FQDN whose A record you can adjust. If you use the HTTP-01 challenge, you only need the ability to add an A record (or have someone add one on your behalf). If you use the DNS-01 challenge, you need API access to a supported DNS provider or a supported webhook provider.
Deploy NGINX Ingress Controller
The easiest way to deploy is via Helm. This deployment allows you to use both the standard Kubernetes Ingress resource and the NGINX Virtual Server CRDs.
1. Add the NGINX repository.
$ helm repo add nginx-stable https://helm.nginx.com/stable
"nginx-stable" has been added to your repositories
2. Update the repository.
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "nginx-stable" chart repository
Update Complete. ⎈Happy Helming!⎈
3. Deploy the Ingress controller.
$ helm install nginx-kic nginx-stable/nginx-ingress \
--namespace nginx-ingress --set controller.enableCustomResources=true \
--create-namespace --set controller.enableCertManager=true
NAME: nginx-kic
LAST DEPLOYED: Thu Sep 1 15:58:15 2022
NAMESPACE: nginx-ingress
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The NGINX Ingress Controller has been installed.
4. Check the deployment and retrieve the external (egress) IP address of the Ingress controller. Note that you cannot proceed without a valid IP address.
$ kubectl get deployments --namespace nginx-ingress
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-kic-nginx-ingress 1/1 1 1 23s
$ kubectl get services --namespace nginx-ingress
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-kic-nginx-ingress LoadBalancer 10.128.60.190 www.xxx.yyy.zzz 80:31526/TCP,443:32058/TCP 30s
Add DNS A record
The exact process depends on your DNS provider. The DNS name needs to be resolvable from the Let's Encrypt servers, and you may need to wait for the record to propagate before proceeding. For more information, see the SiteGround article What is DNS Propagation and Why Does It Take So Long?
Once the selected FQDN resolves, you can proceed to the next step.
$ host cert.example.com
cert.example.com has address www.xxx.yyy.zzz
Deploy cert-manager
The next step is to deploy the latest version of cert-manager. Again, we will use Helm for deployment.
1. Add the Helm repository.
$ helm repo add jetstack https://charts.jetstack.io
"jetstack" has been added to your repositories
2. Update the repository.
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "nginx-stable" chart repository
...Successfully got an update from the "jetstack" chart repository
Update Complete. ⎈Happy Helming!⎈
3. Deploy cert-manager.
$ helm install cert-manager jetstack/cert-manager \
--namespace cert-manager --create-namespace \
--version v1.9.1 --set installCRDs=true
NAME: cert-manager
LAST DEPLOYED: Thu Sep 1 16:01:52 2022
NAMESPACE: cert-manager
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
cert-manager v1.9.1 has been deployed successfully!
In order to begin issuing certificates, you will need to set up a ClusterIssuer
or Issuer resource (for example, by creating a 'letsencrypt-staging' issuer).
More information on the different types of issuers and how to configure them
can be found in our documentation:
Issuer Configuration
For information on how to configure cert-manager to automatically provision
Certificates for Ingress resources, take a look at the `ingress-shim`
documentation:
Securing Ingress Resources
4. Verify the deployment.
$ kubectl get deployments --namespace cert-manager
NAME READY UP-TO-DATE AVAILABLE AGE
cert-manager 1/1 1 1 4m30s
cert-manager-cainjector 1/1 1 1 4m30s
cert-manager-webhook 1/1 1 1 4m30s
Deploy the NGINX Cafe example
We will use the NGINX Cafe example to provide our backend deployment and services. This is a common example in the documentation provided by NGINX. We will not deploy the Ingress itself at this point.
1. Clone the NGINX Ingress Controller GitHub project.
$ git clone https://github.com/nginxinc/kubernetes-ingress.git
Cloning into 'kubernetes-ingress'...
remote: Enumerating objects: 44979, done.
remote: Counting objects: 100% (172/172), done.
remote: Compressing objects: 100% (108/108), done.
remote: Total 44979 (delta 87), reused 120 (delta 63), pack-reused 44807
Receiving objects: 100% (44979/44979), 60.27 MiB | 27.33 MiB/s, done.
Resolving deltas: 100% (26508/26508), done.
2. Change to the examples directory. It contains several examples demonstrating various configurations of the Ingress controller. We are using the example in the complete-example directory.
$ cd ./kubernetes-ingress/examples/ingress-resources/complete-example
3. Deploy the NGINX Cafe sample.
$ kubectl apply -f ./cafe.yaml
deployment.apps/coffee created
service/coffee-svc created
deployment.apps/tea created
service/tea-svc created
4. Use the kubectl get command to verify the deployments and services. Make sure the Pods show as READY and the services show as running. The example below is representative. Note that kubernetes is a system service running in the same namespace (default) as the NGINX Cafe example.
$ kubectl get deployments,services --namespace default
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/coffee 2/2 2 2 69s
deployment.apps/tea 3/3 3 3 68s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/coffee-svc ClusterIP 10.128.154.225 <none> 80/TCP 68s
service/kubernetes ClusterIP 10.128.0.1 <none> 443/TCP 29m
service/tea-svc ClusterIP 10.128.96.145 <none> 80/TCP 68s
Deploy ClusterIssuer
In cert-manager, a ClusterIssuer is used to issue certificates. It is a cluster-wide object that can be referenced from any namespace and used by any certificate request against the configured certificate authority. In this example, any request for a Let's Encrypt certificate can be handled by this ClusterIssuer.
Deploy the ClusterIssuer for your chosen challenge type. Though beyond the scope of this article, note that you can specify multiple solvers and select among them using the selector field.
ACME challenge basics
You can use the Automated Certificate Management Environment (ACME) protocol to determine whether you own a domain name and, therefore, whether you can be issued a Let’s Encrypt certificate. For this challenge, the following parameters need to be passed:
- metadata.name: The ClusterIssuer name, which must be unique within the Kubernetes installation. It is referenced later in the certificate-issuance examples.
- spec.acme.email: The email address you register with Let's Encrypt to generate certificates. This should be your own email.
- spec.acme.privateKeySecretRef: The name of the Kubernetes Secret used to store your private key.
- spec.acme.solvers: This should remain unchanged. It indicates the challenge type (or solver, in ACME terms) you are using (HTTP-01 or DNS-01) and which Ingress class it applies to (in this case, nginx).
Use HTTP-01
This example shows how to set up a ClusterIssuer to use an HTTP-01 challenge to prove domain ownership and receive a certificate.
1. Create the ClusterIssuer, using the HTTP-01 challenge.
$ cat << EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: prod-issuer
spec:
acme:
email: [email protected]
server: https://acme-v02.api.letsencrypt.org/directory
privateKeySecretRef:
name: prod-issuer-account-key
solvers:
- http01:
ingress:
class: nginx
EOF
clusterissuer.cert-manager.io/prod-issuer created
2. Verify the ClusterIssuer (should show Ready).
$ kubectl get clusterissuer
NAME READY AGE
prod-issuer True 34s
Use DNS-01
This example shows how to set up a ClusterIssuer to use a DNS-01 challenge to verify domain name ownership. Depending on your DNS provider, you will likely need a Kubernetes Secret to store the token. This example uses Cloudflare. Note the use of namespaces: cert-manager, which is deployed into the cert-manager namespace, must be able to access the Secret there.
For this example, you need a Cloudflare API token, which you can create from your account. It replaces the <API Token> placeholder below. If you are not using Cloudflare, follow your provider's documentation.
1. Create a Secret for the API token.
$ cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
name: cloudflare-api-token-secret
namespace: cert-manager
type: Opaque
stringData:
api-token: <API Token>
EOF
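The stringData field above accepts the token in plain text. If you use the data field instead, the value must be base64-encoded first. A small sketch (the token value here is invented for illustration):

```shell
# Hypothetical API token, for illustration only.
TOKEN="example-api-token"
# Secrets using the `data:` field require base64-encoded values;
# `stringData:` accepts plain text and encodes it for you.
printf '%s' "$TOKEN" | base64
```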
2. Create the ClusterIssuer, using the DNS-01 challenge.
$ cat << EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: prod-issuer
spec:
acme:
email: [email protected]
server: https://acme-v02.api.letsencrypt.org/directory
privateKeySecretRef:
name: prod-issuer-account-key
solvers:
- dns01:
cloudflare:
apiTokenSecretRef:
name: cloudflare-api-token-secret
key: api-token
EOF
3. Verify the ClusterIssuer (it should show Ready).
$ kubectl get clusterissuer
NAME READY AGE
prod-issuer True 31m
Deploy Ingress
After completing the above steps, you can deploy the Ingress resource for your application. This will route traffic to the NGINX Cafe application we deployed earlier.
Using Kubernetes Ingress
If you use the standard Kubernetes Ingress resource, use the following deployment YAML to configure the Ingress and request the certificate.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: cafe-ingress
annotations:
cert-manager.io/cluster-issuer: prod-issuer
acme.cert-manager.io/http01-edit-in-place: "true"
spec:
ingressClassName: nginx
tls:
- hosts:
- cert.example.com
secretName: cafe-secret
rules:
- host: cert.example.com
http:
paths:
- path: /tea
pathType: Prefix
backend:
service:
name: tea-svc
port:
number: 80
- path: /coffee
pathType: Prefix
backend:
service:
name: coffee-svc
port:
number: 80
Key parts of the manifest to note:
- The API being called is the standard Kubernetes Ingress API.
- A key part of this configuration is under metadata.annotations, where we set acme.cert-manager.io/http01-edit-in-place to "true". This value is required and adjusts the way the challenge is served. For more information, see the supported annotations documentation. This can also be handled with a master/minion setup.
- spec.ingressClassName refers to the NGINX Ingress Controller we installed and will be using.
- spec.tls.secretName names the Kubernetes Secret that stores the certificate returned by Let's Encrypt when it is issued.
- Our hostname cert.example.com is specified in both spec.tls.hosts and spec.rules.host. It is the hostname for which our ClusterIssuer issues the certificate.
- The spec.rules.http section defines the paths and the backend services that serve requests on those paths. For example, traffic to /tea is directed to port 80 on tea-svc.
1. Modify the manifest above to suit your installation. At a minimum, change the values of spec.rules.host and spec.tls.hosts, but you should review all of the parameters in the configuration.
2. Apply the manifest.
$ kubectl apply -f ./cafe-ingress.yaml
ingress.networking.k8s.io/cafe-ingress created
3. Wait for the certificate to be issued, and the value of the READY field will change to “True”.
$ kubectl get certificates
NAME READY SECRET AGE
certificate.cert-manager.io/cafe-secret True cafe-secret 37m
Using NGINX Virtual Server / Virtual Routes
If you use the NGINX CRDs, you configure the deployment with a VirtualServer manifest instead. The full example is available on the NGINX community official website.
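As a sketch of what that manifest looks like, here is a minimal VirtualServer that mirrors the Ingress example above. Field names follow the NGINX Ingress Controller CRD API, and the host, Secret, and issuer names are the ones used throughout this article; treat it as a starting point rather than a definitive configuration.

```yaml
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: cafe
spec:
  host: cert.example.com
  tls:
    secret: cafe-secret
    # Ask cert-manager to issue the certificate via our ClusterIssuer.
    cert-manager:
      cluster-issuer: prod-issuer
  upstreams:
  - name: tea
    service: tea-svc
    port: 80
  - name: coffee
    service: coffee-svc
    port: 80
  routes:
  - path: /tea
    action:
      pass: tea
  - path: /coffee
    action:
      pass: coffee
```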
View certificate
You can view the certificate through the Kubernetes API. This will display details about the certificate, including its size and associated private key.
$ kubectl describe secret cafe-secret
Name: cafe-secret
Namespace: default
Labels: <none>
Annotations: cert-manager.io/alt-names: cert.example.com
cert-manager.io/certificate-name: cafe-secret
cert-manager.io/common-name: cert.example.com
cert-manager.io/ip-sans:
cert-manager.io/issuer-group:
cert-manager.io/issuer-kind: ClusterIssuer
cert-manager.io/issuer-name: prod-issuer
cert-manager.io/uri-sans:
Type: kubernetes.io/tls

Data
====
tls.crt: 5607 bytes
tls.key: 1675 bytes
To see the actual certificate and key, run the following command. (Note: this demonstrates a weakness of Kubernetes Secrets: they can be read by anyone with the necessary access rights.)
$ kubectl get secret cafe-secret -o yaml
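To inspect the decoded certificate itself, you can pipe the Secret's data through base64 and openssl. A sketch assuming the cafe-secret created above (the backslash in the jsonpath expression escapes the dot inside the key name):

```shell
# Extract the certificate from the Secret, decode it, and print
# its subject, issuer, and expiry date.
kubectl get secret cafe-secret -o jsonpath='{.data.tls\.crt}' \
  | base64 -d \
  | openssl x509 -noout -subject -issuer -enddate
```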
Test Ingress
Now test the certificates. You can use any method you want here; the example below uses cURL. If it succeeds, you see output like the sample that follows, including the server name, the server's internal address, the date, the chosen URI (route) (/coffee or /tea), and the request ID. If it fails, you get an HTTP error code, most likely 400 or 301.
$ curl https://cert.example.com/tea
Server address: 10.2.0.6:8080
Server name: tea-5c457db9-l4pvq
Date: 02/Sep/2022:15:21:06 +0000
URI: /tea
Request ID: d736db9f696423c6212ffc70cd7ebecf
$ curl https://cert.example.com/coffee
Server address: 10.2.2.6:8080
Server name: coffee-7c86d7d67c-kjddk
Date: 02/Sep/2022:15:21:10 +0000
URI: /coffee
Request ID: 4ea3aa1c87d2f1d80a706dde91f31d54
Certificate update
At the outset we mentioned that this approach eliminates the need to manage certificate renewals, but we have not yet explained how. Why? Because renewal is a core, built-in part of cert-manager. In this automated process, when cert-manager detects that a certificate does not exist, has expired, or is within 15 days of expiry, or when a user requests a new certificate via the CLI, it automatically requests a new one. It couldn't be simpler.
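If you need a renewal window other than the default, the Certificate resource that cert-manager manages accepts an explicit renewBefore duration. A hedged sketch, with names matching this article's examples (field names follow the cert-manager v1 API; the Certificate name is hypothetical):

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: cafe-cert          # hypothetical name for this sketch
  namespace: default
spec:
  secretName: cafe-secret  # where the issued certificate is stored
  dnsNames:
  - cert.example.com
  # Renew when 360h (15 days) of validity remain; adjust as needed.
  renewBefore: 360h
  issuerRef:
    name: prod-issuer
    kind: ClusterIssuer
```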
Frequently asked questions
Which challenge type should I use?
This mainly depends on your use case.
The HTTP-01 challenge method requires that port 80 be open to the Internet and that the DNS A record for the FQDN point to the Ingress controller's IP address. Beyond creating the A record, this method does not require access to the DNS provider.
When port 80 cannot be exposed to the Internet, the DNS-01 challenge method can be used, which only requires cert-manager to have egress access to the DNS provider. However, this approach requires that you have access to the DNS provider's API, and the level of access required varies depending on the specific provider.
How to troubleshoot?
Because of the complexity of Kubernetes, it can be difficult to provide targeted troubleshooting information. When you encounter issues, please ask us in the NGINX community’s official WeChat group (NGINX Plus users can use their normal support options).