Linkerd is a fully open source service mesh implementation for Kubernetes that makes running services easier and more secure by giving you runtime debugging, observability, reliability, and security, all without requiring any changes to your code.
Linkerd works by installing a set of ultra-light, transparent proxies next to each service instance, which automatically handle all traffic to and from the service. Because they are transparent, these proxies act as highly instrumented out-of-process network stacks, sending telemetry data to, and receiving control signals from, the control plane. This design lets Linkerd measure and manipulate traffic to and from the service without introducing excessive latency. To be as small, light, and secure as possible, Linkerd's proxies are written in Rust.
2. Features
Automatic mTLS: Linkerd automatically enables mutual Transport Layer Security (TLS) for all communications between mesh applications.
Automatic proxy injection: Linkerd automatically injects data plane proxies into pods based on annotations.
Container Network Interface Plugin: Linkerd can be configured to run a CNI plugin that automatically rewrites iptables rules for each pod.
Dashboards and Grafana: Linkerd provides a web dashboard, as well as preconfigured Grafana dashboards.
Distributed tracing: You can enable distributed tracing support in Linkerd.
Fault injection: Linkerd provides mechanisms to programmatically inject faults into services.
High availability: The Linkerd control plane can run in high availability (HA) mode.
HTTP, HTTP/2, and gRPC proxies: Linkerd will automatically enable advanced features (including metrics, load balancing, retries, and more) for HTTP, HTTP/2, and gRPC connections.
Ingress: Linkerd works with the ingress controller of your choice.
Load balancing: Linkerd automatically load balances requests to all target endpoints over HTTP, HTTP/2, and gRPC connections.
Multi-cluster communication: Linkerd can transparently and securely connect services running in different clusters.
Retries and timeouts: Linkerd can perform service-specific retries and timeouts.
Service Profiles: Linkerd's service profiles support per-route metrics as well as retries and timeouts.
TCP proxy and protocol detection: Linkerd is able to proxy all TCP traffic, including TLS connections, WebSockets, and HTTP tunnels.
Telemetry and Monitoring: Linkerd automatically collects metrics from all services that send traffic through it.
Traffic splitting (canary, blue/green deployment): Linkerd can dynamically send a portion of traffic to different services.
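Several of the features above hinge on automatic proxy injection. As a minimal sketch (the Deployment name and image below are placeholders), meshing a workload only requires the linkerd.io/inject annotation on its Pod template:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                # hypothetical workload name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        linkerd.io/inject: enabled   # tells Linkerd to inject the sidecar proxy
    spec:
      containers:
      - name: my-app
        image: my-app:latest         # placeholder image
```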
3. Installation
Linkerd is installed in two steps: first install the linkerd CLI tool locally, then use it to install the control plane onto a Kubernetes cluster. You therefore need kubectl access to a working Kubernetes cluster; if you do not have one, you can use KinD to quickly create one locally.
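If no cluster is available, KinD can create a local one with a single command (assuming KinD and Docker are installed; the cluster name here is arbitrary):

```shell
$ kind create cluster --name linkerd-demo
```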
$ kubectl version --short
Client Version: v1.23.5
Server Version: v1.22.8
Linkerd's CLI tool can be installed locally using the following command:
$ curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/install | sh
On macOS, you can also install it with the Homebrew tool:
$ brew install linkerd
Alternatively, you can download it directly from the Linkerd releases page. After installation, use the following command to verify that the CLI was installed successfully:
$ linkerd version
Client version: stable-2.11.1
Server version: unavailable
Normally you will see the CLI's version information, but the server version is reported as unavailable. This is because the control plane has not yet been installed on the Kubernetes cluster, so the next step is to install the server side. Kubernetes clusters can be configured in many different ways, so before installing the Linkerd control plane you should verify that the cluster configuration is correct. To check whether the cluster is ready for Linkerd, run:
$ linkerd check --pre
Linkerd core checks
===================
kubernetes-api
--------------
√ can initialize the client
√ can query the Kubernetes API
kubernetes-version
------------------
√ is running the minimum Kubernetes API version
√ is running the minimum kubectl version
pre-kubernetes-setup
--------------------
√ control plane namespace does not already exist
√ can create non-namespaced resources
√ can create ServiceAccounts
√ can create Services
√ can create Deployments
√ can create CronJobs
√ can create ConfigMaps
√ can create Secrets
√ can read Secrets
√ can read extension-apiserver-authentication configmap
√ no clock skew detected
linkerd-version
---------------
√ can determine the latest version
‼ cli is up-to-date
is running version 2.11.1 but the latest stable version is 2.11.4
see https://linkerd.io/2.11/checks/#l5d-version-cli for hints
Status check results are √
If all checks pass, you can install the Linkerd control plane by executing a single command:
$ linkerd install | kubectl apply -f -
In this command, linkerd install generates a Kubernetes resource manifest containing all necessary control plane resources, which is then applied to the cluster with kubectl apply. The control plane is installed into a namespace named linkerd; after installation completes, control plane Pods such as linkerd-destination, linkerd-identity, and linkerd-proxy-injector will be running there.
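You can confirm this with kubectl. The Deployment names below correspond to the control plane components; the AGE column is illustrative and will differ in your cluster:

```shell
$ kubectl get deploy -n linkerd
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
linkerd-destination      1/1     1            1           2m
linkerd-identity         1/1     1            1           2m
linkerd-proxy-injector   1/1     1            1           2m
```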
After the installation is complete, wait for the control plane to become ready and verify the installation by running:
$ linkerd check
Linkerd core checks
===================
kubernetes-api
--------------
√ can initialize the client
√ can query the Kubernetes API
kubernetes-version
------------------
√ is running the minimum Kubernetes API version
√ is running the minimum kubectl version
linkerd-existence
-----------------
√ 'linkerd-config' config map exists
√ heartbeat ServiceAccount exist
√ control plane replica sets are ready
√ no unschedulable pods
√ control plane pods are ready
√ cluster networks contains all node podCIDRs
linkerd-config
--------------
√ control plane Namespace exists
√ control plane ClusterRoles exist
√ control plane ClusterRoleBindings exist
√ control plane ServiceAccounts exist
√ control plane CustomResourceDefinitions exist
√ control plane MutatingWebhookConfigurations exist
√ control plane ValidatingWebhookConfigurations exist
linkerd-identity
----------------
√ certificate config is valid
√ trust anchors are using supported crypto algorithm
√ trust anchors are within their validity period
√ trust anchors are valid for at least 60 days
√ issuer cert is using supported crypto algorithm
√ issuer cert is within its validity period
√ issuer cert is valid for at least 60 days
√ issuer cert is issued by the trust anchor
linkerd-webhooks-and-apisvc-tls
-------------------------------
√ proxy-injector webhook has valid cert
√ proxy-injector cert is valid for at least 60 days
√ sp-validator webhook has valid cert
√ sp-validator cert is valid for at least 60 days
√ policy-validator webhook has valid cert
√ policy-validator cert is valid for at least 60 days
linkerd-version
---------------
√ can determine the latest version
‼ cli is up-to-date
is running version 2.11.1 but the latest stable version is 2.11.4
see https://linkerd.io/2.11/checks/#l5d-version-cli for hints
control-plane-version
---------------------
√ can retrieve the control plane version
‼ control plane is up-to-date
is running version 2.11.1 but the latest stable version is 2.11.4
see https://linkerd.io/2.11/checks/#l5d-version-control for hints
√ control plane and cli versions match
linkerd-control-plane-proxy
---------------------------
√ control plane proxies are healthy
‼ control plane proxies are up-to-date
some proxies are not running the current version:
    * linkerd-destination-79d6fc496f-dcgfx (stable-2.11.1)
    * linkerd-identity-6b78ff444f-jwp47 (stable-2.11.1)
    * linkerd-proxy-injector-86f7f649dc-v576m (stable-2.11.1)
see https://linkerd.io/2.11/checks/#l5d-cp-proxy-version for hints
√ control plane proxies and cli versions match
Status check results are √
When the Status check results are √ message appears, the Linkerd control plane has been installed successfully. Besides the CLI tool, you can also install the control plane through the Helm chart, as shown below:
$ helm repo add linkerd https://helm.linkerd.io/stable
# set expiry date to one year from now; on macOS:
$ exp=$(date -v+8760H +"%Y-%m-%dT%H:%M:%SZ")
# on Linux:
$ exp=$(date -d '+8760 hour' +"%Y-%m-%dT%H:%M:%SZ")
$ helm install linkerd2 \
--set-file identityTrustAnchorsPEM=ca.crt \
--set-file identity.issuer.tls.crtPEM=issuer.crt \
--set-file identity.issuer.tls.keyPEM=issuer.key \
--set identity.issuer.crtExpiry=$exp \
linkerd/linkerd2
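The ca.crt, issuer.crt, and issuer.key files passed via --set-file are not created by Helm; they must be generated beforehand. One common approach, following Linkerd's documentation and assuming the step CLI is installed, is:

```shell
# generate the trust anchor (root CA)
$ step certificate create root.linkerd.cluster.local ca.crt ca.key \
    --profile root-ca --no-password --insecure

# generate the issuer certificate and key, signed by the trust anchor
$ step certificate create identity.linkerd.cluster.local issuer.crt issuer.key \
    --profile intermediate-ca --not-after 8760h --no-password --insecure \
    --ca ca.crt --ca-key ca.key
```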
In addition, the chart contains a values-ha.yaml file that overrides some default values for high-availability scenarios, similar to the --ha option of linkerd install. You can obtain values-ha.yaml by fetching the chart:
$ helm fetch --untar linkerd/linkerd2
Then pass the override file with the -f flag, for example:
## see above on how to set $exp
helm install linkerd2 \
--set-file identityTrustAnchorsPEM=ca.crt \
--set-file identity.issuer.tls.crtPEM=issuer.crt \
--set-file identity.issuer.tls.keyPEM=issuer.key \
--set identity.issuer.crtExpiry=$exp \
-f linkerd2/values-ha.yaml \
linkerd/linkerd2
Either installation method works; this completes the installation of Linkerd. Re-run the linkerd version command and you should now see the server version information as well:
$ linkerd version
Client version: stable-2.11.1
Server version: stable-2.11.1
4. Example
Next, install a simple sample app, Emojivoto, a standalone Kubernetes application that uses a mix of gRPC and HTTP calls to let users vote for their favorite emoji. Emojivoto can be installed into the emojivoto namespace by running the following command:
$ curl -fsL https://run.linkerd.io/emojivoto.yml | kubectl apply -f -
namespace/emojivoto created
serviceaccount/emoji created
serviceaccount/voting created
serviceaccount/web created
service/emoji-svc created
service/voting-svc created
service/web-svc created
deployment.apps/emoji created
deployment.apps/vote-bot created
deployment.apps/voting created
deployment.apps/web created
You can see that the application consists of four workloads.
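Listing the Deployments confirms this (the names match the resources created above; the AGE column is illustrative):

```shell
$ kubectl get deploy -n emojivoto
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
emoji      1/1     1            1           1m
vote-bot   1/1     1            1           1m
voting     1/1     1            1           1m
web        1/1     1            1           1m
```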
The Emojivoto application can now be accessed in the browser via http://localhost:8080.
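The application is not exposed outside the cluster by default, so forward the web service's port first (the service name comes from the manifest applied above; port 80 is assumed from the upstream Emojivoto manifests):

```shell
$ kubectl -n emojivoto port-forward svc/web-svc 8080:80
```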
You can vote for your favorite emoji on the page, but some selections produce errors. For example, clicking the donut emoji returns a 404 page.
But don't worry: this is a bug deliberately left in the application so that Linkerd can later be used to identify the problem. Next, add the sample application to the service mesh by injecting Linkerd's data plane proxy into it. Run the following command to mesh the Emojivoto application:
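The standard injection pipeline from Linkerd's getting-started guide fetches the Deployment manifests, pipes them through linkerd inject, and reapplies them:

```shell
$ kubectl get -n emojivoto deploy -o yaml \
    | linkerd inject - \
    | kubectl apply -f -
```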
The above command first fetches all Deployments running in the emojivoto namespace, runs their manifests through linkerd inject, and reapplies them to the cluster. Note that the linkerd inject command only adds a linkerd.io/inject: enabled annotation to the Pod specification; it does not directly inject a sidecar container. The annotation instructs Linkerd to inject the proxy when the Pod is created, so after the command runs, a sidecar proxy container is added to each application Pod.
You can see that each Pod now has two containers: the original one plus the Linkerd sidecar proxy.
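Listing the Pods again shows the 2/2 READY count (the Pod hash suffixes here match the linkerd check output later in this article; the AGE column is illustrative):

```shell
$ kubectl get pods -n emojivoto
NAME                        READY   STATUS    RESTARTS   AGE
emoji-696d9d8f95-8wrmg      2/2     Running   0          1m
vote-bot-6d7677bb68-c98kb   2/2     Running   0          1m
voting-ff4c54b8d-rdtmk      2/2     Running   0          1m
web-5f86686c4d-qh5bz        2/2     Running   0          1m
```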
When the update completes, the application has been successfully added to Linkerd's mesh, and the newly added proxy containers form the data plane. You can check the data plane status with the following command:
$ linkerd -n emojivoto check --proxy
Linkerd core checks
===================
kubernetes-api
--------------
√ can initialize the client
√ can query the Kubernetes API
kubernetes-version
------------------
√ is running the minimum Kubernetes API version
√ is running the minimum kubectl version
linkerd-existence
-----------------
√ 'linkerd-config' config map exists
√ heartbeat ServiceAccount exist
√ control plane replica sets are ready
√ no unschedulable pods
√ control plane pods are ready
√ cluster networks contains all node podCIDRs
linkerd-config
--------------
√ control plane Namespace exists
√ control plane ClusterRoles exist
√ control plane ClusterRoleBindings exist
√ control plane ServiceAccounts exist
√ control plane CustomResourceDefinitions exist
√ control plane MutatingWebhookConfigurations exist
√ control plane ValidatingWebhookConfigurations exist
linkerd-identity
----------------
√ certificate config is valid
√ trust anchors are using supported crypto algorithm
√ trust anchors are within their validity period
√ trust anchors are valid for at least 60 days
√ issuer cert is using supported crypto algorithm
√ issuer cert is within its validity period
√ issuer cert is valid for at least 60 days
√ issuer cert is issued by the trust anchor
linkerd-webhooks-and-apisvc-tls
-------------------------------
√ proxy-injector webhook has valid cert
√ proxy-injector cert is valid for at least 60 days
√ sp-validator webhook has valid cert
√ sp-validator cert is valid for at least 60 days
√ policy-validator webhook has valid cert
√ policy-validator cert is valid for at least 60 days
linkerd-identity-data-plane
---------------------------
√ data plane proxies certificate match CA
linkerd-version
---------------
√ can determine the latest version
‼ cli is up-to-date
is running version 2.11.1 but the latest stable version is 2.11.4
see https://linkerd.io/2.11/checks/#l5d-version-cli for hints
linkerd-control-plane-proxy
---------------------------
√ control plane proxies are healthy
‼ control plane proxies are up-to-date
some proxies are not running the current version:
    * linkerd-destination-79d6fc496f-dcgfx (stable-2.11.1)
    * linkerd-identity-6b78ff444f-jwp47 (stable-2.11.1)
    * linkerd-proxy-injector-86f7f649dc-v576m (stable-2.11.1)
see https://linkerd.io/2.11/checks/#l5d-cp-proxy-version for hints
√ control plane proxies and cli versions match
linkerd-data-plane
------------------
√ data plane namespace exists
√ data plane proxies are ready
‼ data plane is up-to-date
some proxies are not running the current version:
    * emoji-696d9d8f95-8wrmg (stable-2.11.1)
    * vote-bot-6d7677bb68-c98kb (stable-2.11.1)
    * voting-ff4c54b8d-rdtmk (stable-2.11.1)
    * web-5f86686c4d-qh5bz (stable-2.11.1)
see https://linkerd.io/2.11/checks/#l5d-data-plane-version for hints
√ data plane and cli versions match
√ data plane pod labels are configured correctly
√ data plane service labels are configured correctly
√ data plane service annotations are configured correctly
√ opaque ports are properly annotated
Status check results are √
You can still access the application through http://localhost:8080, and using it is no different from before. To see what Linkerd can tell you about the application, however, you need to install a separate plugin. Because Linkerd's core control plane is kept very lightweight, non-critical but often useful functionality, including various dashboards, is shipped as plugins. For example, you can install the viz plugin, which contains Linkerd's observability and visualization components. The installation command is as follows:
$ linkerd viz install | kubectl apply -f -
The above command creates a namespace named linkerd-viz and installs the monitoring-related applications, such as Prometheus and Grafana, into it.
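Listing the Deployments in that namespace shows the viz components (this reflects Linkerd 2.11's viz plugin; the AGE column is illustrative and later releases ship a different component set):

```shell
$ kubectl get deploy -n linkerd-viz
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
grafana        1/1     1            1           1m
metrics-api    1/1     1            1           1m
prometheus     1/1     1            1           1m
tap            1/1     1            1           1m
tap-injector   1/1     1            1           1m
web            1/1     1            1           1m
```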
After the installation is complete, you can use the following command to open a dashboard page:
$ linkerd viz dashboard &
When the viz plugin is deployed, executing the above command automatically opens Linkerd's observability dashboard in the browser. In addition, you can expose the viz service through an Ingress by creating a resource object as shown below:
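A minimal sketch of such an Ingress, assuming the NGINX ingress controller and Linkerd 2.11's default viz web service on port 8084 (the Ingress name is arbitrary; the dashboard also validates the Host header, so a custom host may additionally require adjusting the enforcedHostRegexp setting):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: viz-ingress            # hypothetical name
  namespace: linkerd-viz
  annotations:
    # rewrite the Host header to the in-cluster service name
    nginx.ingress.kubernetes.io/upstream-vhost: web.linkerd-viz.svc.cluster.local:8084
spec:
  ingressClassName: nginx
  rules:
  - host: linkerd.k8s.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 8084
```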
After applying it, you can access viz via linkerd.k8s.local.
It can also display automatically generated topology diagrams.
On the dashboard we can see real-time metrics for each Emojivoto component and determine which component is partially failing, so that the problem can be solved in a targeted way. Each resource also has a Grafana icon next to it; clicking it jumps to the corresponding Grafana monitoring page.