[Tencent Cloud Finops Crane Training Camp] How to quickly build a Kubernetes+Crane environment and application on Windows

1. Introduction

  • Personal homepage : ζ Xiaocaiji
  • Hello everyone, I am a rookie. Let's learn together how to quickly build a Kubernetes + Crane environment and application on Windows.
  • If this article helps you, please follow, like, and bookmark (one-click triple).

2. Crane's past and present lives

  Crane is the first domestic cloud-native cost-optimization project open-sourced under Tencent Cloud's leadership. It follows the FinOps standard and has been certified by the FinOps Foundation as the world's first open-source solution for cost reduction and efficiency improvement. It provides enterprises running Kubernetes clusters with a simple, reliable, and powerful automated deployment tool.

  Crane was originally designed to help enterprises better manage and scale their Kubernetes clusters, enabling more efficient cloud-native application management.

  Crane is easy to use, highly customizable and extensible. It provides a set of easy-to-use command-line tools that allow developers and administrators to easily deploy applications to Kubernetes clusters. Crane also supports multiple cloud platforms and can be customized according to specific business needs.

  Crane has been deployed in production systems by companies such as Tencent, NetEase, Speedy, Kujiale, Mingyuan Cloud, and Shushu Technology. Its main contributors come from well-known companies including Tencent, Xiaohongshu, Google, eBay, Microsoft, and Tesla.


3. If a worker wants to do a good job, he must first sharpen his tools (tools and environment preparation)

1. Tool preparation

Either curl or brew will work; I use curl here.


Download the installation package

   Direct download link: https://curl.se/windows
   Official curl download page: https://curl.se/download.html

insert image description here


install curl

  Unzip the downloaded curl-8.0.1_9-win64-mingw.zip file to the installation directory, as shown in the figure:

insert image description here


  Open the bin directory and find the curl.exe and curl-ca-bundle.crt files, as shown in the figure:

insert image description here


  Configure the environment variables (the directory to add is the one containing curl.exe):

insert image description here
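
  If you prefer the command line to the GUI dialog, the bin directory can also be appended to the user PATH from cmd. This is only a sketch; C:\tools\curl\bin is an assumed unzip location, so adjust it to wherever you extracted curl:

rem Append the curl bin directory (assumed path) to PATH for future sessions
setx PATH "%PATH%;C:\tools\curl\bin"

  Note that setx only affects newly opened terminals, so open a fresh cmd window before testing.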


Verify that the installation was successful

  Open cmd and run curl to verify that it works, as shown in the figure:

insert image description here


2. Environment preparation

Install the runtime environment: kubectl, helm, kind, and Docker.


kubectl deployment

  There are several ways to install kubectl on a Windows system:

  (1) Install kubectl on Windows with curl.
  (2) Install with Chocolatey, Scoop or winget on Windows.

  The following mainly describes how to install kubectl on Windows with curl.


  【1】Download

  Download the latest patch version 1.27: kubectl 1.27.1

  If you have curl installed, you can also use this command:

curl.exe -LO "https://dl.k8s.io/release/v1.27.1/bin/windows/amd64/kubectl.exe"

  Open cmd and execute the above command, as shown in the figure:

insert image description here

   If curl is not installed on your system, install it first; see the curl installation steps in the previous section.


  【2】Verify

  Verify the executable (optional step):

  Download the kubectl checksum file:

curl.exe -LO "https://dl.k8s.io/v1.27.1/bin/windows/amd64/kubectl.exe.sha256"

  In a command line environment, manually compare the output of the CertUtil command with the checksum file:

CertUtil -hashfile kubectl.exe SHA256
type kubectl.exe.sha256

  Open cmd and execute the above command, as shown in the figure:

insert image description here

  To automate validation with PowerShell, use the operator -eq to directly get a True or False result:

$(Get-FileHash -Algorithm SHA256 .\kubectl.exe).Hash -eq $(Get-Content .\kubectl.exe.sha256)

insert image description here


  【3】Configure environment variables

  Append or insert the kubectl binary folder to your PATH environment variable as shown:

insert image description here


  【4】Verify whether the installation is successful

  Test to make sure that the version of this kubectl is consistent with the expected version:

kubectl version --client

  Open cmd and execute the above command, as shown in the figure:

insert image description here

Note: the above command produces a warning:
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.
You can ignore this warning; you are only checking the version of the kubectl you installed.

  Or use the following command to view the details of the version:

kubectl version --client --output=yaml

  Open cmd and execute the above command, as shown in the figure:

insert image description here


helm deployment

  【1】Using Chocolatey (Windows)

  (1) Install the Chocolatey software environment:

  Windows 7+
  PowerShell v2+
  .NET Framework 4+

  Then open PowerShell as administrator and run the following command:

Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))

  Run the above code in PowerShell, the result is shown in the following figure:

insert image description here


  (2) Check whether Chocolatey installed successfully by running choco or choco -? to show the help. My PowerShell session is shown below:

insert image description here


  (3) Install Helm with Chocolatey
  Members of the Helm community maintain a Helm package for Chocolatey, and it is usually up to date.

choco install kubernetes-helm

  Run the above code in PowerShell, the result is shown in the following figure:

insert image description here


  【2】Use Scoop (Windows)

  (1) Install Scoop

  Please refer to: Install scoop on Windows


  (2) Install Helm with Scoop

  Members of the Helm community maintain a Helm package for Scoop, which is usually up to date.

scoop install helm
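
  Whichever method you use, helm version should confirm the installation (the exact output depends on the installed release; the line below is only an example):

helm version
# e.g. version.BuildInfo{Version:"v3.x.y", ...}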

kind deployment

  For the official local-environment setup, refer to the kind installation documentation: https://kind.sigs.k8s.io/docs/user/quick-start/#installation

  【1】Install with PowerShell on Windows

  In PowerShell, execute the following code to install:

curl.exe -Lo kind-windows-amd64.exe https://kind.sigs.k8s.io/dl/v0.18.0/kind-windows-amd64
Move-Item .\kind-windows-amd64.exe c:\some-dir-in-your-PATH\kind.exe

   【2】Install with Chocolatey on Windows (the method I used)

  In PowerShell, execute the following code to install:

choco install kind

insert image description here


  【3】Verify that kind installed successfully

  Run kind to check. My PowerShell session is shown below:

insert image description here
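
  You can also check the installed version explicitly (the output below is only an example; the version depends on what Chocolatey installed):

kind version
# e.g. kind v0.18.0 go1.20.2 windows/amd64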


Docker deployment

  【1】Prepare the environment

  Open a command window as administrator and run: wsl --install

insert image description here


  The installation is complete

insert image description here

  Remember to restart your computer!


  Open the command window as administrator again and run the following command:

wsl --install -d Ubuntu

  It may be a bit slow; if there is no response for a long time, press Enter:

insert image description here
insert image description here


  Enter a user name and password
  User name: zhangjingqi
  Password: 123456 (nothing is echoed while you type the password, so type carefully)
insert image description here


  After the account and password are set successfully, you will see the following:

insert image description here


  【2】Docker installation

  Official website address: https://docs.docker.com/get-docker/

  Choose the installation package for your system; I chose Windows, as shown in the figure:

insert image description here


  Click "Download". When the download finishes, double-click the installer to run it.

  The screen below asks whether to add a desktop shortcut; add it or not as you prefer, then click OK.

insert image description here
insert image description here


  After installation, remember to restart your computer!

  If the following interface appears, the installation succeeded.

insert image description here
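
  To double-check from a terminal (Docker Desktop must already be running), the classic smoke test also works:

docker version
docker run hello-world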


4. Make persistent efforts (local installation of Crane and applications)

1. Install the local Kind cluster and Crane components

  The following commands will install Crane and its dependencies (Prometheus/Grafana).

curl -sf https://raw.githubusercontent.com/gocrane/crane/main/hack/local-env-setup.sh | sh -

  If the command above fails with a network error, you can install from the local installation package instead by running the following command:

# Must be run from the parent directory of "installation", e.g. the preset "training" root directory

# Mac/Linux
bash installation/local-env-setup.sh

# Windows
./installation/local-env-setup.sh

  Execute the above command in PowerShell, as shown in the following figure:

insert image description here


  Make sure all pods are up and running:

$ export KUBECONFIG=${HOME}/.kube/config_crane
$ kubectl get pod -n crane-system

NAME                                             READY   STATUS    RESTARTS       AGE
craned-6dcc5c569f-vnfsf                          2/2     Running   0              4m41s
fadvisor-5b685f4cd6-xpxzq                        1/1     Running   0              4m37s
grafana-64656f6d54-6l24j                         1/1     Running   0              4m46s
metric-adapter-967c6d57f-swhfv                   1/1     Running   0              4m41s
prometheus-kube-state-metrics-7f9d78cffc-p8l7c   1/1     Running   0              4m46s
prometheus-server-fb944f4b7-4qqlv                2/2     Running   0              4m46s

Tip: the Pods take some time to start. Wait a few minutes and run the command again to check that everything is in the Running state.


2. Visit Crane Dashboard

kubectl -n crane-system port-forward service/craned 9090:9090

# Run subsequent terminal commands in a new window, and set this environment variable in
# every new window first (otherwise you will get a "port 8080 refused" error)
export KUBECONFIG=${HOME}/.kube/config_crane

  With the port-forward running, open http://127.0.0.1:9090 in a browser to visit the Crane Dashboard.

  Add local cluster:

insert image description here


3. Use Smart Elastic EffectiveHPA

  【1】Install Metrics Server

  Install Metrics Server with the following command:

kubectl apply -f installation/components.yaml
kubectl get pod -n kube-system
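
  Once the metrics-server Pod is Running, kubectl top should start returning live usage data, which the autoscaler relies on for CPU utilization metrics:

kubectl top nodes
kubectl top pods -A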

  【2】Create a test application

kubectl apply -f installation/effective-hpa.yaml
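
  The bundled installation/effective-hpa.yaml creates a sample php-apache workload together with an EffectiveHorizontalPodAutoscaler. The EHPA part has roughly the following shape (an abridged sketch based on the Crane sample; field values are illustrative, so consult the file itself for the exact content):

apiVersion: autoscaling.crane.io/v1alpha1
kind: EffectiveHorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:               # the workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  scaleStrategy: Auto           # Auto applies the recommendations automatically
  prediction:                   # optional: prediction-driven scaling
    predictionWindowSeconds: 3600
    predictionAlgorithm:
      algorithmType: dsp
      dsp:
        sampleInterval: "60s"
        historyLength: "3d"
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50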

  Run the following command to view the current status of EffectiveHPA:

kubectl get ehpa

  The output is similar to:

NAME         STRATEGY   MINPODS   MAXPODS   SPECIFICPODS   REPLICAS   AGE
php-apache   Auto       1         10                       0          3m39s

  【3】Increase the load

# Run this in a separate terminal
# If this is a newly opened terminal, set the environment variable first
export KUBECONFIG=${HOME}/.kube/config_crane

# Leave load generation running in this terminal; you can continue with the remaining steps elsewhere
kubectl run -i --tty load-generator --rm --image=busybox:1.28 --restart=Never -- /bin/sh -c "while sleep 0.01; do wget -q -O- http://php-apache; done"

  Now execute:

# Press Ctrl+C to stop watching when you are done
# If this is a newly opened terminal, set the environment variable first
export KUBECONFIG=${HOME}/.kube/config_crane

kubectl get hpa ehpa-php-apache --watch

  As the number of requests grows, CPU utilization keeps rising, and you will see EffectiveHPA automatically scale out the replicas.

  Note: prediction data only appears once more than two days of monitoring data have been collected.


4. Cost display

  Crane Dashboard provides a variety of charts showing the cost and resource usage of the cluster.

  【1】Cluster Overview

insert image description here
insert image description here

  • Total cost this month: the total cost of the cluster over the past month. Starting from when Crane was installed, the cluster cost is accumulated hour by hour.
  • Estimated monthly cost: the next month's cost estimated from the latest hourly cost: hourly cost * 24 * 30 (a worked example follows this list).
  • Estimated total CPU cost: the next month's CPU cost estimated from the last hour's CPU cost: hourly CPU cost * 24 * 30.
  • Estimated total memory cost: the next month's memory cost estimated from the last hour's memory cost: hourly memory cost * 24 * 30.
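
  As a quick sanity check of the formula: if the cluster's latest hourly cost were, say, 0.2 USD (an illustrative figure), the estimated monthly cost would be 0.2 * 24 * 30 = 144 USD, and the CPU and memory estimates are computed the same way from their own hourly costs.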

  【2】Cost Insight -> Cluster Overview

insert image description here

insert image description here

  • Workload Spec CPU Slack: workload CPU spec - recommended CPU spec
  • Workload Total CPU Slack: (workload CPU spec - recommended CPU spec) * number of Pods (see the worked example below)
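
  For example (illustrative numbers): a Deployment that requests 2 cores per Pod, has a recommended spec of 0.5 cores, and runs 3 Pods has a spec CPU slack of 2 - 0.5 = 1.5 cores and a total CPU slack of 1.5 * 3 = 4.5 cores.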

  More cost-analysis charts can be found by logging in to Grafana or by reading the source code.

  To access Grafana, create a port forward with the following command:

# If this is a newly opened terminal, set the environment variable first
export KUBECONFIG=${HOME}/.kube/config_crane

kubectl -n crane-system port-forward service/grafana 8082:8082

Access local Grafana (account password: admin/admin): http://127.0.0.1:8082/grafana/login


5. How to calculate the cost

  Cost calculation is implemented by the Fadvisor component, which is installed along with Crane and provides both cost display and cost analysis:

  • Server: collects cluster metric data and calculates cost
  • Exporter: exposes the cost metrics

insert image description here
Principle:

  Fadvisor cost models provide a way to estimate and analyze resource prices per container, pod or other resource in Kubernetes.

  Note that the cost model only produces an estimate and is not a substitute for cloud bills, because actual billing depends on more factors, such as various billing rules. The theory behind the calculation is as follows:

  • The simplest cost model prices the resources of all nodes or Pods identically. For example, when calculating cost you can assume that every container has the same unit price per core of CPU and per GiB of RAM (for instance, a fixed hourly price for 2 cores and 3 GiB of RAM).

  • Advanced cost models estimate resource prices through cost allocation. The underlying idea is that although each cloud machine (CVM) instance has a different price depending on its instance type and billing type, the price ratio between CPU and RAM is relatively fixed, so resource costs can be derived from that ratio.

The specific calculation under the cost allocation model is as follows (a worked example follows the list):

  • Overall cluster cost: the sum of the CVM costs.
  • The CPU/memory price ratio is relatively fixed.
  • CVM cost = CPU unit cost * CPU amount + memory unit cost * memory amount.
  • Requested-CPU cost: overall cost * (ratio of CPU cost to CVM cost) gives the overall CPU cost; the cost of the CPU an application requests is then derived from the ratio of its requested CPU to the total CPU.
  • Per-namespace requested-CPU cost: the requested-CPU cost aggregated by namespace.
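
  A small worked example (all numbers invented for illustration): suppose a CVM node costs 100 per month and, under the fixed CPU:RAM price ratio, CPU accounts for 60% of that price, so the node's CPU cost is 60. A workload that requests a quarter of the node's CPU is then allocated 60 * 0.25 = 15, and summing such allocations over a namespace's workloads gives that namespace's CPU cost.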

5. Take it to the next level (configuration optimization)

  The dashboard shows the relevant cost data out of the box, because we installed the recommendation rules when adding the cluster.

  The recommendation framework automatically analyzes the running state of the resources in the cluster and gives optimization suggestions. Crane's recommendation module periodically detects resource-configuration problems in the cluster and produces optimization suggestions, and its intelligent recommendation provides various Recommenders that implement recommendations for different resource types.

  On the Cost Analysis > Recommended Rules page, you can see the two recommended rules we installed.

insert image description here

  These recommendation rules are the RecommendationRule CRD objects that were installed when the K8s cluster was connected to the Dashboard:

$ kubectl get RecommendationRule
NAME             RUNINTERVAL   AGE
idlenodes-rule   24h           16m
workloads-rule   24h           16m

  The resource object of the workloads-rule recommended rule is as follows:

apiVersion: analysis.crane.io/v1alpha1
kind: RecommendationRule
metadata:
  name: workloads-rule
  labels:
    analysis.crane.io/recommendation-rule-preinstall: "true"
spec:
  resourceSelectors:
    - kind: Deployment
      apiVersion: apps/v1
    - kind: StatefulSet
      apiVersion: apps/v1
  namespaceSelector:
    any: true
  runInterval: 24h
  recommenders:
    - name: Replicas
    - name: Resource

  RecommendationRule is a cluster-scoped object. This recommendation rule recommends resources and replica counts for the Deployments and StatefulSets in all namespaces. The relevant spec fields are as follows:

  • Run the analysis recommendation every 24 hours. The format of runInterval is the time interval, for example: 1h, 1m. If it is set to empty, it means to run only once.

  • The resources to be analyzed are set by configuring the resourceSelectors array. Each resourceSelector selects resources in K8s through kind, apiVersion, and name. When the name is not specified, it means all resources based on the namespaceSelector.

  • namespaceSelector defines the namespace of the resources to be analyzed, any: true means to select all namespaces.

  • recommenders defines which Recommenders will analyze the selected resources. Two Recommenders are currently supported:

    • Resource recommendation (Resource): analyzes the application's actual usage with the VPA algorithm and recommends a more appropriate resource configuration.
    • Replicas recommendation (Replicas): analyzes the application's actual usage with the HPA algorithm and recommends a more appropriate number of replicas.

1. Resource recommendation

  When creating application resources, Kubernetes users often set the request and limit based on experience. The resource recommendation algorithm analyzes the application's actual usage and recommends a more appropriate resource configuration, which you can adopt to improve cluster resource utilization. The recommendation model uses VPA's moving-window (Moving Window) algorithm:

  • The workload's historical CPU and memory usage over the past week (configurable) is obtained from monitoring data.
  • The algorithm accounts for data freshness: newer samples are given higher weight.
  • The recommended CPU value is computed from the target percentile configured by the user; the recommended memory value is based on the maximum of the historical data (a rough illustration follows this list).
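
  As a rough illustration (numbers invented for the example): if a container's CPU samples over the past week put the chosen target percentile (say P99) at about 480m, and its peak memory usage was about 900Mi, the recommendation would be roughly cpu: 480m and memory at or slightly above 900Mi, with recent samples pulling the percentile more strongly than old ones.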

2. Replica count recommendation

  Kubernetes users also often set the number of replicas based on experience when creating application resources. The replica recommendation algorithm analyzes the application's actual usage and recommends a more appropriate replica count, which you can likewise adopt to improve cluster resource utilization. The basic algorithm looks at the workload's historical CPU load, finds the hour with the lowest CPU usage over the past seven days, and computes the replica count that should be configured from a 50% (configurable) target utilization and the workload's CPU request.
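
  A worked example with invented numbers: if the lowest hourly CPU usage of the workload over the past seven days is 2 cores in total, each Pod requests 1 core, and the target utilization is 50%, the recommended replica count is 2 / (1 * 0.5) = 4.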


3. Recommendation configuration

  When Crane is deployed, a ConfigMap named recommendation-configuration is created in the same namespace. It contains a YAML-format RecommendationConfiguration that holds the recommender configuration, as shown below:

$ kubectl get cm recommendation-configuration -n crane-system -oyaml
apiVersion: v1
data:
  config.yaml: |-
    apiVersion: analysis.crane.io/v1alpha1
    kind: RecommendationConfiguration
    recommenders:
      - name: Replicas  # replica count recommendation
        acceptedResources:
          - kind: Deployment
            apiVersion: apps/v1
          - kind: StatefulSet
            apiVersion: apps/v1
      - name: Resource  # resource recommendation
        acceptedResources:
          - kind: Deployment
            apiVersion: apps/v1
          - kind: StatefulSet
            apiVersion: apps/v1
kind: ConfigMap
metadata:
  name: recommendation-configuration
  namespace: crane-system

  It should be noted that the resource type and recommenders need to match. For example, the Resource recommendation only supports Deployments and StatefulSets by default.

  Similarly, you can inspect the resource object of the idle-node recommendation rule:

$ kubectl get recommendationrule idlenodes-rule -oyaml
apiVersion: analysis.crane.io/v1alpha1
kind: RecommendationRule
metadata:
  labels:
    analysis.crane.io/recommendation-rule-preinstall: "true"
  name: idlenodes-rule
spec:
  namespaceSelector:
    any: true
  recommenders:
  - name: IdleNode
  resourceSelectors:
  - apiVersion: v1
    kind: Node
  runInterval: 24h

  After the RecommendationRule is created, the RecommendationRule controller periodically runs recommendation tasks according to the configuration, produces optimization suggestions, and generates Recommendation objects; we can then adjust the resource configuration according to those Recommendations.

  For example, several Recommendation objects with optimization suggestions have already been generated in our cluster:

kubectl get recommendations -A
NAME                            TYPE       TARGETKIND    TARGETNAMESPACE   TARGETNAME       STRATEGY   PERIODSECONDS   ADOPTIONTYPE          AGE
workloads-rule-resource-8whzs   Resource   StatefulSet   default           nacos            Once                       StatusAndAnnotation   34m
workloads-rule-resource-hx4cp   Resource   StatefulSet   default           redis-replicas   Once                       StatusAndAnnotation   34m

  You can inspect any of the Recommendation objects:

$ kubectl get recommend workloads-rule-resource-g7nwp -n crane-system -oyaml
apiVersion: analysis.crane.io/v1alpha1
kind: Recommendation
metadata:
  name: workloads-rule-resource-g7nwp
  namespace: crane-system
spec:
  adoptionType: StatusAndAnnotation
  completionStrategy:
    completionStrategyType: Once
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: fadvisor
    namespace: crane-system
  type: Resource
status:
  action: Patch
  conditions:
  - lastTransitionTime: "2022-10-20T07:43:49Z"
    message: Recommendation is ready
    reason: RecommendationReady
    status: "True"
    type: Ready
  currentInfo: '{"spec":{"template":{"spec":{"containers":[{"name":"fadvisor","resources":{"requests":{"cpu":"0","memory":"0"}}}]}}}}'
  lastUpdateTime: "2022-10-20T07:43:49Z"
  recommendedInfo: '{"spec":{"template":{"spec":{"containers":[{"name":"fadvisor","resources":{"requests":{"cpu":"114m","memory":"120586239"}}}]}}}}'
  recommendedValue: |
    resourceRequest:
      containers:
      - containerName: fadvisor
        target:
          cpu: 114m
          memory: "120586239"
  targetRef: {}

  You can also view the list of optimization suggestions on the resource recommendation page of the dashboard.

insert image description here
  Click "View Monitoring" to see detailed monitoring data.

insert image description here

  On this page you can see the current resources (container/CPU/memory) and the recommended values; click "Adopt suggestion" to get the command that applies the optimization.

insert image description here

  Executing the command completes the optimization; under the hood it simply patches the resources section of the resource object.

patchData=`kubectl get recommend workloads-rule-resource-g7nwp -n crane-system -o jsonpath='{.status.recommendedInfo}'`;kubectl patch Deployment fadvisor -n crane-system --patch "${patchData}"

  For idle-node recommendations, since the steps to take a node offline differ between platforms, users can drain or scale down nodes according to their own needs.

  The longer the history available in the monitoring system (such as Prometheus), the more accurate the recommendations. More than two weeks of data is recommended in production; predictions for newly deployed applications are often inaccurate.


6. Wave goodbye (environment cleanup)

  After the hands-on experiment is completed, the local cluster can be cleaned up and deleted:

kind delete cluster --name=crane

7. Summary

1. How does Crane improve utilization

  How does Crane achieve a 3-fold increase in utilization without compromising stability?
  The following figure shows the state of CPU resources in a real production system. As the figure shows, the idle-resource waste on compute nodes mainly comes from the following aspects:
insert image description here
  Crane provides service-optimization capabilities such as Request recommendation, replica count recommendation, HPA recommendation, and EPA, which help automate business decisions that optimize resource allocation. However, in larger organizations, changing every workload requires the support and cooperation of each component's owner, which takes a long time and pays off slowly. How can cluster resource utilization be improved quickly without changing the workloads, while keeping latency-sensitive, high-priority services stable and free of interference as deployment density rises? Crane's colocation (mixed deployment) capability provides the answer.

  Crane can colocate high-priority latency-sensitive workloads with low-priority batch workloads, increasing cluster utilization by a factor of 3!

insert image description here


2. The core challenge of colocation

  Colocation (mixed deployment) means deploying workloads of different priorities together in the same cluster. Generally, latency-sensitive services backing online traffic have higher priority, while high-throughput batch services backing offline computing usually have lower priority.
insert image description here
  Deploying these different kinds of workloads in the same cluster and sharing the compute resources looks like an effective way to improve utilization, so why has large-scale colocation only been achieved by top technology companies? The ideal is appealing, but the reality is harsh: if different workload types are simply deployed together without any resource isolation, the service quality of the online workloads will inevitably suffer. That is the core reason colocation is hard to put into practice.


3. Crane's colocation solution

  Crane provides an out-of-the-box solution for colocation scenarios. With the help of Kubernetes CRDs, the solution can be flexibly adapted to multi-priority online colocation and online/offline colocation scenarios. Its capabilities are summarized as follows:

  • Node load profiling and elastic resource reclamation
    Crane collects node utilization data in real time, uses various prediction algorithms to calculate the idle resources that will be available in the future, builds a profile for each node, and publishes the result as extended resources among the node's schedulable resources. The amount of elastic resources varies with the actual usage of high-priority services: as their usage grows, the elastic resources shrink.
  • Elastic resource re-allocation
    Low-priority services consume the elastic resources, and the scheduler ensures that enough elastic resources are available when a low-priority service is first scheduled, preventing node overload.
  • Interference detection and active avoidance based on custom watermarks
    • The NodeQoS API lets cluster operators define node watermarks, such as the total CPU watermark, the elastic resource allocation rate, or the elastic resource watermark, and define the avoidance actions to take when actual usage reaches a watermark.
    • The PodQoS API defines resource isolation policies for different classes of workloads, such as CPU scheduling priority and disk I/O, and defines which avoidance actions that class of workload allows.
    • AvoidanceAction defines action parameters such as scheduling bans, throttling, and eviction. When a node watermark is triggered, an action is only performed on the workload Pods that allow it.
  • QoS enhancement based on kernel isolation
    In Crane's open-source solution, the resource ceiling of interference sources can be throttled by dynamically adjusting cgroups. To meet the isolation requirements of large-scale production systems, Crane also builds on the Tencent RUE kernel, using features such as multi-level CPU scheduling priority and absolute preemption to ensure that high-priority services are not affected by low-priority ones.
  • Enhanced rescheduling, such as graceful eviction with simulated scheduling
    When throttling is not enough to suppress interference, low-priority Pods must be evicted from the node to protect the service quality of high-priority services. Crane's graceful eviction, combined with simulated scheduling, uses the cluster-wide view and pre-scheduling to reduce the impact of rescheduling on applications.

  When scheduling offline jobs, colocation should prefer nodes with lower actual load among those providing elastic resources, to avoid unbalanced node load; at the same time, the resource demands of high-priority, latency-sensitive services must be met, for example by scheduling them onto nodes with abundant resources and satisfying NUMA-topology core-binding requirements. Crane meets these requirements with load-aware scheduling and CPU-topology-aware scheduling; for details, see Crane-Scheduler and CPU topology-aware scheduling.


4. Summary

  As cloud platform users, we all hope that the servers we purchase are used to the fullest and reach maximum utilization. In practice, however, hitting the theoretical node-load target is very difficult: compute nodes always carry some idle resources caused by bin-packing fragmentation and low load. Crane provides business-optimization capabilities such as Request recommendation, replica count recommendation, HPA recommendation, and EPA, which help automate decisions that optimize resource allocation. At the same time, Crane can colocate high-priority latency-sensitive workloads with low-priority batch workloads, increasing cluster utilization by a factor of 3! Crane covers most of RUE's features across dimensions such as CPU, memory, I/O, and network, and through PodQOS and NodeQOS it gives applications kernel-level resource isolation and protection in batches, so users do not have to deal with complex cgroup configuration. In short, its use cases and scope are bound to keep expanding.


  About Tencent Cloud Finops Crane Training Camp:

  The FinOps Crane training camp is aimed mainly at developers. It seeks to improve developers' hands-on skills in container deployment and Kubernetes, while also recruiting contributors to the Crane open-source project and encouraging developers to submit issues and bug reports, through a series of technical activities such as hands-on experiments, team formation, and prize-winning article collections. Through the event, developers not only gain an in-depth understanding of the FinOps Crane open-source project but also make real gains in cloud-native skills.

  To reward developers, we have also set up point-earning tasks and gifts that the points can be exchanged for.

  Event introduction: https://marketing.csdn.net/p/038ae30af2357473fc5431b63e4e1a78

  Open source project: https://github.com/gocrane/crane

  This concludes "[Tencent Cloud Finops Crane Training Camp] How to quickly build a Kubernetes+Crane environment and application on Windows". Thank you for reading; if the article helped you, please follow, like, and bookmark (one-click triple).

