OpenGauss database server and client core binding

0. Core binding


yum install numactl

numactl -H

numactl -H is a command that displays the hardware topology of the NUMA (Non-Uniform Memory Access) nodes on a system. NUMA is a memory architecture in which each processor (or group of cores) has its own local memory that it can access quickly, while memory attached to other nodes is reachable but slower.
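For reference, here is a sample output consistent with the values discussed below (reconstructed for illustration; CPU counts, memory sizes, and distances vary by machine):

```text
available: 4 nodes (0-3)
node 0 cpus: 0 1 2 ... 23
node 0 size: 97577 MB
node 0 free: 93294 MB
node 1 cpus: 24 25 26 ... 47
node 2 cpus: 48 49 50 ... 71
node 3 cpus: 72 73 74 ... 95
node distances:
node   0   1   2   3
  0:  10  12  20  22
  1:  12  10  ..  ..
  2:  20  ..  10  ..
  3:  22  ..  ..  10
```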

The output shows the NUMA nodes and their characteristics:

  • available: 4 nodes (0-3): Indicates that there are four NUMA nodes numbered from 0 to 3.

For each node, the following information is provided:

  • node X cpus: Lists the CPUs (or cores) that belong to the particular NUMA node. In this example, node 0 has CPUs with IDs 0-23, node 1 has CPUs with IDs 24-47, node 2 has CPUs with IDs 48-71, and node 3 has CPUs with IDs 72-95.

  • node X size: Specifies the total size of memory available on the NUMA node. For instance, node 0 has a total size of 97577 MB (megabytes).

  • node X free: Indicates the amount of free memory available on the NUMA node. For example, node 0 has 93294 MB of free memory.
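Since each node owns a contiguous range of core IDs in this example (24 cores per node), the node that owns a given core can be computed with integer division. A minimal sketch, assuming that topology:

```shell
# Map a core ID to its NUMA node, assuming the example topology above:
# 4 nodes x 24 contiguous cores (node 0: 0-23, node 1: 24-47, ...).
cpu_to_node() {
    cpu=$1
    cores_per_node=24
    echo $(( cpu / cores_per_node ))
}

cpu_to_node 50    # core 50 falls in node 2's range (48-71)
```

On real hardware the authoritative mapping comes from numactl -H (or /sys/devices/system/node), since core IDs are not always contiguous per node.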

The last part of the output provides the distances between the nodes. It represents a matrix where each row and column corresponds to a specific node. The numbers denote the distance between the nodes, which can be useful for optimizing data access and minimizing latency in NUMA-aware applications.

For example, the distance between node 0 and itself is 10, between node 0 and node 1 is 12, between node 0 and node 2 is 20, and between node 0 and node 3 is 22. The distances are symmetric, so the distance from node 1 to node 0 is also 12, and so on.

Understanding the NUMA architecture and its distances is important for efficiently allocating resources and optimizing performance in systems with multiple processors or cores.

1. What is the purpose of core binding?

numactl is a utility for managing NUMA architecture systems. NUMA (Non-Uniform Memory Access) is a multiprocessor architecture in which the processor cores and memory in the system are divided into multiple nodes, and each node has its own local memory.

The purpose of numactl is to let the user explicitly control the memory-access behavior and core binding of processes on NUMA systems. By binding processes to specific cores or nodes, you can optimize memory access, reduce remote-access latency, and improve application performance.

Here are some common uses of numactl:

  1. Optimize memory access performance: By binding a process to a specific node or core, you can make it more likely that the process will fetch data from the local node's memory rather than via remote access. This reduces memory access latency and improves overall application performance.

  2. Reduce cache contention: Binding processes to specific cores or nodes reduces cache and memory contention between cores. Cores on the same node share that node's memory, and when multiple cores access it simultaneously, contention can arise. Binding a process confines it to a particular node or core, reducing contention with other cores.

  3. Control memory allocation and migration: numactl can also be used to control memory allocation and migration policies. By setting the corresponding options, you can specify how memory is allocated and migrated to meet specific performance requirements. For example, memory can be bound to specific nodes, or migration can be limited to reduce interference in the system.

Overall, the goal of numactl is to help users optimize application performance on NUMA systems by controlling the core binding and memory-access behavior of processes, reducing latency, cache contention, and memory allocation/migration overhead.
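As a concrete illustration of points 1 and 3, a process can be pinned to one node's cores and forced to allocate from that node's memory (my_app is a placeholder for your program; core and node numbers follow the 4-node example above):

```shell
# Run my_app on node 0's CPUs and allocate its memory only from node 0,
# so memory accesses stay local to the node.
numactl --cpunodebind=0 --membind=0 ./my_app

# Alternatively, spread allocations round-robin across all nodes,
# which can help bandwidth-bound workloads.
numactl --interleave=all ./my_app
```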

2. How to bind cores?

Use the numactl command to bind cores on a Linux system. numactl is a tool for controlling memory and CPU policies on non-uniform memory access (NUMA) systems.

To bind cores with numactl, use the following command:

numactl --physcpubind=<cpu_list> <command>

where:

  • --physcpubind is the option that specifies the list of cores to bind.
  • <cpu_list> is a comma-separated list of cores; for example, "0,2,4" binds cores 0, 2, and 4.
  • <command> is the command to run.

For example, if you want to bind a process to core 0 and core 1, you can use the following command:

numactl --physcpubind=0,1 <command>

This restricts <command> to being scheduled on cores 0 and 1.

Note that numactl must be installed and run on a NUMA-capable system, and some operations may require root privileges to take effect.
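To check that a binding took effect, the process's CPU affinity can be inspected. A sketch, assuming taskset (part of util-linux) is available:

```shell
# Start a long-running placeholder command bound to cores 0 and 1
numactl --physcpubind=0,1 sleep 300 &

# Show the CPU affinity list of the new process;
# it should report only cores 0 and 1
taskset -cp $!

# From a shell started under numactl, the current NUMA policy
# and bindings can also be shown directly:
numactl --show
```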

3. NUMA-related tools

numastat -h
numastat: invalid option -- 'h'
Usage: numastat [-c] [-m] [-n] [-p <PID>|<pattern>] [-s[<node>]] [-v] [-V] [-z] [ <PID>|<pattern>... ]
-c to minimize column widths
-m to show meminfo-like system-wide memory usage
-n to show the numastat statistics info
-p <PID>|<pattern> to show process info
-s[<node>] to sort data by total column or <node>
-v to make some reports more verbose
-V to show the numastat code version
-z to skip rows and columns of zeros
numactl -h
numactl: invalid option -- 'h'
usage: numactl [--all | -a] [--interleave= | -i <nodes>] [--preferred= | -p <node>]
               [--physcpubind= | -C <cpus>] [--cpunodebind= | -N <nodes>]
               [--membind= | -m <nodes>] [--localalloc | -l] command args ...
       numactl [--show | -s]
       numactl [--hardware | -H]
       numactl [--length | -l <length>] [--offset | -o <offset>] [--shmmode | -M <shmmode>]
               [--strict | -t]
               [--shmid | -I <id>] --shm | -S <shmkeyfile>
               [--shmid | -I <id>] --file | -f <tmpfsfile>
               [--huge | -u] [--touch | -T]
               memory policy | --dump | -d | --dump-nodes | -D

memory policy is --interleave | -i, --preferred | -p, --membind | -m, --localalloc | -l
<nodes> is a comma delimited list of node numbers or A-B ranges or all.
Instead of a number a node can also be:
  netdev:DEV the node connected to network device DEV
  file:PATH  the node the block device of path is connected to
  ip:HOST    the node of the network device host routes through
  block:PATH the node of block device path
  pci:[seg:]bus:dev[:func] The node of a PCI device
<cpus> is a comma delimited list of cpu numbers or A-B ranges or all
all ranges can be inverted with !
all numbers and ranges can be made cpuset-relative with +
the old --cpubind argument is deprecated.
use --cpunodebind or --physcpubind instead
<length> can have g (GB), m (MB) or k (KB) suffixes
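For example, numastat can report the per-node memory footprint of a running process. A sketch for an OpenGauss server (the process name gaussdb is an assumption; substitute your own process name or PID):

```shell
# Per-node memory usage of a process, matched by PID
numastat -p $(pgrep -o gaussdb)

# System-wide meminfo-style view, skipping all-zero rows
numastat -m -z
```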

4. Limiting the CPU cores a container may use

https://weread.qq.com/web/reader/57f327107162732157facd6?
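The linked chapter covers constraining the CPUs a container may use. With Docker this is commonly done through the cpuset flags; a sketch, where the image name and core range are illustrative (0-23 matches node 0 in the example topology above):

```shell
# Pin the container's processes to node 0's cores and
# restrict its memory allocations to node 0
docker run -d --cpuset-cpus="0-23" --cpuset-mems="0" opengauss/opengauss
```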



Reprinted from: blog.csdn.net/hezuijiudexiaobai/article/details/131664116