Greenplum architecture

Greenplum is a massively parallel processing (MPP) database.

1. Greenplum architecture

Figure 1-1 Greenplum architecture

Greenplum is composed of three parts: the Master Host, the Segments, and the Interconnect.

 

1.1 Master Host

  1. Entry point to the system
  2. Runs the database listener process (postgres)
  3. Handles all user connections
  4. Builds the query plans
  5. Coordinates the worker processes
  6. Hosts the management tools
  7. Stores the system catalog tables and metadata (the data dictionary); see the sketch after this list
  8. Stores no user data
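
How this looks in practice: clients connect only to the Master's listener, and the Master's catalog describes the whole cluster. A minimal sketch, assuming example values for the Master hostname (mdw), port, and database name; gp_segment_configuration is the Greenplum catalog table that records every segment instance.

    # Connect to the Master listener (mdw/5432/gpadmin are example values)
    psql -h mdw -p 5432 -d gpadmin -c "
        SELECT content, role, status, hostname, port
        FROM   gp_segment_configuration
        ORDER  BY content, role;"

The query returns one row per segment instance, confirming that the Master holds only metadata about where the user data lives, not the data itself.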

1.2 Segment

  1. Each Segment stores a portion of the user data (illustrated in the sketch after this list)
  2. A system can have multiple segments
  3. Users cannot access segments directly
  4. All access to segments goes through the Master
  5. The database listener process (postgres) accepts connections from the Master
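
A sketch of how a table's rows are spread across segments; the sales table and its columns are hypothetical. gp_segment_id is the system column that reports which segment stores each row, and everything runs through a psql session on the Master, never against a segment directly.

    -- Hypothetical table, hash-distributed across segments by id
    CREATE TABLE sales (id int, amount numeric) DISTRIBUTED BY (id);
    INSERT INTO sales
        SELECT g, g * 1.5 FROM generate_series(1, 10000) g;

    -- Count the rows each segment holds; the Master gathers the results
    SELECT gp_segment_id, count(*)
    FROM   sales
    GROUP  BY 1
    ORDER  BY 1;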

1.3 Interconnect

  1. The network layer that connects the Greenplum database instances
  2. Coordinates and manages communication between processes
  3. Based on a Gigabit Ethernet architecture
  4. Part of the system's internal private network
  5. Supports two protocols, TCP or UDP (selection sketched below)
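
The protocol choice is exposed through the gp_interconnect_type server parameter. A sketch using gpconfig, run as the Greenplum administrative OS user on the Master; the exact accepted values (for example udpifc for the UDP flavor) vary by Greenplum version, so treat these as illustrative:

    # Show the current interconnect protocol on every host
    gpconfig -s gp_interconnect_type

    # Switch the interconnect to TCP, then restart so it takes effect
    gpconfig -c gp_interconnect_type -v tcp
    gpstop -r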


Figure 1-2 Greenplum network configuration example

Description:

(1) The Master Host and the Standby Master are connected to the external network (WAN) to serve user connection requests from outside.

(2) Each segment host can run multiple segment instances; each segment instance corresponds to one CPU core, mainly to avoid resource contention. In this example, 4 network ports connect to 4 virtual LANs, and each virtual LAN corresponds to one subnet: 172.16.0, 172.16.1, 172.16.2, and 172.16.3 (a hypothetical host-name layout for these subnets is sketched after this description).

(3) The ILOM network port is mainly for console control; it provides an interface for administrators to access each host.
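
One common way to wire segment hosts into the four VLANs is to give each host one alias per network port in /etc/hosts, so interconnect traffic spreads across all subnets. A hypothetical fragment for a single segment host (the sdw1 name and the addresses are made-up examples, not taken from the figure):

    # /etc/hosts on each host: one alias per NIC/VLAN for segment host sdw1
    172.16.0.1   sdw1-1
    172.16.1.1   sdw1-2
    172.16.2.1   sdw1-3
    172.16.3.1   sdw1-4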

2. Greenplum high availability architecture

Figure 1-3 Greenplum high availability architecture

  1. The Master Host is synchronized to the Standby Master node in real time.

2.1 Master / Standby mirror protection

Figure 1-4 Master/Standby mirror protection

  • The Standby node takes over and provides the Master service when the Master node fails (workflow sketched below)
  • The Standby stays synchronized with the Master node's catalog and transaction logs in real time
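
A sketch of the standby lifecycle with Greenplum's management utilities; the standby hostname smdw is an example. gpinitstandby registers the standby, gpstate -f reports its synchronization state, and gpactivatestandby promotes it if the Master is lost:

    # Register a standby master on host smdw (run on the Master)
    gpinitstandby -s smdw

    # Check standby status and log synchronization
    gpstate -f

    # If the Master fails: promote the standby (run on smdw)
    gpactivatestandby -d $MASTER_DATA_DIRECTORY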

2.2 Data redundancy: Segment mirror protection

Figure 1-5 Data redundancy: Segment mirror protection

  • Each segment's data is stored redundantly on another segment, and the copies are synchronized in real time
  • When a Primary Segment fails, its Mirror Segment automatically takes over and provides service
  • After the Primary Segment returns to normal, use gprecoverseg -F to resynchronize its data (see the sketch below)
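
The failure-and-recovery cycle with the tools the list names, sketched as shell commands on the Master (gpstate -e lists segments that are down or out of sync; plain gprecoverseg does an incremental recovery, while -F recopies everything as described above):

    # List segments that are down or whose mirror is out of sync
    gpstate -e

    # Incremental recovery of failed segments
    gprecoverseg

    # Full recovery: recopy all data from the acting primary
    gprecoverseg -F

    # Once recovered, return segments to their preferred roles
    gprecoverseg -r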


Figure 1-6 Segment host hardware configuration example

2.3 Network redundancy

Figure 1-7 Network redundancy


 
