"Distributed technical principles and algorithms parse" two - one: a distributed architecture of distributed resource management and scheduling of the load

Cloud platforms can manage multiple servers as one unified pool of resources to provide services.

How those servers are organized falls under the topic of distributed architecture.

1. Centralized architecture

Concept:

One or more servers act as the central server: all data in the system is stored on the central server, and all operations are first processed by it;
multiple node servers connect to the central server, report their information to it, and the central server performs unified resource and task scheduling;
based on this information, the central server assigns tasks to the node servers; each node server executes its tasks and feeds the results back to the central server.
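To make this flow concrete, here is a minimal, in-process Python sketch of a central server that registers node servers, assigns a task based on what they report, and collects the result. The class and method names (`CentralServer`, `NodeServer`, `submit`, and so on) are illustrative assumptions; this is not how Borg, Kubernetes, or Mesos are actually implemented.

```python
# Minimal in-process sketch of a centralized scheduler.
# CentralServer, NodeServer and their method names are illustrative only.

class NodeServer:
    def __init__(self, name):
        self.name = name

    def report_info(self):
        # A real node would report CPU, memory, current load, etc.
        return {"name": self.name, "free_slots": 1}

    def execute(self, task):
        # Execute the task and return the result to the central server.
        return f"{self.name} finished {task}"


class CentralServer:
    def __init__(self):
        self.nodes = []      # all registered node servers
        self.results = []    # results fed back by node servers

    def register(self, node):
        # Node servers connect to the central server and report themselves.
        self.nodes.append(node)

    def submit(self, task):
        # Choose a node based on the information it reports
        # (here simply: the first node with a free slot).
        chosen = next(n for n in self.nodes
                      if n.report_info()["free_slots"] > 0)
        result = chosen.execute(task)   # node server executes the task
        self.results.append(result)     # result is fed back to the center
        return result


# Usage
center = CentralServer()
for i in range(3):
    center.register(NodeServer(f"node-{i}"))
print(center.submit("task-A"))   # e.g. "node-0 finished task-A"
```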

[Figure: centralized architecture]

Typical systems: Google Borg, K8S, Mesos

The Slaves send heartbeat packets to the Master, which lets the Master track whether each Slave is still alive; alternatively, as in the Redis Sentinel pattern, sentinel processes monitor the master and slave nodes, i.e. the monitoring is delegated to an intermediate layer.
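A rough Python sketch of what heartbeat-based liveness tracking on the Master side could look like; the `HeartbeatMonitor` class, the 10-second timeout, and the method names are assumptions for illustration, not the actual mechanism of any of the systems above.

```python
import time

# Sketch: the Master records the last heartbeat time of each Slave and
# considers a Slave dead if no heartbeat arrived within the timeout.
# HeartbeatMonitor and its method names are illustrative only.

class HeartbeatMonitor:
    def __init__(self, timeout_seconds=10.0):
        self.timeout = timeout_seconds
        self.last_seen = {}   # slave id -> time of its last heartbeat

    def on_heartbeat(self, slave_id):
        # Called whenever a heartbeat packet arrives from a slave.
        self.last_seen[slave_id] = time.monotonic()

    def alive_slaves(self):
        now = time.monotonic()
        return [slave for slave, seen in self.last_seen.items()
                if now - seen <= self.timeout]


# Usage
monitor = HeartbeatMonitor(timeout_seconds=10.0)
monitor.on_heartbeat("slave-1")
monitor.on_heartbeat("slave-2")
print(monitor.alive_slaves())   # ['slave-1', 'slave-2'] right after the heartbeats
```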

Disadvantages: the central server needs very high performance, and it is both a single-point bottleneck and a single point of failure.

2. Non-centralized (decentralized) architecture

Concept:

Data storage and service execution are distributed across different servers in the cluster, and the servers in the cluster communicate and coordinate through message passing;
in this design there is no division into central server and node servers: all servers have equal (peer) status.

Compared with the centralized architecture, the decentralized architecture relieves the pressure on any single machine or sub-cluster, solving the single-point bottleneck and single-point-of-failure problems while also improving the system's concurrency, which makes it better suited to managing large clusters.
[Figure: non-centralized architecture]
Typical systems: Akka, Redis, Cassandra

Gossip protocol: an eventual-consistency protocol.
In each period, every node in the cluster randomly selects k nodes from the node list it maintains and sends the data it stores to those k nodes;
a receiving node merges the received data with its local data according to a consensus rule (the entry with the most recent timestamp, i.e. the latest data, wins). After several rounds of this iteration, the data on all nodes in the cluster becomes consistent.
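Below is a minimal Python sketch of gossip rounds with last-write-wins merging, under the assumptions that every node holds a key-to-(value, timestamp) map and that the full node list is known; it only illustrates the idea and is not the actual protocol used by Cassandra, Redis Cluster, or Akka.

```python
import random

# Each node's store maps key -> (value, timestamp).
# merge(): the entry with the newer timestamp wins (last-write-wins).

def merge(local_store, incoming_store):
    for key, (value, ts) in incoming_store.items():
        if key not in local_store or ts > local_store[key][1]:
            local_store[key] = (value, ts)

def gossip_round(nodes, k=2):
    # In each round every node picks k random peers and sends them its data.
    for sender in nodes:
        peers = random.sample([n for n in nodes if n is not sender],
                              min(k, len(nodes) - 1))
        for peer in peers:
            merge(peer["store"], sender["store"])


# Usage: three nodes; only node 0 holds the latest value of "x".
nodes = [{"store": {}} for _ in range(3)]
nodes[0]["store"]["x"] = ("new-value", 100)   # (value, timestamp)
nodes[1]["store"]["x"] = ("old-value", 50)

for _ in range(5):        # after a few rounds all nodes hold the same data
    gossip_round(nodes, k=2)

print([node["store"]["x"] for node in nodes])   # all: ('new-value', 100)
```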

Edge computing provides application developers and service providers with cloud services and an IT environment at the edge of the network;
its goal is to offer computing, storage, and network bandwidth at or near the source of the user's input data;
edge computing devices are numerous and geographically dispersed and need high availability and low latency, so a decentralized architecture is a better fit than a centralized one.


Source: blog.csdn.net/qq_41594698/article/details/105192748