HHU Cloud Computing Final Review (Part 2): Hadoop, Virtualization Technology, OpenStack

For concept memorization, stick to the textbook; where a topic has subheadings, memorize the subheadings as well.

The exam is half open-book, so I was too lazy to write these notes in great detail.

For the second half of the course, the listed exam points are enough; for the first half, read more broadly (not just the exam points).

Some of this author's related articles are quite interesting; the textbook is the same but the exam points differ. If you have spare time, read them: Zhihu

Chapter 5 Hadoop

https://chu888chu888.gitbooks.io/hadoopstudy/content/

Hadoop is a distributed system infrastructure developed by the Apache Foundation. Users can develop distributed computing programs on top of Hadoop; it is mainly used for processing big data.

| Hadoop cloud computing system | Google cloud computing system |
| --- | --- |
| Hadoop HDFS | Google GFS |
| Hadoop MapReduce | Google MapReduce |
| HBase | Google BigTable |
| ZooKeeper | Google Chubby |
| Pig | Google Sawzall |

Distributed file system HDFS

Multiple computers work together on a network (sometimes called a cluster) to solve a certain problem just like a single system. We call such a system a distributed system. Distributed file systems are a subset of distributed systems, and the problem they solve is data storage. In other words, they are storage systems that span multiple computers. Data stored on a distributed file system is automatically distributed across different nodes.

Separation of metadata and data: NameNode and DataNode

Some basic concepts of Hadoop

  • Every file stored on the file system has associated metadata. Metadata includes the file name, i-node (inode) number, data block locations, etc., while the data is the actual content of the file.

  • In a traditional file system, metadata and data are stored on the same machine because the file system does not span multiple machines.

  • In order to build a distributed file system in which clients are simple to use and don't need to know what other clients are doing, metadata needs to be maintained outside the client. The design philosophy of HDFS is to take out one or more machines to store metadata, and let the remaining machines store the content of files.

  • NameNode and DataNode are the two main components of HDFS. Among them, the metadata is stored on the NameNode, and the data is stored on the cluster of DataNodes. The NameNode not only manages the metadata of the content stored on HDFS, but also records things such as which nodes are part of the cluster, how many copies of a file, etc. It also determines what the system needs to do when a node in the cluster goes down or a copy of data is lost.

    • NameNode
      • The NameNode is the most important Hadoop daemon. Hadoop adopts a master/slave structure for both distributed computing and distributed storage. The distributed storage system is called the Hadoop Distributed File System, or simply HDFS. The NameNode sits on the master side of HDFS and directs the DataNodes to perform the low-level I/O tasks.
      • Running the NameNode consumes a lot of memory and I/O resources. Therefore, to reduce the machine's load, the server hosting the NameNode usually does not store user data or execute MapReduce computations. This means the NameNode server is not simultaneously a DataNode or a TaskTracker.
      • However, the importance of the NameNode has a negative side: if the NameNode fails, the whole Hadoop cluster fails.
    • DataNode
      • Each slave node in the cluster hosts a DataNode daemon that performs the grunt work of the distributed file system: reading and writing HDFS data blocks as actual files on the local file system. When a client wants to read or write an HDFS file, the file is split into multiple blocks, and the NameNode tells the client which DataNode each block resides on. The client then communicates directly with the DataNode daemons to process the local files corresponding to those blocks. DataNodes also communicate with other DataNodes to replicate these blocks for redundancy.
    • Secondary NameNode
      • The SNN is an auxiliary daemon that monitors the state of the HDFS cluster. It usually runs alone on a server that hosts no DataNode or TaskTracker daemons. Unlike the NameNode, the SNN does not receive or record real-time changes to HDFS; instead, it communicates with the NameNode to take snapshots of the HDFS metadata at intervals configured by the cluster.
      • The NameNode is the single point of failure of a Hadoop cluster, and the SNN's snapshots help reduce downtime and the risk of data loss. However, a NameNode failure requires manual intervention: the cluster must be reconfigured by hand to use the SNN as the primary NameNode.
  • Each piece of data stored on HDFS has multiple copies (replicas) stored on different servers. In essence, NameNode is the Master (master server) of HDFS, and DataNode is Slave (slave server).

  • Topology

  • Very vivid comics: principle comics

pipeline replication

(DataNode will communicate with other DataNodes to replicate these data blocks for redundancy)

In Hadoop Distributed File System (HDFS), in order to improve data reliability, each data block (block) will be replicated on multiple DataNodes. In this process, a DataNode is first selected for replication, and then the DataNode will transfer the data block to the next DataNode, thus forming a replication "pipeline". In this way, data blocks can be replicated on multiple DataNodes at the same time, which greatly improves the speed of data replication and reduces the load on NameNode. This is Hadoop's pipeline replication mechanism.

This pipeline replication mechanism can be compared to workers on a conveyor belt. After each worker finishes his work, he passes the item to the next worker. In this process, each worker can work at the same time, which improves work efficiency.
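The conveyor-belt idea can be sketched as a small Python simulation (illustrative only; the node names and packet size are made up, and real HDFS streams packets concurrently with acknowledgements flowing back up the pipeline):

```python
def pipeline_replicate(block: bytes, pipeline: list, packet_size: int = 4) -> dict:
    """Stream `block` through the DataNode pipeline packet by packet."""
    stored = {node: b"" for node in pipeline}
    # Split the block into packets, so downstream nodes can start receiving
    # before the whole block has arrived at the first node.
    packets = [block[i:i + packet_size] for i in range(0, len(block), packet_size)]
    for packet in packets:
        for node in pipeline:        # each node writes locally, then forwards
            stored[node] += packet
    return stored

replicas = pipeline_replicate(b"block-0001", ["dn1", "dn2", "dn3"])
```

Every DataNode ends up with an identical copy of the block, and the client only ever talked to the first node in the pipeline.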

Chapter 7 Virtualization Technology

7.1 Introduction to virtualization technology

  • Traditional Data Center vs Virtual Data Center

    • Traditional data center
      • Uses a variety of technologies
      • Businesses are isolated from one another
      • Complex network structure
    • Virtual data center
      • High speed
      • Flat
      • Virtualized
  • Introduction to Virtualization Technology

  • With the development of cloud computing, traditional data centers are gradually transitioning to virtualized data centers, that is, virtualization technology is used to abstract and integrate the physical resources of the original data center.

    • Realize the dynamic allocation and scheduling of resources, improve the utilization rate of existing resources and service reliability
    • Provides automated service provisioning capabilities to reduce O&M costs
    • Have an effective security mechanism and reliability mechanism to meet the security needs of public customers and enterprise customers
    • Facilitate system upgrades, migrations and retrofits

7.2 Virtual machine migration

A host can contain more than one virtual machine (VM).

  • virtual machine migration

    • Migrating a virtual machine instance from a source host to a target host such that, on the target host, the virtual machine's running state is restored to exactly what it was before migration, so that applications can continue their tasks
  • Steps

  • Virtual machine migration is mainly divided into three phases: Push phase, Stop-and-Copy phase, and Pull phase.

    1. Push phase (steps 1, 2, 3): the source host transfers the virtual machine's memory pages to the target host in advance, prioritizing pages that are rarely modified, so as to reduce the actual downtime.
    2. Stop-and-Copy phase (step 4): the source host suspends the virtual machine, copies the remaining memory pages to the target host, and sends the virtual machine's state (processor state, device state, etc.) to the target host.
    3. Pull phase (steps 5, 6): the target host pulls any remaining memory pages from the source host; once they are pulled, the target host starts the migrated virtual machine.

    Memory migration is the most challenging part of the virtual machine migration process, because it is necessary to minimize the downtime of the virtual machine while maintaining data consistency.

    In fact, the virtual machine migration process does not have to contain the above three phases at the same time. For example, depending on the actual situation and requirements, it may only include the Push phase and the Stop-and-Copy phase, or only the Stop-and-Copy phase and the Pull phase, or even a single phase. The choice of a specific strategy depends on factors such as the running status of the virtual machine, network environment, and application requirements.
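The pre-copy strategy (Push followed by Stop-and-Copy) can be sketched as follows; the 10%-redirty workload, thresholds, and page counts are invented for illustration:

```python
def migrate(pages, dirty_per_round, stop_threshold=2, max_rounds=10):
    """Return (push_rounds, downtime_pages).

    Push phase: repeatedly send the not-yet-consistent pages; pages the
    guest dirties during a round must be sent again next round.
    Stop-and-Copy phase: when few pages remain (or we give up), pause the
    VM and copy the rest -- that residue determines the downtime.
    """
    remaining = set(pages)               # pages not yet consistent on target
    rounds = 0
    while len(remaining) > stop_threshold and rounds < max_rounds:
        remaining = dirty_per_round(remaining)   # pushed, but some re-dirtied
        rounds += 1
    return rounds, len(remaining)

# Assume the guest re-dirties 10% of the pushed pages each round.
redirty = lambda rem: set(list(rem)[: len(rem) // 10])
rounds, downtime_pages = migrate(range(1000), redirty)
```

With this workload the push phase converges quickly and only a single page must be copied while the VM is paused, which is exactly the trade-off pre-copy migration aims for.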

7.3 Network Virtualization

  • Requirements

    • The core layer network has ultra-large-scale data exchange capabilities
    • Sufficient 10G access capacity
  • Benefits of Core Layer Network Virtualization

    • Provides Virtual Chassis technology
    • Simplify device management
    • Improve resource utilization
    • Improve the flexibility and scalability of the switching system
    • Provide support for flexible scheduling and dynamic scaling of resources

Chapter 8 OpenStack

Google Cloud Platform (GCP), Amazon Web Services (AWS), Microsoft Azure, and OpenStack are all cloud computing platforms. These platforms provide a variety of cloud services, including virtualized hardware resources (such as computing, storage, and networking) and higher-level services (such as databases, machine learning, and big data analysis). Their main goal is to help users get applications up and running quickly without owning physical hardware or data centers.

  1. Public cloud providers : Google Cloud Platform (GCP), Amazon Web Services (AWS), and Microsoft Azure are all public cloud providers. They offer a variety of cloud services, including Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). Public cloud providers typically offer these services in data centers around the world, and users pay for what they need without having to maintain the physical hardware themselves. However, using public cloud services may cause users to lose control of the infrastructure to some extent, especially when it comes to security and compliance.
  2. OpenStack : OpenStack is an open source cloud computing platform that mainly provides IaaS (Infrastructure as a Service) solutions. Using OpenStack, users can build and manage cloud computing platforms in their own data centers or private cloud environments. Therefore, OpenStack users can have complete control over the infrastructure, including data storage, computing resource allocation, and network structure design. However, using OpenStack requires certain technical skills and resources for installation, configuration, and maintenance.

OpenStack is an open source cloud computing platform that mainly provides Infrastructure as a Service (IaaS). OpenStack consists of a series of components, each responsible for a different function in the cloud environment.

OpenStack has three main service members: the compute service (Nova), the storage service (Swift), and the image service (Glance).

8.1 Computing service Nova

Nova : Nova is one of the main components of OpenStack and is responsible for computing services. Nova provides functions such as creation, scheduling, and management of virtual machine instances. It supports various virtualization technologies, including KVM, Xen, VMware, etc., and also supports container technologies, such as Docker.

The main components of OpenStack's Nova and Swift are explained in more detail below.

  • Nova : Nova is the computing component of OpenStack. Its main task is to provide virtualization services, that is, to run and manage virtual machines. The main components of Nova include:
    1. API Server (Nova-Api): The API Server provides the interface for interacting with the cloud infrastructure and is the only externally usable component for managing the infrastructure. It handles API requests such as starting, pausing, and stopping virtual machines.
    2. Message Queue (Rabbit MQ Server): OpenStack nodes use AMQP (Advanced Message Queue Protocol) to communicate through message queues (such as RabbitMQ).
    3. Compute Worker (Nova-Compute): Compute Worker manages the instance lifecycle, receives instance lifecycle management requests through Message Queue, and undertakes operational work. This is the service that runs on the host machine and is used to start and terminate virtual machine instances.
    4. Network Controller (Nova-Network): The Network Controller handles the network configuration of the host, including IP address allocation, configuring VLANs for projects, implementing security groups, and configuring computing node networks.
    5. Volume Workers (Nova-Volume): Volume Workers are used to manage instance volumes based on LVM (Logical Volume Manager). Volume Workers has functions related to volumes, such as creating new volumes, deleting volumes, attaching volumes to instances, and detaching volumes from instances.
    6. Scheduler (Nova-Scheduler): This service is responsible for determining which host the virtual machine instance runs on, that is, the scheduling policy.

Suppose a user wants to create a virtual machine instance via the OpenStack API:

  • First, the user's request is sent to `nova-api`, OpenStack's external interface, which is responsible for receiving and parsing user requests.
  • When `nova-api` receives a request to create a virtual machine, it places the request on the Message Queue.
  • `nova-scheduler` retrieves the request from the Message Queue and, according to its scheduling policy (load balancing, resource optimization, etc.), selects the most suitable host to run the new virtual machine instance.
  • After the host is selected, `nova-scheduler` returns the request, together with the chosen host information, to the Message Queue.
  • `nova-compute`, running on the chosen host, takes the request from the Message Queue and starts the virtual machine instance on that host.
  • `nova-conductor` handles all database-related requests throughout the process, such as updating the virtual machine's status and recording logs.
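A minimal sketch of this flow, with a made-up least-loaded scheduling policy and a plain `queue.Queue` standing in for RabbitMQ (component names follow the text; the host names and load numbers are invented):

```python
import queue

message_queue = queue.Queue()
hosts = {"host-a": 2, "host-b": 0, "host-c": 1}   # host -> running instances

def nova_api(spec):
    """Receive and parse the user request, then enqueue it."""
    message_queue.put({"action": "create", "spec": spec})

def nova_scheduler():
    """Pick the most suitable host (here: fewest instances) and re-enqueue."""
    msg = message_queue.get()
    msg["host"] = min(hosts, key=hosts.get)
    message_queue.put(msg)

def nova_compute():
    """On the chosen host, take the request and start the instance."""
    msg = message_queue.get()
    hosts[msg["host"]] += 1
    return msg["host"]

nova_api({"name": "vm-1", "vcpus": 2})
nova_scheduler()
chosen = nova_compute()
```

Note how the components never call each other directly; every hand-off goes through the queue, which is what lets real OpenStack scale each service independently.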

RabbitMQ

**AMQP (Advanced Message Queuing Protocol)** is an open standard application layer protocol for message-oriented middleware. RabbitMQ is an open source implementation of the AMQP protocol. The main goal of AMQP is to provide reliable message delivery to ensure that messages between sender and receiver are not lost during transmission. In AMQP, there are three key components: messages, queues and exchanges. The following is a detailed description of these components:

  1. Message : A message is data passed between a sender and a receiver. Every message has a header and a body. The header contains the attributes and routing information of the message, and the body contains the actual data to be sent.

  2. Queue : A queue is a buffer for storing messages. Each queue has a name, and the consumer application can get the messages in the queue through the name. Queues can be persistent, temporary or auto-deleted:

    • Persistent queue : The persistent queue will not disappear due to system restart or application termination.
    • Temporary queues : Temporary queues are stored in memory and disappear when the system is shut down.
    • Auto-delete queues : Auto-delete queues are automatically deleted when no consumers are connected to it.
  3. Exchange (Exchange) : The exchange receives messages and forwards them to queues. Based on a message's routing information, the exchange uses its routing table to forward the message accurately and safely to the corresponding queue. Each exchange has a unique Exchange ID, and exchanges can be persistent, temporary, or auto-deleted:

    • Persistent Exchange : Persistent exchanges will not disappear due to system restart or application termination.
    • Temporary exchanges : Temporary exchanges are stored in memory and disappear when the system is shut down.
    • Auto-delete exchange : Auto-delete exchanges are automatically deleted when there are no bound queues.

AMQP defines three different types of exchanges:

  • Fanout Exchange : Routes all messages sent to this exchange to all queues bound to it.
  • Direct Exchange : Route the message to the queue whose Binding Key exactly matches the Routing Key.
  • Topic Exchange : Route messages to queues whose Binding Key and Routing Key match a certain pattern.

In addition, another important concept in AMQP is the binding (Binding) . A binding is a relationship between an exchange and a queue; through the Binding Key, the exchange knows which queue a message should be sent to. When creating a binding, an optional Binding Key can be specified; the exchange decides how to route each message based on this key and its own type.
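The three exchange types can be modelled in a few lines of Python (a toy router, not RabbitMQ client code; the queue names and keys are made up):

```python
import re

def topic_match(binding_key: str, routing_key: str) -> bool:
    """AMQP topic wildcards: '*' matches exactly one word, '#' zero or more."""
    pattern = (re.escape(binding_key)
               .replace(r"\#", r"(?:[^.]+(?:\.[^.]+)*)?")
               .replace(r"\*", r"[^.]+"))
    return re.fullmatch(pattern, routing_key) is not None

def route(exchange_type, bindings, routing_key):
    """Return the queues a message with `routing_key` is delivered to."""
    if exchange_type == "fanout":    # ignore keys, deliver everywhere
        return [q for q, _ in bindings]
    if exchange_type == "direct":    # exact Binding Key == Routing Key
        return [q for q, bk in bindings if bk == routing_key]
    if exchange_type == "topic":     # pattern match on the Binding Key
        return [q for q, bk in bindings if topic_match(bk, routing_key)]
    raise ValueError(exchange_type)

bindings = [("q1", "log.*"), ("q2", "log.error"), ("q3", "#")]
```

With these bindings, a message with routing key `log.error` reaches all three queues under a topic exchange, only `q2` under a direct exchange, and every queue regardless of key under a fanout exchange.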

8.2 Swift

Swift : Swift is the object storage component of OpenStack, responsible for storing and retrieving data. Swift uses a distributed architecture in which data can be replicated across multiple disks, hosts, or even data centers, providing high availability and fault tolerance. Swift is ideal for storing and distributing large amounts of static data, such as virtual machine images, photos, and emails.

Swift provides a service equivalent to Amazon S3.

Swift : Swift is the object storage component of OpenStack, and its main task is to provide large-scale distributed storage services. The major components of Swift include:

  • Proxy Server : Receives read and write requests from users and routes these requests to other Swift services.

  • Storage Server: Storage Server provides storage services on disk devices.

    • Account Server : Manage the data of user accounts and record the container information stored by users.
    • Container Server : Manage containers and record information about objects in containers.
    • Object Server : Manages the actual stored object data.
  • Ring : It is a mechanism for data location and distribution in Swift, which is responsible for maintaining the status of all storage nodes and determining the storage location of data.

  • Consistency Servers: The purpose is to find and resolve errors caused by data corruption and hardware failure.

Suppose a user wants to store a file in OpenStack Swift:

  • First, the user's storage request is sent to the Proxy Server, which is the interface between the user and the Swift system and is responsible for receiving user requests.
  • When the Proxy Server receives a request, it consults the Ring to determine which Object Servers the file should be stored on. The Ring is Swift's data location and distribution mechanism; it maintains the status of all storage nodes and determines where data is stored.
  • After the storage location is determined, the Proxy Server transfers the file to the corresponding Object Servers for storage.
  • At the same time, the Account Server and Container Server are updated: the Account Server manages user account data and records which containers a user stores; the Container Server manages containers and records information about the objects in each container.

In this way, the next time the user wants to access this file, the Proxy Server can locate it by querying the Ring, the Account Server, and the Container Server, and return it to the user.
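A toy version of the Ring's data-placement idea, using consistent hashing with virtual nodes (real Swift maps objects to partitions and partitions to devices across zones; the device names here are made up):

```python
import hashlib
from bisect import bisect

def build_ring(devices, vnodes=64):
    """Hash several virtual points per device onto the ring."""
    return sorted(
        (int(hashlib.md5(f"{dev}-{i}".encode()).hexdigest(), 16), dev)
        for dev in devices
        for i in range(vnodes)
    )

def get_nodes(ring, obj_name, replicas=3):
    """Hash the object, then walk clockwise collecting distinct devices."""
    point = int(hashlib.md5(obj_name.encode()).hexdigest(), 16)
    idx = bisect(ring, (point,))
    nodes = []
    while len(nodes) < replicas:
        _, dev = ring[idx % len(ring)]
        if dev not in nodes:
            nodes.append(dev)
        idx += 1
    return nodes

ring = build_ring(["sdb1", "sdb2", "sdb3", "sdb4"])
targets = get_nodes(ring, "photos/cat.jpg")
```

Because placement depends only on hashes, any Proxy Server holding the same ring computes the same three devices for `photos/cat.jpg`, with no central lookup service required.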

  • Data Consistency Model (Consistency Model)
    • In this model, data is replicated across multiple nodes for increased availability and durability. When reading and writing data, the number of replicas for reading (R) and writing (W) can be adjusted to achieve different levels of data consistency.
    • N is the total number of copies of the data
    • W is the number of replicas whose write operations are confirmed to be accepted
    • R is the number of replicas for read operations
  • Strong consistency
    • R + W > N guarantees that the read and write replica sets intersect: at least one replica participates in both the latest write and the read, so the read is guaranteed to return the latest version of the data.
  • Weak consistency
    • R + W ≤ N: the read and write replica sets may not intersect, so stale (dirty) data may be read

This means that in some cases, you may read old or out-of-date data for a short period of time. However, over time, all replicas will eventually reach a consistent state. This consistency model provides Swift with high availability and fault tolerance, because even in the event of a failure or network partition, Swift can still provide read and write services. However, this consistency model may not be suitable for applications that require strong consistency.
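The R + W > N rule can be verified exhaustively for small N (a brute-force check, purely illustrative):

```python
from itertools import combinations

def can_read_stale(n: int, w: int, r: int) -> bool:
    """True if some size-r read set misses every replica of some size-w write."""
    replicas = range(n)
    return any(
        not (set(ws) & set(rs))           # disjoint -> read sees no new copy
        for ws in combinations(replicas, w)
        for rs in combinations(replicas, r)
    )

# N = 3: W = 2, R = 2 satisfies R + W > N, so every read overlaps the write;
# W = 1, R = 1 does not, so a read may land only on stale replicas.
```

This is a pigeonhole argument: two subsets of sizes W and R drawn from N replicas must share an element whenever W + R > N, which is exactly the quorum-intersection condition.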

Chapter 9 Cloud Computing Data Center

9.1 Characteristics of Cloud Data Center

  • High Equipment Utilization
    • Use virtualization technology to integrate systems and data centers, optimize resource utilization and simplify management
  • green technology
    • Reduce data center energy consumption through advanced power supply and cooling technologies
  • automated management
    • The cloud data center should be 24x7 unattended and remotely manageable.
  • high availability
    • When the network is expanded or upgraded, the network can operate normally and has little impact on the performance of the network.

9.2 Network Deployment (Fat-Tree Focus)

If cloud storage architecture research looks inside the cloud storage system, then research on the network deployment of cloud storage looks outside the system, at the network architecture. The most important problem studied is the wiring between switches, and between switches and servers.

Improved tree structure: for an explanation of the fat tree, the "cloud storage quick learning" articles are enough (Part 1)

  • There is also VL2 architecture: more similar to mesh structure

  • recursive hierarchy

    • Both the switch and the server have the function of data forwarding
      • DCell, FiConn, BCube
  • Optical Switching Network

  • Wireless Data Center Network

  • Software Defined Networking

9.3 Green energy-saving technology

  • Distribution System

    • a) High-voltage DC power distribution technology (the number of conversion stages is one less than the traditional one)
    • b) Mains direct power supply and distribution technology (requires only two stages of circuit conversion)
  • Air Conditioning System

    • High temperature return air conditioning system
    • Low Energy Humidification System
    • natural cooling system
  • Main features and design of container energy-saving technology (emphasis)

    • Container data center: "squeeze" servers, networks, air conditioners, power supplies and other equipment into containers at high density, and containers (as a module) form a data center
    • technology
      • a) Shorten the air supply distance: because of "crowding", the air supply distance is short
      • b) Increase the temperature of the cold aisle: the chiller (a type of refrigeration equipment) does not need to lower the temperature so low, so it consumes less power
      • c) Cold/hot aisles are completely isolated: similar to the principle of the previous point
      • d) Thermal insulation material: coated on the container, there will be no condensation inside in winter, and no external heat can enter in summer
      • e) Free Cooling function: natural cooling instead of cooling system
    • features
      • (these are features of the container form, not energy-saving technologies)
      • a) High density (refers to the equipment inside the container)
      • b) Modularity (referring to containers)
      • c) Rapid deployment on demand (referring to containers)
      • d) Easy to move (refers to the container)
  • Data center energy saving

    • DVFS energy-saving technology
    • Energy saving technology based on virtualization
    • Energy-saving technology based on host off/on
  • Noun explanation PUE

    • PUE (Power Usage Effectiveness) measures data center energy efficiency
    • PUE = total data center energy consumption / IT equipment energy consumption
    • If all the energy in the data center were used by the IT equipment, the PUE value would be close to 1. In reality, the data center also needs energy for cooling, lighting, and other operations, so the PUE value is usually greater than 1.
    • The lower, the better
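As a quick worked example of the formula (the kWh figures are invented):

```python
def pue(total_kwh: float, it_kwh: float) -> float:
    """PUE = total facility energy / IT equipment energy (>= 1 in practice)."""
    if it_kwh <= 0 or total_kwh < it_kwh:
        raise ValueError("need total >= IT > 0")
    return total_kwh / it_kwh

# A facility that drew 1500 kWh while its IT equipment drew 1000 kWh
# spent 500 kWh on cooling, lighting, power conversion, etc.
ratio = pue(1500.0, 1000.0)   # -> 1.5
```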

9.4 Automated Management

  • Automated Management Features
    • Cloud automation: allocate and reclaim servers, storage, network, applications on demand
    • full visibility
    • Automatic Control Execution
    • Seamless integration at multiple levels
    • Comprehensive and real-time reporting
    • Full Lifecycle Support

9.5 Disaster recovery backup

  • Comparison

| Disaster recovery level | Practice | Service characteristics |
| --- | --- | --- |
| Data-level disaster recovery | Establish a remote data system that replicates application data in real time and can quickly take over the business in the event of a disaster | Service is interrupted in the event of a disaster |
| Application-level disaster recovery | Establish a remote application system that mutually backs up with the local application system and works together with it | Service is uninterrupted in the event of a disaster |

  • Technical indicators

| Indicator | Explanation |
| --- | --- |
| Recovery Point Objective (RPO) | The amount of data loss the business system can tolerate |
| Recovery Time Objective (RTO) | The maximum service outage time the business can tolerate |
  • key technology
    • remote mirroring technology
    • snapshot technology
    • Remote Data Disaster Recovery and Backup Technology Based on IP-based SAN
    • database replication technology

Chapter 10 Cloud Computing Core Algorithms

  1. Paxos : Paxos is a protocol for solving consensus problems in distributed systems. Consensus is one of the key issues in distributed systems, which involves how to make all nodes in the system agree on a certain value without a central coordinator. The Paxos protocol ensures that the nodes in the system can reach consensus even if some nodes in the system fail.

  2. DHT (Distributed Hash Table) : DHT is a data structure that can achieve efficient data lookup and storage without a central coordinator. In DHT, data items are distributed to various nodes in the system, and each node is responsible for a part of the data items. When you need to find a data item, you can directly locate the node responsible for the data item through the hash function. The main advantage of DHT is its high lookup efficiency and its ability to adapt to the joining and leaving of nodes.

    1. https://zhuanlan.zhihu.com/p/166126098
  3. Gossip : Gossip (also known as gossip protocol or dissemination protocol) is an information dissemination protocol, which is often used to achieve information dissemination and consistency between nodes in large-scale distributed systems. The working method of the Gossip protocol is similar to the spread of rumors in human society: each node regularly exchanges information with other nodes, and through multiple exchanges, information can quickly spread among all nodes. The main advantage of the Gossip protocol is that it can tolerate node failures, and when the system scale increases, its communication overhead will not increase significantly.
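A push-only gossip round can be simulated in a few lines (the node count, fanout, and seed are arbitrary; real protocols add anti-entropy repair and failure detection):

```python
import random

def gossip_rounds(n_nodes=1000, fanout=2, seed=42):
    """Count rounds until a rumor started at node 0 reaches every node."""
    rng = random.Random(seed)
    informed = {0}
    rounds = 0
    while len(informed) < n_nodes:
        # Every informed node "gossips" to `fanout` random peers this round.
        targets = {rng.randrange(n_nodes) for _ in range(fanout * len(informed))}
        informed |= targets
        rounds += 1
    return rounds

rounds = gossip_rounds()
```

Even with 1000 nodes the rumor saturates within a logarithmic number of rounds, which is why gossip's communication overhead grows slowly as the system scales.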

Chapter 11 Overview of Cloud Computing Development in China

11.1 Overview of Domestic Development

Four Trends

  • First, with the continuous improvement of the level of cloud computing innovation, the integration trend of the upper, middle and lower reaches of the industrial chain is more obvious.
  • Second, the domestic cloud computing application market has further developed and matured, and the market space has expanded significantly.
  • Third, with the rapid development of cloud computing services, public cloud services and private cloud construction and operation and maintenance within large enterprises and institutions will become the focus.
  • Fourth, the public, utility-like character of cloud computing will be further enhanced.

11.2 Domestic Cloud Storage Technology

  • Taobao Distributed File System TFS

    • Taobao File System (TFS) is a highly scalable, highly available, high-performance, Internet-service-oriented distributed file system that provides reliable, highly concurrent storage access for massive amounts of unstructured data.

    • TFS provides storage for Taobao's massive numbers of small files

    • High fault-tolerant architecture and smooth expansion

      • 1) Cluster fault tolerance
        • TFS can be configured with primary and secondary clusters. Generally, the primary and secondary clusters will be stored in two different computer rooms.
        • The primary cluster provides all functions; the secondary cluster is read-only.
      • 2) NameServer fault tolerance
        • The NameServer is responsible for maintaining the Block list and the relationships between DataServers and Blocks.
        • There will also be a regular heartbeat between NameServer and DataServer.
      • 3) DataServer fault tolerance
        • TFS uses Block to store multiple copies to achieve DataServer fault tolerance.
        • A write request is considered successful only when all Block replicas have been written successfully.
        • TFS records a CRC checksum for each file
    • Flat data organization structure

  • Features of the A8000

    • A8000 ultra-low power consumption cloud storage machine

    • Excellent performance, ultra-low power consumption, convenient management, simple and universal, ultra-high density, comprehensive monitoring, and high reliability

    • key technology

      • A8000 Low Power Motherboard
      • Centralized DC power supply
      • Centralized Cooling System

11.3 Domestic large database

Alibaba OceanBase (to be expanded)

OceanBase was created mainly to handle Taobao's large-scale data. It is a high-performance distributed database system that supports massive data: it can manage hundreds of billions of records and supports SQL operations.

  • System Features and Advantages
    • The bulk of the data remains relatively stable over time
    • Additions, deletions, and modifications are kept in memory, which greatly improves the performance of write transactions
    • Expanding the UpdateServer's memory increases the volume of modifications that can be held in memory
    • The dynamic data server (UpdateServer) writes a commit log and uses dual-machine (or even multi-machine) hot standby
    • OceanBase queries by primary-key range, which corresponds to sequential disk reads

11.4 Cloud Monitoring Technology

cVideo Cloud Video Surveillance System

  • Cloud monitoring technology system architecture and key technologies

    • architecture
      • Front-end devices
      • Processing server cluster
      • Access server
      • Storage server cluster
      • Streaming media server
      • Central server
      • Client
    • key technology
      • Based on distributed network design, it supports multi-point ultra-long-distance real-time high-definition video monitoring
      • Support large-scale, multi-level monitoring system
      • Support massive video data backup
      • Adopt advanced video content intelligent analysis technology
  • Applications of cloud monitoring technology (?)

    • 1. Specific person video retrieval

    • 2. Zone Intrusion Detection

      • A detection and alarm function extended from moving-target detection: it automatically detects moving targets appearing in a preset defense zone in the surveillance video, and if a detected target and its behavior meet the preset warning conditions, it automatically performs actions such as snapshot, video recording, and alarm.
    • 3. Traffic statistics

      • Traffic flow statistics extends moving-target detection, a video intelligent-analysis technique. It uses the "virtual coil" method to automatically detect vehicles appearing in the surveillance video and counts the vehicles entering and leaving the corresponding lanes.
    • 4. Flame detection

      • The flame detection function automatically analyzes and judges video image information, detects signs of fire in the monitoring area in time, issues alarms and provides useful information as quickly and effectively as possible, and can effectively assist firefighters in handling fire emergencies.

11.5 Alibaba Cloud Services

  • Elastic Compute Service (ECS)

    • full administrative rights
    • API interface
    • elastic memory
    • Snapshot backup and restore
    • Custom images
    • online migration
  • Open Storage Service OSS

    • elastic expansion
    • Large-scale concurrent read and write
    • Image processing optimization
  • Open Structured Data Service (OTS)

    • mass storage of data
    • Simple and easy table management
    • data management
  • Open Data Processing Service ODPS

    • Mass computing
    • Data Security
    • out of the box
  • RDS

    • Safe and Stable, Data Reliable
    • Automatic backup, transparent management
    • Excellent performance, flexible expansion


Origin blog.csdn.net/weixin_57345774/article/details/131482768