Information Systems Project Management Tutorial (4th Edition): Chapter 2 Information Technology and Its Development

Chapter 2 Information Technology and Its Development

2.1 Information technology and its development

Information technology is formed by combining computer technology and telecommunications technology on a microelectronics foundation. It acquires, processes, stores, disseminates, and uses information in the form of sound, images, text, digital data, and various sensor signals. According to the form in which the technology is embodied, information technology can be divided into hard technology (materialized technology) and soft technology (non-materialized technology). The former refers to various information devices and their functions, such as sensors, servers, smartphones, communication satellites, and laptops. The latter refers to various knowledge, methods, and skills related to information acquisition and processing, such as language technology, data statistical analysis technology, planning and decision-making technology, and computer software technology.

2.1.1 Computer software and hardware

Computer hardware is the general term for the various physical devices in a computer system composed of electronic, mechanical, and optoelectronic components. These physical devices form an organic whole according to the requirements of the system architecture and provide the material basis on which computer software runs. Computer software refers to the programs in a computer system together with their documentation. A program is a description of the processing objects and processing rules of a computing task; documentation is the explanatory material needed to make the program easier to understand. A program must be installed on a machine before it can work, whereas documentation is generally intended for people to read and need not be installed.

Hardware and software are interdependent. Hardware is the material basis on which software runs, and the normal operation of software is an important way for hardware to deliver its functions. A computer system must be equipped with a complete software system to work properly and to make full use of its hardware capabilities. Hardware and software also develop together: computer software advances with the rapid development of hardware technology, while the continuous development and improvement of software drives hardware upgrades. The two are closely intertwined and neither can do without the other. As computer technology has developed, many functions can now be implemented either in hardware or in software, so in a certain sense there is no absolutely strict boundary between the two.

2.1.2 Computer network

In the computer field, a network uses physical links to connect otherwise isolated workstations or hosts into a data link, thereby achieving resource sharing and communication. Any system that connects multiple computer systems with independent functions in different geographical locations through communication equipment and lines, and that uses fully functional network software (network protocols, information exchange methods, network operating systems, etc.) to share network resources, can be called a computer network. By coverage, networks can be divided into the Personal Area Network (PAN), Local Area Network (LAN), Metropolitan Area Network (MAN), and Wide Area Network (WAN), as well as the public network and the private network.

1. Network standard protocols

A network protocol is a set of rules, standards, or conventions established for the exchange of data in a computer network. A network protocol consists of three elements: semantics, syntax, and timing. Semantics is the interpretation of the meaning of each part of the control information; it specifies what control information needs to be sent, what actions are to be completed, and what responses are to be made. Syntax is the structure and format of user data and control information, and the order in which the data appears. Timing is a detailed description of the order in which events occur. These three elements are often summed up as: semantics says what to do, syntax says how to do it, and timing says in what order to do it.
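
To make the three elements concrete, the toy sketch below (Python, with a purely hypothetical message format) treats a fixed binary layout as the syntax, the meaning assigned to the type field as the semantics, and the request-before-response ordering as the timing. It is only an illustration, not any real protocol.

```python
# A minimal, purely illustrative sketch; all message names and formats are hypothetical.
import struct

# Syntax: the structure and format of the message -- a fixed binary layout of one
# unsigned byte (message type) followed by a 16-bit payload length, network byte order.
HEADER_FORMAT = "!BH"

# Semantics: what each value of the control information means.
MSG_REQUEST, MSG_RESPONSE = 0x01, 0x02

def pack_message(msg_type: int, payload: bytes) -> bytes:
    """Encode a message according to the agreed syntax."""
    return struct.pack(HEADER_FORMAT, msg_type, len(payload)) + payload

def unpack_message(data: bytes):
    msg_type, length = struct.unpack(HEADER_FORMAT, data[:3])
    return msg_type, data[3:3 + length]

# Timing: the agreed order of events -- a request is sent before a response is returned.
request = pack_message(MSG_REQUEST, b"hello")
msg_type, payload = unpack_message(request)
assert msg_type == MSG_REQUEST
response = pack_message(MSG_RESPONSE, payload.upper())
```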

1) OSI

The Open Systems Interconnection Reference Model (OSI), jointly developed by the International Organization for Standardization (ISO) and the International Telegraph and Telephone Consultative Committee (CCITT), aims to provide a common foundation and standard framework for interconnecting heterogeneous computers, and a common reference for keeping related standards consistent and compatible. OSI adopts a layered structure and is divided, from bottom to top, into the physical layer, data link layer, network layer, transport layer, session layer, presentation layer, and application layer. WAN protocols operate at the bottom three layers of the OSI reference model and define communication over different WAN media. They mainly include: PPP (Point-to-Point Protocol), ISDN (Integrated Services Digital Network), xDSL (the general name for the DSL digital subscriber line family: HDSL, SDSL, MVL, ADSL), DDN (digital leased line), X.25, FR (Frame Relay), and ATM (Asynchronous Transfer Mode).

2) IEEE 802 protocol family

The IEEE 802 specifications define how network cards access transmission media (such as optical cable, twisted pair, and wireless) and how to transmit data over those media; they also define the establishment, maintenance, and teardown of connections between network devices that transmit information. Products that comply with the IEEE 802 standards include network cards, bridges, routers, and other components used to build local area networks. The IEEE 802 specifications include: 802.1 (introduction to the 802 protocols), 802.2 (Logical Link Control, LLC), 802.3 (Ethernet CSMA/CD, Carrier Sense Multiple Access with Collision Detection), 802.4 (Token Bus), 802.5 (Token Ring), 802.6 (Metropolitan Area Network, MAN), 802.7 (broadband technology), 802.8 (fiber-optic technology), 802.9 (voice/data integration on LANs), 802.10 (LAN security interoperability), and 802.11 (wireless LAN, WLAN).

3) TCP/IP

The Internet is a collection of thousands of organizations and networks that collaborate with each other. TCP/IP is the core of the Internet.

TCP/IP draws on OSI to a certain extent. It simplifies the seven OSI layers into four: ① The services provided by the application, presentation, and session layers do not differ greatly, so in TCP/IP they are merged into a single application layer. ② Because the transport layer and the network layer play very important roles in network protocols, they remain two independent layers in TCP/IP. ③ Because the contents of the data link layer and the physical layer are similar, they are merged into a single network interface layer in TCP/IP.

The application layer defines many application-oriented protocols, and applications use these protocols to exchange data over the network. They mainly include FTP (File Transfer Protocol), TFTP (Trivial File Transfer Protocol), HTTP (Hypertext Transfer Protocol), SMTP (Simple Mail Transfer Protocol), DHCP (Dynamic Host Configuration Protocol), Telnet (remote login protocol), DNS (Domain Name System), SNMP (Simple Network Management Protocol), and so on.
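
As a small illustration of application-layer protocols in practice, the sketch below (Python standard library only; the host name is just an illustrative example) resolves a domain name via DNS and then issues an HTTP request.

```python
# A minimal sketch using only the Python standard library; "example.com" is an
# illustrative host, not one referenced by the text.
import socket
import urllib.request

# DNS: resolve a domain name to an IP address (an application-layer name service).
ip_address = socket.gethostbyname("example.com")
print("example.com resolves to", ip_address)

# HTTP: fetch a page over the Hypertext Transfer Protocol.
with urllib.request.urlopen("http://example.com/") as response:
    print("HTTP status:", response.status)   # e.g. 200
    body = response.read(200)                # first 200 bytes of the page
```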

The transport layer has two main transport protocols, TCP and UDP (User Datagram Protocol), which are responsible for providing flow control, error checking, and sequencing services. The protocols at the network layer mainly include IP, ICMP (Internet Control Message Protocol), IGMP (Internet Group Management Protocol), ARP (Address Resolution Protocol), and RARP (Reverse Address Resolution Protocol). These protocols handle the routing of information and host address resolution.
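
The difference between the two transport protocols can be sketched with the standard socket API; the loopback address and port numbers below are arbitrary choices for illustration, and the whole exchange runs locally.

```python
# A minimal sketch of the two transport-layer protocols, standard library only.
import socket

# TCP: connection-oriented -- a connection is established before data is exchanged,
# and the protocol provides ordered, checked delivery.
tcp_server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp_server.bind(("127.0.0.1", 50007))
tcp_server.listen(1)

tcp_client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp_client.connect(("127.0.0.1", 50007))
conn, _ = tcp_server.accept()
tcp_client.sendall(b"hello over TCP")
print(conn.recv(1024))

# UDP: connectionless -- datagrams are sent without establishing a connection,
# so delivery and ordering are not guaranteed by the protocol itself.
udp_receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_receiver.bind(("127.0.0.1", 50008))

udp_sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_sender.sendto(b"hello over UDP", ("127.0.0.1", 50008))
print(udp_receiver.recvfrom(1024)[0])

for s in (conn, tcp_client, tcp_server, udp_sender, udp_receiver):
    s.close()
```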

Since the network interface layer merges the physical layer and the data link layer, it is not only the physical medium over which data is transmitted but also provides the network layer with an accurate, error-free transmission line.

2. Software-defined networking

Software-Defined Networking (SDN) is an innovative network architecture and an implementation approach to network virtualization. It defines and controls the network through software programming and separates the control plane of network devices from the data plane, thereby enabling flexible control of network traffic, making the network more intelligent, and providing a good platform for innovation in core networks and applications.

Following the idea of layering, SDN separates data from control. The control layer includes a logically centralized, programmable controller that holds global network information, making it easy for operators and researchers to manage and configure the network and deploy new protocols. The data layer includes "dumb" switches (unlike traditional Layer 2 switches, these refer specifically to devices used only to forward data) that provide only simple data-forwarding functions and can quickly process matching packets to meet growing traffic demands. The two layers interact through open, unified interfaces (such as OpenFlow): the controller delivers unified, standard rules to the switches through a standard interface, and a switch simply performs the corresponding actions according to those rules. SDN breaks the closed nature of traditional network equipment, and the open interfaces and programmability in the north-south and east-west directions also make network management simpler, more dynamic, and more flexible.

The overall SDN architecture is divided, from bottom to top (from south to north), into the data plane, the control plane, and the application plane, as shown in Figure 2-1. The data plane is composed of general-purpose network hardware such as switches, and the various network devices are connected through SDN data paths formed by different rules; the control plane contains the logically centralized SDN controller, which holds global network information and is responsible for controlling the various forwarding rules; the application plane contains various SDN-based network applications, which users can program and deploy without caring about the underlying details.

Figure 2-1 SDN architecture diagram

The control plane and the data plane communicate through the SDN Control-Data-Plane Interface (CDPI), which has a unified communication standard and is mainly responsible for delivering the forwarding rules in the controller to the forwarding devices; its most widely used realization is the OpenFlow protocol. The control plane and the application plane communicate through the SDN Northbound Interface (NBI). NBI is not a unified standard; it allows users to customize and develop various network management applications according to their own needs.

The interfaces in SDN are open, with the controller as the logical center. The southbound interface is responsible for communicating with the data plane, the northbound interface for communicating with the application plane, and the east-west interface for communication among multiple controllers. The most mainstream southbound interface, CDPI, uses the OpenFlow protocol. The most basic feature of OpenFlow is matching forwarding rules based on the concept of a flow: each switch maintains a flow table (Flow Table) and forwards packets according to the rules in it, while the establishment, maintenance, and delivery of the flow table are all handled by the controller. Through the northbound interface, applications programmatically call the various network resources they need, enabling rapid configuration and deployment of the network. The east-west interface makes the controller scalable and provides a technical guarantee for load balancing and performance improvement.
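
The sketch below is a highly simplified, hypothetical model of the match-action idea behind a flow table: the "controller" installs rules and the "switch" merely matches incoming packets against them. It is not the OpenFlow wire protocol, only an illustration of the division of labor described above.

```python
# A conceptual toy, not the real OpenFlow protocol: the controller installs
# match-action rules, and the switch only looks them up and acts on them.

class Switch:
    def __init__(self):
        self.flow_table = []                     # list of (match_fields, action) pairs

    def install_rule(self, match: dict, action: str):
        """Called by the controller over the (simulated) southbound interface."""
        self.flow_table.append((match, action))

    def handle_packet(self, packet: dict) -> str:
        for match, action in self.flow_table:
            if all(packet.get(k) == v for k, v in match.items()):
                return action                    # forward, drop, ...
        return "send_to_controller"              # table miss: ask the controller what to do

switch = Switch()
# The (hypothetical) controller pushes two rules down to the switch.
switch.install_rule({"dst_ip": "10.0.0.2"}, "forward:port2")
switch.install_rule({"dst_ip": "10.0.0.9"}, "drop")

print(switch.handle_packet({"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2"}))  # forward:port2
print(switch.handle_packet({"src_ip": "10.0.0.1", "dst_ip": "10.0.0.7"}))  # send_to_controller
```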

3. Fifth generation mobile communication technology

The fifth-generation mobile communication technology (5G) is a new generation of mobile communication technology characterized by high data rates, low latency, and massive connectivity.

The International Telecommunication Union (ITU) has defined eight major indicators for 5G; a comparison with 4G is shown in Table 2-1.

Table 2-1 Benchmarking of main indicators of 4G and 5G

5G international technical standards focus on meeting the needs of flexible and diverse Internet of Things applications. On the basis of Orthogonal Frequency Division Multiple Access (OFDMA) and Multiple-Input Multiple-Output (MIMO), 5G adopts a new, flexible system design to support its three major application scenarios. In terms of frequency bands, unlike 4G, which supports only the mid and low bands, 5G supports both mid-low and high bands, since mid- and low-frequency resources are limited: the mid-low bands meet coverage and capacity requirements, while the high bands meet the need for additional capacity in hotspot areas. 5G designs a unified technical solution for both the mid-low and high bands and supports a basic bandwidth of 100 MHz. To support high-speed transmission and better coverage, 5G adopts new, higher-performance channel coding schemes, LDPC (a class of error-correcting block codes with sparse check matrices) and Polar codes (linear block codes based on channel polarization theory), together with large-scale antenna technology. To support low latency and high reliability, 5G uses techniques such as short, fast feedback and multi-layer/multi-station data retransmission.

The International Telecommunication Union (ITU) has defined three major application scenarios for 5G: enhanced Mobile Broadband (eMBB), ultra-Reliable Low-Latency Communications (uRLLC), and massive Machine-Type Communications (mMTC). Enhanced mobile broadband mainly addresses the explosive growth of mobile Internet traffic and provides mobile Internet users with a more extreme application experience; ultra-reliable low-latency communication mainly addresses vertical-industry applications with extremely demanding latency and reliability requirements, such as industrial control, telemedicine, and autonomous driving; massive machine-type communication mainly addresses application needs centered on sensing and data collection, such as smart cities, smart homes, and environmental monitoring.

2.1.3 Storage and database

1. Storage technology

By server type, storage is classified into closed-system storage and open-system storage. Closed systems mainly refer to servers such as mainframes; open systems refer to servers based on operating systems such as Kirin, Euler, UNIX, and Linux. Open-system storage is divided into built-in storage and external (attached) storage. External storage is divided, by connection method, into Direct-Attached Storage (DAS) and Fabric-Attached Storage (FAS). Networked storage is divided, by transmission protocol, into Network-Attached Storage (NAS) and the Storage Area Network (SAN). A comparison of the technologies and applications of DAS, NAS, SAN, and other storage modes is shown in Table 2-2.

Table 2-2 Comparison of technologies and applications of common storage modes

Storage virtualization is one of the core technologies of cloud storage. It integrates storage resources from one or more networks and presents users with an abstract logical view; users access the integrated storage resources through the unified logical interface in this view, without knowing the true physical location of the data they access. The direct benefits are higher storage utilization, lower storage cost, and simpler management of large, complex, heterogeneous storage environments.

Storage virtualization turns storage devices into logical data stores. A virtual machine is stored as a set of files in a directory on a data store. A data store is a logical container, similar to a file system, that hides the characteristics of each storage device and presents a unified model for providing disks to virtual machines. Storage virtualization technology helps the system manage virtual-infrastructure storage resources, improves resource utilization and flexibility, and increases application uptime.

Green storage technology refers to technology that, from the perspective of energy conservation and environmental protection, is used to design and produce storage products with better energy efficiency, reduce the power consumption of data storage devices, and improve performance per watt. Green storage is a system-level design approach that runs through the entire storage design process and covers many factors, including the storage system's external environment, the storage architecture, storage products, storage technologies, file systems, and software configuration.

The core of green storage technology is to design processors and systems that run cooler and more efficiently, to produce storage systems and components with lower energy consumption, and to reduce the electronic carbon compounds produced by the products. The ultimate goal is to improve the energy efficiency of all network storage devices, meeting business needs with the least storage capacity and therefore the least energy consumption. A storage system guided by green concepts is ultimately a balance among storage capacity, performance, and energy consumption.

Green storage technology covers all storage sharing technologies, including disk and tape systems, server connections, storage devices, network architecture and other storage network architectures, file services and storage application software, data deduplication, automatic thin provisioning, and tape-based backup: in short, any storage technology that can improve storage utilization and reduce construction and operating costs, with the goal of improving the energy efficiency of all network storage technologies.

2. Data structure model

The data structure model is the core of the database system. The data structure model describes the method of structuring and manipulating data in the database. The structural part of the model specifies how the data is described (such as trees, tables, etc.). The manipulation part of the model specifies operations such as adding, deleting, displaying, maintaining, printing, searching, selecting, sorting, and updating data.

There are three common data structure models: the hierarchical model, the network model, and the relational model. The hierarchical and network models are also collectively referred to as formatted data models.

    1) Hierarchical model

The hierarchical model is the earliest model used in database systems. It uses a tree structure to represent the associations between entity sets: the entity sets (represented by rectangular boxes) are the nodes, and the connections between the nodes of the tree represent the relationships between them. In the hierarchical model, each node represents a record type, and the connection between record types is represented by a directed edge between nodes. This connection is a one-to-many, parent-child relationship, so a hierarchical database system can only handle one-to-many entity relationships. Each record type can contain several fields, where the record type describes the entity and the fields describe the entity's attributes. Each record type and each of its fields must be named, and record types, as well as fields within the same record type, cannot share the same name. Each record type can define a sort field, also called a code field; if its values are unique, the sort field uniquely identifies a record value.

In theory, a hierarchical model can contain any finite number of record types and fields, but any actual system limits the number of record types and fields because of storage capacity or implementation complexity. In the hierarchical model, child nodes of the same parent are called sibling nodes, and nodes without children are called leaf nodes. A basic feature of the hierarchical model is that any given record value can be viewed only along its hierarchical path, and no child record value can exist independently of its parent record value.
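
A minimal sketch of the idea, with hypothetical record types: each parent record owns its child records, so a child can only be reached through its parent and cannot exist on its own.

```python
# A toy illustration of the hierarchical (tree) model; the record types and
# fields ("department", "employee") are hypothetical examples.

class Record:
    def __init__(self, record_type: str, **fields):
        self.record_type = record_type
        self.fields = fields
        self.children = []            # one-to-many: a parent owns many child records

    def add_child(self, child: "Record") -> "Record":
        self.children.append(child)   # a child exists only inside some parent
        return child

root = Record("department", name="R&D")
alice = root.add_child(Record("employee", name="Alice"))
bob = root.add_child(Record("employee", name="Bob"))

# A record value can only be reached along its hierarchical path (parent -> child).
def find(parent: Record, record_type: str, **match):
    for child in parent.children:
        if child.record_type == record_type and all(
            child.fields.get(k) == v for k, v in match.items()
        ):
            return child
    return None

print(find(root, "employee", name="Alice").fields)   # {'name': 'Alice'}
```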

    2) Network model

A network database system uses a network model as its data organization method. The network model uses a network structure to represent entity types and the connections between them. The network model is a database model that can flexibly describe things and the relationships between them.

The connections between things in the real world are mostly non-hierarchical: one thing is connected to several others, and expressing such relationships with a hierarchical model is unintuitive. The network model overcomes this shortcoming and can clearly represent non-hierarchical relationships. A data structure model that uses a directed graph structure to represent entity types and the relationships between entities is called a network model. The network model removes the hierarchical model's restriction to tree structures: two or more nodes can have multiple parent nodes, so the directed tree becomes a directed graph.

In the network model, the record is the unit of data storage. A record contains several data items, and data items in a network database can be multi-valued or composite. Each record has an internal identifier that uniquely identifies it, called the database key (DBK), which is assigned automatically by the database management system (DBMS) when the record is stored. The DBK can be regarded as the record's logical address and can be used as a surrogate for the record or to locate it. A network database is a navigational database: when operating on the database, users must state not only what to do but also how to do it. For example, a search statement must specify not only the search target but also the access path.

    3) Relational model

The relational model uses two-dimensional tables to represent the entities in a database and the connections between them, giving the database a relational structure. It developed from the concept of the relation in set theory. In the relational model, both entities and the connections between entities are represented by a single structural type, the relation.

The basic assumption of the relational model is that all data can be expressed as mathematical relations, that is, as subsets of the Cartesian product of n sets. Reasoning about such data is carried out through two-valued predicate logic, which means that every proposition has only two possible values: true or false. Data are manipulated through relational calculus and relational algebra. The relational model is a data model that uses a two-dimensional table structure to express entity types and the relationships between entities.

The relational model allows designers to establish a consistent model of the information through database normalization. Access plans and other implementation and operational details are handled by the DBMS engine and should not be reflected in the logical model. This is contrary to common practice with SQL DBMSs, where performance tuning often requires changes to the logical model.

The basic building block of the relational model is the domain, or data type. A tuple is an ordered multiset of attributes, and an attribute is an ordered pair of a domain and a value. A relation variable (relvar) is a set of ordered pairs of domains and names, which serves as the header of a relation, and a relation is a set of tuples. Although these relational concepts are defined mathematically, they map loosely onto traditional database concepts: a table is the accepted visual representation of a relation, and a tuple is similar to the concept of a row.

The basic principle of the relational model is the information principle: all information is represented as data values within relations. Thus relation variables are not linked to one another at design time; instead, the designer uses the same domain in multiple relation variables, and if one attribute depends on another, the dependency is enforced through referential integrity.
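
As a small illustration of relations as tables and of referential integrity, the sketch below uses Python's built-in sqlite3 module; the table and column names are invented for the example.

```python
# A minimal sketch with Python's built-in sqlite3; the schema is a hypothetical example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")          # enforce referential integrity

# Two relations (tables); "orders.customer_id" uses the same domain as "customers.id".
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
conn.execute("""CREATE TABLE orders (
                    id INTEGER PRIMARY KEY,
                    customer_id INTEGER NOT NULL REFERENCES customers(id),
                    amount REAL NOT NULL)""")

conn.execute("INSERT INTO customers (id, name) VALUES (1, 'Alice')")
conn.execute("INSERT INTO orders (customer_id, amount) VALUES (1, 99.5)")   # allowed

try:
    # Referential integrity: an order may not refer to a customer that does not exist.
    conn.execute("INSERT INTO orders (customer_id, amount) VALUES (42, 10.0)")
except sqlite3.IntegrityError as e:
    print("rejected by referential integrity:", e)

# Each tuple (row) is a set of attribute values drawn from the column domains.
for row in conn.execute(
    "SELECT name, amount FROM customers JOIN orders ON customers.id = orders.customer_id"
):
    print(row)                                     # ('Alice', 99.5)
```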

3. Common database types

Databases can be divided, by storage method, into relational databases (SQL) and non-relational databases (Not Only SQL, NoSQL).

    1) Relational database

Network databases (databases based on the network data model) and hierarchical databases (databases that use the hierarchical model as their data organization method) solved the problems of data concentration and sharing well, but they still left much to be desired in terms of data independence and abstraction. When accessing these two kinds of databases, users still need to know the storage structure of the data and specify the access path. The relational database that emerged later solved these problems better. A relational database system uses the relational model as its way of organizing data; a relational database is the collection of all the entities in a given application domain and the relationships between them. Relational databases support the ACID properties of transactions, namely atomicity, consistency, isolation, and durability, which together ensure the correctness of data over the course of a transaction.
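
A minimal sketch of transaction atomicity, again with sqlite3 and an invented two-account transfer: either both updates are committed or, on error, both are rolled back.

```python
# A toy illustration of atomicity (the "A" in ACID) using Python's sqlite3.
# The "accounts" schema and the transfer amounts are invented for the example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance REAL NOT NULL CHECK (balance >= 0))")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [("alice", 100.0), ("bob", 20.0)])
conn.commit()

def transfer(db, src, dst, amount):
    try:
        with db:  # connection as context manager: commit on success, rollback on error
            db.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?", (amount, src))
            db.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?", (amount, dst))
    except sqlite3.IntegrityError:
        print("transfer failed, both updates rolled back")

transfer(conn, "alice", "bob", 30.0)    # succeeds: both rows change together
transfer(conn, "bob", "alice", 999.0)   # violates the CHECK constraint: neither row changes
print(dict(conn.execute("SELECT name, balance FROM accounts")))  # {'alice': 70.0, 'bob': 50.0}
```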

    2) Non-relational database

A non-relational database is a distributed, non-relational data storage system that does not guarantee the ACID properties. NoSQL data stores do not require a fixed table structure and usually avoid join operations, which gives them performance advantages over relational databases for big-data access.

Common non-relational databases are divided into:

    • Key-value database: similar to the hash tables used in traditional programming languages. Records can be added, queried, or deleted by key; because access is by primary key, these databases achieve high performance and scalability. For information systems, the advantages of the key/value model are simplicity, ease of deployment, and high concurrency.

    • Column-oriented database: stores data in column families. A column family groups columns that are often queried together. For example, a person's name and age are queried together far more often than their salary, so name and age would be placed in one column family and salary in another. This kind of database is usually used for distributed storage of massive data.

    • Document-oriented database: can be regarded as an upgraded version of the key-value database that allows nested key-value structures, and its query efficiency is higher than that of a key-value database. Document-oriented databases store data in the form of documents.

    • Graph database: stores data in the form of a graph, with entities as vertices and the relationships between entities as edges. For example, given the three entities Steve Jobs, Apple, and Next, there would be two "founded by" edges connecting Apple and Next to Steve Jobs.
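
A minimal sketch of the graph idea using plain Python data structures (no real graph database involved), reusing the example above.

```python
# A toy in-memory graph, not a real graph database: vertices plus labeled edges.
vertices = {"Steve Jobs", "Apple", "Next"}
edges = [
    ("Apple", "founded by", "Steve Jobs"),
    ("Next", "founded by", "Steve Jobs"),
]

def neighbors(vertex, relation):
    """Return all vertices connected to `vertex` by an edge with the given label."""
    return [dst for src, rel, dst in edges if src == vertex and rel == relation]

print(neighbors("Apple", "founded by"))   # ['Steve Jobs']
# Traversing in the other direction answers "what did Steve Jobs found?"
print([src for src, rel, dst in edges if dst == "Steve Jobs" and rel == "founded by"])
```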

3) Advantages and disadvantages of databases with different storage methods

The advantages and disadvantages of relational databases and non-relational databases are shown in Table 2-3.

Table 2-3 Advantages and disadvantages of commonly used storage database types

4. Data warehouse

Traditional database systems lack the large amounts of historical data needed for decision analysis, because they generally retain only current or recent data. To meet the prediction and decision-analysis needs of middle and senior managers, a data environment that can support such analysis, the data warehouse, arose on the basis of traditional databases. Basic concepts related to the data warehouse include:

    • Extract/Transformation/Load (ETL): the user extracts the required data from the data sources, cleans and transforms it, and finally loads it into the data warehouse according to a predefined data warehouse model (a minimal sketch of this flow appears after this list).

    • Metadata: data about data, i.e., the key data generated during data warehouse construction concerning data source definitions, target definitions, transformation rules, and so on. Metadata also contains business information about the meaning of the data. Typical metadata includes: the structure of data warehouse tables, the attributes of data warehouse tables, the source data of the data warehouse (the systems of record), the mapping from the systems of record to the data warehouse, the specification of the data model, extraction logs, and common routines for accessing the data.

    • Granularity: the level of detail or summarization at which data is stored in the data units of the data warehouse. The more detailed the data, the smaller the granularity; the less detailed, the larger the granularity.

    • Segmentation: data with the same structure is divided into multiple physical units of data. Any given data unit belongs to one and only one partition.

    • Data mart: a small, department- or workgroup-level data warehouse.

    • Operational Data Store (ODS): a data collection that can support an organization's day-to-day, global applications. It is a new data environment, different from the DB, and a hybrid form obtained by extending the DW. It has four basic characteristics: subject-oriented, integrated, changeable, and current or near-current.

    • Data model: a logical data structure that includes the operations and constraints provided by a database management system for efficient database processing; a system used to represent data.

    • Artificial relationships: a design technique used to represent referential integrity in the context of decision support systems.
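
Continuing the ETL concept noted above, here is a minimal, self-contained sketch of an extract-clean/transform-load pipeline; the source records and the target structure are invented for the illustration.

```python
# A toy ETL pipeline; the source records and the warehouse layout are hypothetical.

# Extract: pull raw records from a source system (here, just an in-memory list).
source_rows = [
    {"order_id": "1001", "amount": " 99.50 ", "region": "north"},
    {"order_id": "1002", "amount": "abc",     "region": "South"},   # dirty value
    {"order_id": "1003", "amount": "20.00",   "region": "south"},
]

def clean_and_transform(rows):
    """Cleaning/transformation: drop bad rows, normalize types and codes."""
    for row in rows:
        try:
            amount = float(row["amount"].strip())
        except ValueError:
            continue                              # discard rows that cannot be repaired
        yield {
            "order_id": int(row["order_id"]),
            "amount": amount,
            "region": row["region"].strip().lower(),   # unify the region code
        }

# Load: write the cleaned rows into the (toy) warehouse table.
warehouse_fact_orders = list(clean_and_transform(source_rows))
print(warehouse_fact_orders)
# [{'order_id': 1001, 'amount': 99.5, 'region': 'north'},
#  {'order_id': 1003, 'amount': 20.0, 'region': 'south'}]
```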

A data warehouse is a subject-oriented, integrated, non-volatile, time-varying collection of data used to support management decisions. The architecture of a common data warehouse is shown in Figure 2-2.

Figure 2-2 Data warehouse architecture

(1) Data sources. These are the foundation of the data warehouse system and the source of data for the whole system. They usually include information internal to the organization and information external to it. Internal information includes the various business-processing data and document data stored in relational database management systems; external information includes laws and regulations, market information, competitor information, and so on.

(2) Data storage and management. This is the core of the entire data warehouse system; the real key to a data warehouse lies in how its data are stored and managed. The way a data warehouse organizes and manages data is what distinguishes it from a traditional database and also determines how it presents data externally. Deciding which products and technologies to use to build the warehouse core requires analyzing the technical characteristics of the data warehouse: data are extracted from existing business systems, cleaned, effectively integrated, and organized by subject. By data coverage, data warehouses can be divided into organization-level data warehouses and department-level data warehouses (often called data marts).

(3) On-Line Analytical Processing (OLAP) servers. OLAP effectively integrates the data needed for analysis and organizes it according to a multidimensional model so that it can be analyzed from multiple angles and at multiple levels to discover trends. Concrete implementations fall into three categories: relational OLAP (ROLAP), multidimensional OLAP (MOLAP), and hybrid OLAP (HOLAP). In ROLAP, both the base data and the aggregated data are stored in an RDBMS; in MOLAP, both are stored in a multidimensional database; in HOLAP, the base data are stored in a Relational Database Management System (RDBMS) while the aggregated data are stored in a multidimensional database.
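
To illustrate what "organizing by dimensions and aggregating" means, the toy sketch below rolls a small fact table up along two dimensions (region and month) with plain Python; it is only a conceptual illustration, not an OLAP server, and all the rows are invented.

```python
# A toy multidimensional roll-up, standard library only; the fact rows are invented.
from collections import defaultdict

# Fact table: (region, month, product) dimensions with a "sales" measure.
facts = [
    ("north", "2024-01", "A", 100),
    ("north", "2024-01", "B", 50),
    ("north", "2024-02", "A", 70),
    ("south", "2024-01", "A", 40),
]

# Aggregate the measure along the (region, month) dimensions -- one "slice" of the cube.
cube = defaultdict(int)
for region, month, product, sales in facts:
    cube[(region, month)] += sales

for (region, month), total in sorted(cube.items()):
    print(region, month, total)
# north 2024-01 150
# north 2024-02 70
# south 2024-01 40
```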

(4) Front-end tools. Front-end tools mainly include query tools, reporting tools, analysis tools, data mining tools, and various application development tools built on the data warehouse or data marts. Among them, data analysis tools mainly target the OLAP server, while reporting and data mining tools mainly target the data warehouse.

2.1.4 Information security

Common information security problems mainly manifest as the spread of computer viruses, malware intrusions, hacker attacks, computer-enabled crime, the spread of harmful information on the Internet, leaks of personal privacy, and so on. With the widespread application of new-generation information technologies such as the Internet of Things, cloud computing, artificial intelligence, and big data, information security also faces new problems and challenges.

1. Basics of information security

Information security emphasizes the security attributes of information (data) itself, which mainly includes the following contents.

• Confidentiality: the property that information is not disclosed to unauthorized persons.

• Integrity: the property that information is correct, authentic, untampered with, and complete.

• Availability: the property that information can be used normally at any time.

Information must rely on the carrier (media) for its storage, transmission, processing and application. Therefore, for information systems, security can be divided into four levels: equipment security, data security, content security, and behavioral security.

Information systems generally consist of computer systems, network systems, operating systems, database systems and application systems. Correspondingly, information system security mainly includes computer equipment security, network security, operating system security, database system security and application system security.

Network security technologies mainly include: firewalls, intrusion detection and prevention, VPN, security scanning, network honeypot technology, user and entity behavior analysis technology, etc.

2. Encryption and decryption

To ensure the security of information, information encryption technology is used to disguise information so that an illegal interceptor cannot understand its true meaning; encryption algorithms are used to extract a feature code (check code) or feature vector from the information and encapsulate it together with the related information, so that the legitimate owner can use this signature to verify the information's integrity; and encryption algorithms are used to authenticate, identify, and confirm the identity of information users so as to control the use of the information.

The sender encrypts plaintext data into ciphertext, then sends the ciphertext over the network or saves it in a computer file, and distributes the key only to the legitimate recipient. After receiving the ciphertext, the legitimate recipient applies the inverse of the encryption transformation to remove the disguise and recover the plaintext. This process is called decryption. Decryption is performed under the control of a decryption key, and the set of mathematical transformations used for decryption is called the decryption algorithm.

Encryption technology consists of two elements: the algorithm and the key. Key-based cryptographic systems are divided into two types, the symmetric-key system and the asymmetric-key system. Accordingly, data encryption techniques fall into two categories: symmetric encryption (private-key encryption) and asymmetric encryption (public-key encryption). Symmetric encryption is typically represented by the Data Encryption Standard (DES) algorithm, and asymmetric encryption by the RSA (Rivest-Shamir-Adleman) algorithm. In symmetric encryption the encryption key and the decryption key are the same, whereas in asymmetric encryption they are different: the encryption key can be made public, but the decryption key must be kept secret.
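
A minimal sketch of the two approaches, assuming the third-party cryptography package is installed (the messages are placeholders): a single shared key both encrypts and decrypts in the symmetric case, while the asymmetric case encrypts with a public key and decrypts with the matching private key.

```python
# A minimal sketch, assuming the third-party "cryptography" package is installed.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# --- Symmetric (private-key) encryption: one shared key encrypts and decrypts ---
shared_key = Fernet.generate_key()          # both parties must hold this secret key
f = Fernet(shared_key)
ciphertext = f.encrypt(b"plaintext message")
assert f.decrypt(ciphertext) == b"plaintext message"

# --- Asymmetric (public-key) encryption: public key encrypts, private key decrypts ---
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()       # may be published; only the private key is secret
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ct = public_key.encrypt(b"plaintext message", oaep)
assert private_key.decrypt(ct, oaep) == b"plaintext message"
```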

3. Security behavior analysis technology

Traditional security products, technologies, and solutions are basically analysis and detection based on rule matching against known signatures. Detection centered on signatures, rules, and manual analysis suffers from blind spots in security visibility, lags behind attacks, cannot detect unknown attacks, is easily bypassed, and struggles to adapt to the realities of offense-defense confrontation, rapidly changing organizational environments, external threats, and other issues. On the other hand, although most attacks may come from outside the organization, the most severe damage is often caused by insiders; only by managing insider threats can information and network security be ensured.

User and Entity Behavior Analytics (UEBA) provides user profiling and anomaly detection based on a range of analysis methods, combining basic techniques (signature rules, pattern matching, simple statistics, thresholds, etc.) with advanced ones (supervised and unsupervised machine learning, etc.), and uses packaged analytics to evaluate users and other entities (hosts, applications, networks, databases, etc.) in order to discover activities that deviate from the standard profiles or normal behavior of those users or entities. UEBA takes users and entities as its objects and, using big data and a combination of rules and machine-learning models, defines behavioral baselines, analyzes user and entity behavior against them, detects anomalies, and perceives suspicious or illegal behavior by internal users and entities as quickly as possible.

UEBA is a complete system that involves detection components such as algorithms and engineering, as well as user interaction and feedback components such as investigation of user and entity risk-score rankings. Architecturally, a UEBA system usually consists of a data collection layer, an algorithm analysis layer, and a scenario application layer.
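
A deliberately simple sketch of the baseline-plus-anomaly idea (not a production UEBA engine): a per-user baseline is learned from invented historical counts, and new behavior far from that baseline is flagged.

```python
# A toy behavioral baseline: flag activity that deviates strongly from a user's history.
# The login counts below are invented illustrative data.
from statistics import mean, stdev

history = {  # daily login counts observed for each user over the past two weeks
    "alice": [3, 4, 2, 5, 3, 4, 3, 2, 4, 3, 5, 4, 3, 4],
    "bob":   [1, 0, 2, 1, 1, 0, 1, 2, 1, 1, 0, 1, 2, 1],
}

def is_anomalous(user: str, todays_count: int, threshold: float = 3.0) -> bool:
    """Return True if today's count is more than `threshold` standard deviations
    away from the user's own baseline."""
    baseline = history[user]
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return todays_count != mu
    return abs(todays_count - mu) / sigma > threshold

print(is_anomalous("alice", 4))    # False: within Alice's normal range
print(is_anomalous("bob", 40))     # True: far outside Bob's baseline
```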

4. Network security situational awareness

Network security situational awareness is the acquisition, understanding, and display, in a large-scale network environment, of the security factors that can cause changes in the network situation, and on that basis the prediction of future network security trends. Security situational awareness is not only a security technology but also an emerging security concept: an environment-based, dynamic, holistic capability for gaining insight into security risks. Its prerequisite is security big data: on top of security big data it performs data integration, feature extraction, and so on, then applies situation assessment algorithms to generate an overall picture of the network, uses situation prediction algorithms to forecast how the situation will develop, and uses data visualization to present the current situation and forecasts to security personnel, making it easier for them to understand the network's current state and expected risks intuitively and conveniently.

Key technologies for network security situational awareness mainly include: technologies for aggregating and fusing massive, diverse, heterogeneous data; assessment technologies for multiple types of network security threats; network security situation assessment and decision-support technologies; and network security situation visualization.

2.1.5 Development of information technology

As the foundation of information technology, computer software and hardware, networks, storage and databases, information security, etc. are constantly developing and innovating, leading the current trend of information technology development.

In terms of computer software and hardware, computer hardware technology is developing toward ultra-high speed, ultra-small size, parallel processing, and intelligence. Hardware devices are becoming smaller, faster, larger in capacity, lower in power consumption, and more reliable. Computer software is becoming ever richer and more powerful, and the concept of "software defines everything" has become the mainstream of current development.

In terms of network technology, computer networks and communication technology are increasingly interconnected and have even merged. 5G, as one of a country's most important infrastructures, has become the current mainstream, and technologies oriented to the Internet of Things and low-latency scenarios, such as Narrowband Internet of Things (NB-IoT), enhanced Machine-Type Communication (eMTC), the Industrial Internet of Things (IIoT), and Ultra-Reliable Low-Latency Communication (URLLC), will be further and fully developed.

In terms of storage and databases, with the continued explosive growth of data volumes, data storage structures are becoming more flexible and diverse, and ever-changing emerging business needs are driving a richer variety of databases and application systems. These changes challenge the architectures and storage models of all types of databases, pushing database technology to keep evolving toward model extension and architectural decoupling.

In terms of information security, the traditional notion of computer security is giving way to one centered on trusted computing. The changes in technology and application models brought about by the popularization of networks are further driving innovation in key technologies for networked information security; at the same time, the research and formulation of information security standards and the integration of information security products and services are steering the development of information security technology toward standardization and integration.

2.2 New generation information technology and applications

Information technology continues to integrate and innovate on the basis of intelligence, systematization, miniaturization, and cloudification, giving rise to new-generation information technologies such as the Internet of Things, cloud computing, big data, blockchain, artificial intelligence, and virtual reality. The new models and new forms of business that arise from fully developing and using new-generation information technology and information resources are the main trend in the development of information technology and will also be important business categories in the field of information system integration.

2.2.1 Internet of Things

The Internet of Things refers to a network that, according to agreed protocols, connects any item to the Internet through information sensing devices for information exchange and communication, so as to achieve intelligent identification, positioning, tracking, monitoring, and management. The Internet of Things mainly addresses the interconnection of things with things (Thing to Thing, T2T), humans with things (Human to Thing, H2T), and humans with humans (Human to Human, H2H). In addition, many scholars introduce the concept of M2M when discussing the Internet of Things, which can be interpreted as Man to Man, Man to Machine, or Machine to Machine.

1. Technical basis

The IoT architecture can be divided into three layers: the perception layer, the network layer, and the application layer. The perception layer consists of various sensing devices, including temperature sensors, QR-code tags, RFID tags and readers, cameras, GPS, and other perception terminals; it is the source that identifies objects and collects information in the Internet of Things. The network layer consists of various networks, including the Internet, radio and television networks, network management systems, and cloud computing platforms; it is the hub of the entire Internet of Things and is responsible for transmitting and processing the information obtained by the perception layer. The application layer is the interface between the Internet of Things and users; combined with industry needs, it realizes intelligent IoT applications.
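
The toy sketch below mirrors the three layers with plain Python (a simulated sensor, serialized transmission, and an application-level rule); every name in it is hypothetical, and it stands in for real devices and networks only conceptually.

```python
# A conceptual toy of the three IoT layers; the sensor, "network", and rule are all simulated.
import json
import random

def perception_layer() -> dict:
    """Perception layer: a simulated temperature sensor reading."""
    return {"sensor_id": "temp-01", "temperature_c": round(random.uniform(15, 40), 1)}

def network_layer(reading: dict) -> str:
    """Network layer: serialize the reading for transmission (a real system would
    send this over Wi-Fi, NB-IoT, 5G, etc.)."""
    return json.dumps(reading)

def application_layer(message: str) -> str:
    """Application layer: interpret the data according to business needs."""
    reading = json.loads(message)
    if reading["temperature_c"] > 35:
        return f"ALERT: {reading['sensor_id']} reports {reading['temperature_c']} °C"
    return f"OK: {reading['sensor_id']} reports {reading['temperature_c']} °C"

print(application_layer(network_layer(perception_layer())))
```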

The IoT industrial chain includes sensors and chips, devices, network operation and services, software and application development, and system integration. IoT technology is applied in smart grids, smart logistics, smart homes, intelligent transportation, smart agriculture, environmental protection, and healthcare, and it plays a critical and important role in urban management (smart cities), financial services and insurance, public safety, and other areas.

2. Key technologies

The key technologies of the Internet of Things mainly involve sensor technology, sensor networks and application system frameworks.

1) Sensor technology

A sensor is a detection device that can "feel" the quantity being measured and convert the detected information, according to certain rules, into an electrical signal or another required form of output, so as to meet the requirements of information transmission, processing, storage, display, recording, and control. It is the primary link in automatic detection and automatic control, and the basic means by which the Internet of Things obtains information about the physical world.

Radio Frequency Identification (RFID) is a sensing technology used in the Internet of Things that has attracted much attention in its development. RFID can identify a specific target through radio signals and read and write the related data without any mechanical or optical contact between the identification system and the target. RFID is a simple wireless system consisting of an interrogator (reader) and many transponders (tags). A tag is composed of coupling components and a chip; each tag has a unique electronic code, is attached to an object to identify the target, and transmits radio-frequency information to the reader through its antenna, while the reader is the device that reads the information. RFID technology lets objects "speak", which gives the Internet of Things the feature of traceability: the exact location of an item and its surrounding environment can be known at any time.

2) Sensor network

Micro-Electro-Mechanical Systems (MEMS) are integrated micro-device systems composed of micro-sensors, micro-actuators, signal processing and control circuits, communication interfaces, and power supplies. Their goal is to integrate the acquisition, processing, and execution of information into multifunctional micro systems that can be embedded in large-scale systems, thereby greatly improving the automation, intelligence, and reliability of those systems. MEMS gives ordinary objects new "life": they have their own data transmission channels, storage, operating systems, and specialized applications, forming a huge sensor network that enables the Internet of Things to monitor and protect people through objects. In the future, clothes may "tell" the washing machine, via the sensor network, how much water and detergent is most economical; folders will "check" which important documents people have forgotten; and labels on food and vegetables will tell customers' mobile phones whether "they" are genuinely green and safe.

3) Application system framework

The IoT application system framework is a networked application and service centered on intelligent interaction with machine terminals. It enables intelligent control of objects and involves five important technical parts: machines, sensing hardware, communication networks, middleware, and applications. The framework is based on a cloud computing platform and an intelligent network, and can make decisions based on data obtained from the sensor network and adjust the objects' behavior through control and feedback. Take a smart parking lot as an example. When a vehicle enters or leaves the antenna's communication area, the antenna exchanges data in both directions with the electronic vehicle tag via microwave communication, reading the vehicle's information from the electronic tag and the driver's information from the driver card. The system automatically identifies the electronic vehicle tag and the driver card, determines whether the vehicle tag is valid and the driver card is legitimate, and the lane control computer displays the license plate number, the driver, and the other information corresponding one-to-one to that tag and card. The lane control computer automatically stores the passing time and the vehicle and driver information in the database, judges from the data it reads whether the card is a normal card, an unauthorized card, no card, or an illegal card, and responds and prompts accordingly. In addition, elderly people at home can wear watches with embedded smart sensors, and their children elsewhere can check at any time through their mobile phones whether their parents' blood pressure and heartbeat are stable. In a smart house, when the owner goes to work, sensors automatically shut off water and electricity and close the doors and windows, and regularly send messages to the owner's mobile phone reporting on the home's safety.

3. Application and development

The application fields of the Internet of Things touch every aspect of people's work and life. Applications in infrastructure areas such as industry, agriculture, the environment, transportation, logistics, and security have effectively promoted intelligent development in these areas, allowing limited resources to be used and allocated more rationally and thereby improving industry efficiency and benefits. In fields closely related to daily life, such as the home, healthcare, education, finance, the service industry, and tourism, deep integration and innovation with social science and social governance have brought great changes and progress in the scope, methods, and quality of services.

2.2.2 Cloud computing

Cloud computing is a form of distributed computing in which a huge data-processing task is decomposed, through the network "cloud", into countless small programs, which are then processed and analyzed by a system composed of multiple servers; the results are obtained and returned to the user. In its early days, cloud computing was simply distributed computing, i.e., distributing tasks and merging the computed results. Today's cloud computing is not just distributed computing but the result of the mixed evolution and leap of computing technologies such as distributed computing, utility computing, load balancing, parallel computing, network storage, hot-backup redundancy, and virtualization.

1. Technical basis

Cloud computing is an Internet-based computing approach in which shared software resources, computing resources, storage resources, and information resources are provisioned on the network and supplied on demand to terminal devices and end users. Cloud computing can also be understood as a distributed processing architecture that shields users from differences in the underlying infrastructure: in a cloud computing environment, users are separated from the computing resources that actually provide the services, and the cloud pools a large number of computing devices and resources.

When using cloud computing services, users do not need to assign dedicated maintenance personnel; the cloud service provider offers a relatively high level of protection for the security of data and servers. Since cloud computing stores data in the cloud (the part of the distributed cloud computing infrastructure that performs computing and storage), and business logic and the associated computation are also completed in the cloud, the terminal needs only to be an ordinary device capable of basic applications.

Cloud computing realizes "fast, on-demand, elastic" services. Users can access the "cloud" and obtain services at any time through broadband networks, acquire or release resources according to actual needs, and dynamically expand resources according to needs.

According to the level of resources provided, cloud computing services can be divided into three types: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS).

IaaS provides users with infrastructure services such as computing power and storage space. This service model requires heavy infrastructure investment and long-term operational and management experience, and the profit to be made from simply renting out resources is limited.

PaaS provides users with platform services such as virtual operating systems, database management systems, and Web applications. The focus of PaaS services is not on direct economic benefits, but on building and forming a close industrial ecosystem.

SaaS provides users with virtualized software services such as application software (e.g., CRM and office software), components, and workflows. SaaS generally uses Web technology and an SOA architecture to deliver multi-tenant, customizable applications to users over the Internet, which greatly shortens the software industry's channel chain, reduces the complexity of software upgrades, customization, operation, and maintenance, and enables software providers to transform from producers of software products into operators of application services.

2. Key technologies

The key technologies of cloud computing mainly involve virtualization technology, cloud storage technology, multi-tenant and access control management, cloud security technology, etc.

1) Virtualization technology

Virtualization is a broad term that generally refers to computing elements running on a virtual basis rather than a real basis. Virtualization technology can expand the capacity of hardware and simplify the reconfiguration process of software. CPU virtualization technology can simulate multiple CPUs in parallel with a single CPU, allowing one platform to run multiple operating systems at the same time, and applications can run in independent spaces without affecting each other, thereby significantly improving computer work efficiency.

Virtualization technology is quite different from multitasking and hyper-threading. Multitasking means multiple programs running in parallel within one operating system. With virtualization, multiple operating systems can run at the same time, each with multiple programs running, and each operating system runs on a virtual CPU or virtual host. Hyper-threading is merely a single CPU simulating two CPUs to balance program performance; the two simulated CPUs cannot be separated and can only work together.

Container technology is a new kind of virtualization that belongs to operating-system-level virtualization, i.e., the operating system itself provides the virtualization support; the most popular container environment at present is Docker. Container technology divides the resources of a single operating system into isolated groups so that conflicting resource demands can be better balanced among them. For example, when a user deploys an application, the traditional approach requires a virtual machine, but the virtual machine itself consumes considerable system resources. Likewise, applications need to be handed over and collaborated on between development and operations, and when the development and operations environments differ, the results may differ as well. With container technology, an application can be isolated in an independent runtime environment, called a container, which reduces the extra overhead of running the program and allows it to run in the same way almost anywhere.

2)Cloud storage technology

Cloud storage technology is a new information storage and management method developed on the basis of traditional storage-media systems. It integrates the software and hardware advantages of computer systems and can process massive amounts of data online quickly and efficiently. Through the application of various cloud technology platforms, it enables in-depth data mining and security management.

As an important part of cloud storage technology, the distributed file system improves the system's replication and fault-tolerance capabilities while maintaining compatibility. At the same time, the scalability of cloud storage is achieved through cloud cluster management, and reasonable combinations of modules are used to address the network storage, joint storage, multi-node storage, backup processing and load balancing problems that the solution is designed to solve. In the implementation of cloud storage, the distributed file structure is combined with the underlying hardware to optimize the hardware operating environment and ensure the integrity and fault tolerance of data transmission; combined with the expansion of low-cost hardware, storage costs are greatly reduced.
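
To make the chunk-and-replicate idea behind such replication and fault tolerance concrete, here is a toy, purely local Python sketch; the node names, chunk size and replica count are invented for illustration and do not correspond to any real cloud storage API.

```python
# Toy illustration of the chunk-and-replicate idea behind distributed file storage.
# All names here (nodes, chunk size, replica count) are hypothetical.
from collections import defaultdict

NODES = ["node-a", "node-b", "node-c", "node-d"]  # pretend storage servers
CHUNK_SIZE = 4          # bytes per chunk (unrealistically small, for readability)
REPLICAS = 2            # each chunk is stored on two different nodes

cluster = defaultdict(dict)   # node -> {chunk_id: chunk_bytes}

def put(file_id: str, data: bytes) -> None:
    """Split data into chunks and place each chunk on REPLICAS distinct nodes."""
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    for idx, chunk in enumerate(chunks):
        chunk_id = f"{file_id}#{idx}"
        for r in range(REPLICAS):
            node = NODES[(idx + r) % len(NODES)]     # simple round-robin placement
            cluster[node][chunk_id] = chunk

def get(file_id: str, failed=frozenset()) -> bytes:
    """Reassemble the file, skipping any 'failed' nodes to show fault tolerance."""
    pieces = {}
    for node, chunks in cluster.items():
        if node in failed:
            continue
        for chunk_id, chunk in chunks.items():
            fid, idx = chunk_id.split("#")
            if fid == file_id:
                pieces[int(idx)] = chunk
    return b"".join(pieces[i] for i in sorted(pieces))

put("report", b"big data needs replicated storage")
print(get("report", failed={"node-b"}))   # still recoverable with one node down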

With the support of the distributed file system, cloud storage resources can be expanded to assist in the analysis of high-throughput data, allowing users to manage data more fully and comprehensively, achieve optimal management of user-uploaded information, and meet the information access needs of different platforms. On the other hand, by strengthening the security protection of relevant data in cloud storage technology, virus protection and security monitoring are provided during information storage to ensure the security of information storage applications.

3)Multi-tenancy and access control management

The research on access control in cloud computing environment began with the development of cloud computing. Access control management is one of the core issues in cloud computing applications. Research on cloud computing access control mainly focuses on cloud computing access control models, cloud computing access control based on the ABE cryptographic system, multi-tenant and virtualization access control in the cloud.

The cloud computing access control model is a method of describing the security system according to specific access policies and establishing a security model. Users (tenants) obtain certain permissions through the access control model and then access data in the cloud, so the access control model is mostly used to statically assign user permissions. Access control models in cloud computing are based on traditional access control models and improve on them to better suit cloud environments. Depending on the functions of the access control model, the research content and methods also differ. Common examples include the task-based access control model, attribute-based cloud computing access control, UCON-based cloud computing access control, and BLP-based cloud computing access control.
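
The following is a toy sketch of statically assigned permissions in the spirit of the traditional role-based models these cloud access control models extend; the roles, users and actions are invented purely for illustration.

```python
# Toy sketch of statically assigned permissions (role-based style).
# All role, user and action names are hypothetical.
ROLE_PERMISSIONS = {
    "tenant_admin": {"read", "write", "delete"},
    "analyst":      {"read"},
}

USER_ROLES = {
    "alice": "tenant_admin",
    "bob":   "analyst",
}

def is_allowed(user: str, action: str) -> bool:
    """Check whether the user's statically assigned role grants the requested action."""
    role = USER_ROLES.get(user)
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("bob", "read"))    # True
print(is_allowed("bob", "delete"))  # False: role 'analyst' has no delete permission
```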

Cloud computing access control based on the ABE cryptographic mechanism involves four participants: the data provider, a trusted third-party authorization center, the cloud storage server, and users. First, the trusted authorization center generates the master key and public parameters and passes the system public key to the data provider; after receiving it, the data provider encrypts the file using a policy tree and the system public key, and uploads the ciphertext and the policy tree to the cloud server. Then, when a new user joins the system, the user submits an attribute set to the trusted authorization center along with a private-key application request; the trusted authorization center computes a private key from the user's attribute set and the master key and passes it to the user. Finally, the user downloads the data of interest; if the user's attribute set satisfies the policy tree of the ciphertext, the ciphertext can be decrypted, otherwise access to the data fails.

Multi-tenancy and virtualization in the cloud are typical features of cloud computing. Since physical resources are shared among tenants and their trustworthiness is hard to establish, a tenant may obtain useful information from the underlying physical resources through side-channel attacks. In addition, deploying access control policies on virtual machines may cause conflicts when multiple tenants access resources, resulting in unauthenticated or incorrectly assigned information flows on the physical host. This requires that, in a cloud environment, communication between tenants be guaranteed by access control, with each tenant having its own access control policy, which makes access control for the entire cloud platform complicated. At present, research on multi-tenant access control mainly focuses on multi-tenant isolation and virtual machine access control.

4) Cloud security technology

Cloud security research mainly covers two aspects. One is the security protection of cloud computing technology itself, involving data integrity and availability, privacy protection and service availability; the other is using cloud computing technology to meet the security protection needs of end users, achieving Internet security through the cloud, involving cloud-based virus prevention and Trojan detection technology, etc.

In terms of research on cloud security technology, it mainly includes:

Cloud computing security : It mainly analyzes the cloud itself and the application services it carries, focusing on the corresponding security issues. This mainly involves how to effectively implement security isolation, ensure the security of Internet user data, effectively defend against malicious network attacks, improve the system security of cloud computing platforms, and handle user access authentication together with the auditing and security of the corresponding information transmission.

Ensuring the security of cloud infrastructure : The main issue is how to use the resources of Internet security infrastructure equipment to effectively optimize cloud services, so as to ensure that the expected security protection requirements are met.

Cloud security technology services : Focus on how to ensure the security service requirements of Internet end users, and can effectively realize client computer virus prevention and other related services. Judging from the development of cloud security architecture, if the security level of cloud computing service providers is not high, service users will need to have stronger security capabilities and assume more management responsibilities.

In order to improve the capabilities of the cloud security system and ensure its reliability, cloud security technology must be considered from the perspectives of openness, security assurance, and system structure. ① The cloud security system has a certain degree of openness and must ensure trusted authentication in an open environment; ② advanced network technology and virus protection technology must be actively adopted in the cloud security system; ③ during its construction, the cloud security system must ensure stability in order to cope with the dynamic changes of massive data.

To sum up, cloud security technology is the core content of the security technology architecture in the new generation of the Internet. It reflects the advancement of the current rapidly developing cloud computing and is an inevitable trend in the development of future information security technology. With the expansion of cloud computing application fields, cloud security technology will inevitably become more and more mature. It can effectively protect the data application security of the majority of Internet users in an all-round way and play a vital role in the further promotion and application of cloud computing.

3.Application and development

After more than ten years of development, cloud computing has gradually entered a mature stage and is playing an increasingly important role in many fields. "Going to the cloud" will become the first choice, and even a necessary prerequisite, for all kinds of organizations to accelerate digital transformation, encourage technological innovation, and promote business growth.

Cloud computing will further become an important carrier and testing ground for innovative technologies and best engineering practices. From AI and machine learning, IoT and edge computing, and blockchain, to engineering practices such as DevOps, cloud native and service mesh, cloud computing vendors are actively participating, investing and promoting. Take artificial intelligence as an example: from the GPU computing resources provided in IaaS mentioned above, to mature model capabilities opened up in specific fields (such as APIs for natural language processing, image recognition and speech synthesis), to machine learning platforms that help build customized AI models, cloud computing has in fact become the foundation of AI-related technologies.

Cloud computing will follow the trend of the industrial Internet, sink into industry scenarios, and develop in depth toward verticalization and industrialization. With the continuous improvement of general-purpose architecture and functions and ever-deeper engagement with industry customers, cloud computing is naturally penetrating more vertical fields and becoming a basic capability that serves industry businesses and typical scenarios more closely. Taking the financial cloud as an example, cloud computing can provide physically isolated infrastructure for the special compliance and security needs of financial and insurance institutions, and can also provide business components such as payment, settlement, risk control, and auditing.

Multi-cloud and hybrid cloud will become an urgent need for large and medium-sized organizations and receive more attention and development. When an organization deploys a large number of workloads in the cloud, new problems emerge: ① although the cloud can already provide very high availability, key applications must still set up the necessary technical redundancy to avoid the risk of a single-supplier failure; ② when the business scale is large, it is also necessary, from the perspective of business strategy, to avoid binding too tightly to a single vendor in order to retain some degree of business leverage and initiative.

The importance of cloud ecological construction has become increasingly prominent and has become a key factor affecting competition among clouds. When a certain cloud develops to a certain scale and stage, it cannot just consider technology and products. It needs to establish and cultivate a prosperous ecosystem and community with vitality from a long-term development perspective. In addition, the cloud ecosystem needs to focus on continuous output, training and influence on developers, architects and operation and maintenance engineers. Only by winning the attention and love of the majority of technical personnel can we win the future cloud computing market.

To sum up, the four major trends of "innovation, verticality, hybridization, and ecology" are accompanying the rapid development of cloud computing. Cloud computing standardizes, abstracts and scales IT hardware resources and software components. In a sense, it subverts and reconstructs the supply chain of the IT industry. It is a huge innovation and progress in the development of the current new generation of information technology.

2.2.3 Big data

Big Data refers to a collection of data that cannot be captured, managed and processed with conventional software tools within a certain time frame. It is a massive, high-growth and diversified information asset offering stronger decision-making power, insight discovery and process optimization capabilities.

1. Technical basis

Big data is data characterized by large volume, diverse structure, and strong timeliness. Processing big data requires new computing architectures, intelligent algorithms and other new technologies. From the data source to the final realization of value, big data generally goes through processes such as data preparation, data storage and management, data analysis and computation, data governance and knowledge presentation, involving research on data models, processing models, computing theory, related distributed computing and distributed storage platform technologies, data cleaning and mining technologies, stream computing and incremental processing technologies, data quality control, and so on. Generally speaking, the main characteristics of big data include:

Massive data : The data volume of big data is huge, jumping from the TB level to the PB level (1PB=1024TB), the EB level (1EB=1024PB), and even the ZB level (1ZB=1024EB).

Diverse data types : Big data comes in many types, generally divided into structured data and unstructured data. Compared with the text-based structured data that was easy to store in the past, there is more and more unstructured data, including web logs, audio, video, pictures and geographic location information, and these multiple data types place higher requirements on data processing capabilities.

Low data value density : The level of data value density is inversely proportional to the size of the total amount of data. Take video as an example. For a one-hour video, under continuous and uninterrupted monitoring, the useful data may only be one or two seconds. How to "purify" the value of data more quickly through powerful machine algorithms has become a difficult problem to be solved in the current context of big data.

Fast data processing speed : In order to quickly mine data value from massive amounts of data, it is generally required to process different types of data quickly. This is the most significant feature of big data that distinguishes it from traditional data mining.

2.Key technologies

As an emerging technology in the information age, big data technology is in a rapid development stage and involves many aspects such as data processing, management, and application. Specifically, the technical architecture studies and analyzes the acquisition, management, distributed processing and application of big data from a technical perspective. The technical architecture of big data is closely related to the specific implemented technical platform and framework. Different technical platforms determine different technical architecture and implementation. Generally speaking, the big data technology architecture mainly includes big data acquisition technology, distributed data processing technology and big data management technology, as well as big data application and service technology.

1) Big data acquisition technology

At present, research on big data acquisition mainly focuses on three aspects: data collection, integration and cleaning . Data acquisition technology achieves the acquisition of data sources, and then ensures data quality through integration and cleaning technology.

Data collection technology mainly obtains data information from websites through distributed crawling, distributed high-speed and high-reliability data collection, and high-speed whole-network data imaging technology . In addition to the content contained in the network, the collection of network traffic can be processed using bandwidth management technologies such as DPI or DFI. 

Data integration technology builds on data collection and entity recognition to achieve high-quality integration of data into information. It includes multi-source and multi-modal information integration models, intelligent conversion models for heterogeneous data, intelligent pattern extraction and pattern matching algorithms for heterogeneous data integration, automatic fault-tolerant mapping and conversion models and algorithms, correctness verification methods for integrated information, and usability assessment methods for integrated information. Data cleaning technology generally removes unreasonable and erroneous data according to correctness conditions and data constraint rules, repairs important information, and ensures data integrity. It includes data correctness semantic models, association models and data constraint rules, data error models and error-recognition learning frameworks, automatic detection and repair algorithms for different error types, and evaluation models and methods for error detection and repair results.
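
To show how such constraint rules translate into concrete cleaning steps, here is a small sketch assuming the third-party pandas library is installed; the column names and rules (a required id field, an age range of 0 to 120, duplicate removal) are invented for illustration.

```python
# Small data-cleaning sketch, assuming pandas ("pip install pandas").
# Column names and constraint rules are illustrative only.
import pandas as pd

raw = pd.DataFrame({
    "user_id": [1, 2, 2, 3, 4, None],
    "age":     [34, 27, 27, 150, 29, 41],          # 150 violates the 0..120 rule
    "city":    ["Beijing", "Shanghai", "Shanghai", "Guangzhou", None, "Shenzhen"],
})

cleaned = (
    raw
    .dropna(subset=["user_id"])              # integrity rule: user_id is required
    .drop_duplicates()                       # remove exact duplicate records
    .query("age >= 0 and age <= 120")        # correctness rule on the age field
    .fillna({"city": "unknown"})             # repair a missing, non-critical field
)
print(cleaned)
```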

2) Distributed data processing technology

Distributed computing emerged with the development of distributed systems. Its core idea is to decompose a task into many small parts and assign them to multiple computers for processing; through parallel work, it achieves the goal of saving overall computing time and improving computing efficiency. Currently, the mainstream distributed computing systems include Hadoop, Spark and Storm: Hadoop is often used for offline, complex big data processing, Spark for offline, fast big data processing, and Storm for online, real-time big data processing. Big data analysis and mining technology mainly refers to improving existing data mining and machine learning techniques; developing new data mining techniques such as data network mining, specific-group mining and graph mining; innovating big data fusion techniques such as object-based data connection and similarity connection; and making breakthroughs in domain-oriented big data mining techniques such as user interest analysis, network behavior analysis and emotional semantic analysis.
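
The following pure-Python toy mimics the split, parallel map, and merge (reduce) idea behind Hadoop/Spark-style processing. It runs locally on a thread pool with made-up documents and is only an illustration of the principle, not a real distributed framework.

```python
# Split -> parallel map -> merge (reduce), simulated locally with a thread pool.
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

documents = [
    "big data needs distributed processing",
    "distributed processing splits big tasks",
    "small tasks run in parallel",
]

def map_word_count(doc: str) -> Counter:
    """Map step: each 'worker' counts words in its own slice of the data."""
    return Counter(doc.split())

def reduce_counts(partials) -> Counter:
    """Reduce step: merge the partial results from all workers."""
    total = Counter()
    for part in partials:
        total.update(part)
    return total

with ThreadPoolExecutor(max_workers=3) as pool:
    partial_counts = list(pool.map(map_word_count, documents))

print(reduce_counts(partial_counts).most_common(3))
```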

3) Big data management technology

Big data management technology mainly focuses on big data storage, big data collaboration, security and privacy . Big data storage technology covers three main approaches. ① New database clusters based on the MPP architecture realize big data storage through technologies such as column-oriented storage and coarse-grained indexes together with efficient distributed computing models; ② relevant big data technologies derived around Hadoop are used to cope with data and scenarios that traditional relational databases find difficult to handle, supporting big data storage and analysis by extending and encapsulating Hadoop; ③ big data all-in-one machines, which integrate servers, storage devices, operating systems and database management systems, achieve good stability and scalability.

Collaborative management technology of multiple data centers is another important direction of big data research. Through the distributed workflow engine, workflow scheduling and load balancing are realized, and storage and computing resources of multiple data centers are integrated to provide support for building a big data service platform.

Research on big data privacy technology mainly focuses on new data release technologies, trying to maximize user privacy while minimizing the loss of data information. There is a contradiction between the amount of data information and privacy, and there is currently no very good solution.

4) Big data application and service technology

Big data application and service technologies mainly include analysis application technology and visualization technology . Big data analysis applications are mainly business-oriented analysis applications. Based on the analysis and mining of distributed massive data, big data analysis application technology is driven by business needs, carries out special data analysis for different types of business needs, and provides users with highly available and easy-to-use data analysis services.

Visualization helps people explore and understand complex data through interactive visual representations. Big data visualization technology mainly focuses on text visualization technology, network (graph) visualization technology, spatiotemporal data visualization technology, multi-dimensional data visualization and interactive visualization, etc. In terms of technology, the main focus is on In Situ Interactive Analysis, data representation, uncertainty quantification and domain-oriented visualization tool libraries.

3.Application and development

Big data, like water, ore and oil, is becoming a new resource and an important element of social production. Mining potential value from data resources has become a hot research topic in the big data era. How to quickly collect, store and correlate huge amounts of data with scattered sources and diverse formats, so as to discover new knowledge, create new value and gain new capabilities, is an important manifestation of its application value.

(1) In the Internet industry, the widespread use of the Internet and social networks has penetrated all aspects of work and life, integrating the generation, application and servicing of massive data. Everyone is a producer, user and beneficiary of data. Mining user behavior from large amounts of data and feeding it back to the business side supports more accurate social marketing and advertising, which can increase business revenue and promote business development. At the same time, with the massive generation, analysis and application of data, data itself has become a tradable asset, and big data trading and data capitalization have become valuable fields and directions.

(2) In the field of government public data, through the collection, management and integration of big data, the information gathered by various departments can be analyzed and shared to discover management deficiencies, improve law enforcement, increase fiscal and tax revenue, and strengthen market supervision. This has greatly changed the government management model, saved government investment, strengthened market management, and improved the level of social governance, urban management capabilities and the capacity to serve the people.

(3) In the financial field, big data credit reporting is an important application field. Through the analysis and profiling of big data, the combination of personal credit and financial services can be achieved, thereby serving trust management, risk control management, lending services, etc. in the financial field, and providing effective support for financial businesses.

(4) In the industrial field, analysis of massive data can provide accurate guidance for the industrial production process. For example, in shipping, big data can be used to predict and analyze the international trade volume of future routes and forecast the popularity of cargo flows at each port; weather data can be used to analyze impacts on routes, issue early warnings for related businesses, adjust routes, and optimize resource allocation plans to avoid unnecessary losses.

(5) In the field of social and people's livelihood, the analysis and application of big data can better serve people's livelihood. Taking disease prediction as an example, based on the accumulation and intelligent analysis of big data, the distribution of onset times and locations can be seen from people's searches for influenza, hepatitis, tuberculosis and other diseases, and a prediction model can be built from factors such as temperature changes, environmental indexes and population movements, providing public health managers with trend predictions for various infectious diseases and helping them make preventive deployments in advance.

2.2.4 Blockchain

The concept of "blockchain" was first proposed in 2008 in "Bitcoin: A Peer-to-Peer Electronic Cash System" and has been successfully applied in the data encryption currency system of the Bitcoin system. It has become a focus of governments, organizations and scholars. Blockchain technology, a hot topic of research and development, has the characteristics of multi-centered storage, privacy protection, and tamper-proofing. It provides an open, decentralized, and fault-tolerant transaction mechanism and has become the core of a new generation of anonymous online payments, remittances, and digital asset transactions. It is widely used Applied to major trading platforms, it has brought profound changes to the fields of finance, regulatory agencies, technological innovation, agriculture, and politics.

1. Technical basis

Blockchain can be understood as a distributed database technology that is based on asymmetric encryption algorithms, uses an improved Merkle tree as its data structure, and combines technologies such as consensus mechanisms, peer-to-peer networks and smart contracts. Blockchains are divided into four categories: public blockchain (Public Blockchain), consortium blockchain (Consortium Blockchain), private blockchain (Private Blockchain) and hybrid blockchain (Hybrid Blockchain).

Generally speaking, typical characteristics of blockchain include:

Multi-centralization: The verification, accounting, storage, maintenance and transmission of on-chain data all rely on a distributed system structure. Purely mathematical methods, rather than a central organization, are used to build trust relationships among multiple distributed nodes, thereby establishing a trusted distributed system.

Multi-party maintenance: An incentive mechanism ensures that all nodes in the distributed system can participate in the verification of data blocks, and a consensus mechanism selects specific nodes to add newly generated blocks to the blockchain.

Time series data: The blockchain uses a chain structure with timestamp information to store data, adding a time dimension to the data and thereby making it traceable.

Smart contracts: Blockchain technology can provide users with flexible and variable script code to support the creation of new smart contracts.

Cannot be tampered with: In a blockchain system, each block can be verified against the blocks that follow it, so if the data of a certain block is tampered with, that block and all subsequent blocks must be modified recursively. The cost of each hash recalculation is huge and would have to be completed within a limited time, which ensures that on-chain data cannot be modified (a minimal sketch follows this list).

Open consensus: In a blockchain network, every physical device can act as a node, and any node can join freely and hold a complete copy of the database.

Safe and trustworthy: Data security is achieved by encrypting on-chain data with asymmetric encryption technology. The nodes of the distributed system use the computing power organized by the blockchain consensus algorithm to resist external attacks and ensure that on-chain data cannot be modified or forged, thus providing high confidentiality, credibility and security.
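
The minimal sketch below uses Python's standard hashlib to show why hash-linked blocks are tamper-evident: changing one block breaks verification of every later block. The block fields and transaction strings are made up for illustration.

```python
# Minimal sketch of hash-linked blocks: each block stores the hash of the previous
# block, so tampering with any block breaks verification of every later block.
import hashlib, json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_chain(transactions):
    chain, prev = [], "0" * 64                      # genesis "previous hash"
    for i, tx in enumerate(transactions):
        block = {"index": i, "tx": tx, "prev_hash": prev}
        prev = block_hash(block)
        chain.append(block)
    return chain

def verify(chain) -> bool:
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:              # link to predecessor must match
            return False
        prev = block_hash(block)
    return True

chain = make_chain(["A pays B 5", "B pays C 2", "C pays D 1"])
print(verify(chain))            # True: untouched chain verifies

chain[1]["tx"] = "B pays C 200" # tamper with one historical block
print(verify(chain))            # False: the next block's link no longer matches
```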

2.Key technologies

From the perspective of the blockchain technical system, blockchain is built on underlying technologies for data processing, management and storage, including block data management, chain-structured data, digital signatures, hash functions, Merkle trees and asymmetric encryption. On top of a P2P-based peer-to-peer network, nodes are organized to participate in data dissemination and verification; each node takes on functions such as network routing, verifying block data, disseminating block data, recording transaction data and discovering new nodes, covering both the propagation mechanism and the verification mechanism. To ensure the security of the blockchain application layer, consensus is reached in the most efficient way among the nodes of the entire distributed network through the issuance and distribution mechanisms of the incentive layer.
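
As a short illustration of the Merkle tree mentioned above, the sketch below computes a Merkle root over a few made-up transactions: pairs of hashes are combined level by level until a single root remains, and an odd leftover hash is duplicated (a common convention). It is a simplification for illustration only.

```python
# Merkle root computation over transaction data (simplified).
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(transactions) -> str:
    level = [sha256(tx) for tx in transactions]     # leaf hashes
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])                 # duplicate the odd hash out
        level = [sha256(level[i] + level[i + 1])    # hash each adjacent pair
                 for i in range(0, len(level), 2)]
    return level[0].hex()

txs = [b"A pays B 5", b"B pays C 2", b"C pays D 1"]
print(merkle_root(txs))

# Changing any single transaction changes the root, so the root in a block
# header commits to every transaction in the block.
print(merkle_root([b"A pays B 50", b"B pays C 2", b"C pays D 1"]))
```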

1) Distributed ledger

The distributed ledger is one of the cores of blockchain technology . Its core idea is that transaction accounting is completed jointly by multiple nodes distributed in different places, and each node keeps a copy of the single, authentic ledger; the nodes can participate in supervising the legality of transactions and can also jointly testify to them. Any change in the ledger is reflected in all copies within minutes or even seconds. Because there are enough accounting nodes, in theory the entire distributed ledger system is highly robust unless all nodes are destroyed, thus ensuring the security of the ledger data.

The assets stored in a distributed ledger are legally recognized assets, such as financial, physical, electronic and other valuable assets in any form. To ensure the security and accuracy of these assets, the distributed ledger controls access rights to the ledger through public and private keys and signatures on the one hand; on the other hand, according to the consensus rules, updates to the ledger can be completed by one participant, some participants, or all participants together.

Distributed ledger technology can ensure the security and accuracy of assets and has a wide range of application scenarios, especially in public services, where it can redefine the relationship between government and citizens in terms of data sharing, transparency and trust. It has been widely used in financial transactions, government taxation, land ownership registration, passport management, social welfare and other fields.

2) Encryption algorithm

The encryption of block data is the focus of blockchain research and attention. Its main function is to ensure the security of block data during network transmission, storage and modification. Encryption algorithms in blockchain systems are generally divided into hashing (hash) algorithms and asymmetric encryption algorithms.

The hash algorithm, also called a data digest algorithm, converts a piece of information into a fixed-length string with the following characteristics: if two pieces of information are identical, the resulting strings are identical; even if two pieces of information are very similar, as long as they differ at all, the resulting strings appear messy, random, and bear no relationship to each other.

In essence, the purpose of the hash algorithm is not to "encrypt" but to extract "data features". The hash value of a given data can also be understood as the "fingerprint information" of the data. Typical hashing algorithms include MD5, SHA-1/SHA-2 and SM3. Currently, blockchains mainly use the SHA256 algorithm in SHA-2.
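The following small example uses Python's standard hashlib to show the fixed-length digest and the "fingerprint" property described above; the two input strings are invented for illustration.

```python
# SHA-256 digests via Python's standard hashlib: the output length is fixed,
# and two nearly identical inputs produce completely unrelated digests.
import hashlib

a = "blockchain stores this transaction"
b = "blockchain stores this transaction!"   # differs by a single character

print(hashlib.sha256(a.encode()).hexdigest())
print(hashlib.sha256(b.encode()).hexdigest())
# Both digests are 64 hex characters (256 bits) but share no visible pattern,
# which is why a digest can serve as a compact "fingerprint" of the data.
```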

An asymmetric encryption algorithm is an encryption method built on a corresponding pair of unique keys (i.e., a public key and a private key). Anyone who knows a user's public key can use it to encrypt information and thereby interact securely with that user. Because of the dependency between the public key and the private key, only the user can decrypt the information; no unauthorized user, not even the sender of the information, can decrypt it. Commonly used asymmetric encryption algorithms include RSA, Elgamal, DH, and ECC (the elliptic curve encryption algorithm).
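
Here is a public-key encryption and decryption sketch assuming the third-party "cryptography" package (`pip install cryptography`); the message, key size and padding choice are illustrative, and real blockchain systems typically use signatures over elliptic curves rather than RSA message encryption.

```python
# Public-key encryption/decryption sketch, assuming the "cryptography" package.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()          # safe to publish

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

ciphertext = public_key.encrypt(b"transfer 5 coins to B", oaep)   # anyone can encrypt
plaintext = private_key.decrypt(ciphertext, oaep)                 # only the key owner can decrypt
print(plaintext)
```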

3) Consensus mechanism

In digital currency, a typical application of blockchain, a series of security and management issues must be faced, for example: How can fraud be prevented? How is the order in which block data is transmitted to the various distributed nodes controlled? How are data corruption and loss during transmission handled? How do nodes deal with erroneous or forged information? How is consistency of information updates and synchronization between nodes ensured? These are the so-called blockchain consensus issues.

Blockchain consensus issues are solved through the blockchain consensus mechanism . In the Internet world, consensus is the basic guarantee for the collaboration of computers and software programs, and the fundamental basis on which the nodes or programs of a distributed system operate. A consensus algorithm ensures that distributed computers or software programs work together and respond correctly to the system's inputs and outputs.

The idea of the blockchain consensus mechanism is as follows: in the absence of overall coordination by a central point, when an accounting node proposes an increase or decrease of block data and broadcasts the proposal to all participating nodes, all nodes must follow certain rules and mechanisms to calculate and decide whether the proposal can reach consensus.
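
As a toy illustration of the proof-of-work style of consensus named in the next paragraph, the sketch below has a node search for a nonce that makes the block hash start with a required number of zero hex digits; the difficulty value and block content are made up, and real systems are far more involved.

```python
# Toy proof-of-work sketch: repeatedly try nonces until the block's hash
# starts with a required number of zero hex digits.
import hashlib

def proof_of_work(block_data: str, difficulty: int = 4):
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}|{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):     # hard to find, easy to verify
            return nonce, digest
        nonce += 1

nonce, digest = proof_of_work("block 42: A pays B 5")
print(nonce, digest)
# Any other node can verify the proposal with a single hash computation,
# which is how the winning node's block gets accepted by the network.
```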

Currently, commonly used consensus mechanisms mainly include PoW, PoS, DPoS, Paxos, PBFT, etc. Based on the characteristics of the various consensus mechanisms in different blockchain application scenarios, they can be analyzed along the following dimensions:

Compliance supervision : whether super-authority nodes are supported to supervise all nodes and data in the network.

Performance efficiency : the efficiency with which transactions reach consensus and are confirmed.

Resource consumption : the CPU, network I/O, storage and other resources consumed during the consensus process.

Fault tolerance : the ability to withstand attacks and fraud.

3.Application and development

Currently, the TCP/IP protocol suite is the "handshake protocol" of the global Internet. It has turned the concept of "multi-centralization and distribution" into executable programs, and more similar protocols have been derived on this basis. Looking back at the development of Internet technology, however, it is clear that current Internet technology has successfully achieved the multi-centering of information but not the multi-centering of value. In other words, the activities that can be decentralized on the Internet are those that do not require credit endorsement, while activities that require credit guarantees remain centralized and rely on third-party intermediaries. As a result, Internet technology, unable to establish global credit, has encountered obstacles in its development: people cannot participate in value exchange activities on the Internet in a multi-centered way.

From the perspective of blockchain technology research: ① in terms of consensus mechanisms, how to solve the problems of authority control, consensus efficiency, constraints and fault tolerance in public, private and consortium chains, and how to find safer and better consensus algorithms and decision-making approaches for typical scenarios, will be the focus of research; ② in terms of security algorithms, most of the algorithms currently used are traditional ones with potential "backdoor" risks, and their strength needs to be continuously upgraded; in addition, the security issues arising from management security, privacy protection, lack of supervision and new technologies (such as quantum computing) need to be taken seriously; ③ in the field of blockchain governance, a core issue is how to combine the existing information technology governance system to study the strategy, organization and structure of the blockchain and all aspects of blockchain application systems, to study the environment and culture, technologies and tools, and processes and activities involved in implementing blockchain, and thereby to realize the value of blockchain and carry out audits in related areas; ④ as the technology matures, research on blockchain standardization is also an important consideration.

(1) Blockchain will become one of the basic protocols of the Internet. In essence, the Internet, like the blockchain, is a multi-centered network with no single "center of the Internet". The difference is that the Internet is an efficient information transmission network that does not care about the ownership of information and has no built-in mechanism for protecting valuable information, whereas blockchain, as a protocol that can transfer ownership, will build a new basic protocol layer on top of the existing Internet protocol architecture. From this perspective, blockchain (as a protocol) will, like the Transmission Control Protocol/Internet Protocol (TCP/IP), become a basic protocol of the future Internet, building an efficient, multi-centered network for storing and transferring value.

(2) Different layers of the blockchain architecture will carry different functions. Similar to the layered structure of the TCP/IP protocol stack, people have developed various application layer protocols on top of the unified transport layer protocol, ultimately building today's colorful Internet. In the future, the blockchain structure will also develop various application layer protocols based on a unified, multi-centralized underlying protocol.

(3) The application and development of blockchain is showing a spiral upward trend. Like the development of the Internet, it will go through periods of overheating and even bubbles, and it will integrate with traditional industries through disruptive technological change. As a core technology of the next stage of the digital wave, blockchain will have a longer development cycle than expected, and the scope and depth of its impact will far exceed people's imagination. It will build a diversified, ecological Internet of value, thereby profoundly changing the structure of future business society and everyone's life.

2.2.5 Artificial Intelligence

Artificial intelligence is a technical science that studies and develops theories, methods, technologies and application systems for simulating, extending and expanding human intelligence. The concept has developed and evolved over more than half a century since it was proposed in 1956. At the beginning of the 21st century, with the rapid iteration and progress of big data, high-performance computing and deep learning technology, artificial intelligence entered a new round of development boom. Its powerful enabling effect has had a significant and far-reaching impact on economic development, social progress, and the international political and economic landscape, and it has become an important driving force for a new round of scientific and technological revolution and industrial transformation.

1. Technical basis

From its emergence to the present, the development of artificial intelligence has gone through six main stages: the initial development period (1956 to the early 1960s), the reflective development period (the 1960s to the early 1970s), the application development period (the early 1970s to the mid-1980s), the sluggish development period (the mid-1980s to the mid-1990s), the rapid development period (the mid-1990s to 2010), and the booming development period (2011 to the present).

From an analysis of current artificial intelligence technology, its research mainly focuses on three aspects : hot technologies, common technologies and emerging technologies . Among them, the optimization, improvement and practice of basic algorithms represented by machine learning, together with new learning methods such as transfer learning, reinforcement learning, multi-kernel learning and multi-view learning, are hot topics of research and exploration. Research on basic technologies and models related to natural language processing, such as feature extraction, semantic classification and word embedding, as well as applied research on intelligent question answering and machine translation, has also achieved many results. Systematic analysis based on knowledge graphs and expert systems is also constantly making breakthroughs, greatly expanding the application scenarios of artificial intelligence and having an important potential impact on its future development.

2.Key technologies

The key technologies of artificial intelligence mainly involve machine learning, natural language processing, expert systems and other technologies . With the deepening of the application of artificial intelligence, more and more emerging technologies are also developing rapidly.

1) Machine learning

Machine learning is a technology that automatically fits a model to data and "learns" from the data by training the model. Research on machine learning mainly focuses on machine learning algorithms and applications, reinforcement learning algorithms, approximation and optimization algorithms, and planning problems. Improvement research on basic algorithms such as regression, clustering, classification, approximation, estimation and optimization, together with learning methods such as transfer learning, multi-kernel learning and multi-view learning, constitutes the current research hotspots.

Neural networks are a form of machine learning that emerged in the 1960s. They are used in classification applications and analyze problems in terms of inputs, outputs, and variable weights or "features" that relate inputs to outputs, in a way loosely similar to how neurons process signals. Deep learning is a neural network model that predicts results through many layers of features and variables; thanks to the faster processing speed of current computer architectures, such models can handle thousands of features. Unlike earlier forms of statistical analysis, each individual feature in a deep learning model typically has little meaning to a human observer, which makes the model difficult to use and difficult to interpret. Deep learning models are trained with a technique called backpropagation and produce predictions or classifications through the trained network. Reinforcement learning is another form of machine learning, in which the learning system is given a goal and receives some form of reward for each step it takes toward that goal.
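
To make the weight-adjustment idea behind backpropagation concrete, here is a tiny two-layer network trained on the XOR problem using only NumPy; the layer sizes, learning rate, epoch count and random seed are arbitrary illustrative choices, not a recipe for real deep learning.

```python
# Tiny two-layer neural network trained with backpropagation on XOR, using NumPy only.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)          # XOR targets

W1 = rng.normal(size=(2, 8)); b1 = np.zeros((1, 8))      # input -> hidden
W2 = rng.normal(size=(8, 1)); b2 = np.zeros((1, 1))      # hidden -> output

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5

for epoch in range(10000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # backward pass: propagate the output error back through each layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # gradient-descent weight updates
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(3).ravel())   # predictions should approach [0, 1, 1, 0]
```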

Machine learning models are based on statistics and should be compared with conventional analysis to determine their incremental value.

They tend to be more accurate than traditional "hand-made" analytical models based on human assumptions and regression analysis, but are also more complex and difficult to interpret.

Compared with traditional statistical analysis, automated machine learning models are easier to create and can reveal more data details.

2) Natural language processing

Natural Language Processing (NLP) is an important direction in the fields of computer science and artificial intelligence. It studies the theories and methods that enable effective communication between humans and computers in natural language. Natural language processing is a science that integrates linguistics, computer science and mathematics. Research in this field involves natural language, that is, the language people use every day, so it is closely related to linguistics, though with important differences: natural language processing is not the general study of natural language, but the development of computer systems, and particularly the software within them, that can communicate effectively in natural language. It is therefore part of computer science.

Natural language processing is mainly used in machine translation, public opinion monitoring, automatic summarization, opinion extraction, text classification, question answering, text semantic comparison, speech recognition, Chinese OCR, etc.

Natural language processing (that is, realizing natural language communication between humans and machines, or realizing natural language understanding and natural language generation) is very difficult. The fundamental reason is the widespread ambiguity and polysemy that exist at every level of natural language text and dialogue. The core problems addressed by natural language processing are information extraction, automatic summarization, word segmentation, recognition and transformation, and so on, which are used to handle the effective definition of content, disambiguation, defective or non-standard input, and the understanding of and interaction with language behavior. Currently, deep learning is an important technical support for natural language processing: deep learning models such as convolutional neural networks and recurrent neural networks are applied in natural language processing to complete the processes of classifying and understanding natural language by learning the generated word vectors.
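
The following sketch shows one of the simplest NLP pipelines, bag-of-words text classification, assuming the third-party scikit-learn library is installed; the tiny dataset and labels are made up, and production systems rely on far larger corpora and, increasingly, on the deep learning models mentioned above.

```python
# Bag-of-words text classification sketch, assuming scikit-learn is installed.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

texts = [
    "the delivery was fast and the product is great",
    "excellent quality, very satisfied",
    "terrible service and the item arrived broken",
    "very disappointed, complete waste of money",
]
labels = ["positive", "positive", "negative", "negative"]

vectorizer = CountVectorizer()                 # text -> word-count vectors
X = vectorizer.fit_transform(texts)

clf = MultinomialNB().fit(X, labels)           # learn word/label statistics

test = ["great quality and fast service"]
print(clf.predict(vectorizer.transform(test))) # expected: ['positive']
```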

3) Expert system

An expert system is an intelligent computer program system that usually consists of six parts: a human-computer interaction interface, a knowledge base, an inference engine, an interpreter, a comprehensive database, and a knowledge acquisition component. It contains a large amount of expert-level knowledge and experience in a certain field and can apply artificial intelligence and computer technology to reason and judge on the basis of the knowledge and experience in the system, simulating the decision-making process of human experts in order to solve complex problems that would otherwise require human experts. In short, an expert system is a computer program system that simulates human experts in solving problems in a specific domain.
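
In the spirit of the knowledge base and inference engine components described above, here is a toy forward-chaining inference engine: a set of if-then rules plus a loop that keeps deriving new facts until nothing changes. The rules and facts are invented solely for illustration.

```python
# Toy forward-chaining inference engine: a knowledge base of if-then rules plus an
# inference loop that keeps applying rules until no new facts appear.
RULES = [
    ({"fever", "cough"},           "suspected_flu"),
    ({"suspected_flu", "fatigue"}, "recommend_rest"),
    ({"suspected_flu"},            "recommend_fluid_intake"),
]

def infer(facts):
    """Inference engine: fire every rule whose conditions are satisfied."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)      # derive a new fact
                changed = True
    return derived

print(infer({"fever", "cough", "fatigue"}))
# -> includes 'suspected_flu', 'recommend_rest' and 'recommend_fluid_intake'
```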

In the development of artificial intelligence, expert systems have gone through three generations and are now transitioning toward a fourth. First-generation expert systems were highly specialized and strong at solving narrow, specialized problems, but they were deficient in the completeness of the system structure, portability, transparency and flexibility, and their problem-solving ability was weak. Second-generation expert systems were single-subject, specialized, application-oriented systems with a relatively complete structure and improved portability; they also made progress in the human-computer interface, the explanation mechanism, knowledge acquisition technology and uncertainty reasoning technology, and enhanced the heuristics and generality of expert-system knowledge representation and reasoning methods. Third-generation expert systems are multi-disciplinary comprehensive systems that use multiple artificial intelligence languages, comprehensively adopt various knowledge representation methods, reasoning mechanisms and control strategies, and employ knowledge engineering languages, skeleton systems, and expert system development tools and environments to build large-scale comprehensive expert systems.

Current research on expert systems has entered the fourth stage, focusing mainly on large-scale multi-expert collaboration systems, multiple knowledge representations, comprehensive knowledge bases, self-organizing problem-solving mechanisms, multi-disciplinary collaborative problem solving and parallel reasoning, expert system tools and environments, and artificial neural network knowledge acquisition and learning mechanisms.

3.Application and development

After more than 60 years of development, artificial intelligence has made important breakthroughs in algorithms, computing power and data. It is at a technological turning point from "unusable" to "usable", but there are still many bottlenecks on the way to being "easy to use". The leap from specialized artificial intelligence to general artificial intelligence is not only an inevitable trend in the development of the next generation of artificial intelligence, but also a major challenge in research and application, and it points to the direction of future application and development.

(1) From artificial intelligence to human-machine hybrid intelligence. Drawing on the research results of brain science and cognitive science is an important research direction of artificial intelligence. Human-machine hybrid intelligence aims to introduce human functions or cognitive models into artificial intelligence systems to improve their performance, making artificial intelligence a natural extension and expansion of human intelligence and solving complex problems more efficiently through human-machine collaboration.

(2) From "artificial + intelligence" toward autonomous intelligent systems. A large amount of current research in artificial intelligence focuses on deep learning, but a limitation of deep learning is that it requires a great deal of manual intervention, such as manually designing deep neural network models, manually setting application scenarios, manually collecting and labeling large amounts of training data, and requiring users to adapt intelligent systems manually, all of which is very time-consuming and labor-intensive. Therefore, researchers have begun to pay attention to autonomous intelligence methods that reduce manual intervention and improve a machine's ability to learn autonomously from its environment.

(3) Artificial intelligence will accelerate cross-penetration with other subject areas. Artificial intelligence itself is a comprehensive cutting-edge discipline and a highly interdisciplinary composite discipline. The research scope is wide and extremely complex. Its development requires deep integration with disciplines such as computer science, mathematics, cognitive science, neuroscience, and social science. With the help of breakthroughs in biology, brain science, life science, psychology and other disciplines, mechanisms can be turned into computable models, and artificial intelligence will deeply interpenetrate with more disciplines.

(4) The artificial intelligence industry will flourish. With the further maturity of artificial intelligence technology and the increasing investment from the government and industry, the cloudization of artificial intelligence applications will continue to accelerate, and the scale of the global artificial intelligence industry will enter a period of rapid growth in the next 10 years. The innovation model of "artificial intelligence +

(5) The sociology of artificial intelligence will be on the agenda. In order to ensure the healthy and sustainable development of artificial intelligence and make its development results benefit the people, it is necessary to systematically and comprehensively study the impact of artificial intelligence on human society from a sociological perspective, formulate and improve artificial intelligence laws and regulations, and avoid possible risks, aiming to “Promote and develop friendly artificial intelligence in a way that benefits humanity as a whole.”

2.2.6 Virtual reality

Since the creation of computers, the computer has been the main body of the traditional information processing environment, which is inconsistent both with the human cognitive space and with the information space in which computers handle problems. How to connect human perceptual and cognitive experience directly with the computer's information processing environment is an important background for the emergence of virtual reality. Virtual reality technology emerged in answer to the question of how to build an information space that can accommodate multiple information sources such as images, sound and smell, and integrate it with the human spaces of vision, hearing, smell, voice commands, gestures and so on.

1. Technical basis

Virtual Reality (VR) is a computer system that can create and experience a virtual world (the virtual world being the general term for the entire virtual environment). The information space established by a virtual reality system is no longer a simple digital information space but a multi-dimensional information space (cyberspace) that accommodates many kinds of information, in which human perceptual and rational cognitive abilities can be fully exercised. To build a virtual reality system that gives participants an immersive experience and full interaction capabilities, high-performance computer hardware and software and various advanced sensors are required on the hardware side, while on the software side it is mainly necessary to provide a toolset for generating the simulated environment.

The main characteristics of virtual reality technology include immersion, interactivity, multi-sensory perception, imagination (also called conceivability) and autonomy . With the rapid development of virtual reality technology, and according to the degree of "immersion" and of interaction, virtual reality has evolved from desktop virtual reality systems, immersive virtual reality systems and distributed virtual reality systems toward augmented reality (Augmented Reality, AR) systems and the metaverse.

2.Key technologies

The key technologies of virtual reality mainly involve human-computer interaction technology, sensor technology, dynamic environment modeling technology and system integration technology.

1) Human-computer interaction technology

Human-computer interaction technology in virtual reality differs from the traditional keyboard-and-mouse interaction mode. It is a new type of three-dimensional interaction technology that uses VR glasses, control handles and other sensor devices to let users truly feel the presence of the things around them; combining this three-dimensional interaction with speech recognition, voice input and other devices for monitoring user behavior forms the current mainstream human-computer interaction approach.

2) Sensor technology

The progress of VR technology is constrained by the development of sensor technology, and the shortcomings of existing VR devices are closely related to sensor sensitivity. For example, VR headsets (i.e., VR glasses) are too heavy, have low resolution and slow refresh rates, and can easily cause visual fatigue; devices such as data gloves suffer from long delays and insufficient sensitivity. Sensor technology is therefore key to VR achieving good human-computer interaction.

3) Dynamic environment modeling technology

The design of virtual environment is an important part of VR technology, which uses three-dimensional data to build a virtual environment model. Currently, the commonly used virtual environment modeling tool is Computer Aided Design (CAD). Operators can obtain the required data through CAD technology, and use the obtained data to build a virtual environment model that meets actual needs. In addition to obtaining three-dimensional data through CAD technology, visual modeling technology can also be used in most cases. The combination of the two can obtain data more effectively.

4) System integration technology

Integration technologies in VR systems include information synchronization, data conversion, model calibration, and recognition and synthesis. Since VR systems store large amounts of voice input information, perception information and data models, integration technologies are becoming more and more important in VR systems.

3.Application and development

(1) Hardware performance optimization and iteration are accelerating . Thinner, lighter and ultra-high-definition devices have accelerated the expansion of the virtual reality terminal market and opened new space for the explosive growth of the virtual reality industry. Performance indicators of virtual reality devices such as display resolution, frame rate, degrees of freedom, latency, interaction performance, weight and dizziness are being continuously optimized, and the user experience keeps improving.

(2) The development of network technology effectively promotes its application . Ubiquitous network communications and high network speeds have effectively improved the experience of virtual reality technology on the application side. Together with lightweight terminals, mobile 5G technology, with its high peak rates, millisecond-level transmission latency and capacity for hundreds of billions of connections, has reduced the requirements placed on the virtual reality terminal side.

(3) The integration of virtual reality industry elements is accelerating . Elements such as technology and talent are converging in multiple dimensions, core technologies of the virtual reality industry continue to make breakthroughs, and a relatively complete virtual reality industry chain has formed. The industry is moving from innovative applications toward normalized applications and is widely used in fields such as stage art, smart sports viewing, the promotion of new culture, education and medical care. "Virtual reality + trade exhibitions" has become the new normal in the post-epidemic era, "virtual reality + industrial production" is a new driving force for the digital transformation of organizations, "virtual reality + smart life" has greatly improved the future intelligent life experience, and "virtual reality + entertainment and leisure" has become a new carrier of new information consumption models.

(4) Emerging concepts such as the metaverse have brought new business concepts such as "immersion and superposition", "radical and progressive" and "open and closed" to virtual reality technology, greatly enhancing its application value and social value. They will gradually change the physical rules of the real world that people are accustomed to, stimulate industrial technological innovation in new ways, and drive the transformation and upgrading of related industries with new models and new business formats.
