Software exam notes for the advanced system architect: data communications and computer networks

concept

OSPF

After an OSPF network is divided into areas, routers in a non-backbone area must reach external networks through the ABR (Area Border Router). In that case the routers inside the area do not need detailed routes to the external networks: the ABR only needs to advertise a default route into the area, telling the other routers that traffic to external networks can simply be sent to the ABR. The routers in the area then only need a few intra-area routes, the inter-area routes for the rest of the AS, and a default route pointing to the ABR, instead of recording all external routes. This simplifies the routing tables within the area and reduces the load and performance requirements placed on the routers. This is the design concept of the Stub Area in the OSPF routing protocol.

RSVP

RSVP reserves network resources for an application before it starts sending data, and the reservation is unidirectional: the receiver initiates the resource reservation request and maintains the reservation state. When resources are reserved for a flow, a resource request (Path) message is sent hop by hop along the data transmission direction; it carries the sender's requirements for bandwidth, delay, and other parameters. Each router that receives the request records it and forwards the Path message to the next hop. When the message reaches the destination, the receiver sends a reservation (Resv) message hop by hop in the reverse direction, so that the routers along the path reserve the requested resources.

DHCP

When 50% of the lease period has elapsed, the client sends a unicast DHCP REQUEST message directly to the DHCP server that provided the IP address. If the client receives a DHCP ACK from the server, it updates its configuration with the new lease period and any other updated TCP/IP parameters carried in the packet, and the lease renewal is complete. If no reply is received from the server, the client continues to use the existing IP address, since half of the current lease period still remains.

If the renewal at 50% of the lease fails, the client tries again when 87.5% of the lease period has elapsed, this time broadcasting a DHCP REQUEST message so that any DHCP server can renew the lease. If that is still unsuccessful, the client must give up the IP address when the lease reaches 100% and apply again from scratch. If no DHCP server is available at that point, the client randomly selects an address from 169.254.0.0/16 and retries every 5 minutes.
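
To make these thresholds concrete, here is a minimal sketch (the helper function is hypothetical, not part of any DHCP library) that computes the renewal checkpoints for a given lease:

```python
from datetime import datetime, timedelta

def dhcp_renewal_times(lease_start: datetime, lease_seconds: int):
    """Compute the standard DHCP renewal checkpoints for a lease.

    T1 (50% of the lease): unicast DHCPREQUEST to the original server.
    T2 (87.5% of the lease): broadcast DHCPREQUEST so any server can answer.
    100%: the lease expires and the address must be given up.
    """
    t1 = lease_start + timedelta(seconds=lease_seconds * 0.5)
    t2 = lease_start + timedelta(seconds=lease_seconds * 0.875)
    expiry = lease_start + timedelta(seconds=lease_seconds)
    return t1, t2, expiry

# Example: an 8-hour lease obtained at 09:00
t1, t2, expiry = dhcp_renewal_times(datetime(2024, 1, 1, 9, 0), 8 * 3600)
print(t1, t2, expiry)   # 13:00 (T1), 16:00 (T2), 17:00 (expiry)
```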

DHCP Decline: After receiving the ACK message from the DHCP server, if the DHCP client discovers through address conflict detection that the assigned address conflicts with another host or cannot be used for some other reason, it sends a Decline message to notify the server that the assigned IP address is unavailable.

Simplex, half duplex, full duplex

End-to-end communication links can be classified by signal transmission direction as follows:

  • Simplex: means A can only send signals, and B can only receive signals. Communication is one-way.
  • Half-duplex: means A can send a signal to B, and B can also send a signal to A, but these two processes cannot be performed at the same time.
  • Full duplex: While A sends a signal to B, B can also send a signal to A. These two processes can be carried out at the same time without affecting each other.

Network requirements analysis

Network requirements analysis includes overall network requirements analysis, structured cabling requirements analysis, network availability and reliability analysis, network security requirements analysis, and project cost estimation.

Hierarchical network design model

Also called the layered network design model, it helps designers build the network structure layer by layer, assign specific functions to different layers, and select the right equipment and systems for each layer. It usually consists of three layers:

  1. Core layer: Its main purpose is optimized, high-speed transport across the backbone. The design focus of the core layer is usually redundancy, reliability, and high-speed transmission. The core layer forwards packets from one area of the network to another at high speed; fast forwarding and fast convergence are its main functions, so network control functions should be implemented at the core layer as little as possible. The core layer has always been regarded as the final carrier and aggregator of all traffic, so the requirements on its design and on its network equipment are very strict. If a connection to the Internet or another external network is required, the core layer should also include one or more links to the external network. Other functions: link aggregation, IP routing configuration management, IP multicast, spanning tree, traps and alarms, high-speed connections to server farms, etc.
  2. Aggregation layer: The part between the access layer and the core layer is called the aggregation layer. It is the aggregation point for multiple access-layer switches: it must be able to handle all the traffic coming from the access-layer devices and provide uplinks to the core layer. Aggregation-layer switches therefore need higher performance and higher switching rates than access-layer switches, with fewer interfaces. Other functions: access-list control, inter-VLAN routing, packet filtering, multicast management, QoS, load balancing, fast convergence, etc.
  3. Access layer: The part of the network that directly connects users to the network is called the access layer. Its purpose is to let end users connect to the network, so access-layer switches are characterized by low cost and high port density. Other functions: user access and authentication, layer-2 and layer-3 switching, QoS, MAC address authentication and filtering, billing management, and collection of user information (such as IP address, MAC address, and access logs).

To preserve the hierarchical nature of the network, additional connections should not be added arbitrarily in the design. Except for the access layer, each layer should be modularized as much as possible, with very clear boundaries between modules.

When designing a hierarchical network, you should start from the access layer and work upward to the core layer. The reason is that the access layer effectively represents the demand: a large number of terminal devices need to be connected, each with its own speed requirements. From an analysis of their load, traffic, and behavior you can derive what the aggregation layer must provide and, in turn, how the core layer must be designed.

Under normal circumstances three layers are enough. Too many layers reduce overall network performance and increase network latency, although they can make troubleshooting and documentation easier.

Network system life cycle

Also called the network design process, it can be divided into five stages:

  1. Requirements specification: Conduct network requirements analysis, including centralized interviews and collection of information materials
  2. Communication specifications: Carry out network system analysis, including analysis of internal network communication traffic
  3. Logical network design: Produce the logical network design diagrams (and documents), determine the logical network structure, formulate the network IP address allocation plan, and specify software and hardware, WAN connections, and basic services. It covers network structure design, physical-layer technology selection, LAN technology selection and application, WAN technology selection and application, addressing and naming models, routing protocols, network management, and network security.
  4. Physical network design: Determine the physical network structure, and decide the concrete physical placement and operating environment of the equipment according to the logical network design. Specific hardware and software, interconnection equipment, cabling, and services need to be identified, together with the documents related to equipment selection, structured cabling, computer room design, and the physical network design (such as hardware/software lists and cost lists).
  5. Implementation: Install and debug the network equipment, and maintain the network during operation.

structured cabling system

According to general division, the structured cabling system includes six subsystems:

  1. Building group subsystem
    Provides the connection point between the cabling outside a building and the cabling inside it. The EIA/TIA-569 standard specifies the physical specifications of the network interfaces used to make connections between the buildings of a group.
  2. Equipment room subsystem
    The EIA/TIA-569 standard specifies the equipment and cabling of the equipment room. It is the most important management area of the cabling system: data from every floor is carried here over copper or optical cables. This subsystem is usually installed in the main machine room housing the computer systems, network systems, and program-controlled exchange (PBX) systems.
  3. Vertical backbone subsystem
    It connects telecommunications rooms, equipment rooms, and entrance facilities, and includes the backbone cabling, the intermediate and main cross-connects, mechanical terminations, and the patch cords or plugs used for backbone-to-backbone cross-connection. The backbone cabling should use a star topology, and grounding should comply with the requirements of EIA/TIA-607.
  4. Management subsystem
    This part houses the equipment of the telecommunications cabling system, including the mechanical terminations and/or cross-connects of the horizontal and backbone cabling systems.
  5. Horizontal subsystem
    Connects the management subsystem to the work areas and includes the horizontal cabling, information outlets, cable terminations, and cross-connections. The specified topology is a star. Three media are allowed for horizontal cabling (UTP cable, STP cable, and optical cable), with a maximum run of 90 meters. In addition to the 90-meter horizontal cable, the total length of patch cords and jumpers in the work area and the management subsystem may be up to 10 meters.
  6. Work area subsystem
    The work area extends from the information socket to the station equipment. Workspace cabling requirements are relatively simple, making it easy to move, add and change equipment.

Integrated wiring

The computer room is the "home" of servers and network equipment in system integration projects. It is usually divided into the following three categories:
the intelligent-building weak-current master control room, whose scope covers cabling, monitoring, fire protection, the machine room itself, building automation, and so on;
telecommunications rooms, weak-current rooms, and riser shafts;
data center machine rooms, including enterprise-owned data centers and carrier-hosted or Internet data centers. Large data centers can contain tens of thousands of servers.

Computer room engineering brings together the environmental conditions, fire protection and safety, interior decoration, power transmission and distribution, structured cabling, air conditioning, lighting, grounding, and other aspects of the machine room. A computer room construction project is not merely a decoration project; more importantly, it is a comprehensive, interdisciplinary project that integrates electrical engineering, electronics, architectural decoration, aesthetics, HVAC, computing, weak-current control, fire protection, and other specialties, and it also involves computer network engineering, PDS engineering, and other professional technologies.
Computer room engineering design principles: 1) practicality and advancement; 2) safety and reliability; 3) flexibility and scalability; 4) standardization; 5) economy and investment protection; 6) manageability. The computer room should be located on the 2nd or 3rd floor of the building. For AC working grounding, safety protective grounding, and anti-static grounding, the grounding resistance should in each case be no greater than 4 Ω.
The "Building Communication Integrated Wiring System" standard applies to cabling areas with a span of no more than 3,000 meters, a total building area of no more than 1 million square meters, and a population of roughly 50 to 50,000 people.
The following points need to be considered in computer room cabling design: energy saving, environmental protection, and safety of the machine room environment; adapting to hot-aisle/cold-aisle equipment layouts; the placement of row-head (column-head) cabinets; open cabling and cable fire protection; long patch cords, short permanent links, and performance testing; the network architecture and external networks, including interconnection with multiple carriers; special cases of high-end product applications; and grounding of the machine room and the cabling system.
A Premises Distribution System (PDS) is a structured information transmission system built on a unified transmission medium within a building or a fixed area; it can connect telephones, computers, video conferencing systems, surveillance television, and other equipment. The most widely followed standard in the structured cabling field is TIA/EIA-568-A.
The integrated wiring system is divided into six subsystems:
1. Work area subsystem: Consists of the equipment between the terminal devices in the work area and the information outlets, including information outlets, connecting cords, adapters, computers, network hubs, telephones, alarm probes, cameras, monitors, speakers, etc.
2. Horizontal subsystem: Laid out on a single floor, with one end connected to the information outlets and the other end connected to the patch panel in the wiring (management) room. Its function is to extend the backbone subsystem to the user work areas, connect the work areas to the management room subsystem, and provide users with information outlets that comply with international standards and meet voice and high-speed data transmission requirements.
3. Management room subsystem: The bridge between the backbone subsystem and the horizontal subsystem; it can also support same-floor networking. It includes twisted-pair patch panels and jumpers (both quick-connect jumpers and simple jumpers).
4. Vertical backbone subsystem: Provides the cabling from the main equipment room to the management room on each floor, especially to the shared system equipment located at the central point. It uses large-pair-count copper cables or optical cables, terminated at both ends on the patch panels in the equipment room and the management rooms. Its purpose is to connect the computer equipment, the program-controlled exchange (PBX), the control center, and the management subsystems; it is the route of the building backbone cabling.
5. Equipment room subsystem: Composed of the cables, connecting jumpers, related supporting hardware, lightning protection devices, and so on in the equipment room. It can be regarded as the central node of the entire cabling system, so whether its layout, form, and environmental conditions are properly considered directly affects the future operation, maintenance, and flexibility of the information system.
6. Building group subsystem: The cabling that ties the data communication signals of multiple buildings into one system. It interconnects buildings with aerial cables, underground cable ducts, or directly buried outdoor cables and optical fiber, and as part of the structured cabling system it provides the hardware required for communication between buildings.
Commonly used formulas for integrated wiring:
(1) Demand for RJ-45 connectors: m = n × 4 + n × 4 × 15%, where m is the total number of RJ-45 connectors required, n is the total number of information points, and n × 4 × 15% is the spare margin.
(2) Demand for information modules: m = n + n × 3%, where m is the total number of information modules required, n is the total number of information points, and n × 3% is the spare margin.
(3) Amount of cable used on each floor: C = [0.55 × (L + S) + 6] × n, where L is the distance from the farthest information point on the floor to the management room, S is the distance from the nearest information point on the floor to the management room, n is the total number of information points on the floor, and 0.55 is the spare coefficient.
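
For convenience, the three estimation formulas above can be evaluated with a small sketch like the following (the function names are illustrative, not from any standard library):

```python
import math

def rj45_connectors(n: int) -> int:
    """Total RJ-45 connectors: m = n*4 + n*4*15% (15% spare)."""
    return math.ceil(n * 4 * 1.15)

def info_modules(n: int) -> int:
    """Total information modules: m = n + n*3% (3% spare)."""
    return math.ceil(n * 1.03)

def cable_per_floor(L: float, S: float, n: int) -> float:
    """Cable length for one floor: C = [0.55*(L+S) + 6] * n,
    where L/S are the farthest/nearest information-point distances (meters)."""
    return (0.55 * (L + S) + 6) * n

# Example: a floor with 200 information points, farthest 60 m, nearest 10 m
print(rj45_connectors(200))          # 920 connectors
print(info_modules(200))             # 206 modules
print(cable_per_floor(60, 10, 200))  # 8900.0 meters of cable
```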

Network planning, design and implementation

Network planning should give the network construction side and the users a well-founded design result. There are three principles that should be considered first in network planning: the principle of practicality, the principle of openness, and the principle of advancement.
The principles that must be considered during the design and implementation of the plan are as follows:

  1. Reliability principle. The network must run stably and dependably.
  2. Security principle. This includes choosing a secure operating system, deploying network firewalls, network anti-virus, data encryption, and keeping information systems confidential.
  3. Efficiency principle. Performance indicators should be high, and the capabilities of the software and hardware should be fully utilized.
  4. Scalability principle. The network must be able to grow in both scale and performance.


Network Storage

DAS

DAS (Direct Attached Storage) attaches storage directly to the server and is not easy to expand. A set of large-capacity hard disks is attached to the server, with the storage device connected to the host over a SCSI channel with a bandwidth of 10 MB/s, 20 MB/s, 40 MB/s, or 80 MB/s. Because the storage device is connected directly to the server, it is difficult to expand the storage capacity and there is no data fault tolerance; if the server fails, data may be lost.

NAS

NAS (Network Attached Storage) has its own file system and uses TCP/IP as its transport protocol, providing file-level shared access. It connects storage devices to a group of computers through a standard network topology (such as Ethernet). A NAS device is a dedicated data storage server: it is data-centric and completely separates the storage device from the application servers. NAS is an independent device on the network with its own IP address, accessed and used for storage over the network.

It connects storage devices to an existing network to provide data storage and file access services. A NAS server is a file server that runs a simplified, thin operating system (providing only functions such as access control, data protection, and recovery) on a dedicated host. The NAS server has the protocols required for network connectivity built in and can be connected directly to the network; users with the appropriate permissions can then access the files on it over the network.

SAN

SAN (Storage Area Network) provides block-level storage, not file sharing.

A SAN is a dedicated network that connects storage devices and storage management subsystems to provide data storage and management functions. It can be thought of as the back-end network responsible for data transfer, while the front-end network (the data network) handles normal TCP/IP traffic. A SAN can also be viewed as a separate data network composed of several storage servers connected through a specific interconnect to provide enterprise-level data storage services.

DNS resolution

DNS resolution translates domain names into IP addresses that computers can use directly. Depending on who carries out the follow-up queries, DNS resolution can be divided into two methods: recursive resolution and iterative resolution.

The components involved in domain name resolution are the local cache, the local domain name server, the authoritative domain name server, the top-level domain name server, and the root domain name server. For host name resolution, the client first searches its local cache; if that fails, it sends a resolution request to a DNS server.

  • Local cache: an area of memory that holds recently resolved host names and their IP addresses. Because the resolver cache resides in memory, it is faster than the other resolution methods.
  • Local domain name server: when a host sends a DNS query, the query is first sent to the host's local domain name server. The local domain name server is close to the user; when the queried host belongs to the same local ISP, the local domain name server can convert the queried host name into an IP address immediately, without querying any other domain name server.
  • Authoritative domain name server: each zone has a domain name server, the authoritative server, responsible for converting the domain names of the hosts in its zone into IP addresses. It stores the mapping from every host domain name in the zone to its IP address.
  • Top-level domain name server: manages all second-level domains registered under that top-level domain. When it receives a DNS query, it returns either the IP address for the second-level domain or the address of the domain name server that should be queried next.
  • Root domain name server: the highest-level domain name server. Every root domain name server must store the IP addresses and names of all top-level domain name servers. When a local domain name server cannot resolve a domain name, it contacts a root domain name server, which tells it which top-level domain name server to query next.

recursive query

Recursive query is the most common and default resolution method. If the local domain name server (local DNS server) configured on the client cannot resolve the name itself, the local domain name server, rather than the DNS client, carries out the rest of the query process until it obtains the correct result from the authoritative domain name server; it then returns the result to the DNS client.

Throughout the recursive query process, apart from the initial request the client sends to the local domain name server, the remaining steps are iterative queries carried out by the local domain name server. The DNS client simply waits until the local domain name server sends back the final result. In effect, the local domain name server acts as an intermediary agent for the whole query.

The recursive resolution process is roughly as follows:

  1. The client initiates a DNS domain name query request to the local domain name server configured on the machine.
  2. After receiving the request, the local domain name server first checks its local cache. If a matching record exists, it is returned directly to the client; if not, the local domain name server sends a request to a root domain name server.
  3. After receiving the request, the root domain name server returns the address of the corresponding top-level domain name server (for example, for .com or .cn) to the local domain name server, based on the suffix of the queried domain name.
  4. The local domain name server then sends a query request to that top-level domain name server.
  5. After receiving the DNS query, the top-level domain name server also checks its own cache first. If it has a resolution record for the requested domain name, it returns the record to the local domain name server, which passes it back to the client, completing the DNS resolution process.
  6. If the top-level domain name server has no record, it returns the address of the server for the corresponding second-level domain to the local domain name server. The local domain name server then queries that second-level domain name server, and so on, until the authoritative domain name server for the zone returns the final result to the local domain name server. The local domain name server returns the record to the DNS client and caches it locally, so that if the same name is queried again within the TTL, the record is returned to the client directly (a sketch of this walk follows the list).
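
To illustrate the referral chain the local domain name server walks in steps 2 to 6, here is a rough sketch using the third-party dnspython package (the root-server address is one real root server; error handling, CNAME chasing, and IPv6 are omitted):

```python
import dns.message
import dns.query
import dns.rdatatype

def resolve_like_a_local_dns(name: str) -> str:
    """Follow referrals from a root server down to the authoritative server,
    the way a local DNS server does on behalf of its clients."""
    server = "198.41.0.4"  # a.root-servers.net (one of the root servers)
    while True:
        query = dns.message.make_query(name, dns.rdatatype.A)
        response = dns.query.udp(query, server, timeout=3)
        if response.answer:                      # authoritative answer reached
            return response.answer[0][0].to_text()
        # Otherwise the response is a referral: pick a glue A record from the
        # additional section and query that (TLD or lower-level) server next.
        # Note: this only works when the referral carries glue addresses;
        # otherwise the NS name itself would have to be resolved first.
        for rrset in response.additional:
            if rrset.rdtype == dns.rdatatype.A:
                server = rrset[0].to_text()
                break
        else:
            raise RuntimeError("no referral with a glue address found")

print(resolve_like_a_local_dns("www.google.com"))
```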

Iterative query

In a recursive query, apart from the initial request initiated by the client, all other steps are performed by the local domain name server on the client's behalf. In an iterative query, by contrast, the client performs all of the query work itself. Apart from that, the query path and steps are much the same as in a recursive query.
First, the client sends a request to the local domain name server. If the local domain name server has no cached record, the client itself then queries the root domain name server, the top-level domain name server, and the second-level domain name server in turn, until it obtains the final result.

The iterative resolution method is used when one of the following conditions is met:

  1. When querying the local domain name server, the client's request message does not ask for recursion, i.e. the RD (Recursion Desired) flag in the DNS request message is not set to 1.
  2. The client asks for recursion in its DNS request message, but the configured local domain name server has recursion disabled, i.e. the RA (Recursion Available) flag in the header of the DNS response message is set to 0. A small sketch of these flags follows this list.
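
A minimal sketch of how these two flags appear on the wire, again assuming the third-party dnspython package (the server 8.8.8.8 is only an example):

```python
import dns.flags
import dns.message
import dns.query

# Build a query and clear the RD (Recursion Desired) bit: this asks the
# server to answer without recursing (i.e. iteratively, with a referral).
query = dns.message.make_query("example.com", "A")
query.flags &= ~dns.flags.RD

response = dns.query.udp(query, "8.8.8.8", timeout=3)

# RA (Recursion Available) in the response tells us whether the server
# would have been willing to perform recursion on our behalf.
print("server offers recursion:", bool(response.flags & dns.flags.RA))
```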

sniffer

A sniffer is a tool for capturing and analyzing network traffic. Sniffers are commonly used in network security attack and defense work, and they are also used in business and traffic analysis.
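
As a simple illustration, a sketch using the third-party scapy package (which must be installed, and the script run with administrator/root privileges) is enough to capture and summarize a few packets:

```python
from scapy.all import sniff  # packet capture requires root/administrator rights

def show(packet):
    # Print a one-line summary of each captured packet (addresses, protocol)
    print(packet.summary())

# Capture 10 packets on the default interface and pass each one to show()
sniff(count=10, prn=show)
```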

https://dun.163.com/news/p/233ea1ba40a14791a5fb8d3b6847b663

The 80/20 rule means that 80% of the total traffic is traffic within the network segment, and 20% of the total traffic is traffic outside the network segment.

network isolation

Common network isolation techniques:

  1. Firewall: Isolating network packets with ACLs is the most common isolation method (a simplified ACL-matching sketch follows this list). Firewalls are limited to controls at and below the transport layer and cannot stop application-layer threats such as viruses, Trojans, and worms; they are suitable for basic network isolation, but not for large-scale isolation of business networks with two-way access.
  2. Multiple security gateways: Also called Unified Threat Management (UTM) and promoted as the new generation of firewall, they can inspect traffic from the network layer up to the application layer. UTM functions include ACLs, intrusion prevention, anti-virus, content filtering, traffic shaping, and anti-DoS.
  3. VLAN division: Dividing the network into VLANs avoids broadcast storms and makes data transmission more efficient; it also isolates departments with different security requirements from one another.
  4. Manual transfer: Disconnect the physical connection to the network and exchange data manually. This method offers the best security.
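
To illustrate the first-match ACL logic that firewall-based isolation relies on, here is a simplified sketch (the rule set and the helper function are made up for illustration; real ACLs live in the device configuration, not in Python):

```python
from ipaddress import ip_address, ip_network

# A hypothetical, simplified ACL: each rule is (action, src_net, dst_net, dst_port)
ACL = [
    ("deny",   ip_network("10.0.0.0/8"), ip_network("192.168.1.0/24"), 23),    # block telnet
    ("permit", ip_network("10.0.0.0/8"), ip_network("192.168.1.0/24"), None),  # allow the rest
    ("deny",   ip_network("0.0.0.0/0"),  ip_network("0.0.0.0/0"),      None),  # explicit deny-all
]

def acl_check(src: str, dst: str, dst_port: int) -> str:
    """Return the action of the first matching rule (first-match-wins)."""
    for action, src_net, dst_net, port in ACL:
        if (ip_address(src) in src_net and ip_address(dst) in dst_net
                and (port is None or port == dst_port)):
            return action
    return "deny"  # implicit deny if nothing matched

print(acl_check("10.1.2.3", "192.168.1.10", 23))  # deny
print(acl_check("10.1.2.3", "192.168.1.10", 80))  # permit
```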

gatekeeper

A gatekeeper is an information security device that connects two independent host systems using a solid-state switching read/write medium with multiple control functions. Because there is no physical connection, logical connection, communication command, or communication protocol between the two independent host systems connected by a physically isolating gatekeeper, there is no protocol-based packet forwarding, only protocol-free ferrying of data files, and the solid-state storage medium supports only two operations: read and write. The physically isolating gatekeeper therefore blocks every connection that could carry an attack, making it impossible for hackers to intrude, attack, or sabotage, and achieving genuine isolation. Any response this device sends to the external network is a response to a request made by an internal network user.

The significance of using a security isolation gatekeeper:

  1. When a user's network must exchange information with other, untrusted networks while maintaining a high level of security: with a physical isolation card, the user has to switch back and forth between the internal and external networks, which is troublesome both to manage and to use; with a firewall, since the security of the firewall itself is hard to guarantee, it cannot prevent the leakage of internal information or the infiltration of external viruses and hacker programs, so security cannot be assured. In this situation a security isolation gatekeeper can satisfy both requirements at once, making up for the shortcomings of physical isolation cards and firewalls, and is the best choice.
  2. Network isolation is achieved by the gatekeeper's isolation hardware disconnecting the two networks at the link layer. To exchange data, the isolation hardware switches between the two networks at the appropriate moments, and the exchange is completed by reading from and writing to the memory chip on the hardware.
  3. With the corresponding application modules installed, a security isolation gatekeeper lets users browse the web, send and receive email, exchange data between databases on different networks, and exchange customized files between networks while maintaining security.

IPv6

IPv6 is covered in a separate article.

data switching

The switching method used at the core of the Internet is packet switching: received packets are first stored and then forwarded.

Data switching methods are divided into circuit switching and store-and-forward switching. The key difference is that circuit switching allocates the line statically, while store-and-forward switching allocates it dynamically.
Store-and-forward switching is further divided into message switching and packet switching, and packet switching comes in two forms: virtual circuit and datagram.
Circuit switching: The concept of a switch originated in the telephone system. When a user makes a call, the switches in the telephone system find and establish an actual physical path between the caller and the callee. Once the path is established the call can proceed, and the line is dedicated to the two ends until the call finishes. This method of data exchange is called circuit switching. Advantages: the transmission delay is small (the only delay is the propagation time of the electromagnetic signal), and once the line is set up there are no conflicts. Disadvantages: setting up the line takes a long time, and exclusive use of the line wastes channel capacity.
Message switching: No line is established in advance. When the sender has a block of data to send, the whole block (called a message) is handed to the switching device (IMP), which selects a suitable idle output line and sends the block over it. In this process no physical connection is set up between the input and output lines of the switching device: at each switching device the message is first stored and forwarded when appropriate. Disadvantages: there is no limit on the size of the transmitted data block, so when a message is large the IMP has to buffer it on disk, and a single large message occupies the line for too long.
Packet switching: Packet switching strictly limits the maximum size of a data block, so packets can be held in the IMP's memory; this ensures that no user can monopolize a line for more than a few tens of milliseconds, which suits interactive communication. Advantages: high throughput; within a multi-packet message, the first packet can already be forwarded onward before the second packet has been received, reducing delay and improving throughput (see the delay comparison sketched below). Disadvantages: congestion, packet fragmentation and reassembly, packet loss and reordering, and so on. Packet switching is the technique used by the vast majority of computer networks; a very small number use message switching, and none use circuit switching internally. According to their internal mechanism, packet-switched subnets fall into two classes: connection-oriented and connectionless. In a connection-oriented subnet the connection is called a virtual circuit, analogous to a physical line in the telephone system; in a connectionless subnet, independent packets are called datagrams, analogous to telegrams in the postal system.
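
To make the throughput comparison concrete, here is a back-of-the-envelope sketch (the message size, packet size, link rate, and hop count are made-up example values; propagation delay and headers are ignored):

```python
def message_switching_time(msg_bits: float, rate_bps: float, hops: int) -> float:
    """The whole message is stored and forwarded at every hop."""
    return hops * (msg_bits / rate_bps)

def packet_switching_time(msg_bits: float, pkt_bits: float,
                          rate_bps: float, hops: int) -> float:
    """Pipelined: after the first packet crosses all hops, the remaining
    packets arrive one transmission time apart."""
    n_packets = msg_bits / pkt_bits
    return hops * (pkt_bits / rate_bps) + (n_packets - 1) * (pkt_bits / rate_bps)

M = 8_000_000    # 1 MB message, in bits
P = 8_000        # 1 KB packets, in bits
R = 10_000_000   # 10 Mb/s links
H = 3            # three store-and-forward hops

print(message_switching_time(M, R, H))    # 2.4 s
print(packet_switching_time(M, P, R, H))  # ~0.8 s
```
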
Cell switching is a fast packet-switching technology that combines the low delay of circuit switching with the flexibility of packet switching. A cell is a fixed-length packet. ATM uses cell switching, and its cells are 53 bytes long.

attack

Communication on computer networks faces the following four threats:

  • Interception: An attacker eavesdrops on other people's communications on the network
  • Interruption: An attacker intentionally interrupts other people's communications on the network
  • Tampering: An attacker deliberately tampers with messages transmitted in the network
  • Forgery: An attacker forges information and sends it over the network

The above four threats can be divided into two categories, namely passive attacks and active attacks.

Interception is a passive attack, while interruption, tampering, and forgery are active attacks. Traffic analysis is also a passive attack.

reference

Origin blog.csdn.net/lonelymanontheway/article/details/131837832