Summary of issues encountered while learning embedded Linux

  • What does the file udhcpd.leases do?

`udhcpd.leases` is the file in which the `udhcpd` DHCP server records the IP addresses it has assigned to DHCP clients. When `udhcpd` allocates a new IP address, it records the client's MAC address, the assigned IP address, the lease expiration time and other information in `udhcpd.leases`. The next time that client requests an address, `udhcpd` consults this file and tries to hand out the same IP address again, so clients keep a stable address across renewals. On an OpenWrt system the file is located at `/var/lib/misc/udhcpd.leases` by default.

  • Is the dhcpd.leases file updated in real time?

The `udhcpd.leases` file stores the information the DHCP server `udhcpd` has handed out to clients: the assigned IP address, the client's MAC address, the lease period and a few other fields. When a client obtains an address, the server records this information for that client, so the file plays the same role as the lease list of a conventional DHCP server.

Lease state is updated as addresses are assigned and released: when the server assigns an address it adds the lease, and when it receives a DHCP Release from a client, or a lease expires, the corresponding entry is marked free or expired. Note that in the BusyBox implementation the leases are kept in memory, and the `udhcpd.leases` file itself is not necessarily rewritten on every change; it is typically flushed to disk periodically (the `auto_time` option in `udhcpd.conf`) and a write can be forced with a signal such as SIGUSR1.

It should be noted that if the `udhcpd.leases` file is deleted while running `udhcpd`, all clients will re-request new IP addresses. Therefore, this file should be handled with caution to avoid unwanted repercussions from accidental deletion.
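
As a side note, BusyBox ships a small helper called `dumpleases` that decodes this binary file into readable text. A minimal sketch of inspecting the lease table (assuming the applet is enabled in your BusyBox build and the file sits at the default path):

```bash
# Print the current lease table in human-readable form
# (-f selects the lease file; /var/lib/misc/udhcpd.leases is the usual default)
dumpleases -f /var/lib/misc/udhcpd.leases

# Show absolute expiry times instead of remaining lease time
dumpleases -a -f /var/lib/misc/udhcpd.leases
```

If `dumpleases` is not compiled into your BusyBox, the file can only be read programmatically, since it is stored in a binary format.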

  • Linux arp table aging time?

In Linux, ARP (neighbor) cache aging is controlled by kernel parameters, most notably `net.ipv4.neigh.default.gc_stale_time`. This parameter specifies how long (in seconds) an entry may sit in the cache without being confirmed before it is considered stale and becomes a candidate for revalidation or garbage collection.

By default, this value is 60 seconds. The current value can be viewed with the following command:

sysctl net.ipv4.neigh.default.gc_stale_time

You can change the aging time of the ARP table by modifying kernel parameters, for example:

sysctl -w net.ipv4.neigh.default.gc_stale_time=120

The above command sets the ARP aging time to 120 seconds. To apply the setting at every boot, the command can be added to a startup script or persisted through the sysctl configuration files.
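
As a minimal sketch, assuming your system reads `/etc/sysctl.conf` at boot (some distributions use `/etc/sysctl.d/` instead), the change can be applied and persisted like this:

```bash
# Apply immediately
sysctl -w net.ipv4.neigh.default.gc_stale_time=120

# Persist across reboots
echo "net.ipv4.neigh.default.gc_stale_time = 120" >> /etc/sysctl.conf

# Reload settings from the configuration files to verify
sysctl -p
```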

  • How is the aging time of arp messages implemented?

The aging of the ARP cache is driven by timers. The interval is determined by kernel parameters and on Linux is typically on the order of tens of seconds to a few minutes. When the timestamp of an entry in the ARP cache is older than the allowed interval, the entry is considered stale and is eventually removed by the kernel, which prevents the cache from filling up with invalid records. In addition, when a network interface goes down or is removed, the ARP entries associated with that interface are flushed along with it.

It should be noted that the aging time of the ARP cache table may vary depending on different operating system versions, different kernel versions, and different hardware configurations. The aging time of the ARP cache table can be adjusted through operating system configuration parameters to meet specific needs.

  • Will the arp request be reissued after the arp table ages?

When an entry in the ARP table has expired or become stale, the next time traffic needs to reach that IP address the kernel sends a new ARP request. The request is broadcast, carries the sender's own MAC and IP addresses as the source, and asks for the MAC address corresponding to the target IP. When a reply arrives, the entry in the ARP table is refreshed with the correct MAC address and is used for subsequent communication.

  • What is the difference between udhcpc and udhcpd ?

`udhcpc` and `udhcpd` are the DHCP client and DHCP server from the `udhcp` package (nowadays shipped as part of BusyBox), both aimed at embedded devices. The differences between the two are as follows:

1. `udhcpc` is the client program in `udhcp`. It sends requests to a DHCP server and hands the lease information it receives to a user-supplied script, which then configures the interface (IP address, routes, DNS and so on). `udhcpd` is the DHCP server program, responsible for assigning IP addresses and other network parameters to devices on the network.

2. Both `udhcpc` and `udhcpd` are streamlined DHCP implementations for embedded systems. Compared with full-featured DHCP clients and servers they have fewer configurable options and a much smaller footprint, which makes them well suited to embedded systems and other resource-constrained devices.

3. The usage of `udhcpc` and `udhcpd` is similar to that of regular DHCP clients and servers. You can use command line parameters or configuration files to specify related settings of network parameters.

Therefore, `udhcpc` and `udhcpd` are two DHCP implementations for embedded devices, where `udhcpc` is a DHCP client program and `udhcpd` is a DHCP server program, each of which is responsible for different functions.

  • What does this sentence mean? system("killall host_monitor >/dev/null 2>&1");

This statement calls the external command `killall` from a C/C++ program to terminate the process named `host_monitor`. Specifically, `system()` is a C standard library function that runs a command through the shell. `killall` is a Linux command that sends a signal (SIGTERM by default) to every process with the given name. The trailing `>/dev/null 2>&1` redirects both standard output and standard error to the null device, so nothing is printed to the terminal. In short, this line shuts down any running `host_monitor` processes while silently discarding the command's output and error messages.

  • OpenWrt prompts that the udhcpd command cannot be found?

In OpenWrt, the `udhcpd` command starts a small DHCP server that can hand out IP addresses, gateways, DNS servers and other parameters to devices connected to the router. If the shell reports that `udhcpd` cannot be found, the command is either not installed (on a stock OpenWrt image the DHCP service is normally provided by dnsmasq, and the BusyBox `udhcpd` applet is usually not enabled) or it is not in the shell's `PATH`.

You can check whether `udhcpd` is installed by running the following command:

opkg list-installed | grep udhcpd

If the output is empty, `udhcpd` is not installed. You can install `udhcpd` using the following command:

opkg update

opkg install udhcpd

After the installation is complete, you can use the following command to start `udhcpd`:

udhcpd /etc/udhcpd.conf

Here `/etc/udhcpd.conf` is the configuration file for `udhcpd`, which you need to adapt to your own network.
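
For illustration only, a minimal BusyBox-style `/etc/udhcpd.conf` might be created as in the sketch below; the addresses and the `br-lan` interface name are placeholders that must be adapted to your network:

```bash
# Write a minimal BusyBox udhcpd configuration (placeholder values)
cat > /etc/udhcpd.conf <<'EOF'
# address pool handed out to clients
start       192.168.1.100
end         192.168.1.200
# interface to serve and location of the lease database
interface   br-lan
lease_file  /var/lib/misc/udhcpd.leases
# options passed to clients
option subnet 255.255.255.0
option router 192.168.1.1
option dns    192.168.1.1
option lease  86400
EOF

# Run once in the foreground to verify the configuration
udhcpd -f /etc/udhcpd.conf
```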

  • Openwrt firmware does not contain udhcpd server, how to add it?

Installing the udhcpd server in OpenWrt can be completed by following these steps:

1. Log in to the OpenWrt router via SSH. If you are not familiar with SSH, please refer to the relevant tutorials to learn.

2. Run the `opkg update` command to update the package list.

3. Run the `opkg install udhcpd` command to install the udhcpd server.

4. Modify the `/etc/config/dhcp` configuration file and enable the udhcpd server.

In the `/etc/config/dhcp` file, change `option dhcpd '1'` to `option dhcpd '2'`, and use `option start '100'` and `option limit '150'` to set the starting offset and the maximum number of addresses the DHCP server may hand out. The modified configuration looks similar to the following:

config dhcp 'lan'
    option interface 'lan'
    option start '100'
    option limit '150'
    option leasetime '12h'
    option dhcpv6 'server'
    option ra 'server'
    option ra_management '1'
    option dhcpd '2'

5. Save the changes and restart the DHCP service for the new configuration to take effect, for example by running `service dhcp restart` (or by rebooting the router).

  • Compiling openwrt firmware does not include the udhcpd server. How to add it during compilation?

If you did not select the udhcpd server when compiling the OpenWrt firmware, you can add it by recompiling. Here are the steps to add a udhcpd server:

1. Enter the OpenWrt source code directory and execute the following command to update feeds:

./scripts/feeds update -a

2. Install the udhcpd package:

./scripts/feeds install udhcpd

3. Configure compilation options and enter the menu configuration interface:

make menuconfig

4. Select "Network"->"udhcpd" and confirm. This will cause udhcpd to be compiled into the firmware.

5. Save and exit the configuration interface, and then start compilation:

make -j8

6. After the compilation is completed, find the firmware file you need in the bin directory.

NOTE: In step 4, if you select "M", udhcpd is built as a separate package (module) and is not included in the firmware image. Make sure you select "*" rather than "M" so that udhcpd is compiled directly into the firmware.

  • What does this sentence mean, local name=$1?

This is a simple shell statement. `$1` is the first argument passed to the script (or function) when it is invoked, and `local name=$1` defines a local variable called `name` and assigns it the value of that first argument. The `local` keyword declares a variable that is only visible inside the current function; it does not affect global variables or other functions.

In shell scripts you will often see `$0`, `$1`, `$2` and so on. Put simply, `$0` is the name of the script itself, `$1` is the first argument passed to the script, and `$2` is the second argument.

For example, you create a new shell script Test.sh with the following content:

#!/bin/sh
echo "name of the shell script itself: $0"
echo "first parameter passed to the script: $1"
echo "second parameter passed to the script: $2"

After saving and exiting, run `bash Test.sh 1 2` in the directory where Test.sh is located.

The result is:

name of the shell script itself: Test.sh
first parameter passed to the script: 1
second parameter passed to the script: 2

  • What does this sentence mean, local tmpname=${name%_*}?

This statement appears in shell scripts. `local` declares a local variable, and `${name%_*}` is a parameter expansion that removes the shortest suffix of `$name` matching the pattern `_*`; in other words, it strips the last underscore and everything after it. The result is assigned to the variable `tmpname`. This kind of expansion is typically used to extract the base part of a file name, dropping a version number or other suffix so that files are easier to manage and operate on. For example, if `name=driver_v2`, then `tmpname` becomes `driver`.
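
A quick sketch of the related `%`, `%%`, `#` and `##` expansions, using a made-up file name purely to illustrate the pattern matching:

```bash
name="wifi_driver_v1.2_release"

echo "${name%_*}"    # wifi_driver_v1.2    -> shortest suffix matching '_*' removed
echo "${name%%_*}"   # wifi                -> longest suffix matching '_*' removed
echo "${name#*_}"    # driver_v1.2_release -> shortest prefix matching '*_' removed
echo "${name##*_}"   # release             -> longest prefix matching '*_' removed
```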

  • Description of the udhcpd.conf configuration file

poolname pool-default
poolorder 0XFFFFFFFF
interface br0
start 192.168.1.100
end 192.168.1.200
opt subnet 255.255.255.0
opt lease 86400
opt router 192.168.1.1
opt dns 192.168.1.1
opt domain Realtek
poolend

This configuration code is used to set up the DHCP service, where:

- `poolname pool-default` defines an IP address pool named "pool-default".

- `poolorder 0XFFFFFFFF` sets the order of address allocation, where `0XFFFFFFFF` means the default random allocation method.

- `interface br0` specifies the network interface corresponding to the DHCP service as "br0".

- `start 192.168.1.100` and `end 192.168.1.200` respectively specify the start address and end address of the IP address pool, that is, the range of IP addresses that can be allocated by the DHCP service.

- `opt subnet 255.255.255.0` sets the subnet mask.

- `opt lease 86400` sets the lease time, which is the validity period of the IP address obtained by the client. Here it is set to 86400 seconds (one day).

- `opt router 192.168.1.1` specifies the IP address of the default gateway.

- `opt dns 192.168.1.1` specifies the IP address of the DNS server.

- `opt domain Realtek` sets the domain name, here it is "Realtek".

- `poolend` indicates the end of IP address pool setting.

Therefore, the purpose of this configuration is to set up a DHCP service that assigns IP addresses to clients connected to the "br0" interface. The assigned address range is 192.168.1.100 ~ 192.168.1.200, the subnet mask is 255.255.255.0, the lease time is one day, the default gateway and DNS server address are 192.168.1.1, and the domain name is "Realtek".

  • What does this sentence mean /bin/p0f -i br0 -p -f /etc/p0f.fp -s /var/run/p0f.so?

This command starts p0f, a passive traffic-analysis and OS-fingerprinting tool, on the `br0` interface. `-i br0` selects the interface to capture on, `-p` puts the interface into promiscuous mode so that traffic not addressed to the host is also captured, and `-f /etc/p0f.fp` points p0f at its fingerprint database, whose signatures are matched against the captured traffic. `-s /var/run/p0f.so` makes p0f listen on a UNIX socket at that path, through which other programs can query the fingerprinting results.

  • What does this sentence mean, /binsntp -s time.windows.com -s time.nist.gov -i 8?

This runs the `sntp` command on a Linux system. It queries the two time servers time.windows.com and time.nist.gov for the current time and keeps synchronizing at 8-second intervals. The individual parts mean the following:

- /bin/sntp: Specifies the executable file path of the sntp command.

- -s time.windows.com : Specifies the time to be synchronized from the time.windows.com time server.

- -s time.nist.gov: Specifies the time to be synchronized from the time.nist.gov time server.

- `-i 8`: sets the synchronization interval of the sntp command to 8 seconds.

  • What does this sentence mean? bin/udhcpc -i br0 -p ar/run/udhcpc.pid.br0 -s /et

This sentence is a command that starts the `udhcpc` process and assigns an IP address to the `br0` interface. The specific explanation is as follows:

- `bin/udhcpc`: The path where the `udhcpc` command is located. `udhcpc` is a lightweight DHCP client program used to request allocation of IP addresses from the DHCP server.

- `-i br0`: Specify the interface name that the DHCP client needs to use. Specified here as `br0`, which means using the `br0` interface in Linux.

- `-p /var/run/udhcpc.pid.br0`: Specify the storage path of the PID file of the `udhcpc` process. The PID file is a file containing the process ID, which is used to record the running status of the process to facilitate subsequent management of the process.

- `-s /etc/udhcpc.script`: specifies the script that `udhcpc` runs when DHCP events occur (for example `deconfig`, `bound`, `renew`). The script receives the lease parameters, such as IP address, subnet mask, gateway and DNS servers, through environment variables and applies them to the interface.
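
For reference, a minimal sketch of such a script, loosely based on the `simple.script` example shipped with BusyBox (the exact path `/etc/udhcpc.script` comes from the command above; error handling is omitted):

```bash
#!/bin/sh
# udhcpc calls this script with $1 set to the event name and passes the lease
# data in environment variables such as $interface, $ip, $subnet, $router, $dns.

case "$1" in
    deconfig)
        # Called before requesting a lease: clear any old address
        ifconfig "$interface" 0.0.0.0
        ;;
    bound|renew)
        # A lease was obtained or renewed: apply it to the interface
        ifconfig "$interface" "$ip" netmask "${subnet:-255.255.255.0}"
        if [ -n "$router" ]; then
            route del default 2>/dev/null   # drop any stale default route
            route add default gw "$router" dev "$interface"
        fi
        # Rewrite the DNS servers handed out by the server
        [ -n "$dns" ] && {
            : > /etc/resolv.conf
            for s in $dns; do echo "nameserver $s" >> /etc/resolv.conf; done
        }
        ;;
esac
exit 0
```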

  • What does this sentence mean, /bin/hostapd_cli -a /var/hostapd_wps.sh -B -P /var/r?

This sentence is running the `hostapd_cli` command, and its specific meaning is as follows:

- `/bin/hostapd_cli`: specifies the path of hostapd_cli

- `-a /var/hostapd_wps.sh`: Use the specified script to handle events from hostapd

- `-B`: run hostapd_cli in the background

- `-P /var/run/hostapd_cli-br0.pid`: Specify the path and name of the PID file, which is used to record the PID number of the hostapd_cli process.

In general, this command is used to run hostapd_cli in the background and send received events to the specified processing script.

In this command, the arguments of the `-a` and `-P` options should be adjusted to match your environment and requirements.
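
For illustration, a minimal sketch of what the event-handling script (here `/var/hostapd_wps.sh`, taken from the command above) might look like; hostapd_cli invokes it with the interface name and the event text as arguments:

```bash
#!/bin/sh
# Invoked by hostapd_cli as: <script> <interface> <event> [event arguments]
IFACE="$1"
EVENT="$2"

case "$EVENT" in
    AP-STA-CONNECTED)
        logger "hostapd: station $3 associated on $IFACE"
        ;;
    AP-STA-DISCONNECTED)
        logger "hostapd: station $3 left $IFACE"
        ;;
    WPS-PBC-ACTIVE)
        logger "hostapd: WPS push-button session started on $IFACE"
        ;;
esac
```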

  • What does this sentence mean, /bin/radvd -C var/radvd.conf?

This sentence runs the `radvd` command and uses the `/var/radvd.conf` file as the configuration file. `radvd` is a daemon process used on routers to send Router Advertisement messages to automatically configure IPv6 addresses. Its configuration file contains the content of the RA message that needs to be sent, such as IPv6 prefix, router address, etc. In this command, the `-C` option specifies the path of the configuration file as `/var/radvd.conf`.

  • What does this sentence mean /bin/hostapd -g var/run/hostapd/global -P var/run/?

This command is to start the hostapd daemon, which is used to implement the WLAN Access Point function. Its parameters are explained as follows:

- `-g var/run/hostapd/global`: Specifies the path of the global control interface, used to interact with the hostapd_cli tool.

- `-P var/run/hostapd.pid`: Specifies the path of pidfile, which is used to record the process ID of hostapd.

- `-B`: Start hostapd daemon and return immediately without blocking the command line.

The configuration file is normally passed to hostapd as a positional argument, for example `hostapd -B /etc/hostapd/hostapd.conf`; the `-B` and `-P` options themselves only control backgrounding and the PID file, they do not select the configuration file.

  • What does this sentence mean, /bin/crond?

This sentence is to execute the crond command under the Linux system to start the crond daemon process. crond is a service process in the Linux system. Its main function is to regularly perform some specified tasks, such as backing up data, regularly cleaning logs, regularly sending reports, etc.

  • When ps checks the process, [cfg80211], why does this process have []

The square brackets around `[cfg80211]` indicate that this entry is a kernel thread rather than a user-space application: such threads have no user-space command line, so `ps` prints their name in brackets. Parts of the kernel (network protocol stacks, file systems, device drivers and so on) run work in kernel threads to handle specific tasks. `cfg80211` is the Linux kernel's wireless configuration subsystem (the 802.11 configuration API used by modern WiFi drivers), and the `[cfg80211]` entry is the kernel worker thread that services it.
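
You can see the distinction directly from the shell; kernel threads have no user-space command line, which is exactly why `ps` prints their name in brackets:

```bash
# List entries whose name is shown in [brackets] (kernel threads)
ps | grep "\["

# PID 2 (kthreadd) is the parent of kernel threads on most systems:
cat /proc/2/cmdline | wc -c   # prints 0 -- a kernel thread has no command line
cat /proc/2/comm              # still shows the thread name, e.g. kthreadd
```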

  • What is the difference between wpa_supplicant and hostapd?

Both wpa_supplicant and hostapd are open source software used to implement Wi-Fi connections, but their functions are different.

wpa_supplicant is a Wi-Fi client used to connect to wireless networks. It supports multiple encryption methods and authentication protocols, including WEP, WPA, WPA2, 802.1X, etc. In Linux systems, wpa_supplicant usually serves as the backend of the system network manager and is used to automatically connect to wireless networks.

hostapd is a Wi-Fi access point software that allows a computer to become an access point over Wi-Fi so that other devices can connect to the computer and use its Internet connection. hostapd supports multiple encryption methods and authentication protocols, including WEP, WPA, WPA2, 802.1X, etc. hostapd is commonly used to create wireless hotspots or wireless network labs.

Therefore, the main difference between wpa_supplicant and hostapd is their functionality. wpa_supplicant is a Wi-Fi client used to connect to wireless networks, while hostapd is a Wi-Fi access point software used to create wireless hotspots.

  • wpa_supplicant code explanation

wpa_supplicant is an open source WiFi client for connecting to wireless networks. It supports security protocols such as WPA/WPA2 and 802.1X authentication. The following is an explanation of the main code of wpa_supplicant:

1. main function

The main function of wpa_supplicant is implemented in wpa_supplicant.c. This function mainly completes operations such as initialization, configuration, and running event loops.

2. Configuration file reading

wpa_supplicant supports using configuration files to set WiFi connection parameters. In the main function, the wpa_supplicant_init_conf function is called to read the configuration file and assign the parameters in the configuration file to the corresponding member variables in the wpa_supplicant structure.

3. Interface management

wpa_supplicant exposes a control interface for managing WiFi interfaces: a UNIX-domain control socket used by tools such as wpa_cli, and optionally a D-Bus API. D-Bus is a cross-process communication mechanism that allows wpa_supplicant to talk to other processes, such as a desktop network manager. Through these interfaces, other programs can enable, disable and configure WiFi interfaces.

4. Event loop

wpa_supplicant uses an event loop to handle WiFi events such as scanning, connecting, disconnecting, authentication, etc. In the main function, the wpa_supplicant_run function will be called to start the event loop, which will block until the end of the event loop.

5. Interface driver

wpa_supplicant supports several driver interfaces for talking to the WiFi hardware, including the legacy wext (Wireless Extensions) interface and the modern nl80211 interface. The implementations differ, but each driver backend implements a common set of callbacks through which it communicates with the wpa_supplicant core.

6. Security Protocol

wpa_supplicant supports multiple WiFi security protocols, such as WPA-PSK, WPA2-PSK, WPA-Enterprise, WPA2-Enterprise, etc. Different security protocols require different authentication methods, but they all need to implement a set of interfaces to communicate with wpa_supplicant.

The above is the main code explanation of wpa_supplicant. The code structure of wpa_supplicant is clear and easy to expand and maintain.

  • hostapd code explanation

hostapd is a daemon used to create and manage Wi-Fi access points. It provides support for IEEE 802.11 access control, authentication and encryption, and also supports IPv4 and IPv6 networks.

The code of hostapd is written in C language and is mainly divided into the following modules:

1. hostapd.c: The main file of hostapd, including configuration file parsing, daemon initialization, signal processing, etc.

2. driver_*: Code related to the underlying driver, which mainly implements communication with the wireless network card.

3. hostapd.conf: The hostapd configuration file contains various parameter configurations of the access point.

4. eapol_auth/eapol_sm: implements EAP authentication and key management.

5. utils: some tool functions of hostapd.

During operation, hostapd first reads the configuration file, parses various parameters, then initializes the underlying driver and listens to various events. When a client connects, hostapd will call the EAP authentication and key management module to complete the authentication and encryption of the client. At the same time, hostapd is also responsible for managing data transmission between clients, maintaining the transmission status of data packets, etc.

In the hostapd code, the core part is the communication with the wireless network card. Different wireless network cards require different drivers to communicate with hostapd. Therefore, hostapd provides some interface functions for interacting with the driver to facilitate different wireless network cards to implement these functions. At the same time, hostapd also provides some common wireless interface functions, such as EAP authentication with clients, key management, etc., to facilitate the implementation of various functions of wireless access points.

In general, the code structure of hostapd is relatively clear, mainly divided into several modules such as configuration file parsing, underlying driver, EAP authentication and key management. When using hostapd, we need to modify the configuration file according to the actual situation, and then start the hostapd daemon process to create our own Wi-Fi access point.

  • How to get started with WiFi driver?

Learning WiFi drivers requires a certain foundation in Linux system programming and an understanding of wireless networks. Here are some suggestions for getting started with WiFi drivers:

1. Understand the Linux kernel network protocol stack: Learn the basic knowledge of the network protocol stack, such as TCP/IP protocol, network device driver, etc. These are the basis for understanding WiFi driver.

2. Learn the Linux device driver framework: Be familiar with the Linux device driver framework and driver model, and understand the loading and unloading process of Linux drivers.

3. Learn WiFi technology: Understand wireless network technology, such as WiFi standards, WiFi protocols, channels, radio frequencies, etc., master the basic principles of the 802.11 protocol stack, and be familiar with the working principle of WiFi chips.

4. Read the WiFi driver source code: Reading the WiFi driver source code is the best way to learn WiFi driver. You can first learn about open source WiFi driver projects, such as the drivers that come with the Linux kernel, hostap, ath9k, etc. Start with simple drivers and gradually understand code implementation and debugging techniques.

5. Master the tool chain and debugging tools: Master the use of basic development tools such as cross-compilation and debugging tools, such as gcc, gdb, make, etc.

  • Linux kernel netlink communication

Netlink is a mechanism used by the Linux kernel for inter-process communication (IPC) to transfer network-related information between kernel space and user space, such as network interfaces, routing tables, ARP tables, sockets, etc. Netlink socket is a special socket used to pass messages between user space and kernel space.

Netlink provides a scalable, efficient, message-oriented inter-process communication mechanism. User space processes can use Netlink sockets to send messages to the kernel, request certain operations (such as adding routes, creating sockets, etc.), and receive responses from the kernel. The kernel can also use Netlink sockets to send asynchronous notifications to user space so that user space applications can be notified about network events in a timely manner.

Netlink socket supports multiple message types, such as configuration messages, error messages, multicast messages, statistics messages, etc. Each message type has a specific format. User space applications can communicate with the kernel using Netlink sockets and use generic socket APIs such as send(), recv(), etc. to send and receive Netlink messages.

In Linux systems, many network-management tools and daemons (for example the iproute2 utilities `ip` and `ss`, routing daemons and NetworkManager) are implemented on top of Netlink: they use Netlink sockets to communicate with the kernel in order to query or modify network configuration.

In summary, Netlink provides a mechanism for efficient, scalable and flexible network communication between the Linux kernel and user space processes.
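
As a quick user-space illustration, the iproute2 tools are themselves Netlink clients, so you can watch the kernel's Netlink traffic without writing any code (a sketch; it assumes iproute2 is installed):

```bash
# Dump the interface table via an RTM_GETLINK Netlink request
ip -s link show

# Subscribe to rtnetlink multicast groups and print events as they arrive
# (bring an interface up/down or add a route in another terminal to see messages)
ip monitor link addr route
```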

  • Implement WiFi timing?

A WiFi on/off schedule can be implemented either with a custom script driven by a scheduler, or with the system's own scheduled-task tool (cron). The steps are as follows:

1. Write scripts for scheduled tasks

You can use a shell script (or a scripting language such as Python) to toggle WiFi on a schedule. For example, the following script turns WiFi on, waits ten minutes, and turns it off again:

```bash
#!/bin/bash
# Turn on WiFi
nmcli radio wifi on
# Wait 10 minutes
sleep 600
# Turn off WiFi
nmcli radio wifi off
```

2. Use the system’s built-in scheduled task tool

The scheduled task tool that comes with the Linux system can help us execute scheduled tasks conveniently. In Ubuntu system, you can use the crontab command to create and manage scheduled tasks.

For example, to turn on WiFi at 9 a.m. and turn it off at 5 p.m. every day, you can use the following command to create a scheduled task:

```bash
# Edit scheduled tasks
crontab -e

# Add the following two lines to the file
0 9 * * * nmcli radio wifi on
0 17 * * * nmcli radio wifi off
```

The above command means to execute the corresponding command at 9 am and 5 pm every day, where `0 9 * * *` means to be executed at 9 am every day, and `0 17 * * *` means to be executed at 5 pm every day. The command executed is to use the `nmcli` command to turn WiFi on and off.

In actual applications, it is also necessary to write corresponding scheduled task scripts according to specific needs and use scheduled task tools to execute them regularly.

  • Introduction to the principle of WiFi stealth?

Wi-Fi stealth, also known as SSID hiding, is a method of hiding a wireless network by disabling the broadcast of the network name (also known as SSID). This means that others cannot see the network when scanning for available wireless networks, but can still connect to the network if the network name is known.

The principle of hiding the SSID is simple: Under normal circumstances, a Wi-Fi access point periodically sends beacon frames to notify nearby devices of the access point's existence. These beacon frames include the network's SSID (network name) and other information about the access point. If the SSID is hidden, the SSID field in the beacon frame will be set to empty, and the access point will not broadcast the SSID to devices.

To connect to a hidden Wi-Fi network, the user must enter the network name (SSID) manually along with the usual authentication information, such as a pre-shared key (PSK) or 802.1X credentials. On the client side it is often also necessary to mark the network as hidden, for example by enabling an option such as "connect even if the network is not broadcasting" or by setting `scan_ssid=1` in a wpa_supplicant network block, so that the device sends directed probe requests for the network instead of waiting for beacons.

It's important to note that hiding the SSID does not provide true security. While hiding the SSID may make the network invisible to attackers, this technique does not truly protect the network from malicious attacks. Network administrators still need to take other security measures, such as strong passwords, encryption, and access controls, to ensure the security of the network.
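
As an example, with wpa_supplicant a hidden network is joined by telling the client to probe for the SSID explicitly; a minimal sketch (placeholder SSID and passphrase) might look like this:

```bash
# Append a network block for a hidden WPA2-PSK network (illustrative values only)
cat >> /etc/wpa_supplicant.conf <<'EOF'
network={
    # scan_ssid=1 makes the client send directed probe requests,
    # which is required because the AP does not broadcast its SSID
    ssid="MyHiddenNetwork"
    scan_ssid=1
    psk="my-passphrase"
    key_mgmt=WPA-PSK
}
EOF
```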

  • The laptop and mobile phone are connected to the same WiFi at the same time. The laptop can access the router web through IPV6, but the mobile phone cannot. What may be the reason?

This issue could have multiple causes, here are some possible explanations:

1. Mobile phone network settings problem: It may be that IPv6 is not enabled in the phone’s network settings, or IPv6 is disabled. You can check your phone's network settings to make sure IPv6 is enabled.

2. Router configuration problem: It may be a problem with the router configuration. You need to check the IPv6 configuration of the router. If IPv6 is not configured correctly on your router, it may prevent some devices from accessing the network via IPv6.

3. Mobile phone hardware problems: The mobile phone hardware may not support IPv6, resulting in the inability to access the network through IPv6. This situation is relatively rare, and generally speaking, most modern mobile phones support IPv6.

4. Network problems between the router and the mobile phone: There may be a problem with the network connection between the router and the mobile phone, resulting in the inability to access the network through IPv6. You can try restarting the router and phone, or try using other network connections.

To sum up, this problem may be caused by a variety of reasons and needs to be investigated according to the specific situation. If you cannot solve the problem, you can try to contact the network service provider or manufacturer's technical support department for help.

  • What is the principle of iptables matching physdev?

In Linux systems, the `iptables` command is a powerful tool for managing firewall rules. To filter specific packets, rules can match attributes of the packet such as source IP, destination IP, protocol type and port. In addition, `iptables` provides the `physdev` match, which matches on the bridge port a packet entered or left through, i.e. which physical interface enslaved to a Linux bridge the packet actually came in on or went out of.

The `physdev` match is typically used when the Linux box acts as a bridge, for example on a router whose LAN ports and wireless interface are bridged into `br0`. Because the normal `-i`/`-o` options only see the bridge device itself (`br0`), `physdev` is what allows filtering based on the individual bridge member interface.

Here is an example of a rule that uses the `physdev` parameter to match packets, which would deny incoming ICMP packets from an interface named `eth1`:

iptables -A FORWARD -p icmp -m physdev --physdev-in eth1 -j DROP

This rule is added to the `FORWARD` chain, i.e. it filters forwarded packets. `-p icmp` restricts the rule to ICMP packets, `--physdev-in eth1` matches only packets that entered through the `eth1` bridge port, and `-j DROP` discards the packets that match.

Simply put, using the `physdev` parameter can help us filter packets for specific network cards, thereby controlling network traffic more granularly.

In iptables, use the `physdev` matching module to match specific physical network devices, such as network cards. It is used in rules as a matching condition to filter traffic based on physical devices.

To match `physdev`, you can use the following command:

iptables -A INPUT -m physdev --physdev-in eth0 -j ACCEPT

This command indicates that data packets input through the eth0 network card are allowed.

Among them, the `--physdev-in` option specifies the input interface that matches the physical device, and the `-j` option specifies the action. Here it is `ACCEPT`, which means accepting the packet.

You can also use the `--physdev-out` option to specify an output interface that matches the physical device, for example:

iptables -A OUTPUT -m physdev --physdev-out eth0 -j DROP

This command means to block data packets output through the eth0 network card.

It should be noted that the `physdev` match only applies to packets that traverse a Linux bridge, and it relies on the bridge-netfilter functionality (the `br_netfilter` module) so that bridged traffic is passed through iptables at all. For purely routed (non-bridged) traffic, the ordinary `-i`/`-o` interface options should be used instead.

  • What does the Linux function iw_get_ext() do?

`iw_get_ext()` is not a standard Linux system call; it is a helper function provided by the `wireless-tools` package (libiw), used to issue Wireless Extensions requests to a wireless network interface.

Specifically, `iw_get_ext()` retrieves the requested piece of extended information for the given wireless interface. Its prototype is essentially:

int iw_get_ext(int sockfd, const char *ifname, int request, struct iwreq *wrq);

The parameters of this function are explained below:

- `sockfd`: Socket file descriptor.

- `ifname`: The name of the wireless network interface.

- `request`: The type of extended information requested.

- `wrq`: Pointer to the structure `iwreq` used to store extended information.

A return value of 0 indicates success, while a negative return value indicates an error. For the exact implementation and usage, see the `wireless-tools` documentation or source code; the function is essentially a thin wrapper around the `ioctl()` system call.

Note that `wireless-tools` has largely been superseded by nl80211-based tools and libraries (such as `iw` and libnl), so `iw_get_ext()` may not be available or relevant on newer Linux systems.

  • What is this SIOCGIWRTLSTAINFO?

`SIOCGIWRTLSTAINFO` is not a standard Linux ioctl; it is a private (vendor-specific) ioctl request code defined by certain Realtek wireless drivers, used to obtain information about the stations currently associated with a wireless interface.

Specifically, `SIOCGIWRTLSTAINFO` is passed as the request argument of the `ioctl()` system call to indicate which piece of driver-specific information should be returned. In the context of a wireless interface running such a driver, it is used to retrieve the real-time status of the client devices connected to that interface.

To use it, you populate an `iwreq` structure (the exact fields, buffers and flags depend on the driver's private-ioctl definition) and call `ioctl()` with the socket file descriptor, the `SIOCGIWRTLSTAINFO` request code and a pointer to the structure; the driver then fills in the station information.

It should be noted that `SIOCGIWRTLSTAINFO` is an extended information type specific to some wireless network cards, so not all wireless network cards support this function. In addition, in new Linux systems, the `wireless-tools` toolkit and `SIOCGIWRTLSTAINFO` macro definition may no longer be used, but more modern tools and APIs may be used to obtain the connection status information of the wireless network interface.

  • What is this SIOCGIWNAME for?

`SIOCGIWNAME` is an ioctl request code defined by the Linux Wireless Extensions (WEXT) API.

Specifically, `SIOCGIWNAME` is passed as the request argument of the `ioctl()` system call. For a wireless interface it returns the name of the wireless protocol supported by the driver, a string such as "IEEE 802.11" or "IEEE 802.11abgn". Because the call fails on non-wireless interfaces, it is also the classic way for tools to test whether an interface has wireless extensions at all.

To use `SIOCGIWNAME`, fill in the interface name in an `iwreq` structure and call `ioctl()` with the socket file descriptor, `SIOCGIWNAME` and a pointer to the structure; on success the protocol name is returned in the structure's `u.name` field.

Note that `SIOCGIWNAME` belongs to the legacy Wireless Extensions interface; on kernels with the WEXT compatibility layer enabled, nl80211-based drivers still answer it, but newer systems increasingly replace the `wireless-tools` package and these ioctls with nl80211-based tools and APIs such as `iw`.
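
As a practical note, the wireless-tools utilities are built on these ioctls; `iwconfig`, for instance, uses SIOCGIWNAME to decide whether an interface is wireless at all, and the protocol string it prints comes from that call:

```bash
# The protocol string after the interface name (e.g. "IEEE 802.11") is the value
# returned by SIOCGIWNAME; wired interfaces report "no wireless extensions"
# because the ioctl fails for them.
iwconfig wlan0
iwconfig eth0
```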

  • What does this code mean? fp = popen("hostapd_cli -i wlan0 all_sta | grep dot11RSNAStatsSTAAddress", "r");

This line of C code uses the `popen()` function to run a shell pipeline: it calls the command-line tool `hostapd_cli` to list all stations (connected client devices) on the wireless interface `wlan0`, pipes that output through `grep` to keep only the lines containing the `dot11RSNAStatsSTAAddress` field, and makes the result readable through the file stream `fp`.

Specifically, what this line of code does is:

1. Use the `popen()` function to open a child process in read-only mode and associate its standard output stream with the file stream `fp`.

2. Execute the `hostapd_cli -i wlan0 all_sta` command in the child process. This command will return detailed status information of all connected client devices on the specified wireless network interface.

3. Pass the output of this command to the `grep dot11RSNAStatsSTAAddress` command through a pipe, which will filter out the value of the `dot11RSNAStatsSTAAddress` field and output the result to the standard output stream.

4. `popen()` returns a `FILE *` (here `fp`) connected to the pipeline's standard output, so the program can subsequently read the filtered lines from `fp` with functions such as `fgets()`.

It should be noted that the execution of this line of code depends on the `hostapd_cli` command line tool that has been installed and correctly configured in the Linux system. In addition, the command line parameter `all_sta` in this line of code is a parameter of the `hostapd_cli` command and is used to indicate that status information of all connected client devices is to be obtained.

  • Explain in detail what is the difference between router relay mode and bridge mode?

Both relay mode and bridge mode are modes for connecting multiple network devices, but their implementation and functionality differ.

1. Relay mode

Relay mode, also known as wireless relay, forwards the signal from one router to another router via wireless signal. In repeater mode, the second router receives the first router's wireless signal and rebroadcasts it. Repeater mode is typically used to extend Wi-Fi signal coverage.

The advantage of repeater mode is that it can extend the Wi-Fi signal coverage without the need for a wired connection. But relay mode also has some disadvantages, including:

- Reduced speed: Repeater mode reduces speed because the signal has to be transmitted twice.

- Reduced security: Relay mode also reduces security because data must travel through two routers.

2. Bridge mode

Bridge mode, also known as transparent bridge mode, connects two or more network devices so that they appear to be one large LAN. In bridge mode, communication between devices can occur as if they were on the same network, without routing.

The advantage of bridge mode is that it can extend the LAN range without using a router, and the communication speed between devices will also increase. But bridge mode also has some disadvantages, including:

- Complex configuration: Bridge mode requires manual configuration, and it is necessary to ensure that the IP address of each device is not repeated.

- Security issues: Since all devices are on the same LAN, network security may be reduced.

In short, relay mode and bridge mode are both methods of connecting multiple network devices, but their implementation methods and application scenarios are different. The relay mode is suitable for scenarios where the Wi-Fi signal range needs to be extended, while the bridge mode is suitable for scenarios where the LAN range needs to be extended or the communication speed between devices needs to be increased.

  • An explanation of the principle of router qos speed limit

QoS (Quality of Service) is a network technology that can allocate and limit network bandwidth according to the priority of network traffic to ensure that high-priority network traffic receives sufficient bandwidth resources.

The principle of implementing QoS rate limiting on the router is as follows:

1. Traffic Classification: The router first classifies network traffic and divides it into different types, such as VoIP, video, data, etc.

2. Traffic Policing: The router sets different bandwidth limits based on classified traffic types to limit the bandwidth usage of each traffic type. For example, limit the bandwidth of video traffic to 1Mbps.

3. Traffic Shaping: to absorb bursts, the router can shape traffic, i.e. buffer and pace packets so that a flow conforms to its configured rate. For example, video traffic limited to 1 Mbps is smoothed so that it uses that bandwidth evenly over time instead of arriving in spikes.

4. Queueing and Scheduling: When multiple types of traffic enter the router at the same time, the router needs to queue and schedule the traffic to ensure that high-priority traffic can be processed as soon as possible. For example, put VoIP traffic at the front of the queue for priority processing.

Through the above steps, the router can implement QoS speed limiting to ensure network stability and optimize network usage experience.

  • An explanation of the implementation principle of qos speed limit in Linux kernel

In the Linux kernel, qos speed limiting is mainly implemented through Traffic Control (TC).

Traffic Control is an important traffic-management subsystem in the Linux kernel that can control and limit network data traffic. It is built on the kernel's qdisc/class hierarchy and filter framework: queueing disciplines are attached to network devices, classes and filters are defined beneath them, and different traffic flows are steered into different queues so that each flow can be controlled and limited separately.

When implementing qos rate limiting, you first need to create a TC class and define different queues and filtering rules in it. For example, the HTB (Hierarchical Token Bucket) scheduling algorithm can be used to implement bandwidth limitation and limit the data flow in each queue to a certain bandwidth range. At the same time, you can use tools such as iptables to define filtering rules, classify different data flows, and then put them into different queues for management.

Concretely, the HTB scheduler maintains a token bucket for each class/queue to meter and limit its sending rate. A token bucket has two key parameters: the token rate, which is the rate at which transmission credit (tokens, corresponding to bytes) is added, and the bucket size, which is the maximum amount of credit that can accumulate. Sending a packet consumes tokens from the bucket; if there are not enough tokens left, packets must wait until the bucket has refilled.

In addition, you can also use the tc qdisc command to create different queues and rate limiting rules, for example:

# Attach an HTB root qdisc (handle 1:) to eth0; unclassified traffic goes to class 1:1
tc qdisc add dev eth0 handle 1:0 root htb default 1

# Create class 1:1 under the root qdisc with a bandwidth limit of 1 Mbit/s
tc class add dev eth0 parent 1:0 classid 1:1 htb rate 1mbit

# Steer packets whose source address matches 192.168.1.1/24 into class 1:1
tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip src 192.168.1.1/24 flowid 1:1

With the commands above, an HTB root qdisc with handle 1: is attached to eth0, a class 1:1 limited to 1 Mbit/s is created beneath it, and a u32 filter directs packets whose source address matches 192.168.1.1/24 into that class, so such traffic is capped at 1 Mbit/s.

In short, QoS rate limiting works by classifying and marking packets and then applying different priorities and rate-limiting policies to each class. In the Linux kernel this is implemented by Traffic Control (TC), which classifies, marks, rate-limits and schedules packets at the queueing layer of the network devices. TC uses a class-based model: traffic is divided into classes, and rate limiting is enforced through filter rules and queue-scheduling algorithms. Concretely, the kernel implements QoS rate limiting by configuring qdiscs and classes on a network device: the qdisc handles queue scheduling, the classes handle traffic classification and marking, and different parameters and algorithms (HTB, TBF, PRIO and so on) implement the various rate-limiting policies.

  • The main steps for the Linux kernel to implement qos speed limiting are as follows:

1. Configure qdisc

First, you need to configure a qdisc (queue discipline) on the network device that needs to be rate limited. qdisc is responsible for queuing in the sending queue of the network device, assigning a priority to each packet, and sending it according to certain rules. Commonly used qdiscs include pfifo_fast and htb.

2. Create class

Create a class to represent a specific traffic type. For example, you can create a class to represent VoIP traffic and another class to represent video traffic. Each class can have a different qdisc to control its sending behavior.

3. Allocate bandwidth

Allocate bandwidth to each class to determine the maximum bandwidth that each class can send. In the Linux kernel, bandwidth allocation can be performed through the tc command (traffic control). For example, you can use the tc command to allocate 20% of the bandwidth to the VoIP class, allocate 40% of the bandwidth to the video class, and the remaining 40% of the bandwidth can be used by the default class.

4. Mark the packet

In the Linux kernel, you can use iptables or tc commands to mark data packets so that subsequent qdisc can process them based on the marks. For example, you can use the iptables command to mark VoIP packets and then send these packets to the VoIP class to ensure that VoIP traffic receives sufficient bandwidth.

5. Implement speed limit

Finally, specific rate limiting strategies need to be implemented on each class. In the Linux kernel, you can use the tc command to configure qdisc to implement different rate limiting strategies. Commonly used rate limiting strategies include htb and tbf. For example, you can use htb qdisc to set a maximum bandwidth for each class, thereby limiting the sending rate of each class.

To sum up, the process of implementing qos speed limiting in the Linux kernel involves many aspects such as qdisc, class, bandwidth allocation, marked packets, and speed limiting strategies.
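
As a minimal end-to-end illustration (interface name and rate are placeholders), a single TBF qdisc is often enough when you only want to cap the total egress rate of one interface, without the full class hierarchy described above:

```bash
# Cap all outgoing traffic on eth0 to 2 Mbit/s using a token bucket filter
tc qdisc add dev eth0 root tbf rate 2mbit burst 32kbit latency 400ms

# Inspect the qdisc and its counters
tc -s qdisc show dev eth0

# Remove the limit again
tc qdisc del dev eth0 root
```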

  • What exactly is connection tracking in Linux used for?

Linux's connection tracking is a kernel mechanism that can track every packet in a network connection and record their status, protocol and other information. Its main role is to provide support for network security and network management.

Specifically, Linux connection tracking can achieve the following functions:

1. Firewall rules: Connection tracking can track the status of network connections, and firewall rules can filter network traffic based on these statuses to protect network security.

2. Load balancing: Connection tracking can track the status of network connections, and can use these statuses to identify and load balance traffic to improve network performance.

3. Network diagnosis: Connection tracking can provide detailed network connection information, including source address, destination address, port number, protocol, etc., which can be used for network diagnosis and debugging.

In summary, connection tracking is an important kernel mechanism that supports network security, network management, and network performance.
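
On most systems the live tracking table can be inspected directly, which makes the mechanism quite tangible (a sketch; the `conntrack` utility comes from the conntrack-tools package, and the /proc file only exists while the nf_conntrack module is loaded):

```bash
# List the current connection-tracking table (one line per tracked flow)
conntrack -L

# Follow tracking events (new / update / destroy) in real time
conntrack -E

# Without conntrack-tools, the same table is visible through procfs
cat /proc/net/nf_conntrack
```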

  • Linux connection tracking implementation principle?

Linux connection tracking (Connection Tracking) refers to a technology that implements tracking and management of network connection status in the Linux kernel. The connection tracking module is called nf_conntrack, which is mainly used to implement network address translation (NAT), firewall, load balancing and other functions.

nf_conntrack keeps the state of the connections whose packets pass through the machine in an in-memory table, recording information such as when the connection started, the source and destination IP addresses and ports, the protocol and the connection state (new, established, closing and so on). When a packet arrives, nf_conntrack looks it up in this table to determine whether it belongs to an existing connection, and that state is then used to decide how the packet is handled, for example whether the firewall allows it or how NAT should rewrite it.

The nf_conntrack module works as follows:

1. nf_conntrack registers hook functions at the Netfilter hook points in the network stack; whenever a packet traverses the stack, these hooks invoke nf_conntrack.

2. When nf_conntrack is called, it will first check whether the packet is of IP protocol, and if not, ignore the packet.

3. If the packet is an IP packet, its connection state is looked up: for a new connection a new entry is added to the connection-tracking table, and for an existing connection the corresponding entry's state is updated.

4. When the data packet needs to be processed by NAT or firewall, nf_conntrack will query the relevant connection information from the connection status table and perform corresponding processing.

5. When a connection times out or is torn down, nf_conntrack removes the corresponding entry from the connection-tracking table and frees the associated memory.

Connection tracking is a very important part of the Linux kernel. It provides key technical support for network applications such as NAT, firewalls, and VPNs.

  • What is the essence of skb_buff in the Linux kernel protocol stack?

In the Linux kernel protocol stack, the structure in question is `struct sk_buff` (socket buffer, usually abbreviated skb). It is the data structure the kernel uses to describe a network packet: it holds metadata such as the protocol, source and destination addresses and data length, together with pointers into the buffer that contains the actual packet data, and sk_buffs can be linked together into queues and lists.

sk_buff is used throughout the kernel protocol stack. When a packet is received it is wrapped in an sk_buff and handed from layer to layer of the kernel's protocol stacks; during processing the fields of the sk_buff are updated until the packet is finally delivered or transmitted.

Besides describing packets, sk_buff carries control information as well, such as the routing decision for the packet and the network device it is associated with. The sk_buff infrastructure also provides buffer management for packets, supporting operations such as fragmentation, merging, cloning and copying of network data.


Origin blog.csdn.net/buhuidage/article/details/129097555