Summary of basic computer knowledge: threads, linux, Docker, network protocols

[1] Summary of Linux commands commonly used in deep learning

1.man: man command displays the help manual for a given command; press q to exit the manual.
2.cd: used to switch directories; cd - switches back and forth between the last two directories you visited.
3.touch: touch file creates a file.
4.ls: ls -lh can list detailed information of files in the current directory.
5.pwd: The pwd command displays the user's current working directory in an absolute path.
6.cat: cat file displays the file content.
7.mkdir: mkdir dir can create a directory; mkdir -p dir/xxx/xxx can create a directory recursively.
8.less: less file displays the file content page by page; press q to exit.
9.more: more file displays the file content, press q to exit.
10.grep: filtering command. For example, to find the .py files in the current directory: ls -lh | grep .py (strictly, grep "\.py", since . matches any character in a regular expression)
11.whereis: locates the binary, source, and manual-page files of a command, e.g., whereis python3
Redirection > and >>: Linux allows a command's output to be redirected to a file, so content that would be displayed on the terminal is written/appended to the specified file instead. > overwrites the original file; >> appends the content to the end of the existing file. (See the Python sketch after this list.)
12.cp: cp dst1 dst2 copies files; cp -r dst1 dst2 copies folders.
13.mv: mv dst1 dst2 can move files and directories, and can also rename files or directories.
14.zip: zip file.zip file compresses a file; zip -r dir.zip dir compresses a folder.
15.unzip: unzip file.zip decompresses the .zip file compressed by the zip command.
16.tar:
tar -cvf file.tar dir packs a folder into an archive
tar -xvf file.tar unpacks an archive
tar -czvf file.tar.gz dir compresses a folder into a .tar.gz archive
tar -zxvf file.tar.gz decompresses a .tar.gz archive
17.chmod: chmod -R 777 data makes the entire data folder readable, writable, and executable by everyone.
18.ps: ps aux lists detailed information about all processes.
19.kill: kill PID kills the process with the given PID.
20.df: df -h checks disk space usage.
21.du: du -h dir checks the size of a folder.
22.top: monitors the running status of the system in real time, such as CPU, memory, and process information.
23.wget: wget url downloads a file from the specified URL.
24.ln: ln -s dst1 dst2 creates a soft (symbolic) link to a file, similar to a Windows shortcut; ln dst1 dst2 creates a hard link. Either way, it is best to use an absolute path for dst1.
25.apt-get: used to install, upgrade, and clean up software packages.
26.vim: edits file contents.
27.nvidia-smi: checks GPU usage.
28.nohup sh test.sh &: runs the program in the background so it does not hang up when the terminal closes.
29.find: a powerful file-search command. find . -name "*.c" finds all files with a .c extension in the current directory and its subdirectories.
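
Several of the commands above can also be driven from Python via the standard subprocess module. A minimal sketch (it assumes nvidia-smi is on the PATH, and the output file name is illustrative); note how redirection with > and >> corresponds to opening a file in "w" or "a" mode:

import subprocess

# Capture a command's output, as if running it in the terminal.
result = subprocess.run(["nvidia-smi"], capture_output=True, text=True)
print(result.stdout)

# Redirection is just pointing the command's stdout at a file:
with open("listing.txt", "w") as f:      # "w" overwrites, like >
    subprocess.run(["ls", "-lh"], stdout=f)
with open("listing.txt", "a") as f:      # "a" appends, like >>
    subprocess.run(["ls", "-lh"], stdout=f)
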
[2] What is the difference between multi-threading and multi-processing?

Basic concepts of processes and threads:

Process: the basic unit of resource allocation and management during the execution of concurrent programs. It is a dynamic concept and the basic unit that competes for computer system resources.

Thread: an execution unit within a process and the entity that gets scheduled inside it; a basic unit that is smaller than a process and runs independently. Threads are also called lightweight processes.

A program has at least one process, and a process has at least one thread.

The meaning of thread:

Each process has its own address space, the process space. In a network environment, a server usually needs to handle concurrent requests from an unknown number of users, and creating a process per request clearly does not work (high system overhead and slow response to user requests), so operating systems introduced the concept of threads. The execution of a thread is linear; although it may be interrupted or paused along the way, the resources owned by the process serve only that linear execution, and once a thread switch occurs these resources must be saved and protected. Processes can be single-threaded or multi-threaded: a single-threaded process is a linear execution path both macroscopically and microscopically, while a multi-threaded process appears linear macroscopically but performs multiple operations microscopically.

The difference between processes and threads:

Address space: Threads in the same process share the address space of this process, while processes have independent address spaces.

Resource ownership: threads within the same process share that process's resources such as memory, I/O, and CPU, while resources are independent between processes. (In protected mode, a crashed process does not affect other processes, but the crash of one thread may kill the entire process, so multi-processing is more robust than multi-threading. Process switching consumes more resources and is less efficient, so when frequent switching is involved, threads are preferable to processes. Likewise, if concurrent operations must share certain variables, only threads can be used, not processes.)

Execution: each thread has its own entry point, sequential execution order, and exit point. However, a thread cannot execute on its own; it must live inside an application, and the application controls the execution of its threads. (Threads depend on their process.)

Threads are the basic unit of processor scheduling, but processes are not.

Both can be executed concurrently.

Advantages and Disadvantages of Processes and Threads:

Thread execution overhead is small, but it is not conducive to resource management and protection.

The process execution overhead is high, but it can manage and protect resources well.

When to use multi-processing and when to use multi-threading:

When there are high requirements for resource management and protection, and overhead and efficiency are not the constraint, use multi-processing. (CPU-intensive tasks)

When high efficiency and frequent switching are required, and the requirements for resource protection and management are not high, use multi-threading. (I/O-intensive tasks)
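
As a minimal Python sketch of this rule of thumb (the task functions and sizes are illustrative), the CPU-bound job goes to a process pool and the I/O-bound job to a thread pool:

import time
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

def cpu_task(n):
    # CPU-bound: pure computation; separate processes can use multiple cores.
    return sum(i * i for i in range(n))

def io_task(_):
    # I/O-bound: mostly waiting; cheap threads let the waiting overlap.
    time.sleep(0.5)

if __name__ == "__main__":
    t0 = time.time()
    with ProcessPoolExecutor(max_workers=4) as pool:  # CPU-bound -> processes
        list(pool.map(cpu_task, [2_000_000] * 4))
    print(f"processes, CPU-bound: {time.time() - t0:.2f}s")

    t0 = time.time()
    with ThreadPoolExecutor(max_workers=4) as pool:   # I/O-bound -> threads
        list(pool.map(io_task, range(4)))
    print(f"threads, I/O-bound:   {time.time() - t0:.2f}s")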

[3] Related concepts of TCP/IP four-layer model

TCP/IP four-layer model:

  1. Application layer: responsible for the protocols between applications, such as the File Transfer Protocol (FTP), the remote login protocol (Telnet), the email protocol (SMTP), the Network File System protocol (NFS), and the Simple Network Management Protocol (SNMP).

  2. Transport layer: the TCP protocol, responsible for reliable transmission, and the UDP protocol, responsible for efficient transmission.

  3. Network layer: IP, ICMP, ARP, RARP, and other protocols responsible for addressing (accurately locating the other party's device).

  4. Data link layer: responsible for accurately transmitting digital signals over the physical channel (e.g., the network cable).

How the four-layer model works:

The sending end works from top to bottom: each layer adds its own protocol header to the data handed down from the layer above (encapsulation) and passes the result to the layer below.

The receiving end works from bottom to top: each layer parses the data received from the layer below, strips off its protocol header (decapsulation), and hands the payload up to the layer above.

After this layer-by-layer encapsulation and decapsulation, the application layer finally receives the data it needs.
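
The encapsulation idea can be sketched in a few lines of Python (the header strings are invented placeholders, not real protocol headers):

payload = b"GET /index.html"             # application-layer data
tcp_segment = b"TCP_HDR|" + payload      # transport layer prepends its header
ip_packet = b"IP_HDR|" + tcp_segment     # network layer prepends its header
frame = b"ETH_HDR|" + ip_packet          # data link layer prepends its header

# The receiver strips one header per layer, in reverse order:
recovered = frame
for _ in range(3):
    recovered = recovered.split(b"|", 1)[1]
assert recovered == payload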

[4] Related concepts of the OSI seven-layer model

(Figure: the OSI seven-layer model.)

From bottom to top, the seven layers are:

  1. Physical layer: transmits raw bit streams over the physical medium.
  2. Data link layer: frames data and handles transmission between adjacent nodes.
  3. Network layer: addressing and routing between hosts (e.g., IP).
  4. Transport layer: end-to-end data transmission (e.g., TCP, UDP).
  5. Session layer: establishes, manages, and terminates sessions.
  6. Presentation layer: data representation, encoding, encryption, and compression.
  7. Application layer: protocols for specific applications (e.g., HTTP, FTP, SMTP).

[5] Types of process status in Linux

  1. Running (running or waiting in the run queue)
  2. Interruptible sleep (blocked, waiting for a condition to be met or for a signal to arrive)
  3. Uninterruptible sleep (does not wake up when receiving a signal; the process must wait until the event it is blocked on completes)
  4. Zombie (the process has terminated, but its process descriptor remains until the parent process reaps it with the wait4() system call — a small demo follows this list)
  5. Stopped (the process stops running after receiving the SIGSTOP, SIGTSTP, SIGTTIN, or SIGTTOU signal)
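
A minimal Python sketch that briefly creates a zombie (Linux only, since it relies on os.fork):

import os
import time

pid = os.fork()          # create a child process
if pid == 0:
    os._exit(0)          # child terminates immediately
else:
    # The parent has not called waitpid() yet, so for these 5 seconds
    # the child shows up in `ps aux` with state Z (zombie/defunct).
    time.sleep(5)
    os.waitpid(pid, 0)   # reaping the child releases its descriptor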

[6] The ps aux command and grep command cooperate to manage the process in Linux

ps related commands

The ps command (Process Status) is the most basic and very powerful process viewing command.

  • ps a displays all processes under the current terminal, including those of other users.
  • ps -A displays all processes.
  • ps c lists processes showing each one's actual command name, without the path, arguments, or markers for resident services.
  • ps -e has the same effect as the -A parameter.
  • ps e lists processes together with the environment variables each one uses.
  • ps f displays a tree structure in ASCII characters, showing the relationships between processes.
  • ps -H displays a tree structure, indicating the relationships between processes.
  • ps -N displays all processes except those under the terminal that executed the ps command.
  • ps s displays process status in signal format.
  • ps S includes information from exited child processes when listing.
  • ps -t <terminal number> lists the status of processes belonging to the specified terminal.
  • ps u displays process status in a user-oriented format.
  • ps x displays all processes, regardless of terminal.

ps aux | more command

This command displays detailed status of the process.

Parameter explanation:

  • USER: the owner of the process.
  • PID: the ID of the process.
  • PPID: the ID of the parent process.
  • %CPU: the percentage of CPU the process occupies.
  • %MEM: the percentage of memory the process occupies.
  • NI: the nice value of the process; the larger the value, the lower the process's scheduling priority.
  • VSZ: the amount of virtual memory used by the process (KB).
  • RSS: the amount of resident (physical) memory used by the process (KB).
  • TTY: the terminal the process is running on; ? is shown if the process is not attached to a terminal, while pts/0 and the like indicate a pseudo-terminal (e.g., a network login).
  • WCHAN: the kernel event the process is sleeping on; - means the process is running.
  • START: the time the process was started.
  • TIME: the CPU time the process has actually used.
  • COMMAND: the command name and its arguments.
  • Common status characters in the STAT column:
    D uninterruptible sleep (usually I/O);
    R running, or runnable on the run queue;
    S interruptible sleep;
    T stopped or being traced;
    W paging (invalid since kernel 2.6);
    X dead (should basically never be seen);
    Z zombie (defunct) process;
    < high-priority process;
    N low-priority process;
    L has pages locked into memory;
    s session leader (has child processes under it);
    l multi-threaded (using CLONE_THREAD, like NPTL pthreads);
    + in the foreground process group.

ps aux | grep xxx command

If you use the ps command directly, the status of all processes will be displayed. It is usually combined with the grep command to view the status of a certain process.

grep (global regular expression print) is a powerful text-search tool that uses regular expressions to search text and prints the matching lines.

For example, if I want to view all Python processes, I can enter the following command in the terminal:

ps aux | grep python

All Python-related processes are then printed to the terminal for us to inspect; the columns are the same as for ps aux | more above. (Note that the grep python command itself also shows up in the list; append | grep -v grep to filter it out.)

Ending a process

We can use the kill command to end the process.

As shown in the following instructions:

kill PID     // kill the process
kill -9 PID  // force-kill the process
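
Processes can also be signaled from Python (a sketch; sleep 60 is spawned just so there is a disposable child process to kill):

import os
import signal
import subprocess

proc = subprocess.Popen(["sleep", "60"])   # a throwaway process to kill
print("PID:", proc.pid)

os.kill(proc.pid, signal.SIGTERM)    # equivalent of: kill PID
# os.kill(proc.pid, signal.SIGKILL)  # equivalent of: kill -9 PID
proc.wait()                          # reap the child so it does not linger as a zombie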

[7] Related knowledge of Git, GitLab, and SVN

Git

Git is currently a mainstream open source distributed version control system, which can effectively and quickly manage project versions.

Git does not have a central server, unlike SVN, a centralized version control system that requires a central server.

Functions of Git: version control (version management, remote repositories, branch collaboration)

Git workflow:

(Figure: the Git workflow.)

Common Git commands:

git init  // create a repository

git clone  // clone a project from GitHub to the local machine

git add  // add files to the staging area

git commit  // commit the staging area contents to the repository

GitLab

GitLab is an online code repository platform based on Git; you can use GitLab to build a repository service similar to GitHub. GitLab additionally provides a complete management interface and permission control, with higher security, and can be used in scenarios such as enterprises and schools.

SVN

SVN’s full name is Subversion, which is an open source version control system. Different from Git, SVN is a centralized version control system.

SVN has only one centrally managed server that saves revisions of all files, and people who work together connect to this server through clients to take out the latest files or submit updates.

The characteristics of SVN are security, efficiency, and resource sharing.

Common operations of SVN:

Checkout  // check out code

Update  // update code

Commit  // commit code

Add  // add newly created files

Revert to this version + commit  // undo code that has already been committed

[8] Related concepts of coroutines

A coroutine (also known as a micro-thread) runs on top of a thread and is more lightweight. Coroutines do not increase the total number of threads; instead, multiple coroutines run on one thread by time-division multiplexing, which greatly improves efficiency.

Features of coroutines:

  1. A coroutine is similar to a subroutine, but during execution a coroutine can be suspended internally, switch to executing other coroutines, and then resume where it left off at an appropriate time. Switching between coroutines does not involve any system calls or blocking calls.
  2. Coroutines execute within a single thread; they are a user-mode construct. Switching between coroutines is not a thread switch but is controlled by the program itself, so compared with threads, coroutines save the cost of thread creation and switching.
  3. Coroutines need no multi-threaded locking mechanism: since there is only one thread, there are no conflicting simultaneous writes to variables. Shared resources can be controlled without locks, only by checking state, so execution is much more efficient than multi-threading.

Coroutines are suitable for scenarios with a large number of I/O operations, and can achieve very good results. First, they reduce system memory, and second, they reduce system switching overhead, so system performance will also be improved.

Try not to call blocking I/O methods inside coroutines (such as printing or reading files) unless you switch to asynchronous calls, and note that coroutines only pay off for I/O-intensive tasks.
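
A minimal asyncio sketch in Python (task names and delays are illustrative): both coroutines run on a single thread, and while one awaits (simulated) I/O the other makes progress, so the total time is about 2 seconds rather than 3.

import asyncio

async def fetch(name, delay):
    # await yields control to the event loop instead of blocking the thread,
    # so other coroutines can run while this one waits.
    await asyncio.sleep(delay)  # stands in for a real asynchronous I/O call
    print(f"{name} finished after {delay}s")

async def main():
    await asyncio.gather(fetch("task-a", 1), fetch("task-b", 2))

asyncio.run(main())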

[9] Concepts related to Linux systems

The Linux system is an operating system (OS). It is the first layer of software on top of the hardware, i.e., the bridge between the hardware and application software.

The Linux system controls the execution of other programs, manages system resources, and provides the most basic computing functions, such as managing and configuring memory and deciding the priority of system resource supply and demand; it also provides some basic service programs. The Linux kernel refers to the core system program that provides the hardware abstraction layer, disk and file system control, and multitasking. A Linux distribution is a collection of the Linux kernel and various commonly used applications.

In a Linux system, everything is a file: directories, character devices, block devices, sockets, printers, and so on are all abstracted as files. All files start from the root (/) directory and are stored in a tree structure, with the uses of the common directories defined by convention. File and directory names are strictly case-sensitive.

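As a small illustration of "everything is a file", the Python sketch below reads kernel information and random bytes through ordinary file I/O (the paths are standard on most Linux systems):

# /proc is a virtual file system: reading it is ordinary file I/O.
with open("/proc/version") as f:
    print(f.read().strip())      # the kernel version string

# Devices are files too: /dev/urandom is the kernel's random-byte device.
with open("/dev/urandom", "rb") as f:
    print(f.read(8).hex())       # eight random bytes
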
File directory structure of Linux system

  • /usr: This is a very important directory that contains the vast majority of (multiple) user tools and applications. Many user applications and files are placed in this directory, similar to the program files directory under Windows.
  • /lib: Stores function libraries that will be used when the system is started, as well as function libraries that will be called by commands under /bin and /sbin. Almost all applications need to use these shared libraries.
  • /var: Stores ever-expanding content, such as directories and files (including various log files) that are frequently modified.
  • /boot: Stores some core files (linux kernel files) required to start Linux, including some boot program files, link files, image files, etc.
  • /home: The user's home directory. In Linux, each user has his own directory. The directory name is generally named after the user account and contains saved files, personal settings, etc.
  • /sbin: s means Super User, which stores system management commands used by system administrators.
  • /bin: stores the basic commands available to all users (cat, cp, ps, etc.).
  • /etc: Stores all configuration files and subdirectories required for system management (such as personnel account and password files, start files for various services, etc.).
  • /tmp: Stores some temporary files. The temporary files will be deleted when the system is restarted.
  • /snap: Ubuntu 16.04 and later versions introduced the snap package manager, and related directories and files (including installation files) are located in /snap.
  • /lost+found: this directory is usually empty; when the system shuts down abnormally, lost file fragments are placed here.
  • /media: the Linux system automatically recognizes devices such as USB drives and CD-ROM drives and mounts them under this directory.
  • /srv: stores data that needs to be accessed after services are started; common for services such as www and ftp.
  • /root: This directory is the home directory of the system administrator user.
  • /opt: This directory stores installed third-party software. For example, Oracle database can be installed in this directory.
  • /mnt: Directory for mounting other file systems (including hard disk partitions).
  • /lib64: Similar to the lib directory, storing 64-bit library files.
  • /proc: This directory itself is a virtual file system. The data it places is in the memory and does not occupy the capacity of the hard disk.
  • /sys: This directory is actually very similar to /proc. It is also a virtual file system that mainly records kernel-related information and does not occupy hard disk capacity.
  • /dev: Any device and interface device in Linux exists in the form of files in this directory. As long as you access a file in this directory, it is equivalent to accessing a device.

Linux system type

  • Red Hat Enterprise Linux: Red Hat is the most widely used Linux system in the world. It has extremely strong performance and stability and is a (paid) system used in many production environments.
  • Fedora: A desktop system suite released by Red Hat. Users can experience the latest technologies or tools for free. These technologies or tools will be added to the RedHat system when they mature, so Fedora has also become an experimental version of the RedHat system.
  • CentOS: a free rebuild of the RedHat system, released to users at no cost; it has a wide user base.
  • Deepin: a distribution developed in China that integrates and configures excellent open-source products.
  • Debian: It has strong stability and security, provides free basic support, and has high recognition and usage rate abroad.
  • Ubuntu: It is an operating system derived from Debian. It has strong compatibility with new hardware. Ubuntu and Fedora are both excellent Linux desktop systems, and Ubuntu can also be used in the server field.

[10] What is the difference between Linux systems and Windows systems?

  • Linux systems are more stable and efficient.
  • The Linux system is free (or a small fee), while the Windows system is commercially dominated.
  • Linux systems have fewer vulnerabilities and are quickly patched.
  • The Linux system supports multiple users using the computer at the same time.
  • Linux systems have more secure user and file permission policies.
  • Linux systems can access the source code and modify the code according to the user's needs, while Windows systems cannot access the source code.
  • Linux systems better support a variety of deep learning software, while Windows systems support a large number of video games.

[11] The concept of POC verification testing

POC (Proof of Concept) is a verification of product or supplier capability, usually carried out by an enterprise when selecting products or before launching an external implementation project. The main verification content:

  1. Product functionality. The function list is provided by the enterprise, either drawn up from its own needs or compiled after communicating with multiple suppliers.
  2. Product performance. Performance indicators are also provided by the enterprise; it is recommended to specify the test environment, hardware, and other requirements for each performance indicator.
  3. Suitability of the product's APIs.
  4. Standardization and completeness of the product's technical documentation.
  5. If custom feature development is involved, the openness of the APIs and the supplier's implementation capability also need to be verified.
  6. The supplier's qualifications, scale, implementation track record, etc.

In the final analysis, verification content is to prove that the products or suppliers selected by the enterprise can meet the needs of the enterprise, and the information provided is accurate and reliable.

Prerequisites for POC testing work:

  1. Preliminary research has been sufficient, and a relatively in-depth communication and understanding of the product or supplier has been achieved.
  2. Enterprises have relatively clear needs for their products.

POC testing work participants:

End-user representatives, business leaders, project leaders, technical architects, test engineers, business managers, and so on.

POC test work preparation documents:

  1. POC testing work description document. The content includes test content, test requirements (such as privatized deployment), test standards, time schedule, etc.
  2. Functional test cases. Mainly confirm functional reliability and accuracy. The content includes function name, function description, etc.
  3. Scenario test cases. These mainly test response speed, implementation capability, and integration capability, and are usually determined by the needs of the enterprise. They should not be made too complicated: the supplier has to implement them, and if the work drags on too long it strains the enterprise's patience and stretches the schedule.
  4. Technical evaluation plan. Mainly verify product performance, functional coverage, integration efficiency, and quality of technical documents.
  5. Business assessment plan. It mainly includes corporate strength, corporate technical talent capabilities, copyright verification, market background, product quotations, etc.

The main process of POC testing work:

Phase One: Work Starts

The business or external representative issues a formal invitation to the supplier and attaches a POC test work description.

Establish a POC collaboration group to meet the need for fast communication and response.

When it comes to privatized deployment, it is necessary to collect the supplier's deployment environment requirements and carry out deployment work with the supplier. At the same time, enterprise participants should keep records of the deployment work.

Phase Two: Product presentation and on-site centralized testing

The supplier conducts on-site product testing and demonstration based on the POC test work instructions provided by the enterprise and the use cases or solutions of the corresponding test modules.

Enterprise participants participate in functional testing and fill in records and opinions. At this stage, suppliers often need to provide on-site operational guidance.

Phase Three: Technical evaluation

The supplier provides the relevant supporting technical documents based on the technical requirements put forward by the enterprise; the enterprise compares them on site against the actual situation, makes statistical records, and keeps the materials and comparison records provided by the supplier.

When it comes to scenario demo design, it is recommended that enterprises compare the capabilities of implementation personnel, implementation time, and implementation accuracy.

Phase Four: Intermittent testing

This phase starts together with the first phase. Besides functional testing, it includes key users' evaluations of experience and usability; corporate users may give subjective evaluations here. It is recommended to organize intermittent testing across a wider scope and to keep records of the test users. The intermittent period can last one day or several days, depending on the actual situation.

Phase Five: Business verification

Suppliers actively cooperate with the work based on the business evaluation plan provided by the company. If it involves customer verification, the company also needs to conduct verification. This part of the work can also start from the first stage.

Phase Six: Filing, analysis, and summary

At each stage of the work, record the participants, time, and working hours, and classify and archive the documents produced by both the enterprise and the supplier during testing. Analyze, compare, and summarize each stage, then conduct an overall analysis and summary of the whole effort.

POC work depends on different enterprises and levels, and the testing methods and investment intensity are different. But the purpose is the same - to verify whether the product or supplier capabilities meet the needs of the enterprise.

[12] Docker related concepts and common commands

Introduction to Docker

Docker is an open-source application container engine, written in Go and open-sourced under the Apache 2.0 license.

Docker can package code and its dependencies into a lightweight, portable container and then publish it to any popular Linux machine; it also supports virtualization.

The containers completely use the sandbox mechanism and do not have any interfaces with each other (similar to iPhone apps). More importantly, the container performance overhead is extremely low.


Docker application scenarios:

  1. Automated packaging and publishing of web applications.
  2. Automated testing and continuous integration and release.
  3. Deploy/tune databases or other backend applications in server environments.

Docker architecture

Docker consists of three basic units:

  1. Image: a Docker image is equivalent to a root file system. For example, the official image ubuntu:16.04 contains a complete root file system of a minimal Ubuntu 16.04 installation.
  2. Container: the relationship between an image and a container is like that between a class and an instance in object-oriented programming. The image is the static definition; the container is the running entity created from the image. Containers can be created, started, stopped, deleted, paused, and so on.
  3. Repository: a repository can be regarded as a control center for storing images.

Docker container usage

Docker client

The Docker client is very simple. We can directly enter the docker command to view all command options of the Docker client. You can also use the command docker command --help to learn more about how to use the specified Docker command.

docker

Container usage

Pull an image. If the image we want is not available locally, we can fetch it with the docker pull command:

docker pull IMAGE

Start a container. The following command starts a container from the ubuntu image and drops into its command line:

docker run -it IMAGE /bin/bash

Parameter explanation:

  • -i: interactive operation, keeping the container's standard input (STDIN) open.
  • -t: allocate a pseudo-terminal in the new container.
  • /bin/bash: the command placed after the image name; here we want an interactive shell.

We can specify which version of an image to use with the <repository>:<tag> format. If no tag is given, latest is used as the default tag.

To exit the terminal, directly enter exit or CTRL+D.

Start a stopped container. The command to view all containers is as follows:

docker ps -a

We can also use the docker ps command to view running containers.

docker ps

We can use docker start to start a stopped container:

docker start CONTAINER

If we want to run the container in the background, we can pass -d to specify the running mode of the container:

docker run -itd --name CONTAINER_NAME IMAGE /bin/bash

With the -d parameter, you do not enter the container by default. If you want to enter it, use one of the following commands:

  • docker attach
  • docker exec: the docker exec command is recommended, because exiting the container's terminal with it does not stop the container.
docker attach CONTAINER  // exiting from here stops the container

docker exec -it CONTAINER /bin/bash  // exiting from here does not stop the container

To stop the container, the command is as follows:

docker stop CONTAINER_ID

Stopped container restart command:

docker restart CONTAINER_ID

Delete container:

docker rm -f CONTAINER_ID
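
The same container lifecycle can also be scripted from Python using the Docker SDK (a sketch assuming the third-party docker package, docker-py, is installed and the Docker daemon is running; the container name is arbitrary):

import docker

client = docker.from_env()   # connect to the local Docker daemon

# Roughly equivalent to: docker run -itd --name demo ubuntu:16.04 sleep 60
container = client.containers.run("ubuntu:16.04", "sleep 60",
                                  name="demo", detach=True)
print(container.name, container.status)

container.stop()     # docker stop demo
container.remove()   # docker rm demo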

Docker image usage

List the images. We can use docker images to list images on the local host.

docker images

Explanation of each parameter:

  • REPOSITORY: the image's repository source
  • TAG: the image's tag
  • IMAGE ID: the image's ID
  • CREATED: the image's creation time
  • SIZE: the image's size

Search for an image:

docker search IMAGE

Explanation of each column:

  • NAME: the name of the image's repository source
  • DESCRIPTION: the image's description
  • OFFICIAL: whether the image is officially published by Docker
  • STARS: similar to stars on GitHub, indicating popularity
  • AUTOMATED: whether the image was automatically built

Delete image:

docker rmi IMAGE

Modification and customization of Docker images

Docker image update

After starting a container from an image, write some files or code, update software, and so on, then exit the container and enter the following command in the terminal:

docker commit -m="..." -a="..." CONTAINER_ID TARGET_IMAGE_NAME

Parameter explanation:

  • commit: fixed format
  • -m: the commit's description message
  • -a: specify the image's author

Then you can use docker images to check whether the image was updated successfully. (Note: do not commit to an image name that already exists, or the previous image will be left with the name <none>, making it hard to use.)

Modify image name and add new tags

Change the image name (REPOSITORY):

docker tag IMAGE_ID NEW_NAME

Change the image tag without modifying the name:

docker tag IMAGE_ID REPOSITORY:TAG

File transfer between Docker container and local machine

When transferring files between the host and a container, the container can be referred to by its name or ID.

Transfer from local to container:

docker cp LOCAL_FILE_PATH CONTAINER_NAME:/root/CONTAINER_PATH

Transfer from container to local:

docker cp CONTAINER_NAME:/root/CONTAINER_PATH LOCAL_FILE_PATH

Docker mounts host file directory

Docker supports mounting a directory on the host into a directory inside the container.

When starting a container from an image, enter the following command:

docker run -it -v /HOST_ABS_PATH:/CONTAINER_ABS_PATH IMAGE /bin/bash

With the -v parameter, the host directory comes before the colon and the mount path inside the container comes after it; both must be absolute paths.

If the host directory does not exist, it will be created automatically, and the same is true for the path inside the container.

The default mount permission is read-write. To make the mount read-only, append :ro

docker run -it -v /HOST_ABS_PATH:/CONTAINER_ABS_PATH:ro IMAGE /bin/bash

[13] Summary of file formats commonly used in deep learning

  1. csv: convenient for storing data and labels.
  2. txt: the most common file format; can be used to store data paths and labels.
  3. JSON: a lightweight data-interchange format, often used to save data labels.
  4. YAML: a data serialization language, usually used for configuration files, e.g., decoupling a network model's configuration and training parameters into a YAML file to make training and tuning easier (see the short Python sketch after this list).
  5. cfg: the classic file format for saving network model structures in Darknet.
  6. Protobuf: an efficient structured-data storage format that can store neural-network weights; it often appears in Caffe.
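
A short Python sketch of items 3 and 4 (file names and values are illustrative; YAML support requires the third-party PyYAML package):

import json
import yaml  # pip install pyyaml

# Save data labels as JSON.
labels = {"img_001.jpg": "cat", "img_002.jpg": "dog"}
with open("labels.json", "w") as f:
    json.dump(labels, f, indent=2)

# Parse a YAML training config.
config = yaml.safe_load("lr: 0.001\nbatch_size: 32\nepochs: 100\n")
print(config["lr"], config["batch_size"])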

[14] What is the difference between TCP and UDP?

  1. TCP is connection-oriented, and UDP is connectionless;
  2. TCP provides reliable services, that is, the data transmitted through the TCP connection is error-free, not lost, not repeated, and arrives in order; UDP uses best-effort delivery, that is, reliable delivery is not guaranteed;
  3. The logical communication channel of TCP is a full-duplex reliable channel; UDP is an unreliable channel;
  4. Each TCP connection can only be point-to-point; UDP supports one-to-one, one-to-many, many-to-one and many-to-many interactive communications;
  5. TCP is byte-stream oriented (message boundaries are not preserved, so the "sticky packet" problem may occur); essentially, TCP treats data as an unstructured stream of bytes. UDP is message oriented (no sticky-packet problem);
  6. UDP has no congestion control, so network congestion will not reduce the sending rate of the source host (useful for real-time applications, such as IP telephony, real-time video conferencing, etc.);
  7. The TCP header overhead is 20 bytes; the UDP header overhead is small, only 8 bytes.
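
A minimal Python sketch of the two interfaces (loopback addresses and port numbers are arbitrary): UDP sends datagrams with no connection, while TCP must complete a handshake before any bytes flow.

import socket

# UDP: connectionless and message-oriented.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.bind(("127.0.0.1", 9999))
udp.sendto(b"hello", ("127.0.0.1", 9999))   # no connection needed
print(udp.recvfrom(1024))                   # one whole datagram back
udp.close()

# TCP: connection-oriented and byte-stream oriented.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 9998))
server.listen(1)
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", 9998))         # three-way handshake happens here
conn, _ = server.accept()
client.sendall(b"hello")
print(conn.recv(1024))                      # bytes from the stream, no boundaries
for s in (client, conn, server):
    s.close()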
