[Linux] Summary of vim (with sudo configuration), yum, gcc, g++, gdb, make/Makefile tools

1. Linux package management - yum

1. Yum background knowledge

(1) Historical background

Before we can install a piece of software, we need to download the corresponding software package. This package does not exist on our local disk; it lives on a remote server. So how does our computer know which server the software is on?

On a PC we usually get a package by searching the software's official website; on a phone it looks like we download packages through the built-in app store, but in fact the app store does not host the packages themselves, it only holds links to the corresponding official sites. In the end the package is still downloaded from the official website;

Secondly, who provides these packages? The answer is obvious: companies, organizations and individuals. They write the software for some benefit and then host the packages on their own servers.

yum (Yellowdog Updater, Modified) is a very commonly used package manager under Linux; it is mainly used in Fedora, RedHat, CentOS and other distributions. A software package and a package manager are related in the same way as an "App" and an "App Store".

(2) Get to know yum

There are roughly three ways to install software under Linux:

1) Download the source code of the program, compile it yourself, and get the executable program.
2) Obtain the rpm installation package and install it through the rpm command. (Software dependencies are not resolved)
3) Install the software through yum. (commonly used)

Note: Only one yum instance may run on a machine at a time, so you cannot install multiple packages with yum simultaneously.

(3) Localized yum sources (mirrors)

Some universities and companies in China mirror the foreign software repositories, that is, they copy the packages from overseas servers to domestic servers, so that we can download software directly from servers inside the country;

But copying the packages alone is not enough, because the links yum accesses by default still point overseas, so these universities/companies also provide a set of domestic download-link configuration files – the yum source configuration files;

In Linux, the yum source configuration file is the CentOS-Base.repo file that exists in the /etc/yum.repos.d/ directory:

ll /etc/yum.repos.d/

If you are using a cloud server, the yum source is generally configured already. Configuring it on a virtual machine is more troublesome, and you also need to use ping to check whether the network is reachable.


2. Basic use of yum: the three basic operations

(1) View software packages

yum list

But the output is long and cluttered.

We can use the yum list command to list the available packages; but because there are so many, we generally pipe the output to grep to filter out the packages we care about, for example:

For the grep command, see: http://t.csdn.cn/g1aDI

yum list | grep <what you want to search for>

 Precautions:

  • The package name consists of: major version number.minor version number.source release number-package release number.host platform.CPU architecture;
  • The "x86_64" suffix indicates the installation package for 64-bit systems, and the "i686" suffix indicates the installation package for 32-bit systems. When selecting the package, it must match the system;
  • "el7" represents the operating system release version: "el7" represents centos7/redhat7, "el6" represents centos6/redhat6;
  • Base in the last column represents the name of the "software source", similar to concepts such as "Xiaomi App Store" and "Huawei App Store";
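For example, to check whether the lrzsz package (used in the next subsection) is available, we can filter the list like this; the fields of the output line follow the naming rules just described (the concrete version and repository shown here are only illustrative and will differ on your machine):

yum list | grep lrzsz

# Illustrative output (fields: name.arch   version-release   repository):
# lrzsz.x86_64    0.12.20-36.el7    @base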

Extension: transferring files between the local machine and the cloud server with rz/sz

rz: upload a file from the local Windows machine to Linux (in fact, you can also just drag a file from Windows into Xshell)

 

sz: send a file from Linux down to the local Windows machine. You cannot do this by dragging, because a file under Linux is not a visible entity you can grab the way it is under Windows.
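A minimal sketch of how these are typically used (rz and sz are provided by the lrzsz package, which may need to be installed first; test.c is just an example file name):

sudo yum install -y lrzsz    # rz/sz come from the lrzsz package
rz                           # pops up a dialog in Xshell to pick a Windows file to upload
sz test.c                    # sends test.c from Linux down to the local Windows machine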

(2) Install software

Command:  sudo yum install [-y] <software name>

(-y means install directly without asking for confirmation; sudo is needed because only root can install software)

 

 

Installed 

The sl command is not the ls command: sl is a joke command that animates a little steam train across the terminal. It has no practical value, but it can lighten the mood.

Some software is not included in the official repositories of CentOS, Ubuntu, Kali and other distributions. If we want to use such software, we need to install the extra (unofficial) repository list – epel-release.
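As a concrete sketch (on CentOS the sl package lives in the EPEL repository, so epel-release is installed first; adjust to your own distribution):

sudo yum install -y epel-release   # add the extra (EPEL) repository list
sudo yum install -y sl             # -y: do not ask for confirmation
sl                                 # a little train runs across the terminal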

Precautions:

  • When installing software, because you need to write content to the system directory, you generally need to sudo or switch to the root account to complete;
  • You can only install one software with yum before installing another; during the process of installing a software with yum, if you try to install another software with yum, yum will report an error;
  • Packages depend on one another, i.e. there is a certain degree of coupling; to resolve these dependencies, yum will sometimes install other packages along with the one you asked for.

(3) Uninstall software

Instruction:  sudo yum remove -y software name

Note: All yum operations require the host (or virtual machine) to have a working network connection; you can verify this with the ping command.

You can use ping + website name to check whether the network is healthy now.
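For example (the target site is only an illustration; -c 4 sends four probes and then stops):

ping -c 4 www.baidu.com      # check that the network is reachable
sudo yum remove -y sl        # uninstall the sl package installed above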

A side note: since I have configured my terminal so that Ctrl+C performs copy, on my machine it is Ctrl+Shift+C that force-stops the program currently executing (by default, Ctrl+C itself sends that interrupt).


 2. Linux editor - vim (don't configure it as root)

For why you should not configure vim as root, see point 5 below.

1. Basic concepts of vim

vim is a text editor; in essence it plays the same role as Notepad under Windows.

Its strength is that it is a powerful multi-mode editor.

vim has 12 modes in total. For now we only need to master three of them: command mode, insert mode and last-line mode. Their functions are as follows:

  • Command mode (Normal mode)

Why it exists: its single-key commands simplify operations and speed up editing.

Control the movement of the screen cursor and the deletion of characters, words or lines;

Move and copy a section and enter Insert mode, or last line mode;

  • Insert mode

Text input is only possible in Insert mode;

Press the "ESC" key to return to the command line mode, which is the most frequently used editing mode we will use later ;

  • last line mode

Save or exit the file; you can also perform text replacement, search for strings, list line numbers, etc.;

In command mode, enter : to enter this mode;

We can enter ":help vim-modes" in the bottom line mode to view all modes of vim.

2. Switching in vim mode

When we use vim to open a file, it is in command mode by default, and then we can switch to other modes through the following instructions:

vim test.c (the argument is the name of the file, e.g. the one created earlier with touch)

When the file opens you just see a dark screen: this is command mode.

[Command Mode] Switch to [Insert Mode]

  • Enter "i": Enter insert mode at the current cursor (I usually use i insert to make it easy to remember, but don't forget other things)
  • Enter "a": enter insert mode at the position after the current cursor
  • Enter "o": start a new line at the current cursor and enter insert mode

After pressing one of these, "-- INSERT --" appears in the lower-left corner.

[Command Mode] Switch to [Bottom Line Mode]

  • Just enter "Shift+;", which is actually typing ":"

If the lower-left corner shows a ":", you are in last-line (bottom row) mode.

[Insert mode] or [Bottom row mode] switch to [Command mode]

  • To switch from insert mode or last-line mode back to command mode, just press the "Esc" key.

 3. Summary of commands in vim command mode

(1) Move the cursor

  1. $: Move the cursor to the end of the line;
  2. ^: Move the cursor to the beginning of the line;
  3. G: Move the cursor to the end of the file;
  4. gg: Move the cursor to the beginning of the file;
  5. n + G: Move the cursor to the nth line;
  6. h/j/k/l: move the cursor left, down, up and right respectively (mnemonics: j "jumps" down; k is "king", and the king sits above);

(2) Copy and paste

  1. yy: Copy the current line (nyy: Copy n lines downward starting from the current line );
  2. p: paste once (np: paste n times);

(3) Undo operation

  1. u: Undo the operation (step back);
  2. ctrl + r: Cancel the undo operation (go forward one step);

(4) Cutting

  1. dd: cut (delete) the current line;
  2. ndd: cut n lines starting from the line where the cursor is;
  3. p: paste the cut content on the line below the cursor;
  4. np: paste the cut content n times below the cursor;

Xiaoyang's note: All deletion operations under vim are equivalent to cut operations under Windows.

(5) Case switching

  1. ~: switch the case of the character under the cursor (press it repeatedly to switch one character at a time);
  2. n~: Complete the case switching of n characters starting from the cursor position.

(6) Replacement

  1. rx: Replace the character where the cursor is with the x character (nrx: Replace n characters starting from the character where the cursor is with the x character);
  2. R: batch replacement, i.e. switch to replace mode: each character you type replaces the character under the cursor, and the cursor moves forward automatically, waiting for the next character; finally press [Esc] to switch back from replace mode to command mode;

(7) Change

  1. w: Jump the cursor to the first character of the next word (nw: Jump the cursor to the first character of the next n words);
  2. cw: Change the word where the cursor is to the end of the word. Like R, this command will jump to insert mode.
  3. cnw: change n words;

 (8) Delete

  1. x: Delete the character where the cursor is (nx: Delete n characters after the character where the cursor is);
  2. X: Delete the character before the character where the cursor is (nX: Delete the n characters before the character where the cursor is);
  3. dd: delete the current line (ndd: delete n lines starting from the current line downward);

(9) Page up/down

  1. Ctrl+b: turn up one page
  2. Ctrl+f: turn down one page
  3. Ctrl+u: turn up half a page
  4. Ctrl+d: turn down half a page

4. Summary of commands in vim bottom line mode

Xiaoyang's note: before using last-line mode, remember to press "Esc" first to make sure you are in command mode, and then press ":" to enter last-line mode.

(1) Line number setting

  1. set nu: display line number
  2. set nonu: cancel line number

(2) Save and exit

  1. w: Save the file.
  2. q: Exit vim. If you cannot exit vim, you can follow "q" with a "!" to force exit.
  3. wq: save and exit

(3) Split screen instructions

  1. vs file name: enables editing of multiple files.
  2. Ctrl+w+w: The cursor switches between multiple screens.

(4) Execution of instructions

  • !<command>: without exiting vim, prefix a Linux command with "!" to execute it, for example to list the directory or compile the current code.

This will come in handy later when we configure sudo privilege escalation!

5. Simple and one-click configuration of vim

Under the /etc/ directory we can find a file named vimrc. This is the system-wide vim configuration file and is valid for all users.

So we want to add some basic configuration of our own to vim so that we can write C/C++ programs comfortably; pay special attention to the following:

Although all users on a Linux machine use the same vim program, each user has their own vim configuration, because each user can have a .vimrc file in their home directory, and that file stores that user's own vim settings. That is why, back in the basic-use section, I said not to work as root: it avoids configuration problems later.

Some simple vim configuration:

  • Set syntax highlighting: syntax on;
  • Display line number: set nu;
  • Set the number of spaces for indentation to 4: set shiftwidth=4;

First open ~/.vimrc with vim:

After checking, I found that these settings had already been written in mine.
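If your ~/.vimrc does not have them yet, here is a minimal sketch of adding the three settings listed above by hand (run this as your normal user, not as root; it simply appends to ~/.vimrc):

cat >> ~/.vimrc << 'EOF'
syntax on          " syntax highlighting
set nu             " show line numbers
set shiftwidth=4   " indent by 4 spaces
EOF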


But is there a one-click configuration method? The answer is yes:

curl -sLf https://gitee.com/HGtz2222/VimForCpp/raw/master/install.sh -o ./install.sh && bash ./install.sh

Copy this into Xshell and execute it; I will go through the configuration with you:

Then we enter the root password:

 Then we enter "source ~/.bashrc" or restart the terminal to invalidate the vim configuration:

As you can see, after the configuration is complete, vim not only displays the current mode, file name and character count, but also supports automatic indentation and auto-completion (keywords, brackets, quotation marks, etc.).

But there is one small problem: the automatic indentation defaults to two characters, while for C/C++ we generally want an indentation of four characters, so we need to open the .vimrc file and change the default indentation:
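A sketch of how one might locate and adjust the indentation settings written by the one-click script (the exact option names and values in the generated ~/.vimrc may differ, so check first):

grep -n "shiftwidth\|tabstop\|softtabstop" ~/.vimrc   # find the indentation-related lines
vim ~/.vimrc                                          # change the values from 2 to 4, then :wq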

 

At this point, our one-click configuration of vim is complete! Cheers!

 6. sudo privilege escalation

We often use su or su - to become root and execute yum and other commands that only root can run.

But when we escalate privileges that way, whoami shows that our identity has actually changed to root.

sudo is a short-term privilege-escalation command: it does not change your identity, but it runs the command with root's power. However, it usually needs to be configured first. This is what happens without configuration:

Then at this time you need to use vim to open the file for configuration.

vim /etc/sudoers

 Note that there must be a space after vim

Things to note during this operation:

1. xiao_yang is not among the trusted users, so it must be added

2. Because /etc/sudoers is a read-only file, the exclamation mark comes in handy here: force the write and then quit (e.g. :wq!).
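For reference, the change amounts to adding one line for your own user below root's entry in /etc/sudoers (xiao_yang is the user name from this example; replace it with your own):

## Allow root to run any commands anywhere
root        ALL=(ALL)       ALL
## The line we add for our own (normal) user:
xiao_yang   ALL=(ALL)       ALL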

Now we can use sudo to run elevated commands as a normal user:

Once the configuration is complete, you can see that privilege escalation succeeds!


3. Linux compiler-gcc/g++

1. The role of gcc/g++

gcc and g++ are GNU's C and C++ compilers respectively. Compiling with gcc/g++ generally involves the following four steps:
1) Preprocessing (header file expansion, comment removal, macro replacement, conditional compilation).
2) Compilation (C code translated into assembly language).
3) Assembly (assembly code converted into binary object code).
4) Linking (linking the binary code generated by the assembly process).

2. gcc/g++ syntax

Syntax:  gcc/g++ [options] file

Common options:

-E only performs preprocessing. By itself this does not generate a file; you need to redirect the output or use -o to write it to a file (otherwise the preprocessed result is printed to the screen, which is a cluttered mess).


-S compiles to assembly language without assembling and linking, that is, only preprocessing and compilation are performed.

(By default this writes a .s file next to the source; use -o to name the output explicitly.)


-c assembles to object code (a .o file), stopping before linking.


-o outputs the result to the specified file. This option must be followed by the output file name (i.e. this is how you generate a named file).


-static This option applies static linking to the generated files.


-g generates debugging information (without this option, a release-style build without debug info is produced by default).


-shared This option uses shared (dynamic) libraries as far as possible, so the generated file is smaller.


-w does not generate any warning messages.


-Wall generates all warning messages.


-O0/-O1/-O2/-O3 are the compiler's optimization levels. -O0 means no optimization (the default when no -O option is given), and -O3 is the highest optimization level.
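A small example combining the common options above (test.c is the source file used throughout this article; the output names are just for illustration):

gcc -Wall -O2 test.c -o test        # all warnings, level-2 optimization, output named test
gcc -Wall -g  test.c -o test_debug  # same, but with debug info for gdb (see the gdb section)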

Installation of gcc/g++ (most likely they are already installed):

sudo yum install -y gcc
sudo yum install -y gcc-c++ libstdc++-devel

3. Executing the four compilation steps one by one (mnemonic: the options E, S, c produce .i, .s, .o files, so "ESc" gives "iso")

(1) Preprocessing

gcc -E test.c -o test.i
  • Preprocessing functions mainly include header file expansion, comment removal, macro replacement, conditional compilation, etc.
  • Preprocessing directives are lines of code starting with #.
  • The -E option causes gcc/g++ to stop the compilation process after preprocessing.
  • The -o option names the output file; the "xxx.i" file is the preprocessed source program.

Now, inside vim, we can view the two files side by side with ":vs test.c" in last-line mode.

 (2) Compilation

gcc -S test.i -o test.s
  • In this stage, gcc/g++ first checks the code for standard compliance, syntax errors, etc., to determine what the code actually does; once the check passes, the code is translated into assembly language.

  • Users can use the -S option to view this stage: it compiles only, without assembling, and generates assembly code.

  • The -o option names the output file; the "xxx.s" file is the program translated into assembly.

 

 (3) Assembly

gcc -c test.s -o test.o
  • The assembly phase converts the "xxx.s" file generated in the compilation phase into an object file
  • Use the -c option to convert the assembly code into the binary object code of "xxx.o"

(4) Linking

gcc test.o -o test
  • After successfully completing the above steps, you enter the linking stage.
  • The main task of linking is to link each generated "xxx.o" file to generate an executable file.
  • When gcc/g++ is given none of -E, -S or -c, it runs the whole pipeline of preprocessing, compilation, assembly and linking by default and generates an executable.
  • If you do not specify an output name with -o, the generated executable is named a.out by default.

 Note:  The generated files after linking are also binary files.

(5) Doing it all in one step
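Without any of -E/-S/-c, gcc runs all four stages at once; for example:

gcc test.c -o test    # preprocess + compile + assemble + link in one go
./test                # run the resulting executable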


4. Linking methods and function libraries

(1) Dynamic linking (like going to an Internet café) and static linking (like having your own laptop at home)

When we write code, in addition to the functions we implement ourselves, we also call code from function libraries, such as scanf/printf/malloc/fopen; but we must understand that the library code was written by others for us to use directly: we only have the call to the function, not its implementation.

Moreover, the preprocessing, compilation and assembly stages only deal with the code we wrote ourselves; only during linking is the implementation of the library functions associated with our code (relocation against the symbol table). So the essence of linking is how our calls to library functions get connected to the standard library.

There are two linking methods for programs: dynamic linking and static linking.

Dynamic linking: When executing code, if a library function call is encountered, it will jump to the definition of the corresponding function in the dynamic library , and then execute the function. After the execution is completed, it will jump back to the original program and continue execution.

Advantages: the executable program formed is small

Disadvantages: It is affected by dynamic library changes (deletion, upgrade, etc.).

Static linking: Directly copy the library functions to be used within this program from the corresponding static library.

Advantages: Not associated with static libraries, that is, not affected by static library changes (deletion, upgrade, etc.)

Disadvantages: The resulting executable program is very large

(2) Dynamic library and static library

A function library is a collection of functions written in advance for reuse by others. Function libraries are generally divided into two types: static libraries and dynamic libraries:

Static library refers to copying all the required library file code into the executable file when compiling and linking. Therefore, the generated file is very large, but the library file is no longer needed at runtime.

Its suffix is ".a" under Linux and ".lib" under Windows;

Dynamic libraries are also called shared libraries. In contrast to static libraries, the library code is not added to the executable at compile/link time; instead the library is loaded by the runtime linker when the program is executed, which saves system overhead. Its suffix is ".so" under Linux and ".dll" under Windows;

Note: Dynamic linking must use a dynamic library, and static linking must use a static library ; that is, when performing dynamic linking, you can only jump to the implementation of the corresponding function in the dynamic library, and when performing static linking, you can only copy the functions in the static library.

(3) Dynamic linking (default) and static linking (-static)

By default, Linux uses dynamic libraries and dynamic linking: the binary gcc generates by default is dynamically linked, which can be verified with the file command.

Here’s why:

  • A statically linked executable not only occupies a lot of disk space, it also takes up a lot of memory when loaded at runtime. The machines we use usually have only 8/16 GB of RAM, so overly large executables are a real cost (static executables are large);
  • Although dynamic linking is affected by changes to the library, libraries rarely change in practice, and even when they do they must remain compatible with previous versions, so the impact is small (even if I rely on an Internet café, the café rarely affects me, unless it goes bankrupt, etc.)

Linux generally automatically installs C language dynamic libraries, because most instructions  under Linux and the executable programs we compile using gcc by default are dynamically linked and rely on C dynamic libraries.

However, C static libraries and C++ static libraries may need to be installed by ourselves.

sudo yum install -y glibc-static
sudo yum install -y libstdc++-static

Although gcc and g++ use dynamic linking by default, if we need static linking we just add the -static option.

 gcc test.c -o test_static -static

As you can see, the executable produced by static linking is roughly 100 to 200 times larger than the one produced by dynamic linking: for a small test program that means something like a few KB for the dynamic build versus several hundred KB or more for the static build. The difference is very big.
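A sketch of how one might verify the difference (file names follow the examples above; the sizes mentioned in the comments are only rough, typical magnitudes):

gcc test.c -o test                     # dynamically linked (the default)
gcc test.c -o test_static -static      # statically linked
file test test_static                  # reports "dynamically linked" vs "statically linked"
ldd  test                              # lists the shared libraries the dynamic build needs
ls -lh test test_static                # the static build is typically ~100x larger (KB vs hundreds of KB)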


4. Linux debugger - use of gdb

1. Debug and release

 1. Debug version: More debugging information will be added to the program itself to facilitate debugging.
 2. Release version: No debugging information will be added and it is not debuggable.

In Linux, the executable program generated by gcc/g++ by default is the release version and cannot be debugged. If you want to generate a debug version, you need to add the -g option when using gcc/g++ to generate an executable program.


For the same source code, if we generate a release executable and a debug executable separately, the ll command shows that the debug executable is slightly larger than the release one. The reason is that the debug executable contains extra debugging information.
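A minimal sketch of that comparison (the output names are just for illustration):

gcc test.c -o test_release       # default build: no debug info, cannot be debugged with gdb
gcc -g test.c -o test_debug      # debug build: contains debugging information
ll test_release test_debug       # the debug executable is slightly larger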

2. Summary of gdb commands

(1) Enter gdb

Command: gdb filename

(2) Debugging

  1. run/r: Run the code (start debugging).
  2. next/n: step over: execute line by line without entering functions, going directly from line n to line n+1 (like F10 in VS);
  3. step/s: step into: like next, but enters function calls (like F11 in VS);
  4. until line number: jump to the specified line.
  5. finish: Stop after executing the function currently being called (cannot be the main function). You can quickly check whether there is a code problem in the function.
  6. continue/c: Run to the next breakpoint: F5
  7. set var variable=x: Modify the value of the variable to x.

(3) Display

  1. list/l n: Display the source code starting from the nth line, displaying 10 lines at a time. If n is not given, it defaults to displaying downwards from the last position.
  2. list/l function name: displays the source code of the function.
  3. print/p variable: print the value of the variable.
  4. print/p  & variable: Print the address of the variable.
  5. print/p expression: prints the value of the expression, and the value of the variable can be modified through the expression.
  6. display variable: equivalent to the monitoring window, adding constant display
  7. display & variable: Add the address of the variable to the constant display.
  8. undisplay number: Cancel the constant display of the specified number variable.
  9. bt: View function calls and parameters at all levels, view the call stack
  10. info/i locals: view the values of local variables in the current stack frame.

(4) Breakpoint

Adding and deleting breakpoints refer to different numbers: when adding, you give a source line number directly, but when deleting, you use the breakpoint's own serial number, which is assigned in order starting from 1.

  1. break/b n: Set a breakpoint on line n.
  2. break/b function name: Set a breakpoint on the first line of a function body.
  3. info breakpoint/b: view breakpoint information.
  4. delete/d number: Delete the breakpoint with the specified number.
  5. disable number: disable the breakpoint with the given number (the breakpoint is kept, but has no effect);
  6. enable number: Enable the breakpoint with the specified number.

(5) Exit gdb

  • quit/q: Exit gdb.
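A short sample session, assuming an executable built with -g (here called test_debug, as in the example in subsection 1; the line number 10 and the variable i are placeholders for whatever exists in your own program):

gdb ./test_debug
(gdb) l 1          # list the source starting at line 1
(gdb) b 10         # set a breakpoint at line 10
(gdb) r            # run until the breakpoint is hit
(gdb) p i          # print the value of variable i
(gdb) n            # step over the next line
(gdb) c            # continue to the next breakpoint (or to the end)
(gdb) q            # quit gdb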

 5. Linux project automated build tool - use of make/Makefile

1. The importance of make/Makefile

  • Whether or not you can write Makefiles shows from the side whether a person has the ability to complete large-scale projects.
  • A project has countless source files, placed in different directories by type, function and module. The Makefile defines a series of rules that specify which files are compiled first, which are compiled later, which need to be recompiled, and even more complex operations.
  • The benefit of Makefile is "automated compilation". Once it is written, only one make command is needed, and the entire project is completely automatically compiled, which greatly improves the efficiency of software development.
  • make is a command tool that interprets the rules in a Makefile. Generally speaking, most IDEs have such a tool, for example Delphi's make, Visual C++'s nmake, and GNU make under Linux. Makefile has thus become a standard way of organizing compilation in engineering.
  • make is a command and Makefile is a file . Use the two together to complete the automated construction of the project.

2. How to write makefile (dependencies are very important)

To write a makefile, the most important things are the dependency relationships and the dependency methods (recipes).

An analogy for asking dad for money: the dependency is "I am your son"; the method is "make a phone call".

Dependency relationship: one file depends on another file, that is, to produce one file, another file must already exist in the directory.

For example, the test.o file is a file generated by the test.c file after preprocessing, compilation and assembly, so changes to the test.c file will affect test.o, so the test.o file depends on the test.c file

Dependency method: refers to how to obtain the target file based on the dependent file

For example, test.o depends on test.c, and test.o can be produced from test.c with the command gcc -c test.c -o test.o; so the dependency method for test.o depending on test.c is gcc -c test.c -o test.o.

When writing a makefile, there are a few things to pay attention to. Xiaoyang's note:

  • The file name of the makefile must be makefile/Makefile and cannot be other names, otherwise make will not recognize it;
  • There can be multiple dependent files or none;
  • Dependency methods (recipes) must start with a [Tab] character; note in particular that four spaces will not work;

Step 1:  Create a file named Makefile/makefile in the directory where the source file is located

 Step 2: Write a Makefile (vim Makefile)
The simplest Makefile format: for each target, first write its dependency relationship, then on the next line write its dependency method, and repeat for the remaining targets.

As above: mytest depends on test.c, and its dependency method is compiling with gcc; clean depends on no files, and its dependency method is the rm -f command. .PHONY marks clean as a phony target, which is always executed (explained in detail below).

After writing the Makefile, save and exit with :wq, then run make on the command line to generate the executable program and the intermediate products of the build.

3. How make works

Here is a rough explanation using the example:

mytest: test.c
    gcc test.c -o mytest

This rule tells the Make tool how to build the target `mytest`. It means that `mytest` depends on the file `test.c`. If the modification time of the `test.c` file is newer than `mytest` (or `mytest` does not exist), then Make will execute the next command to build `mytest`.

.PHONY: clean
clean:
    rm -f mytest

This rule defines a pseudo target `.PHONY` and a target `clean`. `.PHONY` is used to declare that `clean` is a pseudo target, that is, not a real file. The `clean` goal is used to clean the intermediate and target files generated by the build, as well as the final generated executable file `mytest`. When executing the `make clean` command, Make will execute the command of the `clean` target, that is, delete the file named `mytest`. When simply writing make, clean will not be executed.

Working process:

1. When you execute the `make` command, the Make tool will look for the `Makefile` file in the current directory.

2. Make reads the rules in the `Makefile` and checks whether the target `mytest` needs to be rebuilt. It finds that the target `mytest` depends on `test.c`, and `test.c` exists or its modification time is newer than `mytest`, so `mytest` needs to be rebuilt.

3. Make executes the `gcc test.c -o mytest` command to compile `test.c` and generate the executable file `mytest`.

4. If you execute the `make clean` command, the Make tool will execute the command of the `clean` goal, that is, delete the `mytest` file.

The purpose of a `Makefile` is to automate and simplify the build process. By defining the appropriate rules and dependencies, the Make tool can compile only the files that need to be updated, thus speeding up the build process and ensuring the correctness and consistency of the project.

(1) Use of make

Under Linux, after we enter the make command, make looks for a file named "Makefile" or "makefile" in the current directory; if it finds one, it takes the first target in that file as the final target; if not, it prints an error message. In the C language example above, there are two targets in the makefile.

clean is not built by default, even though clean is also a target:

This lets us trigger two different recipes:
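That is, with the Makefile above, the two recipes are invoked like this:

make          # builds the first target, mytest, by running: gcc test.c -o mytest
./mytest      # run the generated program
make clean    # runs the clean recipe: rm -f mytest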

 

 (2) Dependencies of make

Modify the makefile:

test.out:test.o
	gcc test.o -o test.out 
test.o:test.s
	gcc -c test.s -o test.o
test.s:test.i
	gcc -S test.i -o test.s
test.i:test.c
	gcc -E test.c -o test.i

.PHONY:clean
clean:
	rm -f test.i test.s test.o test.out

We know that after we enter the make command, make looks for a file named "Makefile" or "makefile" in the current directory; if it finds one, it takes the first target in that file as the final target (test.out in the example above). But if test.o, which test.out depends on, does not exist, make then searches the same file for a rule whose target is test.o and builds test.o according to that rule (similar to a stack in data structures: last in, first out);

If test.o's own dependency does not exist either, this process continues until make finds a rule whose dependency file actually exists. Once that target is built, the other targets along the path are built one by one on the way back up; if even the last dependency cannot be found, make simply exits and reports an error;

This is how make's dependency resolution works: make searches through the dependencies layer by layer until it can finally build the target file we asked for at the beginning.

In the example above, test.o, which test.out depends on, does not exist, so make looks for the rule whose target is test.o; test.s, which test.o depends on, does not exist either, so make looks for the rule whose target is test.s; test.s in turn depends on test.i, and finally test.i's dependency test.c does exist, so make builds test.i according to its rule, then step by step produces test.s and test.o, until test.out is finally formed.

 (3) Project cleanup

In a makefile we usually use clean as the target for project clean-up. Since cleaning does not depend on any other file, clean has no dependencies.

Because clean is not directly or indirectly associated with the first target, the commands defined under it are not executed automatically; we have to invoke it explicitly with make clean.

 Finally, for target files like clean, we usually use .PHONY to set it as a pseudo target.

The characteristic of a phony target is that it is always executed, regardless of file timestamps.

(4).PHONY pseudo target

When we run make several times on the same source file, we find that the first time the program compiles normally, but the second and subsequent times nothing is compiled and make just prints: "make: `test.out' is up to date."

But after we modify the content of test.c, we find that although make works once more, it again refuses to rebuild on subsequent runs.

In fact, the behaviour above is make saving us from wasting time recompiling source files that have already been compiled and have not been modified since (don't assume compilation is always quick); that is, if test.c has already been compiled into test.out and we have not changed test.c, then running make again does nothing. This behaviour really is necessary: at work, compiling a project can take tens of minutes or even hours, and recompiling everything on every make would waste an enormous amount of time.

So how does make decide that the source program does not need to be recompiled? The answer: based on the modification time of the files.

In Linux, every file has three timestamps, which you can view with stat <file name>:

Access time (Access): changes when we view the file content, e.g. cat, vim, less;
Modify time (Modify): changes when we modify the file content, e.g. nano, vim;
Change time (Change): changes when we modify file attributes or permissions, e.g. when the size changes (nano/vim) or the permissions/owner change (chmod/chown);

make determines whether the source program needs to be recompiled based on the comparison of the modification time of the executable program and the modification time of the source file.

Note: whether make recompiles depends only on the source file's modification time relative to the target, not on whether the content actually changed. We can verify this with the touch command (touch <file>: if the file already exists, all of its timestamps are refreshed).

Note that the file to touch here is not the file generated by the Makefile, but the dependency file after the colon.

So: the effect of .PHONY is similar in spirit to touch-ing the dependency every time; a phony target ignores the timestamps, so its recipe is always executed.
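A small sketch of the experiment described above (file names follow the test.c/test.out Makefile from section (2)):

make               # first build: all the rules run
make               # second time: "make: `test.out' is up to date." and nothing is rebuilt
stat test.c        # look at the Access/Modify/Change timestamps
touch test.c       # refresh test.c's timestamps without changing its content
make               # test.c is now newer than test.out, so everything is rebuilt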

 We can also use .PHONY to modify test.out so that test.out is recompiled every time

 


And that's a wrap! The basic use of these Linux tools ends here. I hope you will share your advice and corrections!


Origin blog.csdn.net/weixin_62985813/article/details/132073493