Linux commands (continually updated)

This article records commonly used Linux commands.

Commonly used commands and their usage scenarios

Create a directory: mkdir

mkdir (options) (arguments)
-m  set the permission mode while creating the directory
-p  create parent directories as needed (multiple levels)
-v  print a message for each directory created

Usage

[root@test-ceph ~]# mkdir -m 777 -pv kk/kk/kkk
mkdir: created directory 'kk'
mkdir: created directory 'kk/kk'
mkdir: created directory 'kk/kk/kkk'
[root@test-ceph alpaca]# mkdir -pv  ~/alpaca/tt/{,name/}{n,a,m,e}
mkdir: created directory '/root/alpaca/tt'
mkdir: created directory '/root/alpaca/tt/n'
mkdir: created directory '/root/alpaca/tt/a'
mkdir: created directory '/root/alpaca/tt/m'
mkdir: created directory '/root/alpaca/tt/e'
mkdir: created directory '/root/alpaca/tt/name'
mkdir: created directory '/root/alpaca/tt/name/n'
mkdir: created directory '/root/alpaca/tt/name/a'
mkdir: created directory '/root/alpaca/tt/name/m'
mkdir: created directory '/root/alpaca/tt/name/e'
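The effect of `-m` and of the brace expansion above can be checked in a throwaway directory. A minimal sketch follows; the paths under `$tmp` are made up for the demo, and `stat -c` assumes GNU coreutils:

```shell
#!/bin/bash
# Sketch: verify what `mkdir -m` and brace expansion actually create.
tmp=$(mktemp -d)

# -m sets the mode explicitly (ignoring the umask); -p creates parents as needed.
# Note: -m applies only to the final directory, not to the parents.
mkdir -m 777 -p "$tmp/kk/kk/kkk"
stat -c %a "$tmp/kk/kk/kkk"    # prints 777 (GNU coreutils)

# Brace expansion happens in the shell before mkdir even runs:
# tt/{,name/}{n,a,m,e} -> tt/n tt/a tt/m tt/e tt/name/n ... tt/name/e
mkdir -p "$tmp"/tt/{,name/}{n,a,m,e}
ls "$tmp/tt"

rm -rf "$tmp"
```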

Download files: wget

wget is a command-line tool for downloading files on Linux and is essential for many users, for example when downloading software or restoring a backup from a remote server to a local one. wget supports the HTTP, HTTPS, and FTP protocols and can work through an HTTP proxy. It also supports unattended downloads: wget can keep running in the background after the user logs out, so you can start a download, exit the session, and wget will continue until the task completes. By contrast, most browsers require the user to stay involved when downloading large amounts of data, a hassle wget avoids.
wget can follow links on HTML pages to build a local copy of a remote site, reproducing the original site's directory structure. This is often called "recursive downloading." When downloading recursively, wget obeys the Robot Exclusion Standard (/robots.txt). While downloading, wget can also rewrite links to point at the local files, making the copy suitable for offline browsing.
wget is very stable and adapts well to unreliable networks and very narrow bandwidth. If a download fails because of a network problem, wget keeps retrying until the whole file has been retrieved. If the server interrupts the transfer, wget reconnects and resumes from where it stopped. This is especially useful when downloading large files from servers that limit connection time.

wget (options) (URL)

-V  print the version
-h  print help
-b  go to the background immediately after startup
-o, --output-file=file  write the download log to a file
-a  append to the log file instead of overwriting it
-q  quiet (no output)
-i, --input-file=file  download the URLs listed in file
-F  treat the input file as HTML
-O  save the download under a different file name

Usage

# Download a package
wget http://downloads.sourceforge.net/project/tcl/Tcl/8.6.3/tcl8.6.3-src.tar.gz

# Download a file under a different name; by default the part after the last / is used as the file name
wget -O test.tar.gz http://downloads.sourceforge.net/project/tcl/Tcl/8.6.3/tcl8.6.3-src.tar.gz

# Limit the download speed
wget --limit-rate=300k http://downloads.sourceforge.net/project/tcl/Tcl/8.6.3/tcl8.6.3-src.tar.gz

# Resume an interrupted download
wget -c http://downloads.sourceforge.net/project/tcl/Tcl/8.6.3/tcl8.6.3-src.tar.gz

# Run in the background, and watch the progress with tail
wget -b http://downloads.sourceforge.net/project/tcl/Tcl/8.6.3/tcl8.6.3-src.tar.gz
tail -f wget-log

# Specify the directory to store the downloaded file
wget --directory-prefix=/sources http://downloads.sourceforge.net/project/tcl/Tcl/8.6.3/tcl8.6.3-src.tar.gz

# Spoof the user-agent string
wget --user-agent="Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/534.16 (KHTML, like Gecko) Chrome/10.0.648.204 Safari/534.16" http://downloads.sourceforge.net/project/tcl/Tcl/8.6.3/tcl8.6.3-src.tar.gz

# Test the link without downloading (spider mode)
wget --spider http://downloads.sourceforge.net/project/tcl/Tcl/8.6.3/tcl8.6.3-src.tar.gz

# Increase the number of download retries
wget --tries=40 http://downloads.sourceforge.net/project/tcl/Tcl/8.6.3/tcl8.6.3-src.tar.gz

# Download every URL listed in a file
wget -i urllist.txt

# Reject files of a given type while downloading (here: gif; meaningful with recursive downloads)
wget --reject=gif http://downloads.sourceforge.net/project/tcl/Tcl/8.6.3/tcl8.6.3-src.tar.gz

# Write the download log to a file
wget -o downloads.log http://downloads.sourceforge.net/project/tcl/Tcl/8.6.3/tcl8.6.3-src.tar.gz

# Limit the total download quota: quit once 5 MB is exceeded; has no effect on a single file download, only on recursion or URL lists
wget -Q5m -i urllist.txt

# Recursively download only files of a given type (here: .pdf; the URL below is a placeholder site)
wget -r -A .pdf http://example.com/some-site/

# Download from an FTP server
wget ftp-url
wget --ftp-user=username --ftp-password=passwd ftp-url
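For batch downloads with `-i`, the URL list is just a plain text file with one URL per line. A minimal sketch of preparing such a list follows; the URLs are placeholders, and the actual wget call is left commented out so the sketch runs without network access:

```shell
#!/bin/bash
# Sketch: prepare a URL list for `wget -i`. The URLs are placeholders.
cat > urllist.txt <<'EOF'
http://example.com/file1.tar.gz
http://example.com/file2.tar.gz
EOF

# -c resumes partial downloads, --tries bounds the retries, -i reads the list.
# (Commented out so the sketch works offline.)
# wget -c --tries=40 -i urllist.txt

wc -l < urllist.txt    # 2 URLs queued
rm urllist.txt
```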

Find files: find

find searches for files under the specified directories. Any string arguments placed before the options are treated as the directories to search. If no arguments are given, find searches the current directory, descending into every subdirectory and listing all the files and directories it encounters.

find .  # List everything under the current directory, recursively
find /home -name "*.txt" # List files under /home whose names end in .txt
find /home -iname "*.txt" # Same as above, but case-insensitive
find /home \( -name "*.txt" -o -name "*.pdf" \) # Match either pattern; -o is logical OR
find /home -name "*.txt" -o -name "*.pdf" # Same as above
find /usr/ -path "*local" # Match against the whole path, not just the file name
find . -regex ".*\(\.txt\|\.pdf\)$" # Regular-expression match
find /home ! -name "*.txt" # ! negates the following test
find . -type <type>  # where <type> is one of:
  f regular file
  l symbolic link
  d directory
  c character device
  b block device
  s socket
find . -maxdepth 3 -type f # Descend at most 3 directory levels
find . -mindepth 2 -type f # Start matching at a depth of at least 2
find . -type f -atime -7 # Accessed within the last 7 days
find . -type f -atime 7 # Accessed exactly 7 days ago
find . -type f -atime +7 # Accessed more than 7 days ago
find /var/log -size +1G # Files larger than 1 GB
find /data -user user # Files owned by the given user
find ./ -iname '_macosx' -depth -exec rm -rf {} \; # Delete auto-generated macOS metadata directories; -depth processes a directory's contents before the directory itself
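The predicates above are easiest to see against a small throwaway tree. A sketch follows; the file names are invented for the demo:

```shell
#!/bin/bash
# Sketch: common find predicates against a throwaway tree.
tmp=$(mktemp -d)
mkdir -p "$tmp/docs"
touch "$tmp/a.txt" "$tmp/b.pdf" "$tmp/docs/c.TXT"

find "$tmp" -name "*.txt"     # case-sensitive: only a.txt
find "$tmp" -iname "*.txt"    # case-insensitive: a.txt and docs/c.TXT

# \( ... \) groups tests; -o is OR and binds more loosely than the implicit AND.
find "$tmp" \( -name "*.txt" -o -name "*.pdf" \)

find "$tmp" -maxdepth 1 -type f   # stays in the top level: a.txt and b.pdf

rm -rf "$tmp"
```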

https://wangchujiang.com/linux-command/c/find.html

Copy files: cp

cp (options) (arguments)

-r  copy directories recursively
-i  prompt before overwriting an existing file
-f  force the copy: if an existing destination file cannot be opened, remove it and try again
-u  copy only when the source file is newer than the destination file, or when the destination file does not exist
-v  print details of what is being done
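The `-u` behaviour is easy to demonstrate. A minimal sketch follows; the file names are invented, and GNU cp is assumed:

```shell
#!/bin/bash
# Sketch: how cp -u (update) decides whether to copy.
tmp=$(mktemp -d)
echo "v1" > "$tmp/src"

cp -v "$tmp/src" "$tmp/dst"     # copies: dst does not exist yet
cp -uv "$tmp/src" "$tmp/dst"    # skipped: src is not newer than dst

touch "$tmp/src"                # bump src's modification time
cp -uv "$tmp/src" "$tmp/dst"    # copies again once src is newer than dst

cat "$tmp/dst"                  # v1
rm -rf "$tmp"
```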


Origin www.cnblogs.com/Alpacapyer/p/11880526.html