Ceph Object Storage

https://edu.51cto.com/center/course/lesson/index?id=333061

https://www.cnblogs.com/xd502djj/p/3604783.html

 

1, configure s3cmd; it mainly asks for your Access Key ID and Secret Access Key

s3cmd --configure

2, list all buckets (a bucket is the equivalent of a root folder)

s3cmd ls

3, create a bucket; bucket names are globally unique and cannot be repeated

s3cmd mb s3://my-bucket-name

4, delete an empty bucket

s3cmd rb s3://my-bucket-name

5, list the contents of a bucket

s3cmd ls s3://my-bucket-name

6, upload file.txt to a bucket

s3cmd put file.txt s3://my-bucket-name/file.txt

7, upload a file with public-read permissions

s3cmd put --acl-public file.txt s3://my-bucket-name/file.txt

8, upload files in bulk

s3cmd put ./* s3://my-bucket-name/

9, download a file

s3cmd get s3://my-bucket-name/file.txt file.txt

10, download in bulk (s3cmd does not expand remote wildcards, so use --recursive)

s3cmd get --recursive s3://my-bucket-name/ ./

11, delete a file

s3cmd del s3://my-bucket-name/file.txt

12, get the amount of space used by a bucket

s3cmd du -H s3://my-bucket-name

Third, directory handling rules

Both of the following commands upload the files in dir1 to my-bucket-name, but the results differ.

1) Without a trailing "/", dir1 itself becomes part of the object path: the whole dir1 directory is uploaded, similar to "cp -r dir1 dest/"

~/demo$ s3cmd put -r dir1 s3://my-bucket-name/
dir1/file1-1.txt -> s3://my-bucket-name/dir1/file1-1.txt  [1 of 1]

2) With a trailing "/", only the files inside dir1 are uploaded, similar to "cp dir1/* dest/"

~/demo$ s3cmd put -r dir1/ s3://my-bucket-name/
dir1/file1-1.txt -> s3://my-bucket-name/file1-1.txt  [1 of 1]
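The same trailing-slash distinction can be reproduced locally with plain cp. This is an illustrative sketch only; the directory names (dir1, dest1, dest2) are made up, and "dir1/." is the cp idiom for copying a directory's contents rather than the directory itself.

```shell
# Demonstrate the trailing-slash analogy in a throwaway temp directory.
tmp=$(mktemp -d)
mkdir -p "$tmp/dir1" "$tmp/dest1" "$tmp/dest2"
echo hello > "$tmp/dir1/file1-1.txt"

cp -r "$tmp/dir1" "$tmp/dest1/"     # like "s3cmd put -r dir1": keeps the dir1/ prefix
cp -r "$tmp/dir1/." "$tmp/dest2/"   # like "s3cmd put -r dir1/": contents only

ls "$tmp/dest1"   # -> dir1
ls "$tmp/dest2"   # -> file1-1.txt
rm -rf "$tmp"
```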

Fourth, synchronization

This is the trickiest part of s3cmd to use, but also its most useful feature. For the official documentation, see "s3cmd sync HowTo".

First, be clear that a sync operation performs an MD5 checksum comparison: only files whose checksums differ are transferred.
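The per-file decision sync makes can be sketched with md5sum: compare checksums and transfer only on a mismatch. The file names below are hypothetical, and this only imitates the logic, not s3cmd itself.

```shell
# Sketch: a file needs transfer only when its MD5 differs from the remote copy.
echo "v1" > local.txt
cp local.txt remote-copy.txt      # pretend this is the already-uploaded copy

sync_needed() {
  [ "$(md5sum < "$1")" != "$(md5sum < "$2")" ]
}

sync_needed local.txt remote-copy.txt && echo "transfer" || echo "skip"   # -> skip
echo "v2" > local.txt             # local file changes
sync_needed local.txt remote-copy.txt && echo "transfer" || echo "skip"   # -> transfer
rm -f local.txt remote-copy.txt
```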

4.1, basic synchronization

1, synchronize all files in the current directory

s3cmd sync  ./  s3://my-bucket-name/

2, adding the "--dry-run" option only lists the items that would be synchronized, without actually transferring anything

s3cmd sync  --dry-run ./  s3://my-bucket-name/

3, adding the "--delete-removed" option deletes remote files that no longer exist locally

s3cmd sync  --delete-removed ./  s3://my-bucket-name/

4, adding the "--skip-existing" option skips the MD5 check and skips any file that already exists on the destination

s3cmd sync  --skip-existing ./  s3://my-bucket-name/

4.2, advanced synchronization

4.2.1, exclude and include rules (--exclude, --include)

file1-1.txt is excluded; file2-2.txt is also a .txt file, but it is re-included by the --include rule.

~/demo$ s3cmd sync --dry-run --exclude '*.txt' --include 'dir2/*' ./  s3://my-bucket-name/
exclude: dir1/file1-1.txt
upload: ./dir2/file2-2.txt -> s3://my-bucket-name/dir2/file2-2.txt
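The filtering above can be sketched as a small shell function: a path matching an --exclude glob is dropped unless it also matches an --include glob. This is a simplified model of the rule interaction, not s3cmd's actual implementation, and the paths are the hypothetical ones from the example.

```shell
# Model of: --exclude '*.txt' --include 'dir2/*'
decide() {
  verdict="upload"
  case $1 in (*.txt)  verdict="exclude" ;; esac   # --exclude '*.txt'
  case $1 in (dir2/*) verdict="upload"  ;; esac   # --include 'dir2/*' wins back
  echo "$verdict: $1"
}

decide dir1/file1-1.txt   # -> exclude: dir1/file1-1.txt
decide dir2/file2-2.txt   # -> upload: dir2/file2-2.txt
```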

4.2.2, load exclude or include rules from a file (--exclude-from, --include-from)

s3cmd sync  --exclude-from pictures.exclude ./  s3://my-bucket-name/

Contents of pictures.exclude:

# Hey, comments are allowed here ;-)
*.jpg
*.gif

4.2.3, exclude and include rules also support regular expressions

--rexclude, --rinclude, --rexclude-from, --rinclude-from
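The --rexclude family takes regular expressions instead of globs. A quick way to preview what a regex would match is to run it over a file list with grep -E; the pattern and file names below are illustrative only.

```shell
# Preview which names the regex '\.jpe?g$' would catch (both .jpg and .jpeg).
printf '%s\n' photo.jpeg photo.jpg notes.txt \
  | grep -E '\.jpe?g$'
# -> photo.jpeg
# -> photo.jpg
```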


Origin www.cnblogs.com/alpha1981/p/11347957.html