Lambda Storage Features Tutorial

S3 Gateway

The S3 gateway provides an S3-compatible API for managing an order's storage, so developers can build applications against the S3 API. That is, they can interact with the S3 gateway using standard S3 SDKs or tools, or by constructing standard S3 requests themselves.

Limitations

The gateway's S3 API is provided by MinIO, which does not support a number of APIs; see the MinIO documentation for details. At the current stage, the Lambda S3 gateway supports only a limited set of functions and interfaces: most APIs other than basic file operations are not yet supported, so please avoid using them.

The S3 gateway does not currently support the multipart upload API, so tools and SDKs must be configured to avoid it. The examples below raise the multipart threshold to 64 MB as an illustration.

Configuration and Operation

The S3 gateway is configured by default in the [gateway] section of ~/.lambda_storagecli/config/user.toml, explained as follows:

[gateway]
# Address the service listens on
address = "127.0.0.1:9002"
# Keys used to access the service
access_key = "accesskey"
secret_key = "secretkey"

Once user.toml is properly configured, the gateway can be started from the command line with ./storagecli gateway run --account env --broker.extra_order_id XXX --debug, which starts the S3 gateway service for an order.

More parameters can be viewed by running ./storagecli gateway run -h

AWS CLI Example

First, install awscli.

After that, configure the keys used to access the S3 gateway:

$ aws configure
AWS Access Key ID [None]: accesskey
AWS Secret Access Key [None]: secretkey
Default region name [None]:
Default output format [None]:

Then, set the multipart threshold with aws configure set default.s3.multipart_threshold 64MB, which means multipart uploads are only used for files larger than 64 MB.

After that, you can perform the basic file operations.

Create a bucket

aws s3 --endpoint=http://localhost:9002/ mb s3://awstest

Upload a file

aws s3 --endpoint=http://localhost:9002/ cp /path/to/your/file s3://awstest

List bucket contents

aws s3 --endpoint=http://localhost:9002/ ls s3://awstest

Download a file

aws s3 --endpoint=http://localhost:9002/ cp s3://awstest/your-file /tmp/new-file

Delete a file
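
Deleting uses aws s3 rm against the same endpoint, for example:

aws s3 --endpoint=http://localhost:9002/ rm s3://awstest/your-file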

AWS Python SDK Example

First, install boto3: pip install boto3

Then, adjust the multipart threshold and run the basic operations:

#!/usr/bin/env python
# coding: utf-8

"""
refer https://docs.min.io/docs/how-to-use-aws-sdk-for-python-with-minio-server.html
"""

import boto3
from botocore.client import Config
from boto3.s3.transfer import TransferConfig

s3 = boto3.resource('s3',
                    endpoint_url='http://localhost:9002',
                    aws_access_key_id='accesskey',
                    aws_secret_access_key='secretkey',
                    config=Config(signature_version='s3v4'),
                    region_name='')


# create bucket
s3.Bucket('awstest').create()

# list bucket
print("buckets:", [bucket.name for bucket in s3.buckets.all()])

# upload file
# https://boto3.amazonaws.com/v1/documentation/api/latest/guide/s3.html#multipart-transfers
MB = 2 ** 20
cfg = TransferConfig(multipart_threshold=64*MB)
s3.Bucket('awstest').upload_file('/path/to/your/file', 'images/your-file', Config=cfg)

# list file
print("objects in bucket: awstest", [obj.key for obj in s3.Bucket('awstest2').objects.filter(Prefix='images/')])

# download file
s3.Bucket('awstest').download_file('images/your-file', '/tmp/newfile')
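
If you also want to delete the uploaded object from code, mirroring the CLI delete step, the boto3 resource interface offers, for example:

# delete file
s3.Object('awstest', 'images/your-file').delete()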

You can also use the SDK provided by MinIO; the MinIO documentation has detailed examples, which are not repeated here.
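
As a rough sketch only (assuming the minio Python package, with the gateway address and keys from the configuration above):

from minio import Minio

# connect to the local gateway; secure=False because it listens on plain HTTP
client = Minio('localhost:9002',
               access_key='accesskey',
               secret_key='secretkey',
               secure=False)

# list buckets through the gateway
print([bucket.name for bucket in client.list_buckets()])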

