AWS import-image and import-snapshot: a detailed walkthrough for smooth cloud migration

 

Table of contents

1. Install the AWS CLI

1. Install the AWS CLI on Linux

2. Configure the AWS CLI

2. Create a role with import and export permissions

3. Import image

1. Upload the image to S3

2. Import image

3. Check the imported image

4. Cancel the import image task

4. Import snapshot

5. Export the image

1. Export the image to S3

2. Check the export status

3. Cancel the export image task

6. AWS CLI upload and download

7. Summary


1. Install the AWS CLI

1. Install the AWS CLI on Linux

Official website: https://docs.amazonaws.cn/cli/latest/userguide/install-cliv2-linux.html

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
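Before running the commands above, it can be worth confirming that the two tools they rely on are available (a small pre-flight sketch; the real install additionally needs sudo rights):

```shell
# Pre-flight check: the install steps above need curl and unzip on PATH.
missing=0
for tool in curl unzip; do
    if command -v "$tool" > /dev/null; then
        echo "$tool: ok"
    else
        echo "$tool: missing"
        missing=$((missing + 1))
    fi
done
echo "missing tools: $missing"
```

After `sudo ./aws/install`, running `aws --version` confirms the CLI is on PATH.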

2. Configure the AWS CLI

First, obtain an account access key pair from the AWS console, then run aws configure to set up authentication:

# aws configure
AWS Access Key ID [None]: AKIAQGMfdfsefd7Odf
AWS Secret Access Key [None]: JCKbGTfkkdjfdgrrZdpo8weSenCxooY
Default region name [None]: cn-northwest-1
Default output format [None]: json
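As an alternative to `aws configure`, the same settings can be supplied through environment variables (placeholder values shown; substitute your own key pair):

```shell
# Placeholder credentials -- replace with your real access key pair.
export AWS_ACCESS_KEY_ID="AKIAEXAMPLEKEYID"
export AWS_SECRET_ACCESS_KEY="exampleSecretKeyValue"
export AWS_DEFAULT_REGION="cn-northwest-1"
echo "region set to: $AWS_DEFAULT_REGION"
```

Note that environment variables take precedence over the profile written by `aws configure`.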

2. Create a role with import and export permissions

1. Create a new file named trust-policy.json with the following content:

{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Effect": "Allow",
         "Principal": { "Service": "vmie.amazonaws.com" },
         "Action": "sts:AssumeRole",
         "Condition": {
            "StringEquals":{
               "sts:Externalid": "vmimport"
            }
         }
      }
   ]
}
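Malformed JSON is a common cause of create-role failures, so it can help to validate the file locally before calling IAM (a sketch using python3; the path /tmp/trust-policy.json is just for illustration):

```shell
# Write the trust policy to a scratch path and check that it parses.
cat > /tmp/trust-policy.json <<'EOF'
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Effect": "Allow",
         "Principal": { "Service": "vmie.amazonaws.com" },
         "Action": "sts:AssumeRole",
         "Condition": {
            "StringEquals": { "sts:Externalid": "vmimport" }
         }
      }
   ]
}
EOF
python3 -m json.tool /tmp/trust-policy.json > /dev/null && echo "trust-policy.json is valid JSON"
```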

 

2. Create a service role

 
# aws iam create-role --role-name vmimport --assume-role-policy-document file:///home/centos/trust-policy.json
{
    "Role": {
        "Path": "/",
        "RoleName": "vmimport",
        "RoleId": "AROAQGM5NM2MH4OH5OAVP",
        "Arn": "arn:aws-cn:iam::013751903896:role/vmimport",
        "CreateDate": "2020-05-14T08:34:46+00:00",
        "AssumeRolePolicyDocument": {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Effect": "Allow",
                    "Principal": {
                        "Service": "vmie.amazonaws.com"
                    },
                    "Action": "sts:AssumeRole",
                    "Condition": {
                        "StringEquals": {
                            "sts:Externalid": "vmimport"
                        }
                    }
                }
            ]
        }
    }
}

3. Write the role policy

Create a file named role-policy.json and write the following policy, where migrate-cloud-image is the bucket that stores the disk image and export-image is the bucket used later for exports:

# cat role-policy.json
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Action":[
            "s3:GetBucketLocation",
            "s3:GetObject",
            "s3:ListBucket" 
         ],
         "Resource":[
            "arn:aws-cn:s3:::migrate-cloud-image",
            "arn:aws-cn:s3:::migrate-cloud-image/*"
         ]
      },
      {
         "Effect":"Allow",
         "Action":[
            "s3:GetBucketLocation",
            "s3:GetObject",
            "s3:ListBucket",
            "s3:PutObject",
            "s3:GetBucketAcl"
         ],
         "Resource":[
            "arn:aws-cn:s3:::export-image",
            "arn:aws-cn:s3:::export-image/*"
         ]
      },
      {
         "Effect":"Allow",
         "Action":[
            "ec2:ModifySnapshotAttribute",
            "ec2:CopySnapshot",
            "ec2:RegisterImage",
            "ec2:Describe*"
         ],
         "Resource":"*"
      }
   ]
}

4. Attach the policy to the role

Use the put-role-policy command to attach the policy to the role created earlier; specify the full path to the role-policy.json file:

aws iam put-role-policy --role-name vmimport --policy-name vmimport --policy-document file://role-policy.json

3. Import image

1. Upload the image to S3

Reference: https://wangfanggang.com/AWS/AWS-CLI-S3-upload/

# aws s3 cp CentOS-7.6-64bit-huawei.vmdk s3://migrate-cloud-image/CentOS-7.6-64bit-huawei.vmdk
upload: ./CentOS-7.6-64bit-huawei.vmdk to s3://migrate-cloud-image/CentOS-7.6-64bit-huawei.vmdk

2. Import image

Create the disk-container file containers.json, which describes the source disk in S3 (the contents below match the VMDK used in the command that follows):

[
  {
    "Description": "huawei server",
    "Format": "vmdk",
    "UserBucket": {
        "S3Bucket": "migrate-cloud-image",
        "S3Key": "CentOS-7.6-64bit-huawei.vmdk"
    }
  }
]

Execute the import image

# aws ec2 import-image --description "huawei server" --disk-containers file://containers.json
{
    "Description": "huawei server",
    "ImportTaskId": "import-ami-06f701b644dc0fd77",
    "Progress": "2",
    "SnapshotDetails": [
        {
            "DiskImageSize": 0.0,
            "Format": "VMDK",
            "UserBucket": {
                "S3Bucket": "migrate-cloud-image",
                "S3Key": "CentOS-7.6-64bit-huawei.vmdk"
            }
        }
    ],
    "Status": "active",
    "StatusMessage": "pending"
}

Check import progress

Use the describe-import-image-tasks command to check the status and progress of the import task.

The possible status values are:

active - the import task is running.
deleting - the import task is being cancelled.
deleted - the import task has been cancelled.
updating - the import status is being updated.
validating - the imported image is being validated.
validated - the imported image has been validated.
converting - the imported image is being converted into an AMI.
completed - the import task is complete and the AMI is ready to use.

# aws ec2 describe-import-image-tasks --import-task-ids import-ami-06f701b644dc0fd77
{
    "ImportImageTasks": [
        {
            "Description": "huawei server",
            "ImportTaskId": "import-ami-06f701b644dc0fd77",
            "SnapshotDetails": [],
            "Status": "deleted",
            "StatusMessage": "ClientError: Disk validation failed [Unsupported VMDK File Format]",
            "Tags": []
        }
    ]
}

The VMDK import above was not successful; a later import using the VHD format succeeded.

Import complete:

# aws ec2 describe-import-image-tasks --import-task-ids import-ami-0ab311a7d84fddee2
{
    "ImportImageTasks": [
        {
            "Architecture": "x86_64",
            "Description": "book-sync-server",
            "ImageId": "ami-0e3bceeecf00f1c13",
            "ImportTaskId": "import-ami-0ab311a7d84fddee2",
            "LicenseType": "BYOL",
            "Platform": "Linux",
            "SnapshotDetails": [
                {
                    "Description": "book-sync-service-image",
                    "DeviceName": "/dev/sda1",
                    "DiskImageSize": 41949169152.0,
                    "Format": "VHD",
                    "SnapshotId": "snap-0a52c455e54514e5f",
                    "Status": "completed",
                    "UserBucket": {
                        "S3Bucket": "migrate-cloud-image",
                        "S3Key": "book-sync-service-sysdisk.vhd"
                    }
                }
            ],
            "Status": "completed",
            "Tags": []
        }
    ]
}
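Rather than re-running describe-import-image-tasks by hand, the wait can be scripted as a polling loop. A sketch is below; the AWS call is stubbed out with a canned status sequence so the loop logic itself is runnable anywhere. In real use, get_status would run the aws command shown in the comment:

```shell
# Poll an import task until it reaches a terminal state.
# get_status is a stub returning a canned sequence, standing in for:
#   aws ec2 describe-import-image-tasks --import-task-ids "$1" \
#       --query 'ImportImageTasks[0].Status' --output text
i=0
get_status() {
    case "$i" in
        0|1) echo "active" ;;
        2)   echo "converting" ;;
        *)   echo "completed" ;;
    esac
}

task_id="import-ami-06f701b644dc0fd77"
status=""
while [ "$status" != "completed" ] && [ "$status" != "deleted" ]; do
    status="$(get_status "$task_id")"
    echo "task $task_id status: $status"
    i=$((i + 1))    # in real use: sleep 30 between polls
done
```

The loop exits on completed (success) or deleted (the task was cancelled or failed validation).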

3. Check the imported image

In Launch Instance -> My AMIs you can see the imported image, and it can be used to launch new instances.

4. Cancel the import image task

If you need to cancel an active import task, use the cancel-import-task command:

aws ec2 cancel-import-task --import-task-id import-ami-0ab311a7d84fddee2
 

4. Import snapshot

AWS official website: https://docs.amazonaws.cn/vm-import/latest/userguide/vmimport-import-snapshot.html

1. Usage scenarios and conditions

Usage scenario: during a cloud migration, a disk can be imported as an Amazon EBS snapshot, an EBS volume created from that snapshot, and the volume attached to an EC2 instance, which migrates the contents of a data disk.

Conditions of Use:

  • The following disk formats are supported: Virtual Hard Disk (VHD/VHDX), ESX Virtual Machine Disk (VMDK), Raw.

  • First, you must upload the disk to Amazon S3.

  • A machine with the AWS CLI installed.

2. Create an import policy json file

The contents of containers-disk.json are as follows (the S3 upload of offline-data-disk.vhd is omitted here; see the import-image section):

{
    "Description": "offline-data-disk",
    "Format": "vhd",
    "UserBucket": {
        "S3Bucket": "migrate-cloud-image",
        "S3Key": "offline-data-disk.vhd"
    }
}

3. Command to import snapshot

# aws ec2 import-snapshot  --description "offline-disk-data" --disk-container file://containers-disk.json
{
    "Description": "offline-disk-data",
    "ImportTaskId": "import-snap-0f9f1a54ae3cfaac3",
    "SnapshotTaskDetail": {
        "Description": "offline-disk-data",
        "DiskImageSize": 0.0,
        "Format": "VHD",
        "Progress": "3",
        "Status": "active",
        "StatusMessage": "pending",
        "UserBucket": {
            "S3Bucket": "migrate-cloud-image",
            "S3Key": "offline-data-disk.vhd"
        }
    }
}

4. View snapshot import status

# aws ec2 describe-import-snapshot-tasks --import-task-ids import-snap-0f9f1a54ae3cfaac3
{
    "ImportSnapshotTasks": [
        {
            "Description": "offline-disk-data",
            "ImportTaskId": "import-snap-0f9f1a54ae3cfaac3",
            "SnapshotTaskDetail": {
                "Description": "offline-disk-data",
                "DiskImageSize": 371493888.0,
                "Format": "VHD",
                "SnapshotId": "snap-06f9d2432a33e00c7",
                "Status": "completed",
                "UserBucket": {
                    "S3Bucket": "migrate-cloud-image",
                    "S3Key": "offline-data-disk.vhd"
                }
            },
            "Tags": []
        }
    ]
}

When the status changes to completed, the import is done. In the console under ELASTIC BLOCK STORE -> Snapshots, the newly imported snapshot is visible.
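The SnapshotId needed for the next step can also be extracted programmatically. In real use, `--query 'ImportSnapshotTasks[0].SnapshotTaskDetail.SnapshotId' --output text` does this directly; the sketch below parses a saved copy of the response instead, so it runs without AWS access:

```shell
# A trimmed copy of the describe-import-snapshot-tasks response.
cat > /tmp/snap-task.json <<'EOF'
{"ImportSnapshotTasks":[{"SnapshotTaskDetail":{"SnapshotId":"snap-06f9d2432a33e00c7","Status":"completed"}}]}
EOF

# Pull out the snapshot ID with python3 (jq would work just as well).
snapshot_id="$(python3 -c "
import json
with open('/tmp/snap-task.json') as f:
    data = json.load(f)
print(data['ImportSnapshotTasks'][0]['SnapshotTaskDetail']['SnapshotId'])
")"
echo "imported snapshot: $snapshot_id"
```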

5. Create a volume and attach it to an EC2 instance

Create the volume in the Availability Zone of the instance it will be attached to:

#  aws ec2 create-volume --availability-zone cn-northwest-1c --snapshot-id snap-06f9d2432a33e00c7
{
    "AvailabilityZone": "cn-northwest-1c",
    "CreateTime": "2020-05-17T11:35:00+00:00",
    "Encrypted": false,
    "Size": 101,
    "SnapshotId": "snap-06f9d2432a33e00c7",
    "State": "creating",
    "VolumeId": "vol-0b01d8b1a69db9b1d",
    "Iops": 303,
    "Tags": [],
    "VolumeType": "gp2"
}

View the volume created above in the console under ELASTIC BLOCK STORE -> Volumes.

Attach the volume to the target EC2 instance:

# aws ec2 attach-volume --volume-id vol-0b01d8b1a69db9b1d --instance-id i-07fec72fd87af22ff --device /dev/sdb
{
    "AttachTime": "2020-05-17T11:43:01.774000+00:00",
    "Device": "/dev/sdb",
    "InstanceId": "i-07fec72fd87af22ff",
    "State": "attaching",
    "VolumeId": "vol-0b01d8b1a69db9b1d"
}

The newly attached volume now appears in the EC2 instance description.

 

Note, however, that you still need to mount the volume manually from inside the EC2 instance:

mount -t ext4 /dev/xvdb /opt
# df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        472M     0  472M   0% /dev
tmpfs           495M     0  495M   0% /dev/shm
tmpfs           495M   13M  482M   3% /run
tmpfs           495M     0  495M   0% /sys/fs/cgroup
/dev/xvda1      180G  161G   20G  90% /
tmpfs            99M     0   99M   0% /run/user/1000
/dev/xvdb        99G  139M   94G   1% /opt
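The mount above does not survive a reboot. To make it persistent, the usual route is an /etc/fstab entry; the sketch below stages the entry in a scratch file rather than touching the real /etc/fstab:

```shell
# Sketch: build an fstab-style entry (written to a scratch file, not /etc/fstab).
# The nofail option keeps the instance bootable even if the volume is detached later.
fstab_entry="/dev/xvdb  /opt  ext4  defaults,nofail  0 2"
echo "$fstab_entry" > /tmp/fstab.example
grep -q '/opt' /tmp/fstab.example && echo "entry staged: $fstab_entry"
```

In production, a filesystem UUID from `blkid /dev/xvdb` is safer than the device name, since device names can change across reboots.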

PS: after the snapshot is imported, the volume can also be created and attached from the console rather than the CLI; that is left for you to explore.

5. Export the image

Official website: https://docs.amazonaws.cn/vm-import/latest/userguide/vmexport_image.html

The parameter S3Bucket is the destination bucket name, and S3Prefix is the path within the bucket.

1. Export the image to S3

# aws ec2 export-image --image-id ami-0c32dcd3405ad47eb --disk-image-format VHD --s3-export-location S3Bucket=export-image,S3Prefix=/
{
    "DiskImageFormat": "vhd",
    "ExportImageTaskId": "export-ami-00bfb78a9f3fce2a3",
    "ImageId": "ami-0c32dcd3405ad47eb",
    "RoleName": "vmimport",
    "Progress": "0",
    "S3ExportLocation": {
        "S3Bucket": "export-image",
        "S3Prefix": "/"
    },
    "Status": "active",
    "StatusMessage": "validating"
}

2. Check the export status

# aws ec2 describe-export-image-tasks --export-image-task-ids export-ami-00bfb78a9f3fce2a3
{
    "ExportImageTasks": [
        {
            "ExportImageTaskId": "export-ami-00bfb78a9f3fce2a3",
            "Progress": "85",
            "S3ExportLocation": {
                "S3Bucket": "export-image",
                "S3Prefix": "/"
            },
            "Status": "active",
            "StatusMessage": "converting"
        }
    ]
}

3. Cancel the export image task

If necessary, use the cancel-export-task command to cancel an in-progress image export:

aws ec2 cancel-export-task --export-task-id export-ami-00bfb78a9f3fce2a3

If the export task has already completed, or the final disk image is being transferred, the command fails with an error.

6. AWS CLI upload and download

Download:

/usr/local/bin/aws s3 cp s3://export-image//export-ami-00bfb78a9f3fce2a3.vhd /home/centos/
download: s3://export-image//export-ami-00bfb78a9f3fce2a3.vhd to ./export-ami-00bfb78a9f3fce2a3.vhd

7. Summary

Testing showed that importing the VMDK failed: the console reported the format as unsupported, even though the VMDK was not encrypted. Importing the same disk in VHD format succeeded.

{
    "ImportImageTasks": [
        {
            "Description": "huawei server",
            "ImportTaskId": "import-ami-06f701b644dc0fd77",
            "SnapshotDetails": [],
            "Status": "deleted",
            "StatusMessage": "ClientError: Disk validation failed [Unsupported VMDK File Format]",
            "Tags": []
        }
    ]
}

Other factors aside, AWS import-image and import-snapshot let us migrate workloads to the cloud smoothly.

References:

https://mhl.xyz/Windows/aws-AMI.html

https://blog.csdn.net/weixin_33796177/article/details/92989241


Origin blog.csdn.net/citycloudpeter/article/details/106122455