Cloning (Save and Restore) DynamoDB Table to Another AWS Account

Published on September 12, 2022

By Hyuntaek Park

Senior full-stack engineer at Twigfarm

At Twigfarm, we created a new AWS account, and some of our DynamoDB tables needed to be copied from the old account to the new one. I assumed it would be as simple as the dump-and-restore process that other database systems offer, but it was not that simple.

I tried a few different approaches, but none of them was simple enough. Luckily, one of the AWS Solutions Architects pointed me to a solution that AWS had just released: https://aws.amazon.com/blogs/database/amazon-dynamodb-can-now-import-amazon-s3-data-into-a-new-table. You should read that article first to get a sense of how the feature works.

In this article, I will demonstrate how to export a DynamoDB table to an S3 bucket that lives in a different AWS account, and then create a new DynamoDB table from the data saved in S3.

Architecture

The architecture is fairly simple. In this article, I will walk through both the save (export) and the restore (import). Some steps are done in the console and others through the AWS CLI.


Prerequisites

You need to have your AWS CLI profiles for both source and destination accounts. In this article, I use source-user and destination-user for profile names. Please refer to https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-profiles.html.
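If you have not set the profiles up yet, a minimal sketch looks like this (the profile names are simply the ones used throughout this article):

# Configure credentials and default region for the source account
aws configure --profile source-user

# Configure credentials and default region for the destination account
aws configure --profile destination-user

# Sanity check: confirm each profile points at the expected account
aws sts get-caller-identity --profile source-user
aws sts get-caller-identity --profile destination-user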

DynamoDB Table

I have a simple DynamoDB table with two items in the source account. Our final goal is to have a table with the same two items in the destination account.


Enable Point-in-time recovery (PITR)

Choose the source DynamoDB table > Backups > Edit button under Point-in-time recovery (PITR). Then enable the point-in-time recovery feature.
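If you prefer the CLI, the same thing can be done with update-continuous-backups. This is just a sketch; <SOURCE_TABLE_NAME> and <YOUR_AWS_REGION> are placeholders for your own values:

# Enable point-in-time recovery (PITR) on the source table
aws dynamodb update-continuous-backups \
  --profile source-user \
  --region <YOUR_AWS_REGION> \
  --table-name <SOURCE_TABLE_NAME> \
  --point-in-time-recovery-specification PointInTimeRecoveryEnabled=true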

Destination S3 Bucket

Log into the destination AWS account. It is convenient to use a different web browser or open a new window in incognito mode.

Create a bucket as follows:

Bucket name: <YOUR_UNIQUE_BUCKET_NAME>

Choose ACL enabled and Object writer for Object Ownership.
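If you prefer to create the bucket from the CLI, a rough equivalent is sketched below with placeholder names (omit --create-bucket-configuration when your region is us-east-1):

# Create the destination bucket with ACLs enabled (Object writer ownership)
aws s3api create-bucket \
  --profile destination-user \
  --bucket <YOUR_UNIQUE_BUCKET_NAME> \
  --region <YOUR_AWS_REGION> \
  --create-bucket-configuration LocationConstraint=<YOUR_AWS_REGION> \
  --object-ownership ObjectWriter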


S3 Bucket Policy

Copy and paste the following JSON as your destination bucket's policy. Be sure to replace SOURCE_ACCOUNT_NO, SOURCE_USER_NAME, and DESTINATION_BUCKET_NAME with your own values.

{
    "Version": "2012-10-17",
    "Id": "",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<SOURCE_ACCOUNT_NO>:user/<SOURCE_USER_NAME>"
            },
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::<DESTINATION_BUCKET_NAME>",
                "arn:aws:s3:::<DESTINATION_BUCKET_NAME>/*"
            ]
        }
    ]
}
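You can paste this into the console under Permissions > Bucket policy, or, if you save it to a local file (policy.json here), attach it from the CLI like this:

# Attach the bucket policy to the destination bucket
aws s3api put-bucket-policy \
  --profile destination-user \
  --bucket <DESTINATION_BUCKET_NAME> \
  --policy file://policy.json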

Export table to Amazon S3 using Command Line Interface (CLI)

Enter the following command in your terminal, replacing the placeholders with your own values. Note that --s3-bucket-owner is the destination account number.

aws dynamodb export-table-to-point-in-time \
  --profile source-user \
  --region <YOUR_AWS_REGION> \
  --table-arn <SOURCE_DYNAMODB_TABLE_ARN> \
  --s3-bucket <DESTINATION_BUCKET_NAME> \
  --s3-bucket-owner <DESTINATION_ACCOUNT_NO> \
  --s3-prefix backup-folder


If the command runs without error, you will see a new entry under Exports to S3 in the source account's DynamoDB console. Wait a few minutes; the status will change from Exporting to Completed. The exporting part is now done.
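If you would rather watch the progress from the terminal, the export can also be inspected with list-exports and describe-export; the export ARN is printed in the output of the export command above:

# List recent exports in the source account and region
aws dynamodb list-exports \
  --profile source-user \
  --region <YOUR_AWS_REGION>

# Or check one export by its ARN until ExportStatus becomes COMPLETED
aws dynamodb describe-export \
  --profile source-user \
  --region <YOUR_AWS_REGION> \
  --export-arn <EXPORT_ARN>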

Restore from S3

Now, in the destination account, go to DynamoDB > Imports from S3 and click the Import from S3 button.

Click the Browse S3 button, drill down through the folders until you see the data folder, and choose the file with the .json.gz extension.


Then fill out the form to create the new DynamoDB table.
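The import can also be started from the CLI with import-table. The sketch below assumes a simple table with a single string partition key named id; adjust the key schema, table name, and billing mode to match your own table, and point S3KeyPrefix at the data folder the export created (the same one you would drill into in the console), where <EXPORT_ID> is a placeholder for the export's folder name:

# Import the exported data from S3 into a new table in the destination account
aws dynamodb import-table \
  --profile destination-user \
  --region <YOUR_AWS_REGION> \
  --s3-bucket-source S3Bucket=<DESTINATION_BUCKET_NAME>,S3KeyPrefix=backup-folder/AWSDynamoDB/<EXPORT_ID>/data/ \
  --input-format DYNAMODB_JSON \
  --input-compression-type GZIP \
  --table-creation-parameters '{
      "TableName": "<NEW_TABLE_NAME>",
      "AttributeDefinitions": [{"AttributeName": "id", "AttributeType": "S"}],
      "KeySchema": [{"AttributeName": "id", "KeyType": "HASH"}],
      "BillingMode": "PAY_PER_REQUEST"
  }'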

Verification

Go to DynamoDB in the destination account and check that the table items were imported successfully.
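If you prefer the terminal, a quick scan of the new table shows the imported items as well:

# Scan the imported table in the destination account
aws dynamodb scan \
  --profile destination-user \
  --region <YOUR_AWS_REGION> \
  --table-name <NEW_TABLE_NAME>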


If you see identical table contents, congratulations! Import from S3 is a new feature; without it, you would have had to wrestle with many complicated AWS services and permissions. With it, S3 and DynamoDB are the only services you need to deal with.

Thanks!