Amazon S3 Bucket via Access Key
About Amazon S3 Bucket
Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance. This means customers of all sizes and industries can use it to store and protect any amount of data for a range of use cases, such as data lakes, websites, mobile applications, backup and restore, archive, enterprise applications, IoT devices, and big data analytics.
Required permissions
s3:PutObject - required, for backup archive upload to Amazon S3 bucket
s3:GetObject - optional, for backup restore and instant download from Amazon S3 bucket
s3:DeleteObject - optional, for retention policy, automatic removal of outdated backups from Amazon S3 bucket
s3:GetBucketLocation - optional, required to automatically determine the Service Endpoint URL
s3:PutObjectRetention - optional, required for the S3 Object Lock header x-amz-object-lock-mode
s3:PutObjectLegalHold - optional, required for the S3 Object Lock header x-amz-object-lock-legal-hold
s3:PutObjectTagging - optional, required for the S3 Object Tagging header x-amz-tagging
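These permissions are typically granted through an IAM policy attached to the user whose Access Key you will give to Cloudback. The snippet below is a minimal sketch of such a policy created with boto3; the bucket name my-cloudback-backups, the user name cloudback-backup-user, and the policy name CloudbackS3Access are placeholders, not names used by Cloudback.

```python
import json
import boto3

# Placeholder names - replace with your own bucket, user, and policy names.
BUCKET = "my-cloudback-backups"
POLICY_NAME = "CloudbackS3Access"
USER_NAME = "cloudback-backup-user"

# Policy document covering the permissions listed above:
# object-level actions apply to objects inside the bucket,
# GetBucketLocation applies to the bucket itself.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject",
                "s3:PutObjectRetention",
                "s3:PutObjectLegalHold",
                "s3:PutObjectTagging",
            ],
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        },
        {
            "Effect": "Allow",
            "Action": ["s3:GetBucketLocation"],
            "Resource": f"arn:aws:s3:::{BUCKET}",
        },
    ],
}

iam = boto3.client("iam")

# Create the policy and attach it to the backup user.
policy = iam.create_policy(
    PolicyName=POLICY_NAME,
    PolicyDocument=json.dumps(policy_document),
)
iam.attach_user_policy(
    UserName=USER_NAME,
    PolicyArn=policy["Policy"]["Arn"],
)
```

You can also paste the same policy document into the JSON editor when creating the policy in the AWS Management Console. Only s3:PutObject is strictly required; the optional actions can be dropped if you do not need the corresponding features.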
Set up Amazon S3 Bucket Access Key as a customer managed storage
To set up Amazon S3 Bucket Access Key as a storage for your backups, follow the steps below.
Create a new Amazon S3 Access Key storage
Open the Cloudback Dashboard
Navigate to the Storages page by clicking on the Storages link in the left-side navigation pane. Click on the + New storage button.
Type a storage name in the Storage name field. Use a name that will help you identify this storage in the future. Select Amazon S3 AccessKey from the Storage Provider dropdown.
Set up storage settings
Choose the settings for the storage:
Deduplication type - enable or disable data deduplication. For more details, please refer to the Deduplication documentation
Archive type - enable or disable archive password protection. For more details, please refer to the Password-Protected Archives documentation
Archive name pattern - configure the archive name pattern, which is used to generate the name of the backup archive. For more details, please refer to the Archive Name Pattern documentation
Create Amazon S3 Bucket
To upload backups to an Amazon S3 bucket, you need to create a bucket. You can skip this step if you already have one. You can find more information on how to create a bucket in the Amazon S3 documentation.
Cloudback needs the ARN of the bucket to access it. To find the ARN, click on the name of your bucket in the S3 console and open the Properties tab. Copy the ARN and paste it into the Step 1 field on the Cloudback site.
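If you prefer to create the bucket from code rather than the S3 console, a minimal boto3 sketch is shown below; the bucket name and region are placeholders. A bucket ARN always has the fixed form arn:aws:s3:::&lt;bucket-name&gt;, so you can also write it down without opening the Properties tab.

```python
import boto3

BUCKET = "my-cloudback-backups"   # placeholder bucket name
REGION = "eu-west-1"              # placeholder region

s3 = boto3.client("s3", region_name=REGION)

# Create the bucket. For us-east-1 the CreateBucketConfiguration
# argument must be omitted entirely.
s3.create_bucket(
    Bucket=BUCKET,
    CreateBucketConfiguration={"LocationConstraint": REGION},
)

# The ARN to paste into the Step 1 field follows a fixed format.
print(f"arn:aws:s3:::{BUCKET}")
```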
Create AWS Access Key
Cloudback accesses your Amazon S3 bucket using an AWS Access Key. You need to create a new Access Key in the AWS Management Console. You can find more information on how to create an Access Key in the AWS IAM documentation. Enter its ID and secret into the Step 2 fields on the New Storage page.
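As an alternative to the console, the Access Key can be created programmatically with boto3, as in the sketch below; cloudback-backup-user is a placeholder for the IAM user you set up for backups. Note that the secret is only returned at creation time, so store it right away.

```python
import boto3

iam = boto3.client("iam")

# Create a new Access Key for the backup user (placeholder name).
response = iam.create_access_key(UserName="cloudback-backup-user")
key = response["AccessKey"]

# These two values go into the Step 2 fields on the New Storage page.
print("Access Key ID:    ", key["AccessKeyId"])
print("Secret Access Key:", key["SecretAccessKey"])
```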
Provide the Service Endpoint URL (Optional)
The service endpoint URL can be used to access the Amazon S3 bucket. If you don't provide the service endpoint URL, Cloudback will try to determine it automatically using the s3:GetBucketLocation permission. In general, a service endpoint URL can be used with any S3-compatible storage. Enter the service endpoint URL in the Step 3 field on the New Storage page.
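To illustrate what the s3:GetBucketLocation permission makes possible, the sketch below resolves a bucket's region and builds the corresponding regional endpoint URL; the bucket name is a placeholder, and the https://s3.&lt;region&gt;.amazonaws.com form applies to AWS itself, while other S3-compatible providers document their own endpoints.

```python
import boto3

BUCKET = "my-cloudback-backups"  # placeholder bucket name

s3 = boto3.client("s3")

# GetBucketLocation returns None for buckets in us-east-1.
region = s3.get_bucket_location(Bucket=BUCKET)["LocationConstraint"] or "us-east-1"

# Regional endpoint URL for an AWS-hosted bucket.
endpoint_url = f"https://s3.{region}.amazonaws.com"
print(endpoint_url)
```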
Provide additional HTTP headers (Optional)
You can provide additional HTTP headers to be used when uploading backups to the Amazon S3 bucket. The headers can be used to set the S3 Object Lock or S3 Object Tagging headers.
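For illustration, the sketch below shows how those headers map onto an S3 upload made directly with boto3: ObjectLockMode corresponds to x-amz-object-lock-mode, ObjectLockLegalHoldStatus to x-amz-object-lock-legal-hold, and Tagging to x-amz-tagging. The bucket and key names are placeholders, and Object Lock only works on buckets created with Object Lock enabled.

```python
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3")

# Placeholder upload demonstrating the optional headers.
s3.put_object(
    Bucket="my-cloudback-backups",
    Key="backups/example-archive.zip",
    Body=b"archive bytes go here",
    # x-amz-object-lock-mode and its retain-until date
    # (requires a bucket created with Object Lock enabled).
    ObjectLockMode="COMPLIANCE",
    ObjectLockRetainUntilDate=datetime(2026, 1, 1, tzinfo=timezone.utc),
    # x-amz-object-lock-legal-hold
    ObjectLockLegalHoldStatus="ON",
    # x-amz-tagging
    Tagging="app=cloudback&type=backup",
)
```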
Save storage
Click on the Save button to save the new storage. You can also use the Test button to check whether the storage is configured correctly. After saving the storage, you can use it for storing backups of your repositories.
All storage settings can be changed later on the Storages page. To edit the storage settings, click on the Edit button next to the storage you want to edit.
Change the storage for a repository
You can change the storage for a particular repository on the Repository Details page. You can also assign it to multiple repositories through Bulk Operations.