TeamCity On-Premises 2024.12 Help

Amazon S3 and S3-compatible Storages

TeamCity comes bundled with the Amazon S3 Artifact Storage plugin which allows storing build artifacts in Amazon S3 buckets, as well as S3-compatible buckets such as MinIO, Backblaze B2, and others. S3-compatible storages can be hosted in both AWS and non-AWS environments.

Create and Set Up a New AWS S3 Storage

  1. Navigate to the Administration | <Your_Project> page and switch to the Artifacts Storage tab.

    • Open settings of a <Root project> if you want your new storage to be available for all TeamCity projects.

    • Edit one specific project if your new storage should be available only for this project and its sub-projects.

  2. The built-in TeamCity artifacts storage is displayed by default and marked as active. Click the Add new storage button to create a new storage.

  3. Specify the custom storage name and, if needed, its internally used ID.

  4. Set the Type field to "AWS S3".

  5. Choose an existing AWS Connection that TeamCity should use to access your Amazon resources. If no suitable AWS connection exists, click the "+" icon to add one.

    The user whose credentials the selected AWS Connection uses (or the IAM role it assumes) to access the S3 buckets must have the following permissions (see the example policy after this list):

    • ListAllMyBuckets

    • GetBucketLocation

    • GetObject

    • ListBucket

    • PutObject

    • DeleteObject

    • GetAccelerateConfiguration (if Transfer Acceleration is enabled)
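    For reference, below is a minimal IAM policy sketch that grants the permissions listed above for a single bucket. The <YOUR_BUCKET_NAME> value is a placeholder; adjust the resources to your setup and drop the GetAccelerateConfiguration action if you do not use Transfer Acceleration.

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "ListBuckets",
          "Effect": "Allow",
          "Action": "s3:ListAllMyBuckets",
          "Resource": "*"
        },
        {
          "Sid": "BucketLevelAccess",
          "Effect": "Allow",
          "Action": [
            "s3:GetBucketLocation",
            "s3:ListBucket",
            "s3:GetAccelerateConfiguration"
          ],
          "Resource": "arn:aws:s3:::<YOUR_BUCKET_NAME>"
        },
        {
          "Sid": "ObjectLevelAccess",
          "Effect": "Allow",
          "Action": [
            "s3:GetObject",
            "s3:PutObject",
            "s3:DeleteObject"
          ],
          "Resource": "arn:aws:s3:::<YOUR_BUCKET_NAME>/*"
        }
      ]
    }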

  6. TeamCity uses the selected AWS Connection to retrieve the list of available S3 buckets. Open the Bucket drop-down menu to choose a specific item from the list.

  7. (Optional) Specify the path prefix if you want to use the same S3 bucket for all TeamCity projects and configure prefix-based permissions.

  8. Amazon S3 buckets support two options to speed up file uploads and downloads:

    • AWS CloudFront — a content delivery network (CDN) that allows TeamCity to transfer artifacts via nearby low-latency CloudFront servers.

    • Transfer Acceleration — a bucket-level feature designed to optimize transfer speeds from across the world into centralized S3 buckets. It enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket.

    If your bucket is configured to use Transfer Acceleration or CloudFront, choose the corresponding option under the Transfer speed-up section; Transfer Acceleration must also be enabled on the bucket itself (see the note after this list). Otherwise, if you want TeamCity to transfer files in the regular mode, choose the None type.

    S3 Transfer SpeedUp Mode
  9. To optimize the upload of large files to the storage, you can enable multipart upload. To do this, tick the Customize threshold and part size setting and set the multipart upload threshold. The minimum allowed value is 5MB. Supported suffixes: KB, MB, GB, TB. If you leave this field empty, multipart upload is initiated automatically for all files larger than the default threshold of 8MB.

    Multipart upload
    Additionally, you can configure the maximum allowed size of each uploaded file part. The minimum value is 5MB. If left empty, TeamCity will use 8MB as the default value.

  10. The Force virtual host addressing option is enabled by default; uncheck it to turn the feature off. TeamCity currently supports both virtual-hosted-style and path-style requests. Note that Amazon no longer supports path-style access for buckets created after September 2020.

  11. Tick Verify file integrity after upload to allow TeamCity to perform an additional integrity check on uploaded files. If the verification fails, TeamCity writes a corresponding message to the build log.

  12. Click Save to save your new storage and return to the list of available storages.
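Note that Transfer Acceleration is a bucket-level feature that must be enabled on the AWS side. Assuming you manage the bucket with the AWS CLI, you can switch it to accelerated mode with the aws s3api put-bucket-accelerate-configuration command and a configuration document like the following (set Status to Suspended to turn acceleration off later):

{
  "Status": "Enabled"
}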

When viewing a list of storages available for a project, click Make Active to start using the corresponding storage for all new builds of this project. The has N usages link allows you to view which builds used this storage to upload their artifacts.

Make storage active

Create and Set Up a New S3-Compatible Storage

  1. Navigate to the Administration | <Your_Project> page and switch to the Artifacts Storage tab.

    • Open settings of a <Root project> if you want your new storage to be available for all TeamCity projects.

    • Edit one specific project if your new storage should be available only for this project and its sub-projects.

  2. The built-in TeamCity artifacts storage is displayed by default and marked as active. Click the Add new storage button to create a new storage.

  3. Specify the custom storage name and, if needed, its internally used ID.

  4. Set the Type field to "Custom S3".

  5. Specify the Access key ID and Secret access key values. See the documentation for your S3-compatible storage vendor for information on how to issue access keys.

  6. Specify the storage endpoint TeamCity should use to access your bucket.

  7. (Optional) Specify the path prefix if you want to use the same S3 bucket for all TeamCity projects and configure prefix-based permissions.

  8. To optimize the upload of large files to the storage, you can enable multipart upload. To do this, tick the Customize threshold and part size setting and set the multipart upload threshold. The minimum allowed value is 5MB. Supported suffixes: KB, MB, GB, TB. If you leave this field empty, multipart upload is initiated automatically for all files larger than the default threshold of 8MB.

    Multipart upload

    Additionally, you can configure the maximum allowed size of each uploaded file part. The minimum value is 5MB. If left empty, TeamCity will use 8MB as the default value.

  9. The Force virtual host addressing option is enabled by default; uncheck it to turn the feature off. TeamCity currently supports both virtual-hosted-style and path-style requests. Note that Amazon no longer supports path-style access for buckets created after September 2020.

  10. Tick Verify file integrity after upload to allow TeamCity to perform an additional integrity check on uploaded files. If the verification fails, TeamCity writes a corresponding message to the build log.

  11. Click Save to save your new storage and return to the list of available storages.

When viewing a list of storages available for a project, click Make Active to start using the corresponding storage for all new builds of this project. The has N usages link allows you to view which builds used this storage to upload their artifacts.

Make storage active
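If your project uses versioned settings, an S3-compatible storage can also be described in the Kotlin DSL. The following rough sketch is modeled on the s3Storage sample shown later on this page and assumes an s3CompatibleStorage project feature; the feature and property names here are illustrative and may differ in your DSL version, so check the generated Kotlin settings of your project for the exact names.

import jetbrains.buildServer.configs.kotlin.*
import jetbrains.buildServer.configs.kotlin.projectFeatures.s3CompatibleStorage

project {
    // ...
    features {
        // Illustrative sketch: property names may differ from the ones
        // generated by your TeamCity server.
        s3CompatibleStorage {
            id = "PROJECT_EXT_5"
            storageName = "MinIO artifacts"
            bucketName = "teamcity-artifacts"
            // Endpoint of the S3-compatible service, for example a MinIO instance
            endpoint = "https://minio.example.com"
            accessKeyID = "minio-access-key"
            // Reference the secret as a stored token rather than plain text
            accessKey = "credentialsJSON:00000000-0000-0000-0000-000000000000"
            forceVirtualHostAddressing = false
            verifyIntegrityAfterUpload = true
        }
    }
    // ...
}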

S3 Storage Classes

Amazon S3 Storage Classes allow you to fine-tune your storage based on its desired performance, as well as availability and resilience of its data.

There are two ways to enable the required storage class:

  • On the TeamCity side. When uploading artifacts to an S3 bucket, TeamCity adds the x-amz-storage-class header to its PUT requests. The header value depends on the corresponding storage setting in TeamCity (for example, x-amz-storage-class: INTELLIGENT_TIERING). This mode does not require any additional setup on the AWS side.

    Although this approach is not currently supported, we hope to implement this functionality in our future release cycles. Upvote and comment on this YouTrack ticket to support the feature and share your feedback: TW-79992.

  • On the AWS side. In this mode, TeamCity uploads artifacts in a regular manner and the required storage class is applied by a pre-configured lifecycle rule after the artifacts are uploaded. To set up this rule, do the following (an equivalent JSON configuration is shown after these steps):

    1. Open the required S3 storage and switch to the Management tab.

    2. Click Create lifecycle rule.

    3. Check Move current versions of objects between storage classes under the Lifecycle rule actions section.

    4. Choose the required storage class and the delay between the upload and transition dates. Set the Days after object creation to "0" to transition your artifacts as soon as TeamCity uploads them.

    5. Enable additional rules for stored artifacts. For example, you can check Expire current versions of objects to label previously uploaded artifacts as expired, and Permanently delete noncurrent versions of objects to periodically clean your storage.

    6. Specify the rule scope to choose whether it should apply to the entire storage or only those artifacts that match the required filter.

    7. Review your rule at the bottom of the page. It may look like the following:

      S3 lifecycle rule

    8. Click Create rule to save your lifecycle rule.
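    If you prefer to manage the rule outside the console, the same setup can be expressed as an S3 lifecycle configuration document, for example applied with the aws s3api put-bucket-lifecycle-configuration command. The values below are illustrative: artifacts transition to INTELLIGENT_TIERING immediately after upload, current versions expire after 365 days, and noncurrent versions are permanently deleted after 30 days.

    {
      "Rules": [
        {
          "ID": "teamcity-artifacts",
          "Status": "Enabled",
          "Filter": { "Prefix": "" },
          "Transitions": [
            { "Days": 0, "StorageClass": "INTELLIGENT_TIERING" }
          ],
          "Expiration": { "Days": 365 },
          "NoncurrentVersionExpiration": { "NoncurrentDays": 30 }
        }
      ]
    }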

See the following AWS help article for more information: Using Amazon S3 storage classes.

Transferring Artifacts via CloudFront

Amazon CloudFront is a content delivery network that offers low latency and high transfer speeds. Enabling its support for an S3 storage will allow TeamCity to transfer artifacts through the closest CloudFront server. If your S3 bucket is located in a different region than your TeamCity infrastructure, this could significantly speed up the artifacts' upload/download and reduce expenses.

Prerequisites

TeamCity can set up the CloudFront integration for you, or you can configure all the settings manually.

The CloudFront integration requires two CloudFront distributions (one for uploading and one for downloading artifacts), a public/private key pair, and a key group containing the public key. The required permissions and setup steps are described below.

CloudFront Settings

When you switch the transfer speed-up type to AWS CloudFront, four new settings appear.

  • Use the Download distribution and Upload distribution drop-down menus to choose manually created distributions.

  • The Public key field and the Upload private key... button allow you to specify the corresponding keys.

Alternatively, you can let TeamCity configure all four settings automatically (see Automatic CloudFront Setup below).

For CloudFront settings to work properly, TeamCity needs the following permissions (a minimal policy statement granting them is shown after this list):

  • cloudfront:ListDistributions

  • cloudfront:ListKeyGroups

  • cloudfront:ListPublicKeys
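Below is a minimal IAM statement sketch granting just these permissions. CloudFront list actions do not support resource-level restrictions, so the resource is a wildcard:

{
  "Effect": "Allow",
  "Action": [
    "cloudfront:ListDistributions",
    "cloudfront:ListKeyGroups",
    "cloudfront:ListPublicKeys"
  ],
  "Resource": "*"
}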

Automatic CloudFront Setup

TeamCity can configure the settings automatically. This involves:

  • Generating a key pair and uploading a public key to CloudFront.

  • Creating a new key group in CloudFront.

  • Creating two new distributions with:

    • the Use all edge locations price class.

    • a new Origin Access Identity that can access the current bucket.

    • the default behavior defining:

      • allowed HTTP methods: GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE for the upload distribution and GET, HEAD, OPTIONS for the download distribution;

      • a new key group with viewer access;

      • a custom cache policy that allows passing all query strings.

  • Adding a new policy to an S3 bucket to allow the new distributions access to it. See the policy example.

Automatic setup requires granting TeamCity additional permissions:

  • cloudfront:CreateDistribution

  • cloudfront:CreateKeyGroup

  • cloudfront:CreatePublicKey

  • cloudfront:CreateOriginRequestPolicy

  • cloudfront:CreateCloudFrontOriginAccessIdentity

  • cloudfront:CreateCachePolicy

  • cloudfront:DeleteKeyGroup

  • cloudfront:DeletePublicKey

  • cloudfront:ListCloudFrontOriginAccessIdentities

  • cloudfront:ListCachePolicies

  • cloudfront:ListOriginRequestPolicies

  • cloudfront:GetDistribution

  • cloudfront:GetPublicKey

  • s3:GetBucketPolicy

  • s3:PutBucketPolicy

Example policy providing all necessary permissions:

{ "Version": "2012-10-17", "Statement": [ { "Sid": "1", "Effect": "Allow", "Action": [ "cloudfront:CreatePublicKey", "cloudfront:CreateOriginRequestPolicy", "cloudfront:ListCloudFrontOriginAccessIdentities", "cloudfront:DeleteKeyGroup", "cloudfront:GetPublicKey", "cloudfront:ListCachePolicies", "cloudfront:CreateDistribution", "cloudfront:ListOriginRequestPolicies", "cloudfront:DeletePublicKey", "cloudfront:CreateCloudFrontOriginAccessIdentity", "cloudfront:CreateKeyGroup", "cloudfront:CreateCachePolicy", "cloudfront:GetDistribution", "cloudfront:ListPublicKeys", "s3:ListAllMyBuckets", "cloudfront:ListKeyGroups", "cloudfront:ListDistributions" ], "Resource": "*" }, { "Sid": "2", "Effect": "Allow", "Action": [ "s3:PutBucketPolicy", "s3:GetBucketPolicy" ], "Resource": "arn:aws:s3:::<YOUR_BUCKET_NAME>" } ] }

Manual CloudFront Setup

For security reasons, we recommend configuring two separate distributions for uploading and downloading artifacts. For each distribution:

  1. Generate a key pair in SSH-2 RSA key format.

  2. Upload the public key from the pair to CloudFront.

  3. Add a new key group in CloudFront and add the created public key to this group.

  4. Create a new cache policy with Cache key settings | Query strings set to All.

  5. If you use a private bucket, create a new OAI user.

  6. Create a distribution and attach your key group to it:

    • Make sure to choose the same S3 bucket as specified in TeamCity.

    • Allowed HTTP methods: GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE for uploading artifacts; GET, HEAD, OPTIONS for downloading

    • Restrict viewer access: yes

    • Trusted authorization type: trusted key groups

    • Cache key and origin requests: Cache policy and origin request policy

    • For private buckets, enable the Use OAI option and configure the OAI with the following setting:

      • Bucket policy: No, I will update the bucket policy

    • For public buckets, disable the Block public access option.

  7. Add a new policy to your S3 bucket. See the policy example.

When configured, the distributions should automatically appear in the Download distribution and Upload distribution drop-down menus.

  1. Select the target CloudFront distribution.

  2. In Public key, select the public key associated with this distribution.

  3. Click the Upload private key... button to upload the private key from the pair.

  4. Save the storage settings.

S3 Policy Example

For accessing a private bucket with OAI:

{ "Sid": "1", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity <OAI ID" }, "Action": [ "s3:GetObject", "s3:PutObject", "s3:DeleteObject" ], "Resource": "arn:aws:s3:::<S3 bucket name/*" }

For accessing a public bucket:

{ "Sid": "PublicRead", "Effect": "Allow", "Principal": "*", "Action": [ "s3:GetObject", "s3:GetObjectVersion" ], "Resource": "arn:aws:s3::<BUCKET_NAME>/*" }

Kotlin DSL

The following sample illustrates how to add an S3 bucket as custom storage for project artifacts, and set this storage as primary (default).

import jetbrains.buildServer.configs.kotlin.*
import jetbrains.buildServer.configs.kotlin.projectFeatures.activeStorage
import jetbrains.buildServer.configs.kotlin.projectFeatures.s3Storage

project {
    // ...
    features {
        activeStorage {
            id = "PROJECT_EXT_37"
            activeStorageID = "PROJECT_EXT_4"
        }
        s3Storage {
            id = "PROJECT_EXT_4"
            awsEnvironment = default {
                awsRegionName = "eu-west-1"
            }
            connectionId = "AwsPrimary"
            storageName = "S3 Transfer Acceleration"
            bucketName = "dk-s3ta"
            enableTransferAcceleration = true
            forceVirtualHostAddressing = true
            verifyIntegrityAfterUpload = true
        }
    }
    // ...
}

Migrating Artifacts To a Different Storage

The TeamCity server ships with a command-line tool that transfers build artifacts from one storage to another. You can download this tool from the Project Settings | Artifacts Storage page.

Download artifacts migration tool

Currently, the tool supports the following migration routes:

  • From a local directory to an Amazon S3 bucket and vice versa

  • From one Amazon S3 bucket to another

We're working on supporting other cloud storage options as well.

Learn more: Artifacts Migration Tool.

Last modified: 03 November 2024