GitLab CI template for S3 (Simple Storage Service)¶
This project implements a GitLab CI/CD template to deploy your objects to any S3 (Simple Storage Service) compatible object storage service.
This is a basic and very cheap solution for hosting static websites as well as progressive web applications.
It uses s3cmd to interact with the S3 API endpoint and upload objects.
Usage¶
This template can be used both as a CI/CD component or using the legacy `include:project` syntax.
Use as a CI/CD component¶
Add the following to your `.gitlab-ci.yml`:
```yaml
include:
  # 1: include the component
  - component: $CI_SERVER_FQDN/to-be-continuous/s3/gitlab-ci-s3@7.2.3
    # 2: set/override component inputs
    inputs:
      # ⚠ this is only an example
      deploy-files: "website/"
      staging-disabled: "true"
      base-bucket-name: "wonder-doc"
      # use same bucket for all review envs
      review-bucket-name: "wonder-doc-review"
      # segregate review envs with prefixes
      review-prefix: "$CI_ENVIRONMENT_SLUG"
      region: "eu-west-0"
```
Use as a CI/CD template (legacy)¶
Add the following to your `.gitlab-ci.yml`:
```yaml
include:
  # 1: include the template
  - project: 'to-be-continuous/s3'
    ref: '7.2.3'
    file: '/templates/gitlab-ci-s3.yml'

variables:
  # 2: set/override template variables
  # ⚠ this is only an example
  S3_DEPLOY_FILES: "website/"
  S3_STAGING_DISABLED: "true"
  S3_BASE_BUCKET_NAME: "wonder-doc"
  # use same bucket for all review envs
  S3_REVIEW_BUCKET_NAME: "wonder-doc-review"
  # segregate review envs with prefixes
  S3_REVIEW_PREFIX: "$CI_ENVIRONMENT_SLUG"
  S3_REGION: "eu-west-0"
```
Understand¶
This chapter introduces the key notions and principles needed to understand how this template works.
Managed deployment environments¶
This template implements continuous delivery/continuous deployment for projects hosted on S3 platforms.
It allows you to manage automatic deployment & cleanup of standard predefined environments. Each environment can be enabled/disabled by configuration. If you're not satisfied with the predefined environments and/or their associated Git workflow, you may implement your own environments and workflow by reusing/extending the base (hidden) jobs. This is advanced usage and is not covered by this documentation.
The following chapters present the managed predefined environments and their associated Git workflow.
Review environments¶
The template supports review environments: those are dynamic and ephemeral environments to deploy your ongoing developments (a.k.a. feature or topic branches).
When enabled, it deploys the result from upstream build stages to a dedicated and temporary environment. It is only active for non-production, non-integration branches.
It is a strict equivalent of GitLab's Review Apps feature.
It also comes with a cleanup job (accessible either from the environments page, or from the pipeline view).
Integration environment¶
If you're using a Git Workflow with an integration branch (such as Gitflow), the template supports an integration environment.
When enabled, it deploys the result from upstream build stages to a dedicated environment.
It is only active for your integration branch (`develop` by default).
Production environments¶
Lastly, the template supports 2 environments associated with your production branch (`main` or `master` by default):
- a staging environment (an iso-prod environment meant for testing and validation purposes),
- the production environment.
You're free to enable either or both, and you can also choose your deployment-to-production policy:
- continuous deployment: automatic deployment to production (when the upstream pipeline is successful),
- continuous delivery: deployment to production can be triggered manually (when the upstream pipeline is successful).
Using other S3-compatible systems than AWS¶
The template can be used with other storage systems, provided they implement a compatible API.
In that case, you'll have to override the default `$S3_ENDPOINT_HOST` and `$S3_WEBSITE_ENDPOINT` variables.
| Provider | `endpoint-host` / `S3_ENDPOINT_HOST` | `website-endpoint` / `S3_WEBSITE_ENDPOINT` |
|---|---|---|
| Google Cloud Platform | `storage.googleapis.com` | website hosting in GCP not supported by s3cmd |
| Microsoft Azure | requires using Minio (read this article for further information) | N/A |
| Flexible Engine (Orange Business Services) | `oss.<region>.prod-cloud-ocb.orange-business.com` (`<region>` must be set) | `https://%(bucket)s.oss-website.%(location)s.prod-cloud-ocb.orange-business.com` |
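For instance, targeting Google Cloud Storage could look like the following sketch (the variable values come from the table above; since s3cmd does not support GCP website hosting, hosting is disabled here):

```yaml
# illustrative: point the template at Google Cloud Storage
variables:
  S3_ENDPOINT_HOST: "storage.googleapis.com"
  # website hosting in GCP is not supported by s3cmd
  S3_WEBSITE_DISABLED: "true"
```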
Buckets namespacing¶
In its default configuration, the template manages (create/sync/delete) one S3 bucket per environment.
But you may also configure it to implement other policies. Here are several examples of alternate policies.
A single bucket with separate prefixes for each env¶
Here is the `.gitlab-ci.yml` configuration for one shared bucket for all envs, each separated by a prefix:
```yaml
variables:
  # use same bucket for all envs
  S3_REVIEW_BUCKET_NAME: "acme-bucket-shared"
  S3_INTEG_BUCKET_NAME: "acme-bucket-shared"
  S3_STAGING_BUCKET_NAME: "acme-bucket-shared"
  S3_PROD_BUCKET_NAME: "acme-bucket-shared"
  # segregate envs with prefixes
  S3_PREFIX: "$CI_ENVIRONMENT_SLUG"
```
Hybrid policy¶
Here is the `.gitlab-ci.yml` configuration for one shared bucket for review envs and separate buckets for the others:
```yaml
variables:
  # use same bucket for all review envs
  S3_REVIEW_BUCKET_NAME: "acme-bucket-review"
  S3_INTEG_BUCKET_NAME: "acme-bucket-integ"
  S3_STAGING_BUCKET_NAME: "acme-bucket-staging"
  S3_PROD_BUCKET_NAME: "acme-bucket-prod"
  # segregate review envs with prefixes
  S3_REVIEW_PREFIX: "$CI_ENVIRONMENT_SLUG"
```
Deployment output variables¶
As seen above, the S3 template may support up to 4 environments (`review`, `integration`, `staging` and `production`).
Each deployment job produces output variables that are propagated to downstream jobs (using dotenv artifacts):

- `environment_type`: set to the type of environment (`review`, `integration`, `staging` or `production`),
- `environment_name`: the application name (see below),
- `environment_url`: set to `$CI_ENVIRONMENT_URL`.
They may be freely used in downstream jobs (for instance to run acceptance tests against the latest deployed environment).
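For example, a downstream acceptance-test job might consume those variables like this (a sketch: the `acceptance-test` job, its stage, and the test script are hypothetical names, not part of the template):

```yaml
# illustrative downstream job using the deploy job's dotenv output variables
acceptance-test:
  stage: acceptance
  script:
    # environment_type, environment_name and environment_url are
    # provided by the upstream deployment job's dotenv artifact
    - echo "Testing $environment_name ($environment_type) at $environment_url"
    - ./run-acceptance-tests.sh "$environment_url"
```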
Configuration reference¶
Secrets management¶
Here is some advice about handling your secret variables:

- Manage them as project or group CI/CD variables:
  - In case a secret contains characters that prevent it from being masked, simply define its value as the Base64-encoded value prefixed with `@b64@`: it will then be possible to mask it, and the template will automatically decode it prior to using it.
  - Don't forget to escape special characters (ex: `$` -> `$$`).
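As an illustration, suppose a secret key has the (made-up) value `p@ss$word`, which can't be masked as-is; you could encode it and use the result, prefixed with `@b64@`, as the CI/CD variable value:

```shell
# encode a made-up secret value for use with the @b64@ prefix
# (printf avoids the trailing newline that echo would add)
encoded=$(printf '%s' 'p@ss$word' | base64)
echo "@b64@${encoded}"
# prints: @b64@cEBzcyR3b3Jk
```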
Global configuration¶
The S3 template relies on some global configuration used throughout all jobs.
| Input / Variable | Description | Default value |
|---|---|---|
| `cmd-image` / `S3_CMD_IMAGE` | The Docker image used to run s3cmd commands | `registry.hub.docker.com/d3fk/s3cmd:latest` |
| `endpoint-host` / `S3_ENDPOINT_HOST` | Default S3 endpoint hostname (with port) | `s3.amazonaws.com` (AWS) |
| `host-bucket` / `S3_HOST_BUCKET` | Default DNS-style bucket+hostname:port template for accessing a bucket | `%(bucket)s.$S3_ENDPOINT_HOST` |
| `region` / `S3_REGION` | Default region to create the buckets in (if not defined, the template won't create any) | none |
| `S3_ACCESS_KEY` | Default S3 service Access Key | has to be defined |
| `S3_SECRET_KEY` | Default S3 service Secret Key | has to be defined |
| `base-bucket-name` / `S3_BASE_BUCKET_NAME` | Base bucket name | `$CI_PROJECT_NAME` (see GitLab doc) |
| `prefix` / `S3_PREFIX` | Default S3 prefix to use as a root destination to upload objects in the S3 bucket | none |
| `scripts-dir` / `S3_SCRIPTS_DIR` | Directory where S3 hook scripts are located | `.` |
Deployment jobs¶
Each environment has its own deployment job (associated with the right branch).
It uses the following variables:
| Input / Variable | Description | Default value |
|---|---|---|
| `deploy-args` / `S3_DEPLOY_ARGS` | s3cmd command and options to deploy files to the bucket | `sync --recursive --delete-removed --acl-public --no-mime-magic --guess-mime-type` |
| `deploy-files` / `S3_DEPLOY_FILES` | Pattern(s) of files to deploy to the S3 bucket | `public/` (all files from the `public` directory) |
| `website-disabled` / `S3_WEBSITE_DISABLED` | Set to `true` to disable website hosting by your S3 bucket | none (enabled by default) |
| `website-args` / `S3_WEBSITE_ARGS` | s3cmd command and options to enable website hosting on the bucket | `ws-create --ws-index=index.html --ws-error=404.html` |
| `website-endpoint` / `S3_WEBSITE_ENDPOINT` | Default website endpoint url pattern (supports `%(bucket)s` and `%(location)s` placeholders); only required when website hosting is not disabled | `http://%(bucket)s.s3-website.%(location)s.amazonaws.com` |
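Putting the defaults together, the upload performed by a deploy job boils down to an s3cmd invocation roughly like the sketch below. This is illustrative only, not the template's literal code: the actual command line (in particular how the per-environment bucket name and prefix are composed) is assembled internally by the template.

```yaml
# illustrative: roughly the upload the deploy job performs with default settings
script:
  - s3cmd --host="$S3_ENDPOINT_HOST" --host-bucket="$S3_HOST_BUCKET"
      sync --recursive --delete-removed --acl-public --no-mime-magic --guess-mime-type
      "$S3_DEPLOY_FILES" "s3://<env bucket name>/<env prefix>"
```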
If need be, you can add your own hook script `s3-pre-deploy.sh` that will be triggered right before deploying files to the S3 bucket.
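As an illustration, a hypothetical `s3-pre-deploy.sh` could stamp the files about to be uploaded with build metadata (the `build-info.json` file name is an assumption; `public/` matches the default `S3_DEPLOY_FILES`):

```shell
#!/bin/sh
# s3-pre-deploy.sh - hypothetical pre-deploy hook (illustrative sketch)
set -e
# the deploy directory would normally be produced by an upstream build stage
mkdir -p public
# record which commit and environment this deployment corresponds to
printf '{"commit": "%s", "environment": "%s"}\n' \
  "${CI_COMMIT_SHA:-unknown}" "${CI_ENVIRONMENT_NAME:-unknown}" > public/build-info.json
```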
If the target bucket doesn't appear to exist, the template tries to create it.
s3-cleanup-review job¶
This job allows destroying each review environment. It simply deletes the associated objects in the bucket; after object removal, if the bucket appears to be empty, it also tries to delete the bucket.
s3-cleanup-all-review job¶
This job allows destroying all review environments at once (in order to save cloud resources).
It is disabled by default and can be controlled using the `$CLEANUP_ALL_REVIEW` variable:

- automatically executed if `$CLEANUP_ALL_REVIEW` is set to `force`,
- manual job enabled from any `master` branch pipeline if `$CLEANUP_ALL_REVIEW` is set to `true` (or any other value).
The first value (`force`) can be used in conjunction with a scheduled pipeline to clean up cloud resources, for instance every day at 6 pm or on Friday evening.
The second one simply enables the (manual) cleanup job on the `master` branch pipeline.
In any case, destroyed review environments will be automatically re-created the next time a developer pushes a new commit on a feature branch.
If you schedule the cleanup, you'll probably have to create an almost empty branch without any other template (no need to build/test/analyse your code if your only goal is to clean up environments).
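For instance, the `.gitlab-ci.yml` of such a dedicated cleanup branch could be as small as the sketch below, paired with a pipeline schedule targeting that branch with `CLEANUP_ALL_REVIEW` set to `force` in its scheduled variables:

```yaml
# illustrative: minimal .gitlab-ci.yml for a branch dedicated to scheduled review cleanup
include:
  - component: $CI_SERVER_FQDN/to-be-continuous/s3/gitlab-ci-s3@7.2.3
```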
Review environments configuration¶
Review environments are dynamic and ephemeral environments to deploy your ongoing developments (a.k.a. feature or topic branches).
They are enabled by default and can be disabled by setting the `S3_REVIEW_DISABLED` variable (see below).
Here are the variables supported to configure review environments:
| Input / Variable | Description | Default value |
|---|---|---|
| `review-disabled` / `S3_REVIEW_DISABLED` | Set to `true` to disable review environments | none (enabled) |
| `review-endpoint-host` / `S3_REVIEW_ENDPOINT_HOST` | S3 endpoint hostname (with port) for review env (only define to override default) | `$S3_ENDPOINT_HOST` |
| `review-region` / `S3_REVIEW_REGION` | Region to create the review buckets in (if not defined, the template won't create any) | `$S3_REGION` |
| `S3_REVIEW_ACCESS_KEY` | S3 service Access Key for review env (only define to override default) | `$S3_ACCESS_KEY` |
| `S3_REVIEW_SECRET_KEY` | S3 service Secret Key for review env (only define to override default) | `$S3_SECRET_KEY` |
| `review-bucket-name` / `S3_REVIEW_BUCKET_NAME` | Bucket name for review env | `"${S3_BASE_BUCKET_NAME}-${CI_ENVIRONMENT_SLUG}"` (ex: `myproject-review-fix-bug-12`) |
| `review-prefix` / `S3_REVIEW_PREFIX` | S3 prefix to use for review env (only define to override default) | `prefix` / `S3_PREFIX` |
| `review-autostop-duration` / `S3_REVIEW_AUTOSTOP_DURATION` | The amount of time before GitLab will automatically stop review environments | `4 hours` |
Integration environment configuration¶
The integration environment is the environment associated with your integration branch (`develop` by default).
It is enabled by default and can be disabled by setting the `S3_INTEG_DISABLED` variable (see below).
Here are the variables supported to configure the integration environment:
| Input / Variable | Description | Default value |
|---|---|---|
| `integ-disabled` / `S3_INTEG_DISABLED` | Set to `true` to disable the integration environment | none (enabled) |
| `integ-endpoint-host` / `S3_INTEG_ENDPOINT_HOST` | S3 endpoint hostname (with port) for integration env (only define to override default) | `$S3_ENDPOINT_HOST` |
| `integ-region` / `S3_INTEG_REGION` | Region to create the integration bucket in | `$S3_REGION` |
| `S3_INTEG_ACCESS_KEY` | S3 service Access Key for integration env (only define to override default) | `$S3_ACCESS_KEY` |
| `S3_INTEG_SECRET_KEY` | S3 service Secret Key for integration env (only define to override default) | `$S3_SECRET_KEY` |
| `integ-bucket-name` / `S3_INTEG_BUCKET_NAME` | Bucket name for integration env | `${S3_BASE_BUCKET_NAME}-integration` |
| `integ-prefix` / `S3_INTEG_PREFIX` | S3 prefix to use for integration env (only define to override default) | `prefix` / `S3_PREFIX` |
Staging environment configuration¶
The staging environment is an iso-prod environment meant for testing and validation purposes, associated with your production branch (`main` or `master` by default).
It is enabled by default and can be disabled by setting the `S3_STAGING_DISABLED` variable (see below).
Here are the variables supported to configure the staging environment:
| Input / Variable | Description | Default value |
|---|---|---|
| `staging-disabled` / `S3_STAGING_DISABLED` | Set to `true` to disable the staging environment | none (enabled) |
| `staging-endpoint-host` / `S3_STAGING_ENDPOINT_HOST` | S3 endpoint hostname (with port) for staging env (only define to override default) | `$S3_ENDPOINT_HOST` |
| `staging-region` / `S3_STAGING_REGION` | Region to create the staging bucket in | `$S3_REGION` |
| `S3_STAGING_ACCESS_KEY` | S3 service Access Key for staging env (only define to override default) | `$S3_ACCESS_KEY` |
| `S3_STAGING_SECRET_KEY` | S3 service Secret Key for staging env (only define to override default) | `$S3_SECRET_KEY` |
| `staging-bucket-name` / `S3_STAGING_BUCKET_NAME` | Bucket name for staging env | `${S3_BASE_BUCKET_NAME}-staging` |
| `staging-prefix` / `S3_STAGING_PREFIX` | S3 prefix to use for staging env (only define to override default) | `prefix` / `S3_PREFIX` |
Production environment configuration¶
The production environment is the final deployment environment associated with your production branch (`main` or `master` by default).
It is enabled by default and can be disabled by setting the `S3_PROD_DISABLED` variable (see below).
Here are the variables supported to configure the production environment:
| Input / Variable | Description | Default value |
|---|---|---|
| `prod-disabled` / `S3_PROD_DISABLED` | Set to `true` to disable the production environment | none (enabled) |
| `prod-endpoint-host` / `S3_PROD_ENDPOINT_HOST` | S3 endpoint hostname (with port) for production env (only define to override default) | `$S3_ENDPOINT_HOST` |
| `prod-region` / `S3_PROD_REGION` | Region to create the production bucket in | `$S3_REGION` |
| `S3_PROD_ACCESS_KEY` | S3 service Access Key for production env (only define to override default) | `$S3_ACCESS_KEY` |
| `S3_PROD_SECRET_KEY` | S3 service Secret Key for production env (only define to override default) | `$S3_SECRET_KEY` |
| `prod-bucket-name` / `S3_PROD_BUCKET_NAME` | Bucket name for production env | `$S3_BASE_BUCKET_NAME` |
| `prod-deploy-strategy` / `S3_PROD_DEPLOY_STRATEGY` | Defines the deployment-to-production strategy. One of `manual` (i.e. one-click) or `auto`. | `manual` |
| `prod-prefix` / `S3_PROD_PREFIX` | S3 prefix to use for production env (only define to override default) | `prefix` / `S3_PREFIX` |
Variants¶
Vault variant¶
This variant allows delegating your secrets management to a Vault server.
Configuration¶
In order to be able to communicate with the Vault server, the variant requires the following additional configuration parameters:
| Input / Variable | Description | Default value |
|---|---|---|
| `TBC_VAULT_IMAGE` | The Vault Secrets Provider image to use (can be overridden) | `registry.gitlab.com/to-be-continuous/tools/vault-secrets-provider:latest` |
| `vault-base-url` / `VAULT_BASE_URL` | The Vault server base API url | none |
| `vault-oidc-aud` / `VAULT_OIDC_AUD` | The `aud` claim for the JWT | `$CI_SERVER_URL` |
| `VAULT_ROLE_ID` | The AppRole RoleID | must be defined |
| `VAULT_SECRET_ID` | The AppRole SecretID | must be defined |
Usage¶
Then you may retrieve any of your secret(s) from Vault using the following syntax:

```
@url@http://vault-secrets-provider/api/secrets/{secret_path}?field={field}
```
With:
| Parameter | Description |
|---|---|
| `secret_path` (path parameter) | your secret's location in the Vault server |
| `field` (query parameter) | parameter to access a single basic field from the secret JSON payload |
Example¶
```yaml
include:
  # main template
  - component: $CI_SERVER_FQDN/to-be-continuous/s3/gitlab-ci-s3@7.2.3
  # Vault variant
  - component: $CI_SERVER_FQDN/to-be-continuous/s3/gitlab-ci-s3-vault@7.2.3
    inputs:
      # audience claim for JWT
      vault-oidc-aud: "https://vault.acme.host"
      vault-base-url: "https://vault.acme.host/v1"
      # $VAULT_ROLE_ID and $VAULT_SECRET_ID defined as secret CI/CD variables

variables:
  # Secrets managed by Vault
  S3_ACCESS_KEY: "@url@http://vault-secrets-provider/api/secrets/b7ecb6ebabc231/my-backend/s3?field=access_key"
  S3_SECRET_KEY: "@url@http://vault-secrets-provider/api/secrets/b7ecb6ebabc231/my-backend/s3?field=secret_key"
```