# Self-managed to be continuous (advanced)
Using the official to be continuous templates, you may have some specific needs within your company:

- simply expose the configuration of some of your shared tools in Kicker (e.g. a shared SonarQube server, an Artifactory instance, a private Kubernetes cluster),
- develop and share some template adjustments to address your specific technical context (e.g. no default untagged runner, internet access only through a proxy, ...),
- or even develop your own internal templates.
## Installing tbc in a custom group
By default, and preferably, to be continuous shall be installed:

- in the `to-be-continuous` root group on your GitLab server,
- with `public` visibility.
If one or both of these requirements can't be met (because you're not allowed to create a root group in your organization and/or not allowed to create projects with public visibility), then you'll have a couple of extra things to do to get to be continuous working on your self-managed server:
- Use the right GitLab Synchronization option(s) when running the GitLab Copy CLI for the first time (see the sketch after this list):
    - `--dest-sync-path` to override the GitLab destination root group path,
    - `--max-visibility` to override the maximum visibility of projects in the destination group.

  For more info about GitLab Copy CLI options, please refer to the doc.
- Set the right variable(s) in your local copy of the tools/gitlab-sync project when configuring the TBC synchronization for the first time:
    - `$DEST_SYNC_PATH` to override the GitLab destination root group path,
    - `$MAX_VISIBILITY` to override the maximum visibility of projects in the destination group.
- Override the TBC configuration accordingly in the `KICKER_RESOURCE_GROUPS` variable in your local copy of the doc project (see the Have your own doc + kicker chapter below).
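Here is a minimal sketch of such a first run, assuming the CLI is invoked as `gitlab-copy` and that your custom root group is `acme/cicd/to-be-continuous` (both are assumptions; only the two options come from the GitLab Copy CLI doc):

```bash
# Illustrative first synchronization run: the command name and group path
# are assumptions; check the GitLab Copy CLI doc for the exact invocation.
gitlab-copy \
  --dest-sync-path "acme/cicd/to-be-continuous" \
  --max-visibility "internal"
```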
## Variable presets
Variable presets are groups of to be continuous variable values that can be shared and reused within your company.
You can simply define variable presets:

- Create a GitLab project (if possible with `public` or at least `internal` visibility),
- Create a `kicker-extras.json` file declaring your presets
  (JSON schema: https://gitlab.com/to-be-continuous/kicker/raw/master/kicker-extras-schema-1.json),
- Make a first release by creating a Git tag.
Example `kicker-extras.json`:

```json
{
  "presets": [
    {
      "name": "Shared SonarQube",
      "description": "Our internal shared SonarQube server",
      "values": {
        "SONAR_HOST_URL": "https://sonarqube.acme.host"
      }
    },
    {
      "name": "Shared OpenShift",
      "description": "Our internal shared OpenShift clusters (prod & no-prod)",
      "values": {
        "OS_URL": "https://api.openshift-noprod.acme.host",
        "OS_ENVIRONMENT_URL": "https://%{environment_name}.apps-noprod.acme.host",
        "OS_PROD_URL": "https://api.openshift-prod.acme.host",
        "OS_PROD_ENVIRONMENT_URL": "https://%{environment_name}.apps.acme.host",
        "K8S_URL": "https://api.openshift-noprod.acme.host",
        "K8S_ENVIRONMENT_URL": "https://%{environment_name}.apps-noprod.acme.host",
        "K8S_PROD_URL": "https://api.openshift-prod.acme.host",
        "K8S_PROD_ENVIRONMENT_URL": "https://%{environment_name}.apps.acme.host"
      }
    },
    {
      "name": "Artifactory Mirrors",
      "description": "Our internal Artifactory mirrors",
      "values": {
        "NPM_CONFIG_REGISTRY": "https://artifactory.acme.host/api/npm/npm-mirror",
        "DOCKER_REGISTRY_MIRROR": "https://dockerproxy.acme.host",
        "GOPROXY": "https://artifactory.acme.host/api/go/go-mirror",
        "PIP_INDEX_URL": "https://artifactory.acme.host/api/pypi/pythonproxy/simple",
        "PYTHON_REPOSITORY_URL": "https://artifactory.acme.host/api/pypi/python-mirror"
      }
    }
  ]
}
```
With this in place, Kicker will offer each applicable preset directly in the online form.
## Template variants
Another essential extra resource is the template variant: roughly speaking, a template override addressing a specific technical issue.
You can simply define a template variant:

- Create a GitLab project (if possible with `public` or at least `internal` visibility),
- Create a `kicker-extras.json` file declaring your variant
  (JSON schema: https://gitlab.com/to-be-continuous/kicker/raw/master/kicker-extras-schema-1.json),
- Make a first release by creating a Git tag.
Example: let's imagine that in your company - in addition to the default untagged shared runners - you would also like to let users use non-default shared runners to deploy to your private Kubernetes cluster. Let's also suppose those runners don't have direct access to the internet, but need to go through an HTTP proxy.
That would involve:

- developing the following variant for the Kubernetes template (this requires advanced to be continuous knowledge), for instance in file `templates/acme-k8s-variant.yml`:

  ```yaml
  # ==========================================
  # === ACME variant to use Kubernetes runners
  # ==========================================
  # override kubernetes base template job
  .k8s-base:
    # Kubernetes Runners tags
    tags:
      - k8s
      - shared
    # Kubernetes Runners proxy configuration
    variables:
      http_proxy: "http://proxy.acme.host:8080"
      https_proxy: "http://proxy.acme.host:8080"
      no_proxy: "localhost,127.0.0.1,.acme.host"
      HTTP_PROXY: "${http_proxy}"
      HTTPS_PROXY: "${https_proxy}"
      NO_PROXY: "${no_proxy}"
  ```
- declaring it in a `kicker-extras.json` file (the `target_project` field declares the original template the variant applies to):

  ```json
  {
    "variants": [
      {
        "id": "acme-k8s-runners",
        "name": "ACME Kubernetes Runners",
        "description": "Use the ACME Kubernetes shared Runners",
        "template_path": "templates/acme-k8s-variant.yml",
        "target_project": "to-be-continuous/kubernetes"
      }
    ]
  }
  ```
This way, your variant will show up as a simple actionable component in the Kubernetes template form in Kicker.
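For reference, here is a sketch of what using this variant from a project's `.gitlab-ci.yml` could look like. The `ref` values, the base template file name and the `acme/cicd/extras` project path are assumptions; Kicker generates the exact `include` snippet for you:

```yaml
include:
  # base Kubernetes template (file name and ref are illustrative)
  - project: "to-be-continuous/kubernetes"
    ref: "master"
    file: "templates/gitlab-ci-k8s.yml"
  # the ACME variant, included in addition to the base template so that
  # its .k8s-base override (runner tags + proxy variables) takes effect
  - project: "acme/cicd/extras"
    ref: "master"
    file: "templates/acme-k8s-variant.yml"
```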
## Develop your own templates
You may also have to develop tooling very specific to your company.
In that case, you just have to:
- Create a GitLab project (if possible with `public` or at least `internal` visibility),
- Develop your template following the guidelines,
- Declare the template with a Kicker descriptor,
- Make a first release by creating a Git tag.
Your template can then be used like any other to be continuous one.
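As an illustration, using such an internal template boils down to a standard GitLab `include` in a project's `.gitlab-ci.yml`; the project path, file name and tag below are hypothetical:

```yaml
include:
  # hypothetical internal template project, template file and release tag
  - project: "acme/cicd/my-template"
    ref: "1.0.0"
    file: "templates/gitlab-ci-mytemplate.yml"
```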
## Have your own doc + kicker
If you developed any of the above (Kicker extras and/or internal templates), you'll want all developers in your company to have easy access to a reference documentation and Kicker including your additional material.
In your local copy of the doc project:

- Declare the `GITLAB_TOKEN` CI/CD project variable: a group access token with scopes `api`, `read_registry`, `write_registry`, `read_repository`, `write_repository` and with the `Owner` role.
- Declare the `KICKER_RESOURCE_GROUPS` CI/CD project variable: the JSON configuration of GitLab groups to crawl.
- Create a scheduled pipeline (for instance every day at 3:00 am).
Here is an example of `KICKER_RESOURCE_GROUPS` content:
```json
[
  {
    "path": "acme/cicd/all",
    "visibility": "public"
  },
  {
    "path": "acme/cicd/ai-ml",
    "visibility": "internal",
    "exclude": ["project-2", "project-13"],
    "extension": {
      "id": "ai-ml",
      "name": "AI/ML",
      "description": "ACME templates for AI/ML projects"
    }
  },
  {
    "path": "to-be-continuous",
    "visibility": "public"
  }
]
```
Some explanations:

- `path` is the path of a GitLab group whose projects contain Kicker resources.
- `visibility` is the group/projects visibility to crawl.
- `exclude` (optional) allows excluding some project(s) from processing.
- `extension` (optional) allows associating the Kicker resources with a separate extension (actionable within Kicker).
By default, `KICKER_RESOURCE_GROUPS` is configured to crawl the `to-be-continuous` group only.
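That default is presumably equivalent to the following configuration (the last entry in the example above):

```json
[
  {
    "path": "to-be-continuous",
    "visibility": "public"
  }
]
```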
## Set up tracking
Another optional thing you might want to set up is tracking, to collect statistics about template job executions.
to be continuous already provides an unconfigured project to perform this: tools/tracking.
Here is what you'll have to do to set it up:

- Install an Elasticsearch server.
- Create a dedicated user with appropriate authorization to push data to some indices.
- In your local copy of the tools/tracking project, define the `TRACKING_CONFIGURATION` CI/CD project variable as follows:

  ```json
  {
    "clients": [
      {
        "url": "https://elasticsearch-host",
        "authentication": {
          "username": "tbc-tracking",
          "password": "mYp@55w0rd"
        },
        "timeout": 5,
        "indexPrefix": "tbc-",
        "esMajorVersion": 7,
        "skipSslVerification": true
      }
    ]
  }
  ```
- Manually start a pipeline on the `main` (or `master`) branch: this will (re)generate a new Docker image with your configuration, which will from now on be used by every template job.
- Set the following as an instance-level CI/CD variable (this overrides the default tracking image used by all TBC templates):
    - name: `TBC_TRACKING_IMAGE`
    - value: `$CI_REGISTRY/to-be-continuous/tools/tracking:master` (adapt the path if you've installed TBC in a custom root group)
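For instance, a GitLab administrator could declare this instance-level variable through GitLab's instance-level CI/CD variables API (the host name and token below are placeholders):

```bash
# Requires GitLab administrator rights; host and token are placeholders.
curl --request POST \
  --header "PRIVATE-TOKEN: <admin-access-token>" \
  --form "key=TBC_TRACKING_IMAGE" \
  --form "value=\$CI_REGISTRY/to-be-continuous/tools/tracking:master" \
  "https://gitlab.acme.host/api/v4/admin/ci/variables"
```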
## Use custom service images
Apart from the tracking image (see previous chapter), TBC also uses a couple of extra service images. By default, the latest versions of the TBC templates pull those images from the gitlab.com public registry, but you may override this behavior and use your own built and hosted images.
Proceed the same way as explained in the previous chapter:

- (re)build the image(s) locally on your GitLab server,
- override the default image by setting the right instance-level CI/CD variable (see variable names below).
Here are the service images used by TBC templates:

| Project | Description | Image variable |
| --- | --- | --- |
| tools/tracking | Collects statistics about template job executions. Used by all TBC templates. | `TBC_TRACKING_IMAGE` |
| tools/vault-secrets-provider | Retrieves secrets from a Vault server. Used by TBC Vault variants. | `TBC_VAULT_IMAGE` |
| tools/aws-auth-provider | Retrieves an authorization token for AWS. Used by TBC AWS variants. | `TBC_AWS_PROVIDER_IMAGE` |
| tools/gcp-auth-provider | Retrieves an access token for GCP. Used by TBC GCP variants. | `TBC_GCP_PROVIDER_IMAGE` |
## Use Docker registry mirrors

### to be continuous uses explicit Docker registries
By default, Docker image names that do not specify a registry (e.g. `alpine:latest`) are fetched from the Docker Hub. Since the Docker Hub enforces pull quotas, some companies use Docker registry mirrors.

Some Docker registry mirrors can mirror multiple registries (e.g. Artifactory, or Nexus Repository when coupling a Docker Proxy with a Docker Group). In that case, when pulling an image without specifying the original registry, the mirror looks for an image with the same name in each of its upstream registries. It returns the first matching image, which is not necessarily from the registry you expected.
Example:

- a developer builds the `superapp/backend:1.0.0` image and pushes it to both Docker Hub and Quay.io,
- the developer also tags this image as `latest` and pushes the `latest` tag to both Docker Hub and Quay.io,
- the developer then builds image `superapp/backend:1.1.0` and pushes it only to Docker Hub (e.g. because of a failure in the build pipeline), without noticing that the image has not been pushed to Quay.io,
- a user pulling `superapp/backend:latest` would expect to get the `superapp/backend:1.1.0` image from Docker Hub,
- but if a mirror has been set up to proxy both Docker Hub and Quay.io with priority given to Quay.io, the returned image would be `superapp/backend:1.0.0`, pulled from Quay.io.
This behavior can be exploited in supply chain attacks: attackers can push a malicious image to many Docker registries under the same name as a trustworthy image that is only published on the Docker Hub. A mirror could return the malicious image simply because it found an image with the matching name on another registry before reaching the Docker Hub.

To protect against this kind of attack, to be continuous always uses fully qualified image names (i.e. including the registry).
Example: to refer to `aquasec/trivy:latest`, to be continuous templates will always specify `registry.hub.docker.com/aquasec/trivy:latest`.
### Drawbacks
When using `containerd` as the container runtime, this has no impact: containerd will still use the configured Docker registry mirrors, if any.

On the other hand, when using Docker as the container runtime, specifying the registry name when pulling an image prevents the Docker daemon from using a registry mirror: it pulls the image directly from the specified registry. As a consequence, Docker Hub quotas may be reached sooner.
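For context, Docker Hub mirrors are declared in the Docker daemon configuration (typically `/etc/docker/daemon.json`), and this `registry-mirrors` setting only applies to unqualified (Docker Hub) image names, which is precisely what fully qualified names bypass; the mirror URL below reuses the one from the example in the next section:

```json
{
  "registry-mirrors": ["https://docker-proxy.mycorp.org"]
}
```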
### Workarounds
You can simply override the image names, explicitly specifying your own Docker registry mirror.
Example: if you have a Docker registry mirror for the Docker Hub, you can use something like this:

```yaml
variables:
  DOCKER_TRIVY_IMAGE: "docker-proxy.mycorp.org/aquasec/trivy:latest"
```
In this case, both `containerd` and the Docker daemon will try to pull the `aquasec/trivy:latest` image through your Docker registry mirror.