Instances hosted in a K8s cluster
Installed using Jenkins' helm-chart
Configured in gitops-repo
Deployment by Flux
Config-reload transforms config maps into files
CasC-managed config only seems mutable!
UI changes get overwritten by the CasC
Acquire full autonomy on CasC-via-Helm-and-Flux Jenkins instances
I am self-taught
This should still be a strong getting started guide
Purpose: integrate developers' changes into the main branch
Acceptance criteria: fast & reliable
How to detect bad CI: toil & doubts
Continuous integration server
Developed in Java
Self-hosted
Built on plugins, lots and lots of plugins
Built on the controller-agent model
Instances hosted in a K8s cluster
Installed using Jenkins' helm-chart
Configured in gitops-repo
Deployment by Flux
Config-reload transforms config maps into files
CasC-managed config only seems mutable!
UI changes get overwritten by the CasC
Global configuration
Jobs configuration
Pipelines configuration
A tool to set global configuration as code
Applied by the CasC plugin
Written in YAML
Stored in the gitops-repo
Plugins contribute to it so the doc is dynamic
Validation available opt-in
Check after deployment in the logs
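For orientation, a minimal CasC file could look like this sketch (values are illustrative, the URL is an assumption):

```yaml
jenkins:
  systemMessage: 'This instance is configured as code, UI changes will be overwritten'
  numExecutors: 0  # Builds run on agents, never on the controller
unclassified:
  location:
    url: 'https://jenkins-myteam.mydomain.com/'
```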
CasC YAML
/spec/values/controller
installPlugins: base plugins
additionalPlugins: other plugins
Plugins updated at restart
overwritePlugins: handle conflicts
initializeOnce: false # Never update plugins
installLatestPlugins: true # Update plugins to their latest version (not LTS)
installPlugins:
- configuration-as-code # Configure Jenkins as code https://plugins.jenkins.io/configuration-as-code
- git # Integration with git https://plugins.jenkins.io/git
- kubernetes # Run dynamic agents in a K8s cluster https://plugins.jenkins.io/kubernetes
- prometheus # Let Jenkins provide Prometheus metrics https://plugins.jenkins.io/prometheus
- workflow-aggregator # Add pipelines to Jenkins https://plugins.jenkins.io/workflow-aggregator
additionalPlugins:
- ansicolor # Support ANSI escape codes for console output https://plugins.jenkins.io/ansicolor
- antisamy-markup-formatter # Safe HTML subset to format descriptions https://plugins.jenkins.io/antisamy-markup-formatter
- authorize-project # Run jobs as any user https://plugins.jenkins.io/authorize-project
- basic-branch-build-strategies # Add branch strategies to job configurations https://plugins.jenkins.io/basic-branch-build-strategies
- branch-api # Add configuration options to branch jobs https://plugins.jenkins.io/branch-api
- build-timestamp # Create build timestamps and expose them in the environment https://plugins.jenkins.io/build-timestamp
- cloudbees-disk-usage-simple # Add disk usage in administration page https://plugins.jenkins.io/cloudbees-disk-usage-simple
# Use overwritePlugins to work around bugs deep in the dependency tree.
# Example value: [ 'trilead-api:1.0.5' ] to overwrite the plugin trilead-api to version 1.0.5
# Set to true to overwrite all plugins
overwritePlugins: true
How to assign roles & permissions
/spec/values/controller/JCasC
Authentication usually done by github-oauth plugin
/securityRealm
Roles usually by role-strategy plugin (RBAC)
/authorizationStrategy
Connected to a GitHub OAuth app
Linked to GitHub organizations, teams, or accounts
JCasC:
securityRealm:
github:
githubWebUri: 'https://github.com'
githubApiUri: 'https://api.github.com'
clientID: '${github-oauth-client-id-jenkins-myteam:-NotSet}'
clientSecret: '${github-oauth-secret-jenkins-myteam:-NotSet}'
oauthScopes: 'read:org,user:email'
authorizationStrategy:
roleBased:
roles:
global:
- name: 'administrators'
description: 'Jenkins Administrators'
permissions:
- 'Overall/Administer'
entries:
- group: 'MyOrg*ci-masters'
- user: 'service-user'
More details on Jenkins permissions
Overall/* for global access
Overall/Administer: become God
Overall/SystemRead: view admin pages
Overall/Manage: non-security-related administration
Credentials/*: access rights on credentials
Job/*: access rights on jobs
Permissions can be added in sub-parts of Jenkins
More information in the doc
Store secrets in a VaaS instance
/spec/values/containerEnv
CASC_VAULT_URL: location of VaaS
CASC_VAULT_PATHS: included secrets
CASC_VAULT_FILE: mounted approle credentials
The binding of secrets is explained later
containerEnv:
- name: 'CASC_VAULT_URL'
value: 'https://vault-vaas.mydomain.com'
- name: 'CASC_VAULT_PATHS'
value: 'secret/myteam/jenkins'
- name: 'CASC_VAULT_ENGINE_VERSION'
value: '2'
- name: 'CASC_VAULT_FILE'
value: '/run/secrets/jcasc_vault/approle'
persistence:
enabled: true
existingClaim: 'jenkins-myteam'
mounts:
- name: 'vault-approle'
mountPath: '/run/secrets/jcasc_vault'
readOnly: true
volumes:
- name: 'vault-approle'
secret:
secretName: 'jenkins-myteam-vault'
/spec/values/controller/JCasC/configScripts
Credentials powered by credentials plugin
Vault binding powered by hashicorp-vault-plugin
Bash-like substitutions using the Vault ID
No push events from Vault
credentials:
system:
domainCredentials:
- credentials:
- usernamePassword:
scope: 'GLOBAL'
id: 'nexus-credentials'
description: 'Used to push artifacts to Nexus as service user myteam-jenkins.'
username: 'myteam-jenkins'
password: ${nexus-credentials:-notSet}
- file:
scope: 'GLOBAL'
id: 'json-full-of-secrets'
description: |
JSON file with credentials for E2E job. Encode in base64, won't work otherwise
fileName: 'json-full-of-secrets'
# The default value is notSet, encoded in base64
secretBytes: ${json-full-of-secrets-base64:-bm90U2V0}
- basicSSHUserPrivateKey:
scope: 'GLOBAL'
id: 'e2e-ssh-key'
username: 'jenkins-e2e-ssh-key'
description: 'Private SSH key to connect to the VM hosting the product during E2E tests'
privateKeySource:
directEntry:
privateKey: ${e2e-ssh-key:-notSet}
- string:
scope: 'GLOBAL'
id: 'e2e-instance-ip'
description: 'IP address for the instance where the RE is running for the E2E tests'
secret: ${e2e-instance-ip:-notSet}
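The secretBytes value stored in Vault must be the base64-encoded file content; a quick way to produce and sanity-check it (the secret.json file name is an assumption):

```shell
# The CasC default above ('bm90U2V0') is just 'notSet' in base64:
printf 'notSet' | base64     # → bm90U2V0
# For a real file (GNU coreutils; -w0 disables line wrapping):
# base64 -w0 secret.json
```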
Written in a Groovy DSL
Applied by the Job DSL plugin
Plugins contribute to the DSL so the doc is dynamic
Built to fiddle
Has special permissions
- name: '__fiddling__'
description: 'Fiddling Folder'
pattern: '^__fiddling__.*'
permissions:
- 'Credentials/Create'
- 'Credentials/Delete'
- 'Credentials/ManageDomains'
- 'Credentials/Update'
- 'Credentials/View'
- 'Job/Build'
- 'Job/Cancel'
- 'Job/Configure'
- 'Job/Create'
- 'Job/Discover'
- 'Job/Move'
- 'Job/Read'
- 'Job/Workspace'
entries:
- group: 'MyOrg*my-team'
- name: '__fiddling__/'
description: 'Fiddling Folder'
pattern: '^__fiddling__/.*'
permissions:
- 'Job/Delete'
entries:
- group: 'MyOrg*my-team'
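With those permissions in place, jobs are declared in the Job DSL; a minimal sketch creating a job in the fiddling folder (job name and repository URL are hypothetical):

```groovy
// Job DSL sketch: a pipeline job reading its Jenkinsfile from Git
pipelineJob('__fiddling__/demo-job') {
  description('Sketch of a Job DSL declaration, not a real job')
  definition {
    cpsScm {
      scm {
        git {
          remote {
            url('https://github.com/MyOrg/demo.git')
            credentials('github-credentials')
          }
          branch('main')
        }
      }
      scriptPath('Jenkinsfile')
    }
  }
}
```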
The Job DSL relies on plugins
One needs to load the right set of plugins to test
The best solution is to reproduce the instance plugin-wise
The solution I implemented is shown here
To check after deployment, see the logs
If there was no CI validation, do check the logs!
workflow-* family of plugins
Define triggers, parameters, notifiers, reports etc…
Implemented using the Pipeline DSL
Plugins can contribute, so the doc is dynamic
The full documentation exists!
No validation currently
One can use the REST API
curl --request 'POST' \
--form "jenkinsFile=<${JENKINSFILE_PATH}" \
--user "${JENKINS_USER}:${JENKINS_TOKEN}" \
"${JENKINS_URL}/pipeline-model-converter/validate"
If broken, the build doesn’t start
Use the UI-centric test folder to iterate
Jenkins uses standard parser and compiler…
But a specific interpreter to resume jobs, CPS
Of course, it comes from a plugin, workflow-cps
It has significant overhead and limitations!
Example of error:
Scripts not permitted to use staticMethod
org.codehaus.groovy.runtime.DateGroovyMethods minus java.util.Date
More information in the documentation
Execute the build piece-by-piece with stages
Sequential by default, unless using parallel
Execute conditionally with when
Maintenance: readable, shows what failed
Conditional run: push image only if params.SHOULD_RELEASE
Iterate: More easily skipped
// Configuration goes here
pipeline {
agent {} // Configure build pod
triggers {} // Configure triggers
parameters {} // Configure build parameters
stages { // Run job
stage('Validate parameters') {} // Fail fast if parameters are busted
stage('Compile') {}
stage('Test') {
parallel {
stage('Run UTs') {
steps { echo 'UTs OK' }
}
stage('Run ITs') {
steps { echo 'ITs OK' }
}
}
}
stage('Tag/Commit/Push') { // State-changing actions only when 99% sure they'll pass
when { // Some stages only run when it makes sense
expression {
return params.SHOULD_RELEASE
}
}
}
}
post {} // Runs after build, use for notifications, cleanup
}
Agent definition
Configuration of agent running the stages
/pipeline/agent
Don't forget defaultContainer!
The pod definition can come from yaml or yamlFile
label is deprecated, remove it!
agent {
kubernetes {
yaml kubernetesPodDefinition
defaultContainer defaultContainerName
}
}
Pod definition
Define Docker containers
Define required resources
Mount caches with volumes
No validation, debug with kubectl
apiVersion: 'v1'
kind: 'Pod'
spec:
imagePullSecrets:
- name: 'org-registry' # Credentials to use to pull Docker images (K8s secret)
containers: # Containers in the pod, usually one is enough
- name: 'default-container' # Use in /agent/kubernetes/defaultContainer
image: 'myorg.dockerhub.com/jenkins/asdf-builder:1.0.0'
tty: true # tty & command used to keep the image up
command: [ 'cat' ]
env:
- name: 'DOCKER_HOST' # Connect to the Docker daemon in next container
value: 'tcp://localhost:2375'
resources: # Resources requested, adjust depending on what you build
requests: { memory: '2G', cpu: '2' }
limits: { memory: '8G' } # Don't limit the CPU!
volumeMounts: # Mount volumes in the container (see volumes section below)
- name: 'asdf' # asdf cache
mountPath: '/home/jenkins/.asdf/installs'
- name: 'docker-daemon' # Container that hosts the Docker daemon/socket
image: 'docker:24.0.2-dind-rootless' # Use the rootless version
command: [ 'dockerd-entrypoint.sh' ] # Override to add parameters
args: [ '--tls=false' ] # De-activate TLS (not possible in rootless mode)
env: # No certificate since it's not used (faster startup)
- { name: 'DOCKER_TLS_CERTDIR', value: '' }
securityContext:
privileged: true # For image bootstrap, switches back to rootless at startup
volumes: # What Cloud resources the volumes map to
- name: 'asdf' # PVC's are persistent file systems
persistentVolumeClaim:
claimName: 'asdf'
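Since the pod definition isn't validated, a few kubectl commands help when an agent pod won't come up (the jenkins namespace and the pod name variable are assumptions):

```shell
kubectl get pods --namespace jenkins --watch               # Watch agent pods appear and terminate
kubectl describe pod "${AGENT_POD}" --namespace jenkins    # Scheduling and image-pull errors
kubectl logs "${AGENT_POD}" --container default-container --namespace jenkins
```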
Credentials come from credentials-binding plugin
File config come from config-file-provider plugin
Credentials masked in logs by default
Beware of multi-line credentials!
Properly escape credentials!
Top of the file
final def GITHUB_CREDENTIALS = usernamePassword(
credentialsId: 'github-credentials', // Jenkins ID from global configuration declaration
usernameVariable: 'GITHUB_LOGIN', // Environment variable where username is injected
passwordVariable: 'GITHUB_PASSWORD') // Environment variable where password is injected
In stage
steps {
withCredentials([ GITHUB_CREDENTIALS ]) {
sh """\
bash build.sh \\
'${GITHUB_LOGIN}' \\
"\${GITHUB_PASSWORD}"
""".stripIndent()
}
}
Generated shell script
bash build.sh \
'ci-user' \
"${GITHUB_PASSWORD}"
Parameterize build with user input
/pipeline/parameters
Specified when running the build (UI/API)
Default values when triggered by SCM
Check out the documentation
Plugins can add new types
parameters {
string( // Text input
name: 'STRING_PARAM_NAME',
defaultValue: '',
description: 'Help text')
text( // Text area
name: 'TEXT_PARAM_NAME',
defaultValue: '',
description: 'Help text')
password( // Password input
name: 'PASSWORD_PARAM_NAME',
defaultValue: '',
description: 'Help text')
booleanParam( // Check-box
name: 'BOOLEAN_PARAM_NAME',
defaultValue: false,
description: 'Help text')
choice( // Drop-down list, first value is the default
name: 'CHOICE_PARAM_NAME',
choices: [ 'choice1', 'choice2' ],
description: 'Help text')
}
/pipeline/triggers
There are a lot of those
Only a few really useful
cron is included in a core plugin
parameterizedCron comes from the parameterized-scheduler plugin
Both are based on the cron syntax
Check out crontab guru to edit them
triggers {
cron(env.BRANCH_NAME == 'main' ? '0 5 * * 1' : '')
parameterizedCron """\
0 5 * * 1 %PARAM1_NAME=value1;PARAM2_NAME=value2
0 6 * * 1 %PARAM1_NAME=value1;PARAM2_NAME=value2
""".stripIndent()
}
Use method slackSend to send Slack messages
Colors: good, warning, danger (or a hex code)
Message: use Slack’s mrkdwn syntax
The other parameters are not useful
slackSend(
color: 'danger',
channel: 'my-slack-channel',
message: "KO `${env.BRANCH_NAME}-${env.GIT_COMMIT.take(7)}` <${env.BUILD_URL}|Open>"
)
Shell steps add a significant overhead!
Beware of quoting! Empty and unset parameters differ
Shameless plug Bash > /dev/null
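The empty-vs-unset distinction in a nutshell, plain bash:

```shell
unset maybe
empty=''
echo "${maybe-fallback}"    # unset → fallback
echo "${empty-fallback}"    # set but empty → empty string
echo "${empty:-fallback}"   # ':-' also substitutes when empty → fallback
```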
In the Jenkinsfile
sh """\
bash ci/scripts/build.sh \\
'${params.MAVEN_PROFILE}' \\
"\${MAVEN_SETTINGS}"
""".stripIndent()
In the shell script
#!/usr/bin/env bash
set -euxo pipefail # Verbose, fails fast, forbids unset variables
main() ( # Main method, sub-shelled
profile="${1:?Missing Maven profile}" # Hard, explicit fail in case of error
settings="${2:?Missing Maven settings}"
# Not install, verify! Long flags, 1 purpose/line; generic, simple, stupid
mvn verify \
  --activate-profiles "${profile}" \
  --settings "${settings}"
)
main "$@" # Execute the method
Allow centralization of Jenkinsfile parts
Libs are repositories configured in global configuration
Referenced with git refs in the Jenkinsfiles
Allow putting logic in the lib and configuration in the product
Abstract the Jenkinsfiles, harder to validate
Require more team discipline
Shared libs centralize common patterns in Jenkinsfiles
Git repository with the following structure
.
├── resources # Static files that can be used in vars scripts
├── src # Groovy source files
│ └── common
│ └── Semver.groovy
└── vars # Groovy scripts that can be used in Jenkinsfiles
└── myFunction.groovy
In Jenkins' configuration, set up your shared lib:
unclassified:
globalLibraries:
libraries:
- name: 'team-shared-lib'
defaultVersion: 'master'
implicit: false
includeInChangesets: true
allowVersionOverride: true
retriever:
modernSCM:
scm:
git:
remote: 'https://github.com/MyOrg/shared-lib.git'
credentialsId: 'github-credentials'
In app repository’s Jenkinsfile:
@Library('team-shared-lib@v5') _
import common.Semver // The Semver class comes from src in the lib
release { // The function release comes from vars in the lib
tag = Semver.of(1, 12, 3)
}
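The Semver class imported above could be as small as this sketch (the real implementation lives in the lib's src/common; names and fields are assumptions):

```groovy
package common

// Sketch of src/common/Semver.groovy; Serializable so CPS can checkpoint it
class Semver implements Serializable {
  final int major, minor, patch

  private Semver(int major, int minor, int patch) {
    this.major = major; this.minor = minor; this.patch = patch
  }

  static Semver of(int major, int minor, int patch) {
    return new Semver(major, minor, patch)
  }

  @Override
  String toString() { return "v${major}.${minor}.${patch}" }
}
```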
Faster delivery of improvements
Separation of configuration & logic
Easier discovery of team’s practices
More easily enforce good practices
Abstracts Jenkinsfiles, harder to validate
Requires more team discipline
One global var per job type
One folder in src & resources per job type + common
One tag prefix per job type + common
One changelog file per job type + common
Each release tagged with semver and major tags
Global vars look like this:
import jobType.PipelineConfiguration
import static groovy.lang.Closure.DELEGATE_ONLY
def call (final Closure bodyBuilder) {
final PipelineConfiguration configuration = new PipelineConfiguration()
bodyBuilder.resolveStrategy = DELEGATE_ONLY // Resolve fields on the delegate only
bodyBuilder.delegate = configuration // Set the delegate of the closure
bodyBuilder() // Execute the closure to hydrate the delegate
pipeline {
...
}
}
And are used like this:
@Library('team-shared-lib@v5') _
build {
field = 'value'
}
In global shared libs, Groovy can be used
Otherwise, a good alternative is Deno
For non-Groovy scripting, load the shared lib resources
class SharedLibLoader {
public static final String RESOURCES_FOLDER = '.git/shared-lib-scripts'
private static final String RESOURCES_INDEX_FILE_PATH = 'index.json'
/** We use base 64 for all files since some are binary files and get screwed with other encodings */
private static final String SAFE_ENCODING = 'Base64'
/**
* Retrieves the shared library resources index and from there, gets all the files from the
* common and requested folders and writes them into the (git-ignored) folder
* {@link SharedLibLoader#RESOURCES_FOLDER} with pipeline step writeFile
* (which can only copy inside the workspace).
*/
static void initializeSharedLibrary (final def mainScript, final String jobFolder, final String... otherFolders) {
mainScript.echo 'Initializing shared library'
final String indexAsString = mainScript.libraryResource(RESOURCES_INDEX_FILE_PATH)
final String indexPath = "${RESOURCES_FOLDER}/${RESOURCES_INDEX_FILE_PATH}"
mainScript.writeFile file: indexPath, text: indexAsString
final List resources = (List) mainScript.readJSON(file: indexPath)
final Set folders = new HashSet<>()
folders.add('common')
folders.add(jobFolder)
otherFolders.each { folders.add(it) }
writeAllResourcesInTemporaryFolder(mainScript, resources, folders)
}
private static Object writeAllResourcesInTemporaryFolder (final def mainScript, final List resources, final Set jobFolders) {
resources
.findAll { final String resource ->
jobFolders.any { resource.startsWith("${it}/") }
}
.each { final String resource ->
mainScript.echo "Retrieving resource file ${resource}"
final String resourceContent = mainScript.libraryResource resource: resource, encoding: SAFE_ENCODING
mainScript.writeFile(
file: "${RESOURCES_FOLDER}/${resource}",
text: resourceContent,
encoding: SAFE_ENCODING
)
}
}
}
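A global var could then use the loader like this (the job folder and script names are hypothetical):

```groovy
// Inside a vars/*.groovy script of the shared lib
SharedLibLoader.initializeSharedLibrary(this, 'jobType')
sh "deno run --allow-read ${SharedLibLoader.RESOURCES_FOLDER}/jobType/build.ts"
```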
Inspired by official documentation
Create clone of Jenkins instance
Same version
Same plugins
Load configuration files
Watch for errors
Transform gitops-repo into a Gradle project
Put Job DSL & CasC files in src/main
Use Jenkins harness as test framework
Install Jenkins and plugins in test instance
Write test classes to load files
Generate K8s resources with a script
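The test classes can lean on JenkinsRule from the Jenkins test harness; a JUnit 4 sketch (class name and file path are assumptions):

```groovy
import org.junit.Rule
import org.junit.Test
import org.jvnet.hudson.test.JenkinsRule
import io.jenkins.plugins.casc.ConfigurationAsCode

class CascSmokeTest {
  @Rule
  public JenkinsRule jenkins = new JenkinsRule() // Boots a throwaway Jenkins with the test plugins

  @Test
  void 'casc files load without errors'() {
    // Throws a ConfiguratorException if any CasC file is invalid
    ConfigurationAsCode.get().configure('src/main/resources/casc/jenkins.yaml')
  }
}
```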
.
├── build.gradle # Project manifest
├── generated # Generated K8s resources
├── gradle.properties # Project variables
├── jenkins-pvc.yaml # Jenkins controller PVC
├── jenkins-vault-secret.yaml # Vault secret config map
├── jenkins.yaml # Jenkins Helm release
├── pvc # Jenkins cache PVCs
└── src
├── main
│ ├── groovy
│ │ └── jobs # Job DSL files
│ └── resources
│ ├── casc # CasC files
│ └── jobDsl.gdsl # Job DSL syntax file
└── test
└── groovy # Unit tests
Syntax highlighting/completion
Local and/or CI validation
Some added complexity
Restart Jenkins (to get start logs for example)
Open https://${JENKINS_DOMAIN}/safeRestart
Replay a job with edited Jenkinsfile
The crème de la crème, your bedside reading
Ask me anything