Jenkins for the noobs

Gain full autonomy over CasC-via-Helm-and-Flux Jenkins instances

Disclaimer

  • I am self-taught

  • This should still be a solid getting-started guide

Reminder about CI

  • Purpose: continuously integrate developers' work into the main branch

  • Acceptance criteria: fast & reliable

  • How to detect bad CI: toil & doubts

Quick overview

What is Jenkins

  • Continuous integration server

  • Open-source

  • Developed in Java

  • Self-hosted

  • Built on plugins, lots and lots of plugins

  • Built on the controller-agent model

Installation with K8s & Flux

  • Instances hosted in a K8s cluster

  • Installed using Jenkins' helm-chart

  • Configured in gitops-repo

  • Deployment by Flux (HelmRelease sketch after the diagram)

  • The config-reload sidecar turns config maps into files

  • warning CasC only seems immutable!

  • warning UI changes get overwritten by the CasC

   [Diagram: the Flux operator deploys the configuration from gitops-repo (GitHub) into a Jenkins instance namespace of the Kubernetes cluster: Jenkins controller, agent, volume, and config maps (jenkins, config-reload); product-repo is the other GitHub repository shown]
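
As an illustration, a minimal, hypothetical Flux HelmRelease for the gitops-repo (names, namespace, and chart version are placeholders); Flux reconciles it and installs the official Jenkins chart, and everything configured later in this deck lives under spec/values:

apiVersion: 'helm.toolkit.fluxcd.io/v2beta1'
kind: 'HelmRelease'
metadata:
  name: 'jenkins-myteam'
  namespace: 'jenkins-myteam'
spec:
  interval: '10m' # How often Flux reconciles the release
  chart:
    spec:
      chart: 'jenkins'
      version: '4.x' # Pin the chart version you actually run
      sourceRef:
        kind: 'HelmRepository' # Points at https://charts.jenkins.io
        name: 'jenkins'
        namespace: 'flux-system'
  values:
    controller:
      installPlugins: [] # The keys discussed in the next sections go under values/controller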

How to configure Jenkins

  • Global configuration

  • Jobs configuration

  • Pipelines configuration

The global configuration

Configuration-as-code (CasC)

A tool to set global configuration as code

Plugins

  • Config in /spec/values/controller

  • installPlugins base plugins

  • additionalPlugins other plugins

  • Plugins updated at restart

  • overwritePlugins handles conflicts

  • Document plugin purpose

initializeOnce: false # false: the plugin list is processed again at every restart
installLatestPlugins: true # Update plugins to their latest version (not LTS)
installPlugins:
    - configuration-as-code # Configure Jenkins as code https://plugins.jenkins.io/configuration-as-code
    - git # Integration with git https://plugins.jenkins.io/git
    - kubernetes # Run dynamic agents in a K8s cluster https://plugins.jenkins.io/kubernetes
    - prometheus # Let Jenkins provide prometheus metrics https://plugins.jenkins.io/prometheus
    - workflow-aggregator # Add pipelines to Jenkins https://plugins.jenkins.io/workflow-aggregator
additionalPlugins:
    - ansicolor # Support ANSI escape codes for console output https://plugins.jenkins.io/ansicolor
    - antisamy-markup-formatter # Safe HTML subset to format descriptions https://plugins.jenkins.io/antisamy-markup-formatter
    - authorize-project # Run jobs as any user https://plugins.jenkins.io/authorize-project
    - basic-branch-build-strategies # Add branch strategies to job configurations https://plugins.jenkins.io/basic-branch-build-strategies
    - branch-api # Add configuration options to branch jobs https://plugins.jenkins.io/branch-api
    - build-timestamp # Create build timestamps and expose them in the environment https://plugins.jenkins.io/build-timestamp
    - cloudbees-disk-usage-simple # Add disk usage in administration page https://plugins.jenkins.io/cloudbees-disk-usage-simple

# Use overwritePlugins to work around bugs deep in the dependency tree,
# e.g. pin 'trilead-api:1.0.5' in the plugin lists above and overwrite the installed version.
# De-activate with value: false
overwritePlugins: true

Permissions - part I

How to assign roles & permissions

JCasC:
  securityRealm:
    github:
      githubWebUri: 'https://github.com'
      githubApiUri: 'https://api.github.com'
      clientID: '${github-oauth-client-id-jenkins-myteam:-NotSet}'
      clientSecret: '${github-oauth-secret-jenkins-myteam:-NotSet}'
      oauthScopes: 'read:org,user:email'

  authorizationStrategy:
    roleBased:
      roles:
        global:
          - name: 'administrators'
            description: 'Jenkins Administrators'
            permissions:
              - 'Overall/Administer'
            entries:
              - group: 'MyOrg*ci-masters'
              - user: 'service-user'

Permissions - part II

More details on Jenkins permissions

  • Overall/* for global access

  • Overall/Administer become God

  • Overall/SystemRead view admin pages

  • Overall/Manage non-security-related administration

  • Credentials/* access rights on credentials

  • Job/* access rights on jobs

  • Permissions can be added in sub-parts of Jenkins (item-scoped sketch below)

  • More information in the doc
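
For example, a hypothetical item-scoped role (pattern and group are placeholders) granting build rights only on jobs whose name matches a prefix:

authorizationStrategy:
  roleBased:
    roles:
      items:
        - name: 'team-jobs'
          description: 'Build rights on the team jobs only'
          pattern: '^myteam-.*' # Regex matched against the full item name
          permissions:
            - 'Job/Read'
            - 'Job/Build'
            - 'Job/Cancel'
          entries:
            - group: 'MyOrg*my-team'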

Integration with Vault

  • Store secrets in a VaaS instance

  • Config /spec/values/containerEnv

  • CASC_VAULT_URL location of VaaS

  • CASC_VAULT_PATHS included secrets

  • CASC_VAULT_FILE mounted approle credentials

  • The binding of secrets is explained later

containerEnv:
  - name: 'CASC_VAULT_URL'
    value: 'https://vault-vaas.mydomain.com'
  - name: 'CASC_VAULT_PATHS'
    value: 'secret/myteam/jenkins'
  - name: 'CASC_VAULT_ENGINE_VERSION'
    value: '2'
  - name: 'CASC_VAULT_FILE'
    value: '/run/secrets/jcasc_vault/approle'

persistence:
  enabled: true
  existingClaim: 'jenkins-myteam'
  mounts:
    - name: 'vault-approle'
      mountPath: '/run/secrets/jcasc_vault'
      readOnly: true
  volumes:
    - name: 'vault-approle'
      secret:
        secretName: 'jenkins-myteam-vault'

Credentials

  • Config /spec/values/controller/JCasC/configScripts

  • Credentials powered by credentials plugin

  • Vault binding powered by hashicorp-vault-plugin

  • Bash-like substitutions using the Vault ID

  • No push events from Vault warning

credentials:
  system:
    domainCredentials:
      - credentials:
          - usernamePassword:
              scope: 'GLOBAL'
              id: 'nexus-credentials'
              description: 'Used to push artifacts to Nexus as service user myteam-jenkins.'
              username: 'myteam-jenkins'
              password: ${nexus-credentials:-notSet}
          - file:
              scope: 'GLOBAL'
              id: 'json-full-of-secrets'
              description: |
                JSON file with credentials for E2E job. Encode in base64, won't work otherwise
              fileName: 'json-full-of-secrets'
              # The default value is notSet in base64 🪄🪄🪄 ────┐
              secretBytes: ${json-full-of-secrets-base64:-bm90U2V0}
          - basicSSHUserPrivateKey:
              scope: 'GLOBAL'
              id: 'e2e-ssh-key'
              username: 'jenkins-e2e-ssh-key'
              description: 'Private SSH key to connect to the VM hosting the product during E2E tests'
              privateKeySource:
                directEntry:
                  privateKey: ${e2e-ssh-key:-notSet}
          - string:
              scope: 'GLOBAL'
              id: 'e2e-instance-ip'
              description: 'IP address for the instance where the RE is running for the E2E tests'
              secret: ${e2e-instance-ip:-notSet}

The jobs configuration

Job DSL

UI-centric test folder

  • Built to fiddle

  • Has special permissions

- name: '__fiddling__'
  description: 'Fiddling Folder'
  pattern: '^__fiddling__.*'
  permissions:
    - 'Credentials/Create'
    - 'Credentials/Delete'
    - 'Credentials/ManageDomains'
    - 'Credentials/Update'
    - 'Credentials/View'
    - 'Job/Build'
    - 'Job/Cancel'
    - 'Job/Configure'
    - 'Job/Create'
    - 'Job/Discover'
    - 'Job/Move'
    - 'Job/Read'
    - 'Job/Workspace'
  entries:
    - group: 'MyOrg*my-team'
- name: '__fiddling__/'
  description: 'Fiddling Folder contents (deletion allowed only inside the folder)'
  pattern: '^__fiddling__/.*'
  permissions:
    - 'Job/Delete'
  entries:
    - group: 'MyOrg*my-team'
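
The folder and the jobs themselves are declared with the Job DSL. A minimal, hypothetical sketch (job names, repository, and credentials ID are placeholders):

// The UI-centric test folder
folder('__fiddling__') {
  description('Fiddling Folder')
}

// A pipeline job that runs the Jenkinsfile of product-repo
pipelineJob('product-build') {
  definition {
    cpsScm {
      scm {
        git {
          remote {
            url('https://github.com/MyOrg/product-repo.git')
            credentials('github-credentials')
          }
          branch('main')
        }
      }
      scriptPath('Jenkinsfile') // Path of the Jenkinsfile in the repository
    }
  }
}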

Validating job DSL

  • The Job DSL relies on plugins

  • One needs to load the right set of plugins to test

  • The best solution is to reproduce the instance plugin-wise

  • My solution is the one implemented here

  • To check after deployment, see the logs

  • warning If there was no CI validation, do check the logs!

Pipelines configuration

Pipeline plugins

  • workflow-* family of plugins

  • Define triggers, parameters, notifiers, reports, etc.

  • Implemented using the Pipeline DSL

  • Plugins can contribute steps, so the documentation is dynamic

  • The full documentation exists!

  • No validation currently

  • One can use the REST API

    curl --request 'POST' \
      --form "jenkinsFile=<${JENKINSFILE_PATH}" \
      --user "${JENKINS_USER}:${JENKINS_TOKEN}" \
      "${JENKINS_URL}/pipeline-model-converter/validate"
  • If broken, the build doesn’t start warning

  • Use the UI-centric test folder to iterate

Pipelines, Groovy, CPS

  • Jenkins uses a standard Groovy parser and compiler…

  • …but a specific interpreter, CPS (continuation-passing style), so jobs can be resumed

  • Of course, it comes from a plugin, workflow-cps

  • It has significant overhead and limitations! (workaround sketched below)

  • Example of error:

    Scripts not permitted to use staticMethod
    org.codehaus.groovy.runtime.DateGroovyMethods minus java.util.Date
  • More information in the documentation

  • Find the Groovy version used in Jenkins

  • Play with Groovy
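
A common escape hatch for these limitations (not for sandbox errors like the one above) is the @NonCPS annotation from workflow-cps: the method runs as plain Groovy outside the CPS interpreter, so it must not call pipeline steps. A minimal sketch:

// Closure-based Map iteration is a classic construct the CPS interpreter mishandles
@NonCPS
String toKeyValueString(final Map<String, String> parameters) {
  return parameters.collect { key, value -> "${key}=${value}" }.join(' ')
}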

Pipeline shape

  • Keep configuration and logic apart

  • Execute the build piece-by-piece with stages

  • Sequential by default, unless using parallel

  • Execute conditionally with when

  • Stages must have a functional reason to be

    • Maintenance: readable, shows what failed

    • Conditional run: push image only if params.SHOULD_RELEASE

    • Iterate: More easily skipped

// Configuration goes here

pipeline {
  agent {}                          // Configure build pod
  triggers {}                       // Configure triggers
  parameters {}                     // Configure build parameters

  stages {                          // Run job
    stage('Validate parameters') {} // Fail fast if parameters are busted
    stage('Compile') {}
    stage('Test') {
      parallel {
        stage('Run UTs') {
          steps { echo 'UTs OK' }
        }
        stage('Run ITs') {
          steps { echo 'ITs OK' }
        }
      }
    }
    stage('Tag/Commit/Push') {      // State-changing actions only when 99% sure they'll pass
      when {                        // Some stages only run when it makes sense
        expression {
          return params.SHOULD_RELEASE
        }
      }
    }
  }

  post {}                           // Runs after build, use for notifications, cleanup
}

Kubernetes integration - part I

Agent definition

  • Configuration of the agent that runs the stages

  • Defined in /pipeline/agent

  • Use defaultContainer!

  • The pod definition can come from yaml or yamlFile

  • label is deprecated, remove it!

agent {
  kubernetes {
    yaml kubernetesPodDefinition          // Pod definition as a YAML string (see part II)
    defaultContainer defaultContainerName // Name of the container the steps run in by default
  }
}
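
The yamlFile variant, for a pod definition versioned in the repository (the path is a placeholder):

agent {
  kubernetes {
    yamlFile 'ci/pod.yaml'                // Path relative to the repository root
    defaultContainer 'default-container'
  }
}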

Kubernetes integration - part II

Pod definition

apiVersion: 'v1'
kind: 'Pod'
spec:
  imagePullSecrets:
    - name: 'org-registry' # Credentials to use to pull Docker images (K8s secret)
  containers: # Containers in the pod, usually one is enough
    - name: 'default-container' # Use in /agent/kubernetes/defaultContainer
      image: 'myorg.dockerhub.com/jenkins/asdf-builder:1.0.0'
      tty: true # tty & command keep the container running
      command: [ 'cat' ]
      env:
        - name: 'DOCKER_HOST' # Connect to the Docker daemon in next container
          value: 'tcp://localhost:2375'
      resources: # Resources requested, adjust depending on what you build
        requests: { memory: '2G', cpu: '2' }
        limits: { memory: '8G' } # Don't limit the CPU!
      volumeMounts: # Mount volumes in the container (see volumes section below)
        - name: 'asdf' # asdf cache
          mountPath: '/home/jenkins/.asdf/installs'
    - name: 'docker-daemon' # Container that hosts the Docker daemon/socket
      image: 'docker:24.0.2-dind-rootless' # Use the rootless version
      command: [ 'dockerd-entrypoint.sh' ] # Override to add parameters
      args: [ '--tls=false' ] # De-activate TLS (not possible in rootless mode)
      env: # No certificate since it's not used (faster startup)
        - { name: 'DOCKER_TLS_CERTDIR', value: '' }

      securityContext:
        privileged: true # For image bootstrap, switches back to rootless at startup
  volumes: # What Cloud resources the volumes map to
    - name: 'asdf' # PVCs are persistent file systems
      persistentVolumeClaim:
        claimName: 'asdf'

Credentials

Top of the file

final def GITHUB_CREDENTIALS = usernamePassword(
  credentialsId: 'github-credentials', // Jenkins ID from global configuration declaration
  usernameVariable: 'GITHUB_LOGIN', // Environment variable where username is injected
  passwordVariable: 'GITHUB_PASSWORD') // Environment variable where password is injected

In stage

steps {
  withCredentials([ GITHUB_CREDENTIALS ]) {
    sh """\
      bash build.sh \\
        '${GITHUB_LOGIN}' \\
        "\${GITHUB_PASSWORD}"
    """.stripIndent()
  }
}

Generated shell script

bash build.sh \
  'ci-user' \
  "${GITHUB_PASSWORD}"

Parameters

  • Parameterize build with user input

  • Defined in /pipeline/parameters

  • Specified when running the build (UI/API)

  • Default values when triggered by SCM

  • Check out the documentation

  • Plugins can add new types

parameters {
  string( // Text input
    name: 'STRING_PARAM_NAME',
    defaultValue: '',
    description: 'Help text')
  text( // Text area
    name: 'TEXT_PARAM_NAME',
    defaultValue: '',
    description: 'Help text')
  password( // Password input
    name: 'PASSWORD_PARAM_NAME',
    defaultValue: '',
    description: 'Help text')
  booleanParam( // Check-box
    name: 'BOOLEAN_PARAM_NAME',
    defaultValue: false,
    description: 'Help text')
  choice( // Drop-down list, first value is the default
    name: 'CHOICE_PARAM_NAME',
    choices: [ 'choice1', 'choice2' ],
    description: 'Help text')
}

Triggers

triggers {
  // Weekly build (Monday 05:00) on the main branch only; an empty spec means no trigger
  cron(env.BRANCH_NAME == 'main' ? '0 5 * * 1' : '')
  // Scheduled builds with parameter values (parameterized-scheduler plugin)
  parameterizedCron """\
    0 5 * * 1 %PARAM1_NAME=value1;PARAM2_NAME=value2
    0 6 * * 1 %PARAM1_NAME=value1;PARAM2_NAME=value2
  """.stripIndent()
}

Slack

  • Slack URL configured in global configuration

  • Use method slackSend to send Slack messages

  • Colors: good, warning, danger (or hex code)

  • Message: use Slack’s mrkdwn syntax

  • The other parameters are not useful

slackSend(
    color: 'danger',
    channel: 'my-slack-channel',
    message: "KO `${env.BRANCH_NAME}-${env.GIT_COMMIT.take(7)}` <${env.BUILD_URL}|Open>"
)

Scripts

  • Shell steps add a significant overhead! warning

  • Scripts belong in a separate file

  • Parameters should be hard-wired

  • Shell scripts should be as stupid as possible

  • Repository specificities should not leak into the scripts

  • Beware of quoting! Empty and unset parameters differ

  • Shameless plug Bash > /dev/null

In the Jenkinsfile

sh """\
  bash ci/scripts/build.sh \\
    '${params.MAVEN_PROFILE}' \\
    "\${MAVEN_SETTINGS}"
""".stripIndent()

In the shell script

#!/usr/bin/env bash
set -euxo pipefail                        # Verbose, fail fast, forbid unset variables

main() (                                  # Main method, sub-shelled
  profile="${1:?Missing Maven profile}"   # Hard, explicit fail in case of error
  settings="${2:?Missing Maven settings}"

  # Not install, verify! Long flags, one purpose per line. Generic, simple, stupid.
  mvn verify \
    --activate-profiles "${profile}" \
    --settings "${settings}"
)

main "$@"                                 # Execute the method

Shared libs

  • Allow centralization of Jenkinsfile parts

  • Libs are repositories configured in global configuration

  • Referenced with git refs in the Jenkinsfiles

  • Allow putting logic in the lib and configuration in the product

  • Abstract the Jenkinsfiles, harder to validate warning

  • Require more team discipline warning

Shared libs

What for

Shared libs centralize common patterns in Jenkinsfiles

Anatomy of a shared lib

Git repository with the following structure

.
├── resources # Static files that can be used in vars scripts
├── src       # Groovy source files
│   └── common
│       └── Semver.groovy
└── vars      # Groovy scripts that can be used in Jenkinsfiles
    └── myFunction.groovy
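
A hypothetical sketch of src/common/Semver.groovy, matching the Semver.of(1, 12, 3) call shown later (the real class is whatever your lib defines):

package common

// Serializable so CPS can persist instances across pipeline steps
class Semver implements Serializable {
  final int major
  final int minor
  final int patch

  private Semver(final int major, final int minor, final int patch) {
    this.major = major
    this.minor = minor
    this.patch = patch
  }

  static Semver of(final int major, final int minor, final int patch) {
    return new Semver(major, minor, patch)
  }

  @Override
  String toString() {
    return "${major}.${minor}.${patch}"
  }
}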

Usage - configure shared lib

In Jenkins' configuration, set up your shared lib:

unclassified:
  globalLibraries:
    libraries:
      - name: 'team-shared-lib'
        defaultVersion: 'master'
        implicit: false
        includeInChangesets: true
        allowVersionOverride: true
        retriever:
          modernSCM:
            scm:
              git:
                remote: 'https://github.com/MyOrg/shared-lib.git'
                credentialsId: 'github-credentials'

Usage - use shared lib

In app repository’s Jenkinsfile:

@Library('team-shared-lib@v5') _
import common.Semver // The Semver class comes from src in the lib

release { // The function release comes from vars in the lib
  tag = Semver.of(1, 12, 3)
}

Pros

  • Faster delivery of improvements

  • Separation of configuration & logic

  • Easier discovery of team’s practices

  • More easily enforce good practices

Cons

  • Abstracts Jenkinsfiles, harder to validate

  • Requires more team discipline

Anatomy of lib

  • One global var per job type

  • One folder in src & resources per job type + common

  • One tag prefix per job type + common

  • One changelog file per job type + common

  • Each release tagged with semver and major tags
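
A hedged sketch of the release tagging (versions are placeholders): an immutable semver tag plus a moving major tag, so Jenkinsfiles can pin @v5 and still receive fixes:

git tag 'v5.2.1'               # Immutable release tag
git tag --force 'v5'           # Moving major tag, referenced by @Library('team-shared-lib@v5')
git push origin 'v5.2.1'
git push --force origin 'v5'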

Anatomy of jobs

Global vars look like this:

import jobType.PipelineConfiguration

import static groovy.lang.Closure.DELEGATE_ONLY

def call (final Closure bodyBuilder) {
  final PipelineConfiguration configuration = new PipelineConfiguration()
  bodyBuilder.resolveStrategy = DELEGATE_ONLY // Resolve fields on the delegate only
  bodyBuilder.delegate = configuration // Set the delegate of the closure
  bodyBuilder() // Execute the closure to hydrate the delegate

  pipeline {
    ...
  }
}

And are used like this:

@Library('team-shared-lib@v5') _

build {
  field = 'value'
}

Multi-file scripting

  • In global (trusted) shared libs, plain Groovy can be used (warning: it runs outside the sandbox)

  • Otherwise, a good alternative is Deno

  • For non-Groovy scripting, load the shared lib resources

class SharedLibLoader {
  public static final String RESOURCES_FOLDER = '.git/shared-lib-scripts'
  private static final String RESOURCES_INDEX_FILE_PATH = 'index.json'
  
  /** We use base 64 for all files since some are binary files and get screwed with other encodings */
  private static final String SAFE_ENCODING = 'Base64'

  /**
   * Retrieves the shared library resources index and from there, gets all the files from the
   * common and requested folders and writes them in (git-ignored) folder
   * {@link SharedLibLoader#RESOURCES_FOLDER} with pipeline step writeFile
   * (which can only copy inside the workspace).
   */
  static void initializeSharedLibrary (final def mainScript, final String jobFolder, final String... otherFolders) {
    mainScript.echo 'Initializing shared library'

    final String indexAsString = mainScript.libraryResource(RESOURCES_INDEX_FILE_PATH)
    final String indexPath = "${RESOURCES_FOLDER}/${RESOURCES_INDEX_FILE_PATH}"
    mainScript.writeFile file: indexPath, text: indexAsString
    final List resources = (List) mainScript.readJSON(file: indexPath)

    final Set folders = new HashSet<>()
    folders.add('common')
    folders.add(jobFolder)
    otherFolders.each { folders.add(it) }

    writeAllResourcesInTemporaryFolder(mainScript, resources, folders)
  }

  private static Object writeAllResourcesInTemporaryFolder (final def mainScript, final List resources, final Set jobFolders) {
    resources
      .findAll { final String resource -> jobFolders.any { resource.startsWith("${it}/") } }
      .each { final String resource ->
        mainScript.echo "Retrieving resource file ${resource}"
        final String resourceContent = mainScript.libraryResource resource: resource, encoding: SAFE_ENCODING
        mainScript.writeFile(
          file: "${RESOURCES_FOLDER}/${resource}",
          text: resourceContent,
          encoding: SAFE_ENCODING
        )
      }
  }
}
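
A hypothetical call site (job folder name and script path are placeholders), for example inside a stage of a global var:

steps {
  script {
    // 'this' is the running pipeline script, 'myJobType' the resources sub-folder to copy
    SharedLibLoader.initializeSharedLibrary(this, 'myJobType')
    sh "bash '${SharedLibLoader.RESOURCES_FOLDER}/myJobType/build.sh'"
  }
}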

CasC validation

Basic idea

  • Create clone of Jenkins instance

    • Same version

    • Same plugins

  • Load configuration files

  • Watch for errors

How it’s done

  • Transform gitops-repo into a Gradle project

  • Put Job DSL & CasC files in src/main

  • Use Jenkins harness as test framework

  • Install Jenkins and plugins in test instance

  • Write test classes to load files (sketch below)

  • Generate K8s resources with a script

.
├── build.gradle              # Project manifest
├── generated                 # Generated K8s resources
├── gradle.properties         # Project variables
├── jenkins-pvc.yaml          # Jenkins controller PVC
├── jenkins-vault-secret.yaml # Vault secret config map
├── jenkins.yaml              # Jenkins Helm release
├── pvc                       # Jenkins cache PVCs
└── src
    ├── main
    │   ├── groovy
    │   │   └── jobs          # Job DSL files
    │   └── resources
    │       ├── casc          # CasC files
    │       └── jobDsl.gdsl   # Job DSL syntax file
    └── test
        └── groovy            # Unit tests
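
A minimal sketch of such a test class (assumes JUnit 4, the Jenkins test harness, and the configuration-as-code plugin on the test classpath; paths are placeholders):

import org.junit.Rule
import org.junit.Test
import org.jvnet.hudson.test.JenkinsRule

import io.jenkins.plugins.casc.ConfigurationAsCode

class CascLoadingTest {

  @Rule
  public JenkinsRule jenkins = new JenkinsRule() // Boots a throw-away Jenkins with the project's plugins

  @Test
  void 'the CasC files load without errors'() {
    // Fails if a file under src/main/resources/casc cannot be applied
    ConfigurationAsCode.get().configure(getClass().getResource('/casc/jenkins.yaml').toString())
  }
}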

Pros

  • Syntax highlighting/completion

  • Local and/or CI validation

Cons

  • Some added complexity

Demo

Tips & tricks

Restart Jenkins

Restart Jenkins (to get startup logs, for example)

Open https://${JENKINS_DOMAIN}/safeRestart
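
The same over the REST API, in the spirit of the earlier curl example (assumes an API token belonging to a user with Overall/Administer):

curl --request 'POST' \
  --user "${JENKINS_USER}:${JENKINS_TOKEN}" \
  "${JENKINS_URL}/safeRestart"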

Replay

Replay a job with edited Jenkinsfile

Conclusion

Sources

The crème de la crème, your bedside reading

Q&A

Ask me anything