Concourse Tut Research
Target Concourse
In the spirit of declaring absolutely everything you do to get absolutely the same
result every time, the fly CLI requires that you specify the target API for
every fly request.
First, alias it with the name tutorial (this name is used by all the tutorial task scripts); the alias is stored in ~/.flyrc, which you can inspect with:
cat ~/.flyrc
When we use the fly command we will target this Concourse API using fly --target
tutorial.
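If the tutorial alias has not yet been created, fly login sets it up (a sketch; the URL assumes the local docker-compose deployment described below):
fly --target tutorial login --concourse-url http://127.0.0.1:8080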
The central concept of concourse is to run tasks. You can run them directly
from the command line as below, or from within pipeline jobs (as per every
other section of the tutorial).
Now clone the concourse tutorial repo, switch to the task-hello-world directory, and
run the command to execute the task_hello_world.yml task.
git clone https://github.com/starkandwayne/concourse-tutorial.git
cd concourse-tutorial/tutorials/basic/task-hello-world
fly -t tutorial execute -c task_hello_world.yml
---
platform: linux

image_resource:
  type: docker-image
  source: {repository: busybox}

run:
  path: echo
  args: [hello world]
Note: the local Concourse server must be running for the above to work. From the same directory in which you previously deployed the Docker Concourse image (verify by running ls -l and looking for the docker-compose.yml file), start it with:
docker-compose up
Every task in Concourse runs within a "container" (as best available on the target
platform).
The task_hello_world.yml configuration shows that we are running on a linux platform using the busybox container image.
You will see it download the busybox Docker image. It only needs to do this once, though it will re-check each time that it has the latest busybox image.
Within this container it will run the command echo hello world.
Miscellaneous
If you're interested in creating new Docker images using Concourse (of course you
are), then there is a future section Create and Use Docker Images.
Task Inputs
cd ../task-inputs
fly -t tutorial e -c no_inputs.yml
The task runs ls -al to show the (empty) contents of the working folder inside the
container:
running ls -al
total 8
drwxr-xr-x 2 root root 4096 Feb 27 07:23 .
drwxr-xr-x 3 root root 4096 Feb 27 07:23 ..
Note that above we used the short-hand form of the execute command, simply e. Many commands have single-character short forms; for example, fly s is an alias for fly sync.
In the example task inputs_required.yml we add a single input:
inputs:
- name: some-important-input
Commonly, when running fly execute, we want to pass in the local folder (.). Use the -i name=path option to configure each of the required inputs:
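For example, from the task-inputs directory, pass the current directory as the input named in the task file:
fly -t tutorial e -c inputs_required.yml -i some-important-input=.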
Task Scripts
Task inputs can be used for two purposes:
requirements/dependencies to be processed/tested/compiled
task scripts to be executed to perform complex behavior
A common pattern is for Concourse tasks to run complex shell scripts rather than directly invoking commands as we did in the Hello World lesson (where task_ubuntu_uname.yml ran the uname command with argument -a).
Let's refactor task-hello-world/task_ubuntu_uname.yml into a new task task-scripts/task_show_uname.yml with a separate task script task-scripts/task_show_uname.sh.
cd ../task-scripts
run:
  path: ./task-scripts/task_show_uname.sh

inputs:
- name: task-scripts
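Run the refactored task, passing the current directory as the task-scripts input (a sketch using the flags shown earlier):
fly -t tutorial e -c task_show_uname.yml -i task-scripts=.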
The current directory was uploaded to the Concourse task container and placed inside
the task-scripts directory.
Basic Pipeline
1% of tasks that Concourse runs are via fly execute. 99% of tasks that Concourse runs are
within "pipelines".
cd ../basic-pipeline
fly -t tutorial set-pipeline -c pipeline.yml -p hello-world
It will display the Concourse pipeline (or any changes) and request confirmation:
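For reference, the hello-world pipeline.yml wraps the earlier hello-world task configuration in a job (a sketch based on the tutorial repo):

jobs:
- name: job-hello-world
  public: true
  plan:
  - task: hello-world
    config:
      platform: linux
      image_resource:
        type: docker-image
        source: {repository: busybox}
      run:
        path: echo
        args: [hello world]

Newly set pipelines start paused; unpause it with:
fly -t tutorial unpause-pipeline -p hello-world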
Pipeline Resources
But with pipelines we now need to store the task file and task script somewhere outside of
Concourse.
Concourse offers no services for storing/retrieving your data. No git repositories. No
blobstores. No build numbers.
Every input and output must be provided externally. Concourse calls them "Resources".
Example resources are git, s3, and semver respectively.
See the Concourse documentation Resource Types for the list of built-in resource types and
community resource types. Send messages to Slack.
Create a ticket on Pivotal Tracker. It is all possible with Concourse resource types. The
Concourse Tutorials Miscellaneous section also introduces some commonly useful Resource
Types.
The most common resource type to store our task files and task scripts is the git resource type. Perhaps your task files could be fetched via the s3 resource type from an AWS S3 file; or the archive resource type could extract them from a remote archive file. Or perhaps the task files could be pre-baked into the image_resource base Docker image.
But mostly you will use a git resource in your pipeline to pull in your pipeline task files.
This tutorial's source repository is a Git repo, and it contains many task files (and their task scripts). For example, the original tutorials/basic/task-hello-world/task_hello_world.yml.
To pull in the Git repository, we edit pipeline-resources/pipeline.yml and add a
top-level section resources:
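A sketch of that section, pointing at this tutorial's own repo:

resources:
- name: resource-tutorial
  type: git
  source:
    uri: https://github.com/starkandwayne/concourse-tutorial.git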
The Concourse Tutorial verbosely prefixes resource- to resource names, and job- to job
names, to help you identify one versus the other whilst learning. Eventually you will know
one from the other and can remove the extraneous text.
After manually triggering the job via the UI, the build output shows the resource-tutorial resource being fetched and the hello-world task running. Both of these are "steps" in the job's build plan. A build plan is a sequence of steps to execute. These steps may fetch down or update Resources, or execute Tasks.
The first build plan step fetches down (note the down arrow to the left) a git repository for these training materials and tutorials. The pipeline named this resource resource-tutorial and clones the repo into a directory with the same name. This means that later in the build plan, we reference files relative to this folder.
The resource resource-tutorial is then used in the build plan for the job:
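A sketch of the job's build plan (the task file path is the one referenced below):

jobs:
- name: job-hello-world
  public: true
  plan:
  - get: resource-tutorial
  - task: hello-world
    file: resource-tutorial/tutorials/basic/task-hello-world/task_hello_world.yml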
Any fetched resource can now be an input to any task in the job build plan. As discussed in the lessons Task Inputs and Task Scripts, task inputs can be used as task scripts.
The second step runs a user-defined task. The pipeline named the task hello-world. The task itself is not described in the pipeline. Instead it is described in the file tutorials/basic/task-hello-world/task_hello_world.yml from the resource-tutorial input.
The name of resources, and later the name of task outputs, determines the name used to
access them by other tasks (and later, by updated resources).
There are benefits and downsides to abstracting tasks into YAML files outside of the pipeline.
One benefit is that the behavior of the task can be kept in sync with the primary input
resource (for example, a software project with tasks for running tests, building binaries, etc).
One downside is that the pipeline.yml no longer explains exactly what commands will
be invoked. Comprehension of pipeline behavior is potentially reduced.
Another benefit of extracting inline tasks into task files is that pipeline.yml files can otherwise get long, making it hard to read and comprehend all the YAML. Give tasks long, descriptive names so that readers can understand the purpose and expectation of each task at a glance.
Another downside of extracting inline tasks into files is that fly set-pipeline is no longer the only step in updating a pipeline.
From now onwards, any change to your pipeline might require you to do one or both:
fly set-pipeline to update Concourse on a change to the job build plan and/or
input/output resources
git commit and git push your primary resource that contains the task files and
task scripts
If a pipeline is not performing new behaviour, it might be that you skipped one of the two steps above.
Due to the benefits vs downsides of the two approaches - inline task configuration vs
YAML file task configuration - you will see both approaches used in this Concourse
Tutorial and in the wider community of Concourse users.
It was very helpful that the job-hello-world build included the terminal output from the git commands that cloned the git repo, as well as the output of the running hello-world task.
You can also view this output from the terminal with fly watch:
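fly -t tutorial watch -j hello-world/job-hello-world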
The --build NUM option allows you to see the output of a specific build number, rather than
the latest build output.
You can see the results of recent builds across all pipelines with fly builds:
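fly -t tutorial builds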
Trigger Jobs
In the next lesson we will learn to trigger jobs after changes to an input resource.
By default, including get: my-resource in a build plan does not trigger its job. To mark a fetched resource as a trigger, add trigger: true:

jobs:
- name: job-demo
  plan:
  - get: resource-tutorial
    trigger: true
If you want a job to trigger every few minutes then there is the time resource.
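A sketch of a time resource used as a trigger (the resource name is illustrative; the 2m interval matches the behaviour described below):

resources:
- name: my-timer
  type: time
  source: {interval: 2m}

jobs:
- name: job-hello-world
  plan:
  - get: my-timer
    trigger: true
  - task: hello-world
    ...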
Now upgrade the hello-world pipeline with the time trigger and unpause it.
The current hello-world pipeline will now keep triggering every 2-3 minutes, forever. If you want to destroy a pipeline - and lose all its build history - then that power is granted to you.
You can delete the hello-world pipeline:
fly -t tutorial destroy-pipeline -p hello-world
The resource-app resource will place the inbound files for the input into an alternate path. By default, as we have seen, inputs store their contents in a folder of the same name. The reason for using an alternate path in this example is specific to building and testing Go language applications and is outside the scope of this section.
outputs:
- name: some-files
The first task (create-some-files) creates four files in its own some-files/ directory. The second task (show-some-files) gets a copy of these files placed in its own task container filesystem at the path some-files/.
The pipeline build plan only shows that two tasks are to be run in a specific order. It does not
indicate that some-files/ is an output of one task and used as an input into the next task.
jobs:
- name: job-pass-files
  public: true
  plan:
  - get: resource-tutorial
  - task: create-some-files
    config:
      ...
      inputs:
      - name: resource-tutorial
      outputs:
      - name: some-files
      run:
        path: resource-tutorial/tutorials/basic/task-outputs-to-inputs/create_some_files.sh
  - task: show-some-files
    config:
      ...
      inputs:
      - name: resource-tutorial
      - name: some-files
      run:
        path: resource-tutorial/tutorials/basic/task-outputs-to-inputs/show_files.sh
Note that the create-some-files build output includes the following error:
mkdir: can't create directory 'some-files': File exists
This demonstrates that if a task declares outputs, those output directories are pre-created and do not need to be created by the task script.
Publishing Outputs
So far we have used the git resource to fetch down a git repository, and
used git & time resources as triggers. The git resource can also be used to push a modified
git repository to a remote endpoint (possibly different than where the git repo was originally
cloned from).
Click the "Embed" dropdown, select "Clone via SSH", and copy the git URL:
And modify the resource-gist section of pipeline.yml:
- name: resource-gist
  type: git
  source:
    uri: [email protected]:e028e491e42b9fb08447a3bafcf884e5.git
    branch: master
    private_key: |-
      -----BEGIN RSA PRIVATE KEY-----
      MIIEpQIBAAKCAQEAuvUl9YU...
      ...
      HBstYQubAQy4oAEHu8osRhH...
      -----END RSA PRIVATE KEY-----
Also paste your ~/.ssh/id_rsa private key (or whichever key you have registered with GitHub) into the private_key section. Note: make sure the key used here was not generated with a passphrase; otherwise the key will not be accepted and you will get an error.
Update the pipeline, force Concourse to quickly re-check the new Gist credentials, and then
run the job:
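A sketch of those three commands (the pipeline name matches the one used at the end of these notes):

fly -t tutorial set-pipeline -p publishing-outputs -c pipeline.yml
fly -t tutorial check-resource -r publishing-outputs/resource-gist
fly -t tutorial trigger-job -w -j publishing-outputs/job-bump-date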
Revisit the Web UI and the orange resource will change to black if it can successfully fetch
the new [email protected]:XXXX.git repo.
After the job-bump-date job completes, refresh your gist:
This pipeline is an example of updating a resource. It has pushed up new git commits to the
git repo (your github gist).
outputs:
- name: updated-gist
The bump-timestamp-file task runs the following bump-timestamp-file.sh script:
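The actual script lives in the tutorial repo; in essence it does something like this (a sketch, not the repo's exact script: clone the fetched gist into the pre-created updated-gist output, bump the bumpme file, and commit):

#!/bin/bash
# sketch of bump-timestamp-file.sh
set -e
git clone resource-gist updated-gist   # copy the fetched gist into the output
date > updated-gist/bumpme             # bump the timestamp file
cd updated-gist
git config user.email "[email protected]"   # illustrative identity
git config user.name "Concourse Tutorial"
git add bumpme
git commit -m "bumped date"

The job's build plan then pushes the result with a put step: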
- put: resource-gist
  params:
    repository: updated-gist
The updated-gist output from the task: bump-timestamp-file step becomes
the updated-gist input to the resource-gist resource because their names match (see
the git resource for additional configuration).
Dependencies within Tasks
The bump-timestamp-file.sh script needed the git CLI.
It could have been installed at the top of the script using apt-get update; apt-get install
git or similar, but this would have made the task very slow - each time it ran it would
have reinstalled the CLI.
Instead, the bump-timestamp-file.sh step assumes its base Docker image already
contains the git CLI.
The Docker image being used is described in the image_resource section of the task's configuration:
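It looks something like this (repository name from the link below; tag omitted):

image_resource:
  type: docker-image
  source: {repository: starkandwayne/concourse}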
The Docker image starkandwayne/concourse is described at https://github.com/starkandwayne/dockerfiles/ and is a common base Docker image used by many Stark & Wayne pipelines.
Your organisation may wish to curate its own base Docker images to be shared across pipelines. After finishing the Basics lessons, visit the lesson Create and Use Docker Images to build pipelines that create your own Docker images with Concourse.
Tragic Security
If you're feeling ill from copying your private keys into a plain text file (pipeline.yml) and
then seeing them printed to the screen (during fly set-pipeline -c pipeline.yml), then
fear not. We will get to Secret with Credential Manager soon.
Parameterized Pipelines
In the preceding section you were asked to place private credentials and personal git URLs into the pipeline.yml files. This makes it difficult to share your pipeline.yml: anyone with access to the repository would also see your secrets. Not everyone needs, nor should have, access to the shared secrets.
Concourse pipelines can include ((parameter)) parameters for any value in the pipeline
YAML file. Parameters are all mandatory. There are no default values for parameters.
In the lesson's pipeline.yml there are two parameters:
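Based on the env output shown below, they are presumably declared like this (the parameter names are assumptions):

params:
  CAT_NAME: ((cat-name))
  DOG_NAME: ((dog-name))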
If we fly set-pipeline but do not provide the parameters, we see an error when the job is triggered to run.
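To supply them, pass values with -v when setting the pipeline (a sketch; the pipeline name is an assumption, the values match the output below):

fly -t tutorial sp -p parameters -c pipeline.yml -v cat-name=garfield -v dog-name=odie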
Fetching the pipeline configuration with fly get-pipeline shows that the two potentially secret parameters are visible in plain text:
...
params:
  CAT_NAME: garfield
  DOG_NAME: odie
run:
  path: env
The solution to both of these problems is to use a Concourse credentials manager, as discussed in the lesson Secret with Credential Manager.
Actual Pipeline - Passing Resources Between Jobs
Finally, it is time to make an actual pipeline - one job passing results to another job upon
success.
In all previous sections our pipelines have only had a single job. For all their wonderfulness,
they haven't yet felt like actual pipelines. Jobs passing results between jobs. This is where
Concourse shines even brighter.
- name: job-show-date
  plan:
  - get: resource-tutorial
  - get: resource-gist
    passed: [job-bump-date]
    trigger: true
  - task: show-date
    config:
      platform: linux
      image_resource:
        type: docker-image
        source: {repository: busybox}
      inputs:
      - name: resource-gist
      run:
        path: cat
        args: [resource-gist/bumpme]
Update the pipeline:
cd ../pipeline-jobs
fly -t tutorial sp -p publishing-outputs -c pipeline.yml -l ../publishing-outputs/credentials.yml
fly -t tutorial trigger-job -w -j publishing-outputs/job-bump-date
The dashboard UI displays the additional job and its trigger/non-trigger resources.
Importantly, it shows our first multi-job pipeline:
The latest resource-gist commit fetched down in job-show-date will be the exact commit
used in the last successful job-bump-date job. If you manually created a new git commit in
your gist and manually ran the job-show-date job it would continue to use the previous
commit it used, and ignore your new commit. This is the power of pipelines.
(The Secret with Credential Manager lesson is left out of these notes.)