Core Concepts & Architecture
Understanding the building blocks of Concourse pipelines: Resources, Jobs, Tasks, and the ATC/Worker architecture
Before diving into complex pipelines, it's essential to understand Concourse's core concepts. Unlike traditional CI tools that rely on plugins and global configuration, Concourse uses a small set of primitives that combine to create powerful workflows.
Architecture Overview
Concourse consists of three main components that work together:
Web Node (ATC)
- Pipeline scheduling
- Web UI serving
- API endpoints
- User authentication
- Resource version tracking
Worker Nodes
- Run task containers
- Cache resource versions
- Manage build artifacts
- Report status to ATC
- Stateless & scalable
TSA (Gateway)
- Worker registration via SSH
- Key-based authentication
- Secure ATC-worker tunnel
In production, you typically run multiple ATC instances behind a load balancer for high availability. Workers are stateless—they can be added or removed without affecting running pipelines.
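To make that topology concrete, here is a minimal single-web, single-worker sketch as a docker-compose file. This is only a sketch, not a production deployment: it assumes the official concourse/concourse image, that the signing and TSA keys were generated beforehand (for example with concourse generate-key) and placed under ./keys, and that the credentials shown are placeholders.

services:
  db:
    image: postgres
    environment:
      POSTGRES_DB: concourse
      POSTGRES_USER: concourse_user
      POSTGRES_PASSWORD: concourse_pass       # placeholder credentials

  web:                                        # ATC + TSA in one process
    image: concourse/concourse
    command: web
    depends_on: [db]
    ports: ["8080:8080"]
    volumes: ["./keys:/concourse-keys"]
    environment:
      CONCOURSE_EXTERNAL_URL: http://localhost:8080
      CONCOURSE_POSTGRES_HOST: db
      CONCOURSE_POSTGRES_USER: concourse_user
      CONCOURSE_POSTGRES_PASSWORD: concourse_pass
      CONCOURSE_POSTGRES_DATABASE: concourse
      CONCOURSE_ADD_LOCAL_USER: test:test     # placeholder local user
      CONCOURSE_MAIN_TEAM_LOCAL_USER: test
      CONCOURSE_SESSION_SIGNING_KEY: /concourse-keys/session_signing_key
      CONCOURSE_TSA_HOST_KEY: /concourse-keys/tsa_host_key
      CONCOURSE_TSA_AUTHORIZED_KEYS: /concourse-keys/authorized_worker_keys

  worker:                                     # registers with the TSA over SSH (port 2222)
    image: concourse/concourse
    command: worker
    privileged: true                          # required to run task containers
    depends_on: [web]
    volumes: ["./keys:/concourse-keys"]
    environment:
      CONCOURSE_WORK_DIR: /opt/concourse/worker
      CONCOURSE_TSA_HOST: web:2222
      CONCOURSE_TSA_PUBLIC_KEY: /concourse-keys/tsa_host_key.pub
      CONCOURSE_TSA_WORKER_PRIVATE_KEY: /concourse-keys/worker_key

Scaling out means adding more worker services (or machines) pointing at the same TSA; no pipeline changes are required.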
Resources
Resources represent external versioned artifacts that your pipeline interacts with. Every input and output flows through resources.
resources:
- name: my-repo
type: git
source:
uri: https://github.com/myorg/myapp.git
branch: main
- name: docker-image
type: registry-image
source:
repository: myorg/myapp
username: ((docker-username))
password: ((docker-password))
- name: deployment-bucket
type: s3
source:
bucket: my-deployment-artifacts
access_key_id: ((aws-access-key))
secret_access_key: ((aws-secret-key))
region_name: us-east-1
Common Resource Types
| Type | Purpose |
|---|---|
| git | Git repositories |
| registry-image | Docker/OCI images |
| s3 | AWS S3 buckets |
| time | Periodic triggers |
| semver | Semantic versioning |
| pool | Resource locking |
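The time type from the table deserves a quick illustration: it emits a new version on a schedule, and a job can treat that version as a trigger. A minimal sketch (the resource and job names are arbitrary):

resources:
- name: nightly
  type: time
  source:
    interval: 24h          # emit a new version roughly once a day

jobs:
- name: nightly-cleanup
  plan:
  - get: nightly
    trigger: true          # each new time version kicks off this job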
Resources are versioned—Concourse tracks every version (commit, image digest, file) and displays them in the UI. This provides complete traceability for what version triggered which build.
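Because every version is tracked, you can also control which versions a job consumes. A small sketch based on the my-repo resource above (the commit SHA is hypothetical): a resource can be pinned to one version, and a get step can ask for every version instead of only the latest.

resources:
- name: my-repo
  type: git
  source:
    uri: https://github.com/myorg/myapp.git
    branch: main
  # version:               # optional: pin the resource to one version
  #   ref: 0123abc         #   (hypothetical commit SHA)

jobs:
- name: test-every-commit
  plan:
  - get: my-repo
    trigger: true
    version: every         # build each new version, not only the newest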
Resource Types
Resource types define how resources behave. Concourse ships with built-in types, but you can add custom ones:
resource_types:
- name: slack-notification
type: registry-image
source:
repository: cfcommunity/slack-notification-resource
tag: latest
- name: pull-request
type: registry-image
source:
repository: teliaoss/github-pr-resource
resources:
- name: slack-alert
type: slack-notification
source:
url: ((slack-webhook-url))
The resource type ecosystem is extensive; check the Concourse Resource Types catalog for community-contributed types.
Jobs
Jobs define what work gets done. Each job has a plan that specifies the sequence of steps:
jobs:
- name: build-and-test
plan:
- get: my-repo
trigger: true
- task: run-tests
file: my-repo/ci/tasks/test.yml
- put: docker-image
params:
image: image/image.tar
Key Job Properties
- plan: Ordered list of steps to execute
- serial: Run builds one at a time (default: parallel)
- max_in_flight: Limit concurrent builds
- build_log_retention: How long to keep build logs
- on_success / on_failure / on_abort: Hooks for notifications
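A sketch putting several of these properties together; it reuses the slack-alert resource from the earlier resource-types example, and the task file path is a placeholder:

jobs:
- name: integration-tests
  serial: true                 # one build at a time
  build_log_retention:
    builds: 50                 # keep logs for the last 50 builds
    days: 30                   # and for no more than 30 days
  plan:
  - get: my-repo
    trigger: true
  - task: integration
    file: my-repo/ci/tasks/integration.yml
  on_failure:
    put: slack-alert
    params:
      text: "integration-tests failed"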
Tasks
Tasks are the units of work—containerized scripts that do the actual building, testing, and processing:
- task: run-unit-tests
config:
platform: linux
image_resource:
type: registry-image
source:
repository: node
tag: "20-alpine"
inputs:
- name: my-repo
outputs:
- name: test-results
caches:
- path: my-repo/node_modules
run:
path: sh
args:
- -exc
- |
cd my-repo
npm ci
npm test
cp -r coverage ../test-results/
Task Properties
- platform: Usually linux, can be windows
- image_resource: Container image to run in
- inputs: Resources/outputs from previous steps
- outputs: Directories to pass to subsequent steps
- caches: Persistent directories across builds
- run: The command to execute
Pipelines
Pipelines tie everything together—they're the complete definition of your CI/CD workflow:
resource_types:
- name: slack-notification
type: registry-image
source:
repository: cfcommunity/slack-notification-resource
resources:
- name: source-code
type: git
source:
uri: https://github.com/myorg/myapp.git
branch: main
- name: slack
type: slack-notification
source:
url: ((slack-webhook))
jobs:
- name: test
plan:
- get: source-code
trigger: true
- task: unit-tests
file: source-code/ci/tasks/test.yml
on_failure:
put: slack
params:
text: "Build failed!"Step Types
Jobs contain steps that execute in sequence. Here are the primary step types:
Get Step
Fetches a resource version:
- get: my-repo
trigger: true # Start job when new versions appear
passed: [test] # Only versions that passed the "test" job
params:
depth: 1 # Shallow clone (resource-specific)
Put Step
Pushes to a resource:
- put: docker-image
params:
image: build-output/image.tar
get_params:
skip_download: true # Don't fetch the pushed version
Task Step
Executes a containerized task:
- task: build
file: my-repo/ci/tasks/build.yml # External task file
input_mapping:
source: my-repo # Rename input
output_mapping:
artifacts: build-output # Rename output
params:
ENVIRONMENT: production # Environment variables
Set Pipeline Step
Dynamically update pipelines:
- set_pipeline: deploy-pipeline
file: my-repo/ci/pipelines/deploy.yml
vars:
environment: staging
Load Var Step
Load values from files into variables:
- load_var: version
file: version/version
reveal: true # Show in UI (default: hidden)
- task: deploy
params:
VERSION: ((.:version)) # Use loaded variable
The Inputs/Outputs Model
Understanding how data flows between steps is crucial. Each step runs in an isolated container with only explicitly declared inputs available.
How It Works
- Get steps fetch resources into named directories
- Task outputs become available for subsequent steps
- Put steps can use any available directory
jobs:
- name: build-flow-example
plan:
# Step 1: Fetch code - creates "source" directory
- get: source
resource: my-repo
# Step 2: Build - receives "source", produces "binary"
- task: compile
config:
platform: linux
image_resource:
type: registry-image
source: { repository: golang, tag: "1.21" }
inputs:
- name: source
outputs:
- name: binary
run:
path: sh
args:
- -c
- |
cd source
go build -o ../binary/app ./cmd/app
# Step 3: Package - receives "source" AND "binary"
- task: create-package
config:
platform: linux
image_resource:
type: registry-image
source: { repository: alpine }
inputs:
- name: source
- name: binary
outputs:
- name: package
run:
path: sh
args:
- -c
- |
cp binary/app package/
cp source/config.yml package/
tar -czf package/release.tar.gz -C package app config.yml
# Step 4: Upload package
- put: releases
params:
file: package/release.tar.gz
Variables & Secrets
Concourse supports several ways to parameterize pipelines.
Static Variables
Set when deploying the pipeline:
resources:
- name: repo
type: git
source:
uri: ((git-uri))
branch: ((branch))
Then supply the values when setting the pipeline:
fly -t main set-pipeline -p my-pipeline -c pipeline.yml \
-v git-uri=https://github.com/org/repo.git \
-v branch=mainCredential Managers
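Passing each value with -v gets tedious as the list grows; fly also accepts a YAML vars file via the -l (--load-vars-from) flag. A minimal sketch, assuming a file named vars.yml next to the pipeline:

# vars.yml -- applied with:
#   fly -t main set-pipeline -p my-pipeline -c pipeline.yml -l vars.yml
git-uri: https://github.com/org/repo.git
branch: main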
Credential Managers
For secrets, integrate with a credential manager:
resources:
- name: docker-image
type: registry-image
source:
repository: myorg/app
username: ((docker.username))
password: ((docker.password))
Supported credential managers (see the Vault lookup sketch after this list):
- Vault (HashiCorp)
- AWS Secrets Manager
- AWS SSM Parameter Store
- Kubernetes Secrets
- CredHub (Cloud Foundry)
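How a ((...)) reference resolves depends on the credential manager. As a sketch, with Vault and its default /concourse path prefix, the docker.username reference above is looked up per team and pipeline before falling back to a team-wide secret (exact paths depend on your Vault configuration):

# For team "main" and pipeline "my-pipeline", ((docker.username)) resolves as:
#   /concourse/main/my-pipeline/docker   (field: username)
#   /concourse/main/docker               (team-wide fallback)
resources:
- name: docker-image
  type: registry-image
  source:
    repository: myorg/app
    username: ((docker.username))   # "docker" = secret name, "username" = field
    password: ((docker.password))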
Teams & RBAC
Concourse uses teams for access control. Each team has its own pipelines, isolated from others.
Managing Teams
# Create a team with local users
fly -t main set-team -n staging \
--local-user=staging-admin \
--local-user=developer1
# Create team with GitHub auth
fly -t main set-team -n production \
--github-org=myorg \
--github-team=myorg:platform-team
# List teams
fly -t main teams
# Login to specific team
fly -t staging login -n staging -c http://concourse.example.com
Team Roles
| Role | Permissions |
|---|---|
| owner | Full control (manage team, pipelines, builds) |
| member | Manage pipelines and builds |
| pipeline-operator | Trigger builds, pause/unpause |
| viewer | Read-only access |
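Roles can also be assigned declaratively with a team configuration file, applied with something like fly set-team -n my-team -c team.yml. A sketch in which the users, org, and team names are placeholders:

roles:
- name: owner
  local:
    users: ["admin"]
- name: member
  github:
    teams: ["myorg:platform-team"]
- name: pipeline-operator
  github:
    teams: ["myorg:release-managers"]
- name: viewer
  github:
    orgs: ["myorg"]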
Configuration Best Practices
Externalize Task Definitions
Keep task configs in your repository, not inline:
platform: linux
image_resource:
type: registry-image
source:
repository: node
tag: "20"
inputs:
- name: source
run:
path: source/ci/scripts/test.sh
Then reference it from the pipeline:
- task: test
file: source/ci/tasks/test.yml
Use YAML Anchors for Reuse
YAML anchors reduce duplication:
# Define once
image-config: &node-image
type: registry-image
source:
repository: node
tag: "20-alpine"
jobs:
- name: test
plan:
- task: lint
config:
platform: linux
image_resource: *node-image # Reuse
# ...
- task: unit-test
config:
platform: linux
image_resource: *node-image # Reuse
# ...
Organize Pipeline Files
For complex projects:
my-repo/
├── ci/
│ ├── pipeline.yml
│ ├── tasks/
│ │ ├── test.yml
│ │ ├── build.yml
│ │ └── deploy.yml
│ └── scripts/
│ ├── test.sh
│ ├── build.sh
│   └── deploy.sh
Core Concepts Complete!
You now understand Concourse's architecture and primitives
In Part 3, we'll create a complete CI/CD pipeline from scratch—testing, building, and deploying an application.
