Standing on the shoulders of the Tekton community: Tekton buildpack pipeline


In the first article we mastered the Tekton installation, got to know the first API objects, and created a first small pipeline. You might want to have a look at it for a recap of the Tekton building blocks.
Now we will create a practical pipeline that, as usual for a CI system, creates new Docker images when new commits are pushed. The reusability of Tekton components, both tasks and entire pipelines, is a strength that I would like to demonstrate practically in this article.

How do we build?

We can build container images in several ways.

Docker, the company that brought containers to the mainstream, established the Dockerfile: a file that describes how a container image is built step by step, triggered by a docker build. Writing one starts out very simple, but it quickly becomes complex if you want to ensure a secure, stable build and operation.

I would like to raise just two questions that you might want to think through to get a sense of what needs to be taken care of:

  • Should the CI system really pass a Docker socket to the build containers?
  • Should the root user really be used within the Docker image?

For these topics there are many possible solutions. I will highlight the buildpack approach.

Cloud-Native Buildpacks are a simple solution for creating container images. So what do you have to do? A single pack build my-image builds the application in the current directory as my-image and stores it in an image registry. Admittedly, the solution is very easy to use, but at the same time a lot happens in the background to ensure an optimal image build. Concerns such as caching and security are handled transparently.
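For illustration, a typical invocation might look like the following; the builder name and registry are assumptions, so check the pack documentation for the builders suggested for your stack:

```shell
# Assumes the pack CLI and a local Docker daemon are available.
# The Paketo builder below is one of several publicly available builders.
pack build my-image \
  --builder paketobuildpacks/builder-jammy-base \
  --path .

# Build and push straight to a registry in one step:
pack build registry.example.com/team/my-image:latest --publish
```

The same builder and image references will later appear as parameters of the Tekton tasks, which is what makes the CLI workflow translate so directly into a pipeline.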

Integration in Tekton

We will need a pipeline that runs the required tasks step by step. The pipeline will first clone the source code from a Git repository and place it in a directory. The second step will read the source code in that directory and use buildpack tooling to build the image and push it to a container registry.

In the first article we looked at how to create a pipeline and link tasks but we haven’t worked with files yet.

Workspaces

Directories can be shared between tasks in Tekton using workspaces. Workspaces can leverage various Kubernetes storage types, for example, a PersistentVolumeClaim, a config map, or an emptyDir.

When defining a pipeline, we declare the required workspaces. In our case, we define a workspace that holds the source code from the Git repository for the duration of the pipeline run. At this point, only the name and the mapping to the tasks are known. Tasks declare the workspaces they expect, and the pipeline binds its own workspaces to those names.
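To make the task side concrete, here is a minimal sketch of a Task that declares a workspace; the task name, step name, and image are made up for illustration:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: list-source # hypothetical task name
spec:
  workspaces:
    - name: source # the pipeline binds one of its workspaces to this name
  steps:
    - name: list
      image: alpine
      script: |
        # Tekton mounts the bound workspace at a well-known path.
        ls -la $(workspaces.source.path)
```

Inside the step, the workspace is just a mounted directory, which is exactly what lets a clone task and a build task hand files to each other.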

Here we see a short pipeline that clones the source code from a Git repository into a workspace named source-workspace.

apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: git-clone-sample-pipeline
spec:
  params:
    - name: git-clone-url
      type: string
      description: HTTP URL of the git repository to clone
    - name: git-revision
      type: string
      default: "main"
      description: Git revision (branch, tag, commit-id) to clone
  workspaces:
    - name: source-workspace # Directory where application source is located. (REQUIRED)
  tasks:
    - name: fetch-repository # This task fetches a repository using the `git-clone` task you installed
      taskRef:
        name: git-clone
      workspaces:
        - name: output
          workspace: source-workspace
      params:
        - name: url
          value: $(params.git-clone-url)
        - name: revision
          value: $(params.git-revision)
        - name: subdirectory
          value: ""
        - name: deleteExisting
          value: "true"

PipelineRun

As already described in the first article, we create a PipelineRun to start the pipeline and to pass parameters and workspaces.

In the following example, we start the pipeline defined above with the desired values. The Git repository is set to a fixed value and the source-workspace is linked to an existing PersistentVolumeClaim.

apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: git-clone-sample-pipeline-
spec:
  serviceAccountName: tekton-service-account
  pipelineRef:
    name: git-clone-sample-pipeline
  workspaces:
    - name: source-workspace
      subPath: source
      persistentVolumeClaim:
        claimName: source-workspace-pvc
  params:
    - name: git-clone-url
      value: https://github.com/marcopaga/feeding-the-ci-process-single-project.git

Now the pipeline starts, clones the repository, and terminates successfully.
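The run above binds an existing PersistentVolumeClaim named source-workspace-pvc. A minimal sketch of such a claim could look like this; the size and the omitted storage class are assumptions, so adjust them to your cluster:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: source-workspace-pvc
spec:
  accessModes:
    - ReadWriteOnce # one node at a time is enough for a linear pipeline
  resources:
    requests:
      storage: 1Gi # assumed size; the cloned repository must fit here
```
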

Providing credentials for clone and push

For open source projects with a publicly available repository, this is it. For internal use with credentials on a repository, we still need to provide them to the Git process.

While researching this aspect, I found various pieces of information. The variant described in the Tekton authentication documentation worked well for me. In essence, a Secret is created with the credentials and extended with annotations describing what kind of use and what server it is intended for. The key tekton.dev/git-0 indicates that this Secret is to be used for github.com repositories.

apiVersion: v1
kind: Secret
metadata:
  name: github-clone-secret
  annotations:
    tekton.dev/git-0: https://github.com
type: kubernetes.io/basic-auth
stringData:
  username: git-clone-user-name
  password: git-clone-password

This on its own is not enough, because the Secrets to be used must be added to the ServiceAccount of the PipelineRun. As you can see in the following example, the Secret is simply listed as well.

For the container image push, a Secret is also needed. Here it is sufficient to create a normal Docker registry Secret as described in the Tekton docs. It is also important to add this Secret to the ServiceAccount.
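Such a registry Secret can be created with kubectl; the registry URL and credentials below are placeholders:

```shell
# Creates a Secret of type kubernetes.io/dockerconfigjson
kubectl create secret docker-registry docker-registry-secret \
  --docker-server=registry.example.com \
  --docker-username=<registry-user> \
  --docker-password=<registry-password>
```
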

In the following you can see a ServiceAccount with the two defined Secrets.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: buildpacks-service-account
secrets:
  - name: docker-registry-secret
  - name: github-clone-secret

Using an existing pipeline and tasks

Now we have already created a pipeline with which we can clone the code. We can now continue with our knowledge and gradually add the next steps.
On the Tekton Hub, we can find tasks for the build. So far, so good! The tasks are well documented, and we could build with them directly.

But what’s that – you can also search for pipelines? If we do that, we find a complete pipeline that is already done and tested.
The buildpacks pipeline is beautifully documented. We find information about the background, the required dependencies, and the installation.
As described in the first article, they can be installed and used directly. The existing PipelineRun is adapted following the documentation, and everything is ready.
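Installation boils down to applying the catalog resources to the cluster. The URLs and version numbers below are assumptions based on the layout of the Tekton catalog repository, so check the Tekton Hub pages of the git-clone and buildpacks components for the current versions:

```shell
# Install the tasks the buildpacks pipeline depends on
# (versions are assumed; verify them on the Tekton Hub)
kubectl apply -f https://raw.githubusercontent.com/tektoncd/catalog/main/task/git-clone/0.9/git-clone.yaml
kubectl apply -f https://raw.githubusercontent.com/tektoncd/catalog/main/task/buildpacks/0.6/buildpacks.yaml
```
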

Recap

This reusability allows us to collaborate and share CI components within our own organization and even in the open source ecosystem.
We were able to create a working pipeline with very little time and effort by leveraging community components. Now we have a starting point to make our own extensions, benefit from the community of the Tekton Hub and hopefully even give back our own contributions.
In the next article we will build on this basis with Tekton Triggers and start this pipeline automatically when commits are pushed to a repository.

Marco is passionate about developing software. He enjoys working in teams to solve complex problems.
In the past, he has successfully modernized legacy applications to prepare them for new challenges. Taking everyone along on this journey and getting them excited about new technologies – that’s a natural part of his job.

Post by Marco Paga
