In this article we are going to cover passing artifacts between build steps using Google Cloud Build. This will be a short, straight-to-the-point article; it will not cover the full range of Google Cloud Build capabilities or the Google Cloud Platform as a whole.
If you’re interested in CI/CD pipelines, then please check out our other articles, which cover a variety of topics. Here is a short list.
- CI/CD: Introduction
- Google Cloud Build — Create/Store Docker Images via GitHub Trigger
- Google Cloud Build — Custom Scripts
- CI/CD: Google Cloud Build — Regex Build Triggers
Alright, let’s jump in!
Full file:
Below is the full cloudbuild.yaml file detailing what we will cover in this article. The CI/CD pipeline consists of only two build steps, to keep things simple and stay focused on passing artifacts between build steps. Enjoy.
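As a sketch, a two-step pipeline along these lines could look like the following. The builder images are Google’s public Cloud Builders; the script names and the substitution variables (_ENV, _PROJECT, _BUCKET) are illustrative placeholders, not taken verbatim from the client project:

```yaml
steps:
  # Step #1: build the jar with the Maven cloud builder
  - name: 'gcr.io/cloud-builders/mvn'
    entrypoint: 'bash'
    args: ['./build.bash']
    volumes:
      - name: 'jar'
        path: '/jar'

  # Step #2: deploy the jar with the gcloud cloud builder
  - name: 'gcr.io/cloud-builders/gcloud'
    entrypoint: 'bash'
    args: ['./deploy.bash', '${_ENV}', '${_PROJECT}', '${_BUCKET}']
    volumes:
      - name: 'jar'
        path: '/jar'
```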
Volumes:
Google Cloud Build works by allowing the developer to write a series of steps which define all of the operations needed to achieve CI or CI/CD. In our case, one of the requirements of our application is that we build jar files and deploy them.
Logically, these two steps are different enough that they should be separated. The benefit of separating build and deploy is better organization and a much easier path to debugging when things go awry.
What does this look like?
Below you’re seeing the inner workings of how we handle CI/CD for one of our clients, who has a Java Spring Boot API that we are helping to automate and deploy to the cloud.
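Here is a sketch of Step #1; the builder image is Google’s Maven Cloud Builder, while the build.bash script name is illustrative:

```yaml
- name: 'gcr.io/cloud-builders/mvn'
  entrypoint: 'bash'
  args: ['./build.bash']
  volumes:
    - name: 'jar'
      path: '/jar'
```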
Notice the volumes key, which is an array. This means we can attach multiple volumes to our build steps, which can then be utilized by the next sequential step! Pretty nifty.
In Step #1, we are using a Google Cloud Builder called mvn, or Maven, to utilize a preconfigured Docker container built by Google which has all the dependencies required to run Maven commands. We use Maven to build our jar files and automatically push them up to the cloud if the CI/CD pipeline finishes successfully.
We are also using a feature called entrypoint, which allows us to leverage the Maven container from Google while also giving us the ability to run a bash script directly.
The bash script will then do a clean install and create our jar file under the target folder.
This is perfect, exactly what we want. However, we are missing a critical piece: we don’t have a way to pass that jar file to Step #2, the deploy step. Therefore, we need to attach a volume to Step #1.
Now we have a volume attached called jar. Not the most creative name, but it serves its purpose (passing a jar file between steps). With the volume now set up, we can add a line to our build.bash file which will copy the jar file onto the volume.
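Our build.bash might then look something like this sketch (the cp destination /jar matches the volume’s mount path; the wildcard jar name is an assumption):

```shell
#!/usr/bin/env bash
set -e

# Build the Spring Boot jar; Maven comes preinstalled in the mvn builder image.
mvn clean install

# Copy the built jar onto the shared volume mounted at /jar,
# so that Step #2 can pick it up.
cp target/*.jar /jar/
```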
Perfect, let’s now take a look at Step #2.
Notice that the name of this step is different from Step #1’s. Here we are leveraging a preconfigured Docker container that Google set up, called gcloud. It allows us to automatically sync our Google Cloud project and authenticate without needing to pass any credentials. Now we can immediately execute commands against our GCP resources without any setup. For instance, we can decrypt files using Google KMS, something we all commonly do when building CI/CD pipelines. One less thing to worry about.
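As a hypothetical example, a standalone decryption step using the gcloud builder might look like this (the file names, keyring, and key are placeholders):

```yaml
- name: 'gcr.io/cloud-builders/gcloud'
  args:
    - kms
    - decrypt
    - --ciphertext-file=secrets.env.enc
    - --plaintext-file=secrets.env
    - --location=global
    - --keyring=my-keyring
    - --key=my-key
```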
We also don’t need to worry about installing all the dependencies that gcloud needs to run, which is another reason to try Google Cloud Build: we’ve now leveraged Maven and the gcloud CLI without any complex setup.
Once again, we are using the entrypoint feature to run our bash script, called deploy.bash. This script will be passed a couple of arguments.
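In cloudbuild.yaml, passing those arguments might look like this (again, _ENV, _PROJECT, and _BUCKET are illustrative substitution variables, not the client’s real ones):

```yaml
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
    - ./deploy.bash
    - ${_ENV}
    - ${_PROJECT}
    - ${_BUCKET}
```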
Another way to write this, which you may or may not prefer.
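For example, the same args can be written using YAML’s inline (flow) list syntax:

```yaml
args: ['./deploy.bash', '${_ENV}', '${_PROJECT}', '${_BUCKET}']
```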
The first argument is the path to our bash script. The 2nd, 3rd, and 4th arguments are environment variables. Environment variables are commonly used to make practically any software more dynamic, but they’re especially important when trying to build reusable automation instead of one-off scripts.
We configure environment variables when we create a Google Cloud Build trigger. If you’re unfamiliar with Google Cloud Build Triggers, check out our other article.
Finally, we attach the same volume, jar, just as we did in Step #1.
Now we can add a line to our deploy.bash file which will copy the jar file into our target directory. The target directory, in our case, was the location the rest of the deployment code pointed to; it expected the jar file at ./target/MyApp-1.0.jar. We chose to copy into a local directory rather than referencing the external volume, /jar, directly, because local developers will not have this volume to pull from.
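A sketch of the relevant part of deploy.bash (the expected jar path ./target/MyApp-1.0.jar comes from above; the actual deployment commands are omitted):

```shell
#!/usr/bin/env bash
set -e

# Copy the jar off the shared volume into the local target directory,
# where the rest of the deployment code expects to find it.
mkdir -p ./target
cp /jar/MyApp-1.0.jar ./target/

# ...the actual deployment commands would follow here...
```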
There we go. We just learned how to pass artifacts between Google Cloud Build steps in our CI/CD pipelines, entirely leveraging Google Cloud Builders, which give us preconfigured Docker containers to execute operations on top of without having to worry about all the other headaches involved in most CI/CD solutions out there. And if we ever need to build our own Docker containers to run our builds, Google Cloud Build supports that functionality as well. A topic for another article.
Thanks for reading 🎉 🎉