Basic CI/CD on Google Cloud Platform using Cloud Build

Abhishek Chakraborty
9 min read · Dec 18, 2020

In this article, we will walk through setting up a simple CI/CD pipeline for a web app on Google Cloud Platform (GCP). The pipeline uses Google Cloud Build, GCP's CI/CD service, which syncs updates from your code repository (for example GitHub, via the Cloud Build GitHub App) with build triggers and runs build steps defined in yaml. These steps use GCP's fantastic set of cloud builders to run build, testing, and deployment processes.

This article is a continuation of two previous articles, which covered our initial app set-up and the React testing process. As always, all the code is available on my GitHub. Let's begin!

Testing the backend Flask API

You may have noticed that in the previous articles, I did not write any tests for our backend API. For this demo, however, we will need some basic tests so that our pipeline covers both the backend and the frontend (simultaneously bringing us closer to proper integration tests). We are using pytest for this. Here is code for just two test cases.
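
The module name app.py, the /login route, and the expected status codes in the sketch below are illustrative assumptions; see the repo for the exact code.

# test_api.py: a minimal sketch of two pytest cases (module/route names are illustrative)
import os

import pytest

from app import app  # assumes the Flask app object lives in app.py


@pytest.fixture
def client():
    # Flask's built-in test client lets us hit our routes without running a server
    app.config["TESTING"] = True
    with app.test_client() as client:
        yield client


def test_login_success(client):
    # credentials come from the environment, never hard-coded
    response = client.post(
        "/login",
        json={"username": os.environ["USERNAME"], "password": os.environ["PASSWORD"]},
    )
    assert response.status_code == 200


def test_login_wrong_password(client):
    response = client.post(
        "/login",
        json={"username": os.environ["USERNAME"], "password": "definitely-wrong"},
    )
    assert response.status_code in (401, 403)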

Notes:

  • Notice that our tests read the USERNAME and PASSWORD variables from the environment, so whatever environment we run our test code in must have these set.
  • Our Flask API connects to our Cloud SQL database via the sqlalchemy library. As can be seen in our storage.py file, the credentials this connection requires are also read in from the environment. Thus, whichever environment we run our tests in must have these variables defined as well.
  • Quick-Tip #1: pytest looks for test files named with “test_…” prefix, so name your files accordingly.
  • Quick-Tip #2: Remember to add pytest==6.1.1 (or whatever the most up-to-date version is at the time of reading) to your requirements.txt and install it via pip prior to running your tests.
  • Quick-Tip #3: Simply run your tests with the pytest command, as shown below.
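
Putting the quick-tips together, a local test run looks roughly like this (the credential values are placeholders):

# install dependencies, including pytest, from requirements.txt
pip install -r requirements.txt

# the tests and storage.py read credentials and connection info from the environment
export USERNAME=<your-test-username>
export PASSWORD=<your-test-password>

# pytest auto-discovers any files named test_*.py
pytest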

Now that we have these local tests set up for our Flask API, we can design a Docker container specifically for running them. Using containers to run tests, especially in a CI/CD pipeline, is a common and effective practice. As is standard, we will name this file 'Dockerfile.dev', since it sits alongside our production Flask Dockerfile but contains slightly different build instructions.
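
A sketch of what Dockerfile.dev might look like (the base image and exact paths are illustrative):

# Dockerfile.dev: test image for the Flask API
FROM python:3.8-slim

WORKDIR /app

# install dependencies, including pytest, from requirements.txt
COPY requirements.txt .
RUN pip install -r requirements.txt

# copy everything, including the tests folder the production Dockerfile leaves out
COPY . .

# run the test suite instead of starting the API server
CMD ["pytest"]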

Notice how we are additionally copying our Flask tests folder into the container and changing our CMD to run pytest. This covers the setup for our very basic backend Flask API testing. Obviously, in a more complex, production-grade application this process would be FAR more comprehensive.

Note: To also demonstrate a non-containerized method of testing within the pipeline, the Jest/Enzyme tests for our different React components will be run directly through the npm cloud builder image.

Setting up automated tests on PRs to main

Most dev teams employ some form of a GitHub workflow within their Agile process. This means that for individual features, developers branch off the main/master production branch and add their feature or bug-fix. Developers are also expected to update any existing tests affected by their changes and to write additional tests for their new code (in a test-driven development (TDD) setting, they would write the tests prior to, or while, writing the new code!). Sometimes, in larger and more complicated projects, branches are created off of branches and so on!

Eventually, every feature/bug-fix branch must be merged back into its origin branch by opening a pull request (PR). This pull request is then code-reviewed (hopefully thoroughly) by fellow developers. However, humans are prone to errors, especially as projects grow and features become complicated. Merging faulty code can lead to painstaking bugs and, in the worst cases, can even break your production code. A key safeguard, therefore, is to automatically run all the tests the team's developers have been accumulating since the start of the project and display the result on the PR itself.

With all that said, we get to our first build trigger. For our small project, the trigger will run a series of steps defined in a testbuild.yaml file whenever a PR is created to merge a feature branch into main. The only purpose of this trigger is to run our Flask API tests as well as our React component tests. Before creating the trigger, let us define testbuild.yaml, so we know the exact steps we intend for Cloud Build to follow.
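
In shape, it looks roughly like this sketch; the image names and the exact '_'-prefixed substitution variables (particularly the database ones) are illustrative, so check the repo for the real file:

# testbuild.yaml (sketch)
steps:
  # 1. build the Flask test image from Dockerfile.dev
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/flask-app-test',
           '-f', './flask_app/Dockerfile.dev', './flask_app']

  # 2. push it to the project's container image registry
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/flask-app-test']

  # 3. run the test container, passing the trigger's substitution variables in as env vars
  - name: 'gcr.io/cloud-builders/docker'
    entrypoint: 'bash'
    args:
      - '-c'
      - >-
        docker run
        -e USERNAME=${_USERNAME} -e PASSWORD=${_PASSWORD}
        -e DB_USER=${_DB_USER} -e DB_PASS=${_DB_PASS} -e DB_NAME=${_DB_NAME}
        gcr.io/$PROJECT_ID/flask-app-test

  # 4. install the React app's npm modules
  - name: 'gcr.io/cloud-builders/npm'
    args: ['install', '--prefix', './react_app']

  # 5. run the Jest/Enzyme tests with coverage
  - name: 'gcr.io/cloud-builders/npm'
    args: ['test', '--prefix', './react_app', '--', '--coverage']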

Let’s note what’s happening in each of these steps:

  • First, we build our Dockerfile.dev for our backend Flask API tests, using the docker cloud builder. Note how we must pass '-f ./flask_app/Dockerfile.dev' to tell Docker to build Dockerfile.dev and not the default Dockerfile. Of course, the build context is also ./flask_app.
  • We then push this newly built Docker image to our project's container image registry. Note that you may need to grant the Cloud Build service account additional permissions to do this step. If so, go into IAM and update the access of your …@cloudbuild.gserviceaccount.com account to whatever level is appropriate (please do not just assign the Owner role to everything); an example command for granting a role is shown after this list.
  • We now run this newly pushed image. We do this through bash, and while running we pass in all the environment variables our container needs to run the test processes properly, as discussed above. These variables are defined within the Cloud Build trigger itself. Note that they all begin with a '_', since this is the required format for user-defined substitution variables in a build trigger.
  • Next, we are onto setting up our React component tests! First, we use the npm cloud builder to install our required npm modules. Note that the '--prefix' flag allows us to specify './react_app' as the context for finding the package.json file needed to install the modules. An additional note: generally, when you create your base React project with create-react-app, your package.json will have "test": "react-scripts test" as the default script for npm test. You will want to change this to "test": "react-scripts test --watchAll=false", because the default command waits for user input after running the tests, so in an automated system your tests will never stop running!
"test" : "react-scripts test" -> "test" : "react-scripts test --watchAll=false"
  • Finally, we run our React tests. Once again, the '--prefix' flag allows us to specify './react_app' as our source code context, the 'a' addition enables us to run ALL the test suites we have, and '--coverage' prints out a useful mini stats chart for our tests.
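
As mentioned above, if the Cloud Build service account is missing a permission for one of these steps, a role can be granted with something like the following (the role shown is just an example; grant only what your build actually needs):

gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:<project-number>@cloudbuild.gserviceaccount.com" \
  --role="roles/compute.instanceAdmin.v1"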

So let's go ahead and create this trigger. If you have not done so already, click the hamburger icon on the left to open up the menu in your GCP cloud console, scroll down, and navigate to Cloud Build under Tools. There, enable the API. Remember, there are costs for running processes through Cloud Build. Go to Triggers and create a trigger. You will have to link your GitHub repository through the Cloud Build GitHub App, so follow the steps to do so.

Under Build Configuration, choose 'Cloud Build configuration file' and specify your file name; if the yaml file is not located in the default folder, specify its path as well. Most importantly, specify your substitution variables (the '_'-prefixed variables discussed above)! Create your trigger and, when you are ready, open a PR and check whether the build is triggered in the History tab. You should see something like this on your GitHub PR and in your builds.
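
If you prefer the command line, roughly the same PR trigger can be created with gcloud (the trigger name, repo details, and substitution values below are placeholders):

gcloud beta builds triggers create github \
  --name="run-tests-on-pr" \
  --repo-owner="<your-github-username>" \
  --repo-name="<your-repo>" \
  --pull-request-pattern="^main$" \
  --build-config="testbuild.yaml" \
  --substitutions=_USERNAME=<user>,_PASSWORD=<pass>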

Note that if one of the steps in the yaml fails, the build stops and none of the remaining steps are executed. Below is an example where I purposely pushed some code to cause a single React test to fail:

If the tests all pass, you will see success like the following:

And of course, you should also see a green check on the tests on your GitHub PR. With this green check, provided your tests are comprehensive enough, you and your team will have a certain level of reassurance and confidence in merging your code into the main branch. Of course, tests can never replace thorough code reviews!

Automated redeployment on merge into main

So now that we have a process for running tests on the cloud before merging our code into the main branch, we need a process for automatically redeploying our code to production once it is merged. You will remember that in our first article we ended by setting up two Compute Engine VM instances: one running our Flask API container, called 'Application-Server', and another running our nginx container serving our React build files, called 'Web-Server'. That deployment was completely manual. We built the container images locally, pushed them to our GCP container image registry, and manually deployed them onto the respective VMs. Now we need this whole process to happen entirely on the cloud, automatically, upon merge of code into the 'main' branch. Sounds scary, but it is rather simple with Cloud Build! Let's start off as usual by looking at our build yaml:
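
In shape, it looks roughly like the sketch below. The image names, the build-arg name, and the VM update commands (which assume the instances run their containers on Container-Optimized OS) are illustrative; the line numbers in the notes that follow refer to the real file in the repo.

# cloudbuild.yaml for deployment (sketch)
steps:
  # ... the same test steps as in testbuild.yaml come first ...

  # build and push the production Flask API image (Dockerfile, not Dockerfile.dev!)
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/flask-app', './flask_app']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/flask-app']

  # build and push the react-nginx image, passing the proxy IP as a build argument
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/react-nginx',
           '--build-arg', 'PROXY_IP=${_PROXY_IP}', './react_app']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/react-nginx']

  # redeploy the freshly pushed images onto the two VM instances
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ['compute', 'instances', 'update-container', 'application-server',
           '--zone=${_ZONE}', '--container-image=gcr.io/$PROJECT_ID/flask-app']
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ['compute', 'instances', 'update-container', 'web-server',
           '--zone=${_ZONE}', '--container-image=gcr.io/$PROJECT_ID/react-nginx']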

Let’s note what’s happening in these steps:

  • You will notice that up until line 14, we are just redoing the testing steps covered before. This is a cautionary measure: we want to ensure one final time that our code checks all the boxes prior to being redeployed.
  • Starting at line 15, we build our production Flask API container image (building Dockerfile, not Dockerfile.dev!) and push it to our container registry.
  • Starting at line 18, we do the same for our react-nginx container image. Note that we are defining a build argument, which is absolutely necessary as it formats our nginx file with the correct proxy IP. We push this image off to our container registry as well.
  • Finally, on lines 25–29 we redeploy these images onto our respective VM instances!

Again, note that if any of these steps fails (for example, a test), the build process stops and none of the subsequent steps are run (so our images will not be updated on the VM instances). Below are some snapshots of these processes:

Build in progress

And just so you have an idea, here is the same build in the case where I purposely defined a wrong env variable used in checking the login/logout functionality of the Flask API. As you can see, none of the build steps after the failed one are run.

After a successful redeployment, you will hopefully be able to see your designated changes in the app! That brings to a close a fun series on creating, testing, and setting up a CI/CD pipeline for this mock todo web app. I hope this has been helpful for you! Next up, I want to explore deployment on OpenShift. It is a different sort of challenge, as the environment is quite different from GCP (for one, we will be deploying onto a Kubernetes cluster). But most cloud services bear some resemblance to one another, so it should not be completely alien. Until next time!
