
Serverless GCP functions – Look Ma, no servers!

14.4.2019 | 6 minutes of reading time

Serverless computing promises easily scalable applications and a straightforward programming model that lets developers focus on features instead of technological details. In this blog post, we develop a simple sample application and deploy it as a serverless GCP Cloud Function. We hope this tutorial helps you move beyond the “Hello World” stage. The most relevant parts are highlighted in this article; you can fetch the entire project from GitHub.

Our project is a “dead man’s switch as a service.” In essence, its job is to perform a predefined action by default; the action is only prevented by “checking in” at regular intervals. For example, consider a safety switch that stops a saw blade once the operator releases it. Many potential actions could be triggered this way, such as sending emails. For this blog post, we limit ourselves to performing an HTTP POST against an arbitrary URL, with a user-specified entity.

First things first: Setting up the Google Cloud project

Before we can dive in too deeply, we need to take care of some setup. GCP groups resources into projects, and we need to create just such a project in the Cloud Console before we can continue.

With the project itself in place, we can start adding functionality to it. Since we want to be able to interact with the world, we need an HTTP interface. Fortunately, Google makes things easy for us here – a Cloud Function can be bound to an HTTP trigger out of the box, without any further configuration. So we select “Cloud Functions” from the resource list, create a new function, and perform some setup. Specifically, we change the runtime to Go and reduce the allocated memory to 128 MB, which is still plenty. Additionally, we deploy the function to the europe-west1 region (in the “More” tab).

Handlin’ the HTTP

With this setup, we can actually call our function for the first time. Under the “Triggers” tab, we can extract the invocation URL. If we visited it, we would see a “Hello World” message. Since we intend to change this, we move on to implementing our own logic instead.
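A minimal sketch of such a routing entry point could look like this – the handler names are our own choices, and the full version lives in the GitHub project:

package trigger

import (
    "net/http"
    "strings"
)

// HandleHTTP is the entry point of our Cloud Function. It dispatches
// incoming requests to the individual handlers based on path and method.
func HandleHTTP(w http.ResponseWriter, r *http.Request) {
    path := strings.TrimPrefix(r.URL.Path, "/")
    switch {
    case path == "triggers" && r.Method == http.MethodPost:
        createTrigger(w, r)
    case strings.HasSuffix(path, "/checkin") && r.Method == http.MethodPost:
        checkIn(w, r)
    case strings.HasPrefix(path, "triggers/") && r.Method == http.MethodGet:
        getTrigger(w, r)
    case strings.HasPrefix(path, "triggers/") && r.Method == http.MethodDelete:
        deleteTrigger(w, r)
    default:
        http.NotFound(w, r)
    }
}

// The handlers below are stubs for now – writing any response is enough
// to verify that the routing behaves as expected.
func createTrigger(w http.ResponseWriter, r *http.Request) { w.Write([]byte("createTrigger")) }
func getTrigger(w http.ResponseWriter, r *http.Request)    { w.Write([]byte("getTrigger")) }
func deleteTrigger(w http.ResponseWriter, r *http.Request) { w.Write([]byte("deleteTrigger")) }
func checkIn(w http.ResponseWriter, r *http.Request)       { w.Write([]byte("checkIn")) }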

With this snippet, we drafted a simple API to create, query, and delete triggers. Additionally, we defined a way to assert liveness. All the functions we reference right now can safely be implemented to just write an arbitrary response – for the moment, just having confidence the routing works as expected is sufficient.

For larger-scale serverless GCP services, it may be worthwhile to investigate more elaborate frameworks such as buffalo. For this tutorial, we avoid the additional complexity. Since the HTTP trigger expects a handler with the same signature as an http.HandlerFunc, pretty much any framework that does not fundamentally alter this convention should work without major adaptation.

So we upload our source code (with stub implementations of the missing functions) and encounter – an error. Several errors, in fact. The first error is relatively easy to fix – we renamed our entry point to HandleHTTP and need to let the Cloud Function runtime know. The second error requires some further thought: Google Cloud Functions require modular Go. As soon as we declare our dependencies in go.mod, the build succeeds.
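A minimal go.mod along these lines does the trick – the module path is a placeholder of our own, and the listed dependency is the Datastore client library we will pull in below:

module github.com/example/dead-mans-trigger

require cloud.google.com/go v0.37.4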

With just a little persistence

Since we want to actually be able to interact with the service, some means of persistence is required. Cloud Functions themselves need to be stateless – we cannot rely on the local filesystem or in-memory state. We need some kind of database. Since we are already inside the Google Cloud, we might as well use its capabilities. So we consider what is on offer – an SQL database would certainly do the job. Bigtable would, too, but we really just need a simple document store. So we choose Firestore in Datastore mode. It is simple to use, well integrated, and made for key lookups.
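As a sketch, creating the Datastore client once per function instance could look like this – we assume the GCP_PROJECT environment variable provided by the Go runtime and the cloud.google.com/go/datastore client library:

package trigger

import (
    "context"
    "log"
    "os"

    "cloud.google.com/go/datastore"
)

// dsClient is created once per function instance and reused across invocations.
var dsClient *datastore.Client

func init() {
    var err error
    // The project ID comes from the runtime environment; authentication is
    // handled transparently via the platform's default service account.
    dsClient, err = datastore.NewClient(context.Background(), os.Getenv("GCP_PROJECT"))
    if err != nil {
        log.Fatalf("failed to create datastore client: %v", err)
    }
}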

With this simple bit of initialization, the platform will take care of all the connection and authentication logic for us. However, we still need to configure the datastore itself. For this task, we return to the Google Cloud console and select “Datastore” in the “Storage” category. After confirming that we indeed want Firestore to be in Datastore mode, we are good to go.

Taking care of business

With all the prelude now taken care of, we can finally implement our business logic. We replace the stub endpoint handlers with actual business code.
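Here is a sketch of what createTrigger could look like, with a hypothetical Trigger entity (the field names are our own) persisted through the Datastore client from above:

package trigger

import (
    "encoding/json"
    "fmt"
    "net/http"
    "time"

    "cloud.google.com/go/datastore"
)

// Trigger describes a single dead man's switch: unless it is checked in
// before Deadline, we POST Payload against URL.
type Trigger struct {
    URL      string    `json:"url"`
    Payload  string    `json:"payload"`
    Deadline time.Time `json:"deadline"`
}

func createTrigger(w http.ResponseWriter, r *http.Request) {
    var t Trigger
    if err := json.NewDecoder(r.Body).Decode(&t); err != nil {
        http.Error(w, err.Error(), http.StatusBadRequest)
        return
    }
    key, err := dsClient.Put(r.Context(), datastore.IncompleteKey("Trigger", nil), &t)
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    // Redirect the caller to the freshly created entity.
    http.Redirect(w, r, fmt.Sprintf("/triggers/%d", key.ID), http.StatusSeeOther)
}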

With this logic in place, we are nearly done – but testing our API reveals a small detail that makes our life somewhat harder: the redirect in createTrigger does not send us to the newly created entity. Instead, we are greeted with a Google login page. Looking a little closer at our invocation URL solves the mystery: our create URL looked something like this: https://europe-west1-dead-mans-trigger-123456.cloudfunctions.net/api-server/triggers. Note the api-server segment – the name of our function was encoded as part of our path. While the Cloud Function framework stripped this prefix out, our redirect did not know about it and took the caller right out of our context. We need to determine an invocation base URL.
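One possible fix, sketched below and added to our handler file: derive the path prefix from the function's name. We assume here the FUNCTION_NAME environment variable exposed by the older runtimes; hard-coding the prefix would do just as well.

// baseURL returns the externally visible path prefix of our function,
// e.g. "/api-server", so that redirects stay inside our context.
func baseURL() string {
    if name := os.Getenv("FUNCTION_NAME"); name != "" {
        return "/" + name
    }
    return ""
}

// The redirect in createTrigger then becomes:
//   http.Redirect(w, r, fmt.Sprintf("%s/triggers/%d", baseURL(), key.ID), http.StatusSeeOther)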

Fire in the hole

With our API now up and running (try it out!), we turn our focus to actually having switches fire if they are not checked in. Now some background work needs to be done: we need to find expired triggers, execute their HTTP POST calls, and remove them from our datastore. In short, we need a cron job.

Fortunately, the Cloud Scheduler (found in the “Tools” section of the GCP console) offers just that. We could simply create a scheduler job, have it invoke our Cloud Function via HTTP, and call it a day. However, in this tutorial we instead opt for something a little more general – we create a Pub/Sub topic and subscribe a second background function to it. This additional level of indirection would allow us to have different consumers all run on the same schedule.

With the Pub/Sub mechanism set up, all that remains is subscribing a function to it. Since each Cloud Function has exactly one trigger, we cannot reuse our API server. Instead, we set up a second function and configure it to listen to the Pub/Sub topic. This function does not have an HTTP interface, so it cannot be called by anyone except our timer topic.

With all infrastructure in place, we can implement the last bit of logic and enjoy our fancy new dead man’s trigger service.
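As a sketch, reusing the Trigger type and Datastore client from above (the function name FireTriggers and the exact query are our own choices), the background function could look like this:

package trigger

import (
    "context"
    "log"
    "net/http"
    "strings"
    "time"

    "cloud.google.com/go/datastore"
)

// PubSubMessage mirrors the payload handed to background functions by the
// Pub/Sub trigger. We ignore its content – the message is just our tick.
type PubSubMessage struct {
    Data []byte `json:"data"`
}

// FireTriggers is invoked by the Cloud Scheduler via our Pub/Sub topic.
// It fires all expired triggers and removes them from the datastore.
func FireTriggers(ctx context.Context, _ PubSubMessage) error {
    q := datastore.NewQuery("Trigger").Filter("Deadline <", time.Now())
    var expired []Trigger
    keys, err := dsClient.GetAll(ctx, q, &expired)
    if err != nil {
        return err
    }
    for i, t := range expired {
        resp, err := http.Post(t.URL, "application/json", strings.NewReader(t.Payload))
        if err != nil {
            log.Printf("firing trigger %v failed: %v", keys[i], err)
            continue
        }
        resp.Body.Close()
        if err := dsClient.Delete(ctx, keys[i]); err != nil {
            log.Printf("deleting trigger %v failed: %v", keys[i], err)
        }
    }
    return nil
}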

Conclusion

Hopefully, this tutorial has helped you understand the simple and elegant programming model of serverless GCP Cloud Functions. The tight integration of Google’s offerings allows very rapid development of functionality, with a strong focus on producing scalable solutions. We have aimed to guide you a step past the very basics and hope to have highlighted some “gotcha” moments.

Are you developing a serverless GCP application? We would love to hear about your experiences with the platform in the comments.

