A Brief Introduction to Serverless Computing

Getting abstract

Over the last decade or two, cloud computing has come to dominate many of the skills and processes needed to develop 'modern' software. This is increasingly true for adjacent fields too, including the world of Data Science (among others). One of the trends in this sweeping move towards 'The Cloud' has been an ever-increasing level of abstraction in how development teams interact with the infrastructure running their applications.

Arguably at the top of this pyramid of abstraction sits serverless computing. It is built on the idea that (as the name suggests) developers need not spend time configuring servers and writing boilerplate app code, and should instead dive straight into writing and deploying the code that 'really' drives business value. This can also make it much easier for developers, Data Scientists and others to deploy simple applications and services with little-to-no experience of configuring the infrastructure needed to deploy 'classic' web apps. If that sounds like it might be useful to you, then great! This post aims to provide you with:

  • A high-level understanding of what serverless computing is.
  • An understanding of the relative merits of serverless computing, and some of its disadvantages.
  • A short guide to setting up your own serverless Python function on Google Cloud.

Let's get started.

The classic approach

To get an idea of the potential benefits of serverless computing, it might be worth taking a step back and looking at the 'classic' workflow for setting up a new server in the cloud for your latest and greatest application. If you've done this before (perhaps with a Flask or Express app), this might seem quite familiar:

  1. Create a new virtual machine (VM) on your chosen cloud provider: choose your system resources (memory, number of CPUs etc.), select the operating system you'd like to use, and maybe also create some credentials to access your new machine remotely.
  2. Configure your new machine to accept public traffic on a specific port.
  3. Set up NGINX (or some other web server) to forward requests from this port to your app.
  4. Configure auto-scaling rules (if needed).
  5. Write your application's framework-specific boilerplate, such as your request handlers and other server code (see the sketch after this list).
  6. Write your business logic.
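
To make steps 5 and 6 a little more concrete, here's a minimal sketch of the kind of boilerplate involved, using Flask (the route, names and port are purely illustrative):

from flask import Flask, request

app = Flask(__name__)

# Step 5 boilerplate: create the app and wire a handler to a route.
@app.route("/say-hello")
def say_hello():
	# Step 6: the business logic that all of this setup exists to serve.
	name = request.args.get("name", "World")
	return f"Hello there, {name}!"

if __name__ == "__main__":
	# In a real deployment this would typically sit behind a WSGI server,
	# with NGINX (step 3) forwarding requests to it.
	app.run(host="0.0.0.0", port=8000)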

While this is a tad contrived, the fact remains that deploying your own application with this 'classic' workflow requires a lot of time and energy to be spent on tasks other than writing your 'core' application code. What's more: you'll be paying for every second your server is live, irrespective of how much it is being used. If your app has extended quiet periods (say overnight), this might be burning a hole in your (or your company's) pocket.

Enter serverless

You can probably see where this is going: what happens if you can skip a lot of that (often rote) configuration and dive straight into the meaty coding problems? That is the core idea behind serverless computing. Fundamentally, the server is still there in one form or another, but the setup and configuration of that server to expose your application are abstracted away. From a practical perspective, this means:

  • You don't have to manually configure and maintain your VMs.
  • You don't have to write (much) boilerplate framework code.
  • You don't need to configure any firewall rules, port forwarding or auto-scaling.

In essence, a key aim of serverless computing is to dramatically reduce the typical workload of a developer: more time spent writing the business logic that ultimately drives business value, and less time spent getting to the point of writing that logic. You, the developer, write a function that captures some valuable business logic and can immediately ship it as a massively scalable service. In theory, it's a great way to minimise time to value. To borrow Google's phraseology, serverless computing allows anyone to take their app:

"from prototype to production to planet-scale."

And there's something else too: you only pay for what you use. If demand for your services is low, you pay less. The same can't be said for the classic VM deployment discussed above.

Event-driven architecture

So how does it do this?

One of the key enabling technologies for serverless computing is the so-called 'micro VM', such as Firecracker. Technologies like this allow cloud providers to rapidly 'spin up' new VMs to run your code on demand, and then quickly spin them down again when they're not in use. This is why the model is described as 'event-driven': your code (and the VM it runs on) only runs when triggered by an event, such as an HTTP request to the function or a signal from a scheduler telling it to run at a specific time.
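
To give a rough idea of what a non-HTTP trigger looks like, here's a sketch of a Python background function driven by a Pub/Sub message on Google Cloud Functions. The handler name and message handling are illustrative assumptions, and this isn't the function deployed later in this post:

import base64

def handle_message(event, context):
	# 'event' carries the Pub/Sub payload; the message body arrives base64-encoded.
	payload = base64.b64decode(event["data"]).decode("utf-8") if "data" in event else ""
	# 'context' carries metadata about the triggering event, such as its ID and timestamp.
	print(f"Received message {context.event_id}: {payload}")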

As an added bonus, cloud providers will 'spin up' as many VMs as needed to meet the demand for your services, meaning your function/app can scale near-infinitely (though this will get expensive!). Practically, this means that in the minimal case, all you need to do is write a simple function in your chosen language and tell your chosen provider to expose it as a serverless function.

All you need to do is stick to a couple of rules and conventions, and it'll work out of the box. You'll see how literal this statement is in a moment. Sounds pretty cool, right?

Behind the curtain

This may all seem a little too good to be true. Many such tools and frameworks fall into the trap of making grand claims and end up merely shuffling problems around. But in the case of serverless, it really can be that good. At least some of the time. Usually. Provided you're okay with a few (possibly minor, potentially fatal) caveats, at least. Here are some of the main issues to look out for:

  • Cold Start - As you've seen, when not in use, the 'actual' server running your serverless code may spin down. This saves you money, but it also adds a (usually short) lag to the next request while the machine spins back up. This lag can be on the order of a second or two, especially for interpreted languages with notoriously slow startup times. Once spun back up, your function will remain highly responsive for several minutes, after which it will be spun back down if it hasn't been used. If your application cannot tolerate this sort of lag, serverless functions may be problematic.
  • Limited resources & configurability - One of the benefits of serverless computing is also one of its potential drawbacks: it restricts the configurability of the environment, the operating system and the 'hardware' that you can run your code on. This makes it troublesome if you want to run code that does a lot of CPU- or memory-intensive work (common to many Data Science problems). If you need a lot of CPUs and RAM to run your function, you're probably going to need to look elsewhere.
  • Debugging - This one can become problematic. As you don't have the same level of control over the environment your code is running in, it can be hard to debug some problems effectively, and harder than normal to set up a representative testing environment during development. Additionally, if your application ends up running multiple serverless functions as part of its architecture, it can become pretty difficult to keep track of what is going on (and where) as the application's complexity grows.
  • Vendor Lock-In - Finally, cloud providers typically offer their own frameworks for deploying serverless functions on their platforms. This can make it harder to migrate code from one provider to another.

All this to say, if you've got an app that can tolerate occasional lag during low-demand periods, your workload is relatively light on resource requirements and you don't mind potentially being tied to a specific cloud provider, serverless may well be the ideal solution to your problems.

Time for some code

Still interested? Good. Here's a minimal example of how you can deploy your very own Python function as a Google Cloud Function – Google's serverless computing offering. First up, if you don't already have an account set up with Google Cloud, you should head over and create one. New sign-ups get $300 of credit, so this example shouldn't cost you a penny.

Next, install the Google gcloud command line tool. Google's docs give a walkthrough of how to do this on your platform.

Right, time for code. Here's the code you're going to deploy:

def say_hello(request):
	# 'request' is a Flask Request object; read the 'name' query parameter,
	# falling back to a default if it wasn't supplied.
	name = request.args.get("name", "World")
	return f"Hello there, {name}!"

This is pretty straightforward. The only thing you really need to understand is that the request argument passed to this function is a Flask Request object. As you may expect, this object contains information specific to your request, including arguments passed as part of your URL. In this case, you're pulling out the name argument from the query string. Concretely, if you sent a request to some.domain.name/say-hello?name=Jane, this would extract the name argument (Jane). You can make arbitrary payloads accessible to your function this way.
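
If you wanted to send something more structured than a few query parameters, you could also POST a JSON body and read it with the standard Flask API. Here's a quick sketch of that; it's illustrative only and not part of the function you're about to deploy:

def say_hello_json(request):
	# Parse the JSON body; silent=True returns None instead of raising on bad input.
	payload = request.get_json(silent=True) or {}
	name = payload.get("name", "World")
	return f"Hello there, {name}!"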

Next, in a new directory of your choosing, copy this code into a file named main.py. Google's framework expects a file of this name, so it is best to play ball. If you want to install additional packages, you can include a requirements.txt file in the same directory as your main.py, and these will be installed for you when your function is deployed. You're now all set to deploy your function.
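
For instance, if your function depended on the requests library, your requirements.txt might contain nothing more than the line below (the package and pinned version are purely illustrative; the say_hello function above needs no extra dependencies):

requests==2.25.1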

To deploy, you can run:

gcloud functions deploy say-hello --entry-point=say_hello --runtime=python37 --project={project} --allow-unauthenticated --trigger-http

Let's unpack this a little.

  • The command gcloud functions deploy tells gcloud that you want to deploy a function; the next argument, say-hello, is the name of your Cloud Function. This will also be used as the route for your function (i.e. it'll be accessible at /say-hello);
  • The --entry-point points to the function in your main.py you would like to expose, and --runtime tells the deployment tool which Cloud Function runtime you'd like to deploy your code into (in this case Python 3.7);
  • The --project argument then specifies the Google Cloud project ID that you wish to deploy the Cloud Function on (make sure to put your own project ID in here!);
  • Finally, the --allow-unauthenticated flag tells the deployment tool to publicly expose the function (you may want to change this in production!), while the --trigger-http flag tells it that the function should be triggered by HTTP requests.
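
If you later want to look up details of the deployed function (including its trigger URL), the gcloud tool should let you do so with something along these lines (again, substitute your own project ID):

gcloud functions describe say-hello --project={project}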

Run the command, and after a couple of minutes you should be able to visit a URL like the one below (remember to substitute your own subdomain – you can find it in the console output of the deploy command):

https://{subdomain}.cloudfunctions.net/say-hello?name=World

And you should get the response:

Hello there, World!
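
Equivalently, you could call the function from the command line with curl (again substituting your own subdomain):

curl "https://{subdomain}.cloudfunctions.net/say-hello?name=World"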

And there you go, you've deployed a Google Cloud Function!