
Going Serverless

January 4, 2023

Written By Shyam Seshadri

A scene from the movie "The Princess Bride" with the text "You keep using the word serverless. I do not think it means what you think it means."

Let’s start with the obvious — the name serverless is a lie! There are servers involved in a serverless setup. But why do we call it serverless? That’s because in serverless computing the cloud provider manages the infrastructure and automatically allocates resources to run the user's code, based on demand. This means that the user does not have to worry about setting up, configuring, and maintaining the underlying servers and infrastructure that their code runs on. Instead, they can focus on writing and deploying their code, and let the cloud provider take care of the rest.

Even more fundamentally, it allows developers to just write code and ship it, without having to set up servers, containers, or any of the other infrastructure usually associated with running code in the cloud. The code executes on demand, and the developer doesn't have to worry about starting instances, scaling them up, or paying for a server that sits idle.

Dr. Evil from the movie "Austin Powers" doing air quotes, with the text "Serverless is not no server. It's no server for you to manage."

Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure all offer serverless computing options.

On AWS, the serverless computing platform is called AWS Lambda. It allows users to run their code in response to events, such as changes to data in an Amazon S3 bucket or a message arriving on an Amazon Kinesis stream. AWS Lambda automatically scales and allocates the necessary computing resources to run the user's code, and users only pay for the compute time they use.
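For a feel of what this looks like in practice, here is a minimal sketch of a Lambda handler reacting to S3 object changes (the logging stands in for whatever processing you would actually do):

```typescript
// A minimal AWS Lambda handler for S3 events.
// Type definitions come from the @types/aws-lambda package.
import type { S3Event } from "aws-lambda";

export const handler = async (event: S3Event): Promise<void> => {
  // Each record describes one object change in the bucket.
  for (const record of event.Records) {
    const bucket = record.s3.bucket.name;
    const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, " "));
    console.log(`Object ${key} changed in bucket ${bucket}`);
    // ...process the object here...
  }
};
```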

On GCP, the serverless computing platform is called Google Cloud Functions. It works similarly to AWS Lambda: Google Cloud Functions runs code in response to events and automatically scales to meet demand.

On Azure, the serverless computing platform is called Azure Functions. Like AWS Lambda and Google Cloud Functions, Azure Functions runs code in response to events and automatically scales to meet demand. It also integrates with other Azure services, such as Azure Storage and Azure Event Grid, to provide a complete serverless computing platform.

All of these platforms, of course, also let you respond to plain HTTP requests in addition to the event sources listed above.

A two-panel cartoon: three buttons labeled "AWS", "GCP", and "Azure", and a man wiping sweat off his brow.

As I've stated before, serverless takes away a lot of the hassles that come with wiring up web servers.

Containers, on the other hand, package and run applications in a standardized way, so that you can deploy and run apps on any infrastructure. A container image contains the code and dependencies for an application, as well as the instructions for running it. This makes it easier to run the application in different environments, such as on a developer's local machine, in a test environment, or in production.

A meme of Johnny Depp and a boy sitting on a bench, with the text "It works on my machine." "Then we'll ship your machine." "And that is how Docker was born."

So, the main difference between serverless and containers is that serverless computing abstracts away the underlying infrastructure, while containers provide a way to package and run applications on any infrastructure. To use a different analogy: containers are you shipping an entire experiment along with all the equipment required to run it, while serverless is sending just the instructions and letting whoever runs the experiment set up their own lab for it. Serverless computing is typically used for event-driven, short-lived tasks, while containers are better suited for long-running applications.

An image from the movie "Zoolander" with two characters and the text "Serverless architectures, so hot right now".

Given the above, there are quite a few good reasons to evaluate and consider going serverless. Below are just a few, along with our own experience with each:

Writing and deploying a serverless function with any of the cloud providers usually consists of the following steps:

  1. Write your code.
  2. Run a command to deploy it.
  3. Start using your function.

Granted, there might be additional one-time steps, like configuring a CLI, but overall each function is self-contained and independent. Most of the time, you just ship your code and its dependencies. Now compare that with spinning up servers, manually or via containers, where you would usually need to set up your container infrastructure, package your code into the container, figure out where to host the container image, set up your cloud infrastructure to reference and start instances of that image, and so on.
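To make those three steps concrete, here is a sketch of what the entire deployable unit can look like on GCP (the function name, runtime, and flags are illustrative, not prescriptive):

```typescript
// index.ts: the entire "application" for one HTTP function.
// Step 2 is a single command, something like:
//   gcloud functions deploy hello --runtime=nodejs20 --trigger-http --allow-unauthenticated
import * as functions from "@google-cloud/functions-framework";

functions.http("hello", (req, res) => {
  // Step 3: the provider hands you an HTTPS URL once the deploy finishes.
  res.send(`Hello, ${req.query.name ?? "world"}!`);
});
```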

You also have solutions like the Serverless Framework, which make this even more straightforward: you write the same code once and deploy it to AWS, GCP, or Azure, bringing your active work down to little more than writing the function code itself.

And as you scale, it is easy to add DevOps infrastructure later, as we did at Builder, to automate deployment from a CI/CD pipeline via GitHub Actions and the like, instead of starting with it on day one.

Serverless functions are cheap. GCP charges at the rate of $0.40 per million requests beyond the first two million requests per month. You are also charged for compute time, which is also dirt cheap (approximately $0.000000231 per 100ms, or, put differently, $0.231 for 1M invocations of a function that takes 100ms to run). Serverless functions are spun up only when a request is made or expected (we'll talk more about cold starts later), so you only pay for the time your functions are actually used, unlike traditional servers, which run, and bill, around the clock whether they are serving traffic or not.
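As a back-of-the-envelope check, here is that arithmetic as a sketch (it uses only the rates quoted above; a real bill also includes memory, networking, and any minimum instances, and the rates themselves change over time):

```typescript
// Rough invocation + compute cost using the rates quoted above.
const PRICE_PER_MILLION_REQUESTS = 0.4; // USD, beyond the free monthly 2M
const PRICE_PER_100MS = 0.000000231; // USD, for the compute tier quoted above

function monthlyCostUSD(requestsPerMonth: number, avgDurationMs: number): number {
  const billableRequests = Math.max(0, requestsPerMonth - 2_000_000);
  const requestCost = (billableRequests / 1_000_000) * PRICE_PER_MILLION_REQUESTS;
  // Compute time is billed in 100ms increments, rounded up.
  const computeCost =
    requestsPerMonth * Math.ceil(avgDurationMs / 100) * PRICE_PER_100MS;
  return requestCost + computeCost;
}

// 15M requests/day is roughly 450M/month; at ~100ms per request:
console.log(monthlyCostUSD(450_000_000, 100).toFixed(2)); // ~283.15
```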

Scale is the other innate advantage of serverless functions: you can set up your functions to scale out (up to limits you configure, or effectively without bound) as load and requests increase. That frees you from managing instances and scaling them up and down by hand, which is especially useful when your traffic patterns change during the day. At Builder, this helps us scale with the traffic and handle sudden spikes gracefully.

At Builder, we have over 100k instances of our serverless functions running at any given point in time, with 15M+ requests daily at the very least (peak loads exceeding 2M requests in an hour), and our bill from cloud functions works out to a few thousand dollars monthly.

Unlike traditional web servers, where you might group different functions together, with serverless each function you code and deploy is an independent unit. You can conceptually group all your functions together or treat each one as an individual micro-service, depending on the level of granularity you want to work with. More often than not, then, it is the infrastructure your function depends on that defines your service boundaries. If you back your serverless functions with a non-relational DB (we chose Firestore, for example), then almost every function becomes an independent micro-service that can be scaled up and down on its own, unlike a container, which scales everything inside it together.
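As an illustration, here is a sketch of one such self-contained function backed by Firestore (the "posts" collection and the query parameter are hypothetical):

```typescript
// One function, one deployable micro-service.
import * as functions from "@google-cloud/functions-framework";
import { Firestore } from "@google-cloud/firestore";

// A module-scoped client is reused across invocations of the same instance.
const db = new Firestore();

functions.http("getPost", async (req, res) => {
  const doc = await db.collection("posts").doc(String(req.query.id)).get();
  if (!doc.exists) {
    res.status(404).send("Not found");
    return;
  }
  res.json(doc.data());
});
```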

This lets us scale, replace, or version independent functions instead of full web servers, which offers a lot more flexibility in the long run. For instance, we scale our customer-facing APIs independently of the functions backing our web app, which have wildly different traffic patterns and scaling requirements. Serverless gives you that flexibility (again, provided your other infrastructure supports it!).

An image of Spider-Man sitting in front of a computer in office attire, with the text "With great flexibility comes great complexity".

With all of the above, it might be easy to assume that all is hunky-dory with serverless functions, when in fact it is not. Just like everything else, there are trade-offs, and your mileage will vary with your use case.

When you first start writing serverless functions, it's easy to get swept up in the belief that you have infinite scale, because of how serverless functions work. In fact, serverless can mask, and even exacerbate, your scaling problems if you are not careful. All serverless functions do is remove the function itself as the bottleneck; they do not automatically scale the underlying infrastructure your functions depend on. Case in point: if your serverless function serves data from an RDBMS instance and traffic suddenly spikes, the number of connections to your database instance spikes with it, possibly choking the database if it is not scaled up to handle the load.
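A common mitigation is a small, module-scoped connection pool that is reused across invocations of the same instance (a sketch using node-postgres; the pool size, table, and environment variable are assumptions):

```typescript
import { Pool } from "pg";

// Module scope: the pool survives across invocations on this instance.
// Total DB connections ~= max * number of concurrent instances,
// so plan that product against your database's connection limit.
const pool = new Pool({
  connectionString: process.env.DATABASE_URL, // assumed env var
  max: 1, // one connection per instance; tune to your DB's limits
});

export async function getUser(id: string) {
  const { rows } = await pool.query("SELECT * FROM users WHERE id = $1", [id]);
  return rows[0];
}
```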

Serverless functions are one piece of the scaling puzzle, and it's important to treat them as just that, not as a be-all and end-all solution for your scaling issues.

We mentioned above that serverless functions are spun up on demand depending on load, and that you are charged only for the time the functions are running. That introduces one problem with serverless functions: the cold start. A cold start happens when a function instance has to be initialized to serve a request, which requires your function code to be deployed to the instance and initialized. This is slower than invoking a function instance that is already provisioned and ready to receive requests.

One common way to handle this is to provision a minimum number of instances for your function, which almost all providers allow. That still won't eliminate cold starts when your functions need to scale up to handle an increase in traffic. It will also increase your cloud bill, as min instances are billed as if your function is running 24/7 (for CPU time, not for invocations).

Another approach is to trigger your function periodically to ensure that at any given point in time there is an instance available to serve your requests. The reason this differs from the min-instance approach is that it gives you much more control over when you trigger the function. For example, you might decide you want the instance hot only during your peak load times, and not care too much about it at other times.
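Here is a sketch of what such a handler can look like. The warm-up header is purely a convention of our own, not a platform feature, and the periodic trigger itself (e.g., Cloud Scheduler or any cron) would be configured separately:

```typescript
import * as functions from "@google-cloud/functions-framework";

functions.http("api", (req, res) => {
  // A scheduler pings this endpoint during peak hours with this header set.
  if (req.get("x-warmup-ping") === "true") {
    res.status(204).end(); // keep the instance warm, do no real work
    return;
  }
  // ...handle real traffic here...
  res.send("ok");
});
```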

An image from the show "Rick & Morty" with the text: "I pinged my lambda function every 15 minutes to make sure it's always warm." "Well, that just sounds like a server with extra steps."

The flip side of the cold start and min-instance problem is the maximum limit. Serverless functions make it easy to scale, but that also means they will happily (happily for the provider, not for you) scale past what you expect and run up your cloud bill.

GCP, as of this writing, does not allow you to set a budget as a hard limit on spending, so it is important to set up alerts that notify you when your billing exceeds certain thresholds.

At the same time, it is possible (and recommended) to set a max-instance limit on your cloud functions, along with alerts when instance counts approach that threshold, so that you cap unintended scale-up at its root.

Without the above, it is very easy to get a sudden bill that vastly exceeds what you had accounted for.

One final limitation, and a major one for a lot of folks, is that you are tied both to the provider and to the limitations of their infrastructure. While it is technically possible to write serverless functions in a cloud-agnostic way, more often than not you end up writing for one specific provider or another. (Though some recent advances, like Cloud Run on GCP, let you deploy containers and images instead of functions.)

There can also be limitations on the provider's side, such as timeouts, which restrict the kind of work you can do in serverless functions. For example, Gen1 cloud functions, which we rely on heavily at Builder, do not allow concurrent invocations within a single instance. This forces GCP to spin up a new instance even when an existing one is just waiting on network I/O.

A scene from "American Chopper" of the father and son arguing, with the text: "We'll be cloud agnostic!" "Egress charges and operational overhead will far outweigh the benefits." "But vendor lock-in!" "You will lose the ability to use any managed services, risking much more downtime than any single cloud!" "Cloud agnostic!"

This post is a quick recap of our learnings from going serverless, with Google Cloud Functions in particular. If you are starting a new project and can set up your dependencies and infrastructure so that they scale with increasing load, serverless is a viable option to consider in your development journey, especially given how cheap and efficient it can be at scale. At Builder, we have found it increasingly efficient for running significant parts of our workloads, and we hope our learnings give you a quick start in making your own decision, and maybe even in starting a project with serverless functions.
